id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2305.08058 | Superconducting phase above room temperature in lutetium-beryllium
hydrides at high pressures | High-pressure structural search was performed on the hydrogen-rich compound
LuBeH$_8$ at pressures up to 200 GPa. We found a $Fm\overline{3}m$ structure
that exhibits stability and superconductivity above 100 GPa. Our phonon
dispersion, electronic band structure, and superconductivity analyses in the
100-200 GPa pressure range reveal a strong electron-phonon coupling in
LuBeH$_8$. $T_{c}$ shows a decreasing trend as the pressure increases, with a
superconducting critical temperature $T_c$ of 255 K at 200 GPa and a maximum
$T_c$ of 355 K at 100 GPa. Our research demonstrates room-temperature
superconductivity in $Fm\overline{3}m$-LuBeH$_8$, thus
enriching the family of ternary hydrides. These findings provide valuable
guidance for identifying new high-temperature superconducting hydrides. | Bin Li, Yeqian Yang, Yuxiang Fan, Cong Zhu, Shengli Liu, Zhixiang Shi | 2023-05-14T03:52:29Z | http://arxiv.org/abs/2305.08058v1 | # Superconducting phase above room temperature in lutetium-beryllium hydrides at high pressures
###### Abstract
High-pressure structural search was performed on the hydrogen-rich compound LuBeH\({}_{8}\) at pressures up to 200 GPa. We found a \(Fm\overline{3}m\) structure that exhibits stability and superconductivity above 100 GPa. Our phonon dispersion, electronic band structure, and superconductivity analyses in the 100-200 GPa pressure range reveal a strong electron-phonon coupling in LuBeH\({}_{8}\). \(T_{c}\) shows a decreasing trend as the pressure increases, with a superconducting critical temperature of 255 K at 200 GPa and a maximum \(T_{c}\) of 355 K at 100 GPa. Our research demonstrates room-temperature superconductivity in \(Fm\overline{3}m\)-LuBeH\({}_{8}\), thus enriching the family of ternary hydrides. These findings provide valuable guidance for identifying new high-temperature superconducting hydrides.
**Keywords:** _Superconductivity, Hydride, High pressures, Room temperature_
## 1 Introduction
The search for room-temperature superconducting materials is widely regarded as the "holy grail" of condensed matter physics [1]. According to Bardeen-Cooper-Schrieffer (BCS) theory [2], the superconducting critical temperature (\(T_{c}\)) is proportional to the Debye temperature, which decreases with increasing atomic mass. Metallic hydrogen, composed of the lightest element, has a high Debye temperature and strong electron-phonon (_e-ph_) coupling, which can lead to high-temperature superconductivity. However, the extremely high pressure required to synthesize metallic hydrogen makes it technically difficult to achieve. Therefore, researchers have focused on metal hydrides instead. The metallization of hydrides can be achieved at lower pressures due to the "chemical pre-compression" effect of heavier elements [3].
The search for high-temperature superconductors in hydrogen-rich compounds began after the theory of "chemical precompression" was proposed. Initially, researchers focused on natural binary hydrides, such as SiH\({}_{4}\), AlH\({}_{3}\), etc. [4, 5]. These studies were followed by the investigation of binary hydrides with new proportions, such as H\({}_{3}\)S (maximum \(T_{c}\) is 203 K [6]), CaH\({}_{6}\) (\(T_{c}\) is 210 K at 170 GPa [7]), YH\({}_{6}\) (\(T_{c}\) is 220 K at 183 GPa [8]) and LaH\({}_{10}\) (\(T_{c}\) is 250-260 K at 170-200 GPa [9, 10]), and so on. After exploring almost all binary hydrides, research shifted to ternary hydrides. Ternary hydrides greatly expand the variety of phases by providing more element ratios and leading to the discovery of higher superconducting transition temperatures. For example, CaYH\({}_{12}\) (\(T_{c}\) = 258 K [11] at 200 GPa), Li\({}_{2}\)MgH\({}_{16}\) (\(T_{c}\) = 473 K at 250 GPa [12]), LaBH\({}_{8}\) (\(T_{c}\) = 126-156 K at 50-55 GPa [13, 14]).
The most prominent high-pressure high-\(T_{c}\) compounds are known as "superhydrides". Superhydrides feature cage-like hydrogen sublattices that enclose positively charged metal atoms. The most prominent metal atoms are rare-earth elements including lanthanum, yttrium, and cerium. However, lutetium hydrides have received comparatively little attention [15, 16]. Lutetium and lanthanum have similar electronegativities and can both dissociate hydrogen molecules into atoms, so lutetium superhydrides, with their filled \(f\) shell, are expected to support a high \(T_{c}\). In a recent study, superconductivity near room temperature (\(T_{c}\) = 294 K) was reported in nitrogen-doped lutetium hydride at a mild pressure of 10 kbar [17]. Unfortunately, despite the use of various methods, such as X-ray diffraction (XRD), elemental analysis, and Raman spectroscopy, its composition and structure have not been clarified. Furthermore, recent experimental and theoretical endeavors have reported the absence of near-ambient superconductivity in nitrogen-doped lutetium hydrides [18, 19, 20, 21, 22], contrary to the original work by Dasenbrock-Gammon et al. [17]. The existence of superconductivity in nitrogen-doped lutetium hydrides thus remains a topic of debate.
In this letter, we predict a new ternary room-temperature superconductor, LuBeH\({}_{8}\) (space group: \(Fm\overline{3}m\)), by searching the stable structures of the lutetium-beryllium-hydrogen system. We studied its phonon dispersion, electronic band structure, electron-phonon coupling, and superconducting critical temperatures. Its high symmetry favors its superconductivity. Through our calculations, we found that LuBeH\({}_{8}\) remains stable at 100 GPa, with a superconducting critical temperature as high as 355 K, well above room temperature.
## 2 Methods
We used the in-house developed machine-learning-based crystal structure prediction package CRYSTREE [23, 24] to search for stable crystal structures of the Lu-Be-H (element ratio 1:1:8) system at 100, 150, and 200 GPa. The results were verified with the graph-theory-assisted universal structure searcher MAGUS [25]. We then re-optimized the structures using _ab initio_ calculations with the Quantum Espresso (QE) package [26], and calculated the phonon spectra at different pressures using density functional perturbation theory (DFPT) [27]. The charge density and wave function cutoff values are 600 Ry and 60 Ry, respectively. Electronic structure calculations were performed using the full-potential linearized augmented plane-wave (FP-LAPW) method [28] with the Perdew-Burke-Ernzerhof (PBE) functional. VESTA [29] was used to visualize the crystal structure. Fermi surfaces were visualized using Fermisurfer [30]. A 4\(\times\)4\(\times\)4 \(q\)-point grid and a 12\(\times\)12\(\times\)12 \(k\)-point grid were selected to calculate the electron-phonon coupling and to integrate over the Brillouin zone. Dense 24\(\times\)24\(\times\)24 grids were used to evaluate precise electron-phonon interaction matrices. Finally, \(T_{c}\) was calculated using the Allen-Dynes modified McMillan equation [31].
## 3 Results and discussion
The crystal structure of LuBeH\({}_{8}\) is shown in Figure 1. The yellow, blue and white balls represent lutetium, beryllium and hydrogen atoms, respectively, with Lu, Be, and H occupying the \(4a\), \(4b\), and \(32f\) Wyckoff positions. The H atoms form a polyhedron surrounding the Lu atom, and Be atoms are inserted between the polyhedra. \(Fm\overline{3}m\)-LuBeH\({}_{8}\) is structurally similar to clathrate hydrides such as LaH\({}_{10}\), where guest atoms such as La act as scaffolds and can apply mechanical pressure to the lattice[31, 16], a mechanism commonly referred to as chemical precompression. In LuBeH\({}_{8}\), the Be atoms occupy the sites between the next-closest Lu atoms, effectively filling the remaining interstitial space in the structure. A denser Lu-Be scaffold is thus formed, which firmly binds the highly symmetric hydrogen lattice and allows it to remain stable at lower pressures.
The phonon properties of \(Fm\overline{3}m\)-LuBeH\({}_{8}\) were calculated using the DFPT scheme. The calculations determined that the lower pressure limit of LuBeH\({}_{8}\) is 100 GPa, above which no imaginary branches of the phonon spectrum exist. We show the phonon dispersion curve and phonon density of states (PHDOS) of LuBeH\({}_{8}\) at 100 GPa in Figure 2. It can be seen from the figure that there are no imaginary modes in the entire Brillouin zone, indicating that the structure is dynamically stable at this pressure, and that the phonon vibrations are mainly distributed at low and intermediate frequencies. From the PHDOS it can be seen that the vibrations in the low-frequency region come mainly from the Lu atoms, with a significant phonon peak located at 100 cm\({}^{-1}\), while the Be and H atoms barely vibrate in this range. The vibrations in the intermediate- and high-frequency range (100 cm\({}^{-1}\) and above) come mainly from the H and Be atoms. We also show the Eliashberg function \(\alpha^{2}F(\omega)\) and the integrated electron-phonon coupling \(\lambda(\omega)\) in the panel on the far right. By integrating the Eliashberg function \(\alpha^{2}F(\omega)\), we can obtain \(\lambda=2\int\alpha^{2}F(\omega)\omega^{-1}d\omega\) and the logarithmic mean phonon frequency \(\omega_{ln}=\exp[2\lambda^{-1}\int d\omega\,\alpha^{2}F(\omega)\omega^{-1}\log\omega]\). From the coupling curve, it is not difficult to see that the coupling integral below 1200 cm\({}^{-1}\) accounts for most of the total coupling contribution.
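Both quantities follow from simple frequency integrals over \(\alpha^{2}F(\omega)\). A minimal numerical sketch is given below; the data file name and column format are illustrative placeholders rather than output from our calculations:

```python
import numpy as np

# Tabulated Eliashberg function alpha^2 F(omega); in practice this would be read
# from the DFPT electron-phonon output. File name and format are placeholders.
omega, a2F = np.loadtxt("a2F_100GPa.dat", unpack=True)   # omega in cm^-1

mask = omega > 0                     # avoid the 1/omega singularity at omega = 0
omega, a2F = omega[mask], a2F[mask]

lam = 2.0 * np.trapz(a2F / omega, omega)                                      # lambda
omega_log = np.exp(2.0 / lam * np.trapz(a2F * np.log(omega) / omega, omega))  # cm^-1

print(f"lambda = {lam:.2f}, omega_log = {omega_log:.1f} cm^-1")
```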
In Figure 3, we show the electronic band structure of \(Fm\overline{3}m\)-LuBeH\({}_{8}\) at 100 GPa and the atom-projected density of states (DOS) in eV\({}^{-1}\)/f.u. The structure exhibits metallic behavior, as evidenced by more than one band crossing the Fermi level. Near the Fermi level, the DOS is dominated by Lu and H atoms. From the electronic bands, it can be seen that there is a Dirac-cone-like band crossing at the \(W\) point at \(\sim\) -3.2 eV. The DOS curve has a very high peak around -6 eV, where the Lu atoms provide a very large density of states due to the \(f\)-orbital contribution.
Figure 4 shows the Fermi surfaces of \(Fm\overline{3}m\)-LuBeH\({}_{8}\), colored by the distribution of the Fermi velocity, changing from blue to red with increasing velocity. At 100 GPa, the Fermi surface of \(Fm\overline{3}m\)-LuBeH\({}_{8}\) consists mainly of three parts
Figure 1: The crystal structure of \(Fm\overline{3}m\)-LuBeH\({}_{8}\). The yellow, blue, and white balls represent the lutetium, beryllium, and hydrogen atoms, respectively.
(Figure 4(b-d)): six semi-oval hollow pockets regularly distributed in the Brillouin zone, each wrapped by a four-leaf-clover-shaped sheet with four sharp corners; a large electron sphere around the \(\Gamma\) point; and eight small pockets regularly arranged around this electron sphere.
In order to estimate the superconducting critical temperature of LuBeH\({}_{8}\) under pressure and to find the maximum \(T_{c}\), we performed a linear response calculation for the _e-ph_ coupling (EPC), based on the calculated Eliashberg spectral function \(\alpha^{2}F(\omega)\),
\[\alpha^{2}\mathrm{F}(\omega)=\frac{1}{2\pi N(0)}\sum_{Q\nu}\frac{\gamma_{Q\nu}}{\omega_{Q\nu}}\delta(\omega-\omega_{Q\nu}), \tag{1}\]
the EPC constant \(\lambda\) is obtained by:
\[\lambda{=}2\int_{0}^{\infty}\frac{\alpha^{2}\mathrm{F}(\omega)}{\omega}d\omega. \tag{2}\]
In addition, the critical temperature is calculated by the Allen-Dynes modified McMillan formula [32]
\[T_{c}=f_{1}f_{2}\frac{\omega_{log}}{1.2}\mathrm{exp}\left[-\frac{1.04(1+\lambda)}{\lambda-\mu^{*}(1+0.62\lambda)}\right]. \tag{3}\]
Figure 2: \(Fm\overline{3}m\)-LuBeH\({}_{8}\) phonon dispersion, phonon density of states (PHDOS), Eliashberg spectral function \(\alpha^{2}F(\omega)\) and _el-ph_ coupling \(\lambda(\omega)\) at 100 GPa. The PHDOS projections on Lu, Be, and H are color-coded in red, green, and blue, respectively, to aid in visual interpretation.
Figure 4: The Fermi surfaces of \(Fm\overline{3}m\)-LuBeH\({}_{8}\) at 100 GPa, highlighting the Fermi velocity with a color gradient from blue to red to indicate relative velocity. The overall Fermi surface is shown in (a), and the three distinct parts of the surface are shown in (b)-(d). Each image is color-coded to indicate changes in Fermi velocity.
Figure 3: \(Fm\overline{3}m\)-LuBeH\({}_{8}\) electronic band structure and partial density of states (DOS) at 100 GPa. The unit of DOS is eV\({}^{-1}\)/f.u., and is color-coded by element, with Lu, Be, and H represented in red, green, and blue, respectively. The Fermi level serves as the zero point of the energy scale.
where \(\lambda\) is the EPC intensity, \(\omega_{log}\) is the logarithmic mean phonon frequency, and the Coulomb pseudopotential parameter \(\mu^{*}\) is set to 0.1. \(\omega_{log}\) is defined as
\[\omega_{\rm log}=\exp[\frac{2}{\lambda}\int_{0}^{\infty}\frac{d\omega}{\omega} \alpha^{2}\rm{F}(\omega)\ln\omega]. \tag{4}\]
The factors \(f_{1}\) and \(f_{2}\) depend on \(\lambda\), \(\mu^{*}\), \(\omega_{log}\), and the mean square frequency \(\overline{\omega_{2}}\),
\[f_{1}f_{2}=\sqrt[3]{1+(\frac{\lambda}{2.46(1+3.8\mu^{*})})^{\frac{3}{2}}}(1-\frac{\lambda^{2}(1-\overline{\omega_{2}}/\omega_{\rm{log}})}{\lambda^{2}+3.312(1+6.3\mu^{*})^{2}}). \tag{5}\]
The detailed calculation results are shown in Table 1.
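For illustration, Eqs. (3)-(5) can be evaluated directly from the entries of Table 1. The sketch below is not the original calculation; in particular, the mean square frequency \(\overline{\omega_{2}}\) is not listed in Table 1, so its ratio to \(\omega_{log}\) is an assumed input:

```python
import numpy as np

def t_c_allen_dynes(lam, omega_log, w2_over_wlog, mu_star=0.10):
    """Allen-Dynes modified McMillan T_c, Eqs. (3)-(5); omega_log in kelvin."""
    f1 = (1.0 + (lam / (2.46 * (1.0 + 3.8 * mu_star))) ** 1.5) ** (1.0 / 3.0)
    f2 = 1.0 - lam**2 * (1.0 - w2_over_wlog) / (lam**2 + 3.312 * (1.0 + 6.3 * mu_star) ** 2)
    mcmillan = np.exp(-1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam)))
    return f1 * f2 * omega_log / 1.2 * mcmillan

# Table 1, 100 GPa entry: lambda = 7.0, omega_log = 785 K.
# w2_over_wlog = 1.3 is an assumed value, not taken from the paper.
print(t_c_allen_dynes(7.0, 785.0, 1.3))   # ~359 K with this assumed ratio, near the 355 K in Table 1
```

At this coupling strength the strong-coupling factors \(f_{1}f_{2}\) matter: with \(\lambda=7.0\) the bare McMillan exponential alone would give a considerably lower \(T_{c}\).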
The calculation results show that \(Fm\overline{3}m\)-LuBeH\({}_{8}\) exhibits metallic behavior while maintaining dynamical stability at 100 GPa, and its superconducting critical temperature is as high as 355 K, well above room temperature. As the pressure increases, the superconducting critical temperature tends to decrease, which may be attributed to the hardening of the phonon branches, which weakens the electron-phonon coupling and suppresses superconductivity.
## 4 Conclusion
In conclusion, we discovered a novel superhydride superconducting phase, \(Fm\overline{3}m\)-LuBeH\({}_{8}\), through first-principles calculations and crystal structure prediction. LuBeH\({}_{8}\) remains dynamically stable above 100 GPa while reaching a maximum superconducting critical temperature of 355 K. The excellent superconducting properties result from the high structural symmetry and the efficient packing of beryllium in the lattice, which allows stable mechanical pressure to be applied. \(T_{c}\) shows a decreasing tendency with increasing pressure: increasing pressure hardens the phonon branches and suppresses the electron-phonon coupling of the structure as well as its superconductivity. Our study confirms the validity and accuracy of machine-learning-based crystal structure searches and offers a promising approach for discovering other hydride superconductors. Our findings represent a step towards achieving room-temperature superconductivity.
| Structure | Pressure (GPa) | \(T_{c}\) (K) | \(\lambda\) | \(\omega_{log}\) (K) |
|---|---|---|---|---|
| \(Fm\overline{3}m\)-LuBeH\({}_{8}\) | 100 | 355 | 7.0 | 785 |
| | 120 | 293 | 4.2 | 900 |
| | 150 | 269 | 3.0 | 1100 |
| | 180 | 274.8 | 2.5 | 1160 |
| | 200 | 255 | 2.4 | 1265 |

Table 1: The main superconductivity performance of \(Fm\overline{3}m\)-LuBeH\({}_{8}\) at different pressures from 100 to 200 GPa.
This work is supported by the National Key R&D Program of China (Grant No. 2018YFA0704300), the National Natural Science Foundation of China (Grants No. U1932217), and NUPTSF (Grant No. NY219087, NY220038). Some of the calculations were performed on the supercomputer in the Big Data Computing Center (BDCC) of Southeast University. We thank Prof. Haihu Wen for valuable discussions.
|
2306.11059 | Geodesic complexity of a tetrahedron | We prove that the geodesic complexity of a regular tetrahedron exceeds its
topological complexity by 1 or 2. The proof involves a careful analysis of
minimal geodesics on the tetrahedron. | Donald M. Davis | 2023-06-19T16:48:19Z | http://arxiv.org/abs/2306.11059v1 | # Geodesic complexity of a tetrahedron
###### Abstract.
We prove that the geodesic complexity of a regular tetrahedron exceeds its topological complexity by \(1\) or \(2\). The proof involves a careful analysis of minimal geodesics on the tetrahedron.
Key words and phrases: Geodesic complexity, topological robotics, geodesics, tetrahedron.
2000 _Mathematics Subject Classification_: 53C22, 52B10, 55M30.
## 1. Introduction
In [3], Farber introduced the concept of the _topological complexity_, \(\operatorname{TC}(X)\), of a topological space \(X\), which is the minimal number \(k\) such that there is a partition
\[X\times X=E_{0}\sqcup E_{1}\sqcup\cdots\sqcup E_{k}\]
with each \(E_{i}\) being locally compact and admitting a continuous function \(\phi_{i}:E_{i}\to P(X)\) such that \(\phi_{i}(x_{0},x_{1})\) is a path from \(x_{0}\) to \(x_{1}\).1 Here \(P(X)\) is the space of paths in \(X\), and each \(\phi_{i}\) is called a motion-planning rule. If \(X\) is the space of configurations of one or more robots, this models the number of rules required to program the robots to move between any two configurations.
Footnote 1: Farber’s original definition involved partitions into \(k\) sets rather than \(k+1\), but for technical reasons the definition here has become more common.
In [4], Recio-Mitter suggested that if \(X\) is a metric space, then we require that the paths \(\phi_{i}(x_{0},x_{1})\) be minimal geodesics from \(x_{0}\) to \(x_{1}\), and defined the _geodesic complexity_, \(\operatorname{GC}(X)\), to be the smallest number \(k\) such that there is a partition
\[X\times X=E_{0}\sqcup E_{1}\sqcup\cdots\sqcup E_{k}\]
with each \(E_{i}\) being locally compact and admitting a continuous function \(\phi_{i}:E_{i}\to P(X)\) such that \(\phi_{i}(x_{0},x_{1})\) is a minimal geodesic from \(x_{0}\) to \(x_{1}\). Each function \(\phi_{i}\) is called a _geodesic motion-planning rule_ (GMPR).
One example discussed in [4] was when \(X\) is (the surface of) a cube. It is well-known that here \(\operatorname{TC}(X)=\operatorname{TC}(S^{2})=2\), and he showed that \(\operatorname{GC}(X)\geq 3\).
In this paper, we let \(X\) be a regular tetrahedron \(T\), and prove
**Theorem 1.1**.: \(\operatorname{GC}(T)=3\) _or \(4\)._
Again, for comparison, \(\operatorname{TC}(T)=\operatorname{TC}(S^{2})=2\).
In Section 2, we introduce what we call the _expanded cut locus_ in order to study the geodesics on \(T\). In Section 3, we prove \(\operatorname{GC}(T)\leq 4\), and in Section 4, we prove \(\operatorname{GC}(T)\geq 3\). Despite considerable effort, we have been unable to establish the precise value of \(\operatorname{GC}(T)\).
## 2. Expanded cut locus
The _cut locus_ of a point \(P\) on a convex polyhedron is the set of points \(Q\) such that there is more than one shortest path from \(P\) to \(Q\). For the regular tetrahedron \(T\), this is conveniently sketched on a flat model of \(T\). For \(P\in T\), we define the _expanded cut locus_ of \(P\) to be the set of terminal points of equal shortest paths from \(P\) to versions of cut-locus points \(Q\) in a flat model of \(T\), expanded so that the same face may appear more than once.
In Figure 2.1 we illustrate the expanded cut locus of a point \(P\). The open segments \(a\,U_{0}\) and \(a\,U_{-}\) depict the same set of points in the tetrahedron, and the segments from \(P\) to points on each at equal distance from \(a\) depict equal shortest segments from \(P\) to a point \(Q\) in \(T\). A similar situation holds for segments from \(d\) to two \(U\)-points, from \(c\) to two \(L\)-points, and from \(b\) to two \(L\)-points. Also the small open segments \(U_{-}\,L_{-}\) and \(U_{+}\,L_{+}\) are part of the expanded cut locus of \(P\), as they represent the same points in \(T\), and segments from \(P\) to points at equal height on the two lines are equal minimal geodesics. The three \(U\)-points represent the same point in \(T\); the paths from \(P\) to them are equal shortest paths in \(T\). Similarly for the three \(L\)-points. Thus the expanded cut locus of \(P\) is the entire red polygon in Figure 2.1 minus the points \(a\), \(b\), \(c\), and \(d\).
The actual cut locus for this point \(P\) is shown in Figure 2.2, which is a flat version of part of \(T\), but does not contain multiple versions of points.
**Figure 2.1. An expanded cut locus.**
The expanded cut locus of any point \(P\) in the interior of triangle \(aCM\) in Figure 2.1, where \(C\) is the centroid and \(M\) the midpoint of \(ac\), has a form similar to the one depicted there. We make this precise in Theorem 2.3.
**Theorem 2.3**.: _Suppose that in Figure 2.1 the coordinates of \(a\), \(b\), and \(c\) are, respectively, \((0,\sqrt{3})\), \((-1,0)\), and \((1,0)\), and \(P=(x,\alpha\sqrt{3})\) with \(0<x<\frac{1}{2}\) and \(\frac{1}{3}+\frac{1}{3}x<\alpha<1-x\). Then the expanded cut locus of \(P\) is as depicted in Figure
2.1 and described above with_
\[\begin{aligned} U_{\pm} &= (\pm 2+x,\ \sqrt{3}(1-\tfrac{x(2-x)}{3(1-\alpha)}))\\ U_{0} &= (2-x,\ \sqrt{3}(1+\tfrac{x(2-x)}{3(1-\alpha)}))\\ L_{\pm} &= (\pm 2+x,\ \sqrt{3}\,\tfrac{1-x^{2}}{3\alpha})\\ L_{0} &= (-x,\ \sqrt{3}\,\tfrac{x^{2}-1}{3\alpha}) \end{aligned} \tag{2.4}\]
Proof.: Since
\[\langle x,\sqrt{3}(\alpha-1)\rangle\cdot\langle 2-x,\sqrt{3}\ \frac{x(2-x)}{3(1- \alpha)}\rangle=0,\]
\(\overrightarrow{aP}\perp\overrightarrow{aU_{0}}\). Similarly the red lines through \(b\), \(c\), and \(d\) are perpendicular to the segments from \(P\) to those points. Another easy verification is that \(\frac{1}{2}(U_{0}+U_{-})=a\), and similarly for \(b\), \(c\), and \(d\). So \(Pa\) is the perpendicular bisector of \(U_{0}U_{-}\). That \(\frac{1}{2}(U_{0}+U_{+})=d\) shows that \(U_{0}\) and \(U_{+}\) lie in the same relative position in triangle \(bcd\). One readily sees that the region inside the red polygon in Figure 2.1 exactly covers the four triangles that comprise the tetrahedron.
This slick verification hides the way in which the formulas (2.4) were obtained. We initially used the method of star unfolding and Voronoi diagrams developed in [1], and applied to the cube in [2], using perpendicular bisectors.
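The identities used in this verification are also easy to confirm symbolically. Below is a minimal sympy check (not part of the original argument), with \(d=(2,\sqrt{3})\) taken to be the reflection of \(b\) across the edge \(ac\):

```python
import sympy as sp

x, al = sp.symbols("x alpha", positive=True)
s3 = sp.sqrt(3)

# Vertices of the flat model and the point P of Theorem 2.3.
a = sp.Matrix([0, s3]); b = sp.Matrix([-1, 0]); c = sp.Matrix([1, 0])
d = sp.Matrix([2, s3])            # reflection of b across the line ac (inferred placement)
P = sp.Matrix([x, al * s3])

q = x * (2 - x) / (3 * (1 - al))  # shorthand appearing in U_0 and U_{\pm}
U_plus  = sp.Matrix([ 2 + x, s3 * (1 - q)])
U_minus = sp.Matrix([-2 + x, s3 * (1 - q)])
U_0     = sp.Matrix([ 2 - x, s3 * (1 + q)])
L_plus  = sp.Matrix([ 2 + x, s3 * (1 - x**2) / (3 * al)])
L_minus = sp.Matrix([-2 + x, s3 * (1 - x**2) / (3 * al)])
L_0     = sp.Matrix([-x,     s3 * (x**2 - 1) / (3 * al)])

# aP is perpendicular to aU_0, and a, d, b, c are midpoints of the indicated segments.
assert sp.simplify((P - a).dot(U_0 - a)) == 0
assert sp.simplify((U_0 + U_minus) / 2 - a) == sp.zeros(2, 1)
assert sp.simplify((U_0 + U_plus) / 2 - d) == sp.zeros(2, 1)
assert sp.simplify((L_minus + L_0) / 2 - b) == sp.zeros(2, 1)
assert sp.simplify((L_plus + L_0) / 2 - c) == sp.zeros(2, 1)
print("All identities from the proof of Theorem 2.3 check out.")
```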
The triangle \(abc\) in Figure 2.1 is divided into six congruent subtriangles. The formulas (2.4) only apply to points \(P\) in the interior of the upper right subtriangle \(aCM\), but the expanded cut locus of points in the other five subtriangles can be obtained by obvious rotations and reflections. We now consider the form of the expanded cut locus for points on the boundary of triangle \(aCM\).
As \(P\) approaches the edge \(aM\), \(L_{\pm}\) approaches \(U_{\pm}\). When \(P\) is on the edge, they coincide, and the two multiplicity-3 points in the cut locus become a single multiplicity-4 point, which we will later call \(B\), for "both." In Figure 2.5, we depict the two extreme cases, \(P=a\) and \(P=M\). The continuum between them should be clear. We label the left one \(P\approx a\), because when \(P=a\), the line passing through \(a\) is not part of
the expanded cut locus, since the lines connecting \(P\) with points on the lines at equal distance from \(a\) in each direction are actually the same line in \(T\). But for points \(P\) arbitrarily close to \(a\), the lines from \(P\) to points on the line are not the same line in \(T\).
**Figure 2.5. \(P\) on an edge.**
Even accounting for the fact that when \(P=a\), the line emanating from \(a\) is not part of the expanded cut locus, the diagrams still differ in that one has a vertical line on the left side, whereas the other has a vertical line in the upper right. The explanation is that paths from \(a\) to corresponding points on those lines are exactly the same path on \(T\).
In Figure 2.7 we show the expanded cut locus when \(P\) is at the centroid \(C\) of \(abc\), which is the case \(L=d\) in Figure 2.6.
**Figure 2.7. \(P\) at the centroid.**
Finally, if \(P\) is on the segment \(CM\), \(U_{\pm}=L_{\pm}(=B)\), and they lie on edge \(bd\). This is depicted in Figure 2.8. As \(P\) moves from \(C\) to \(M\), \(B\) moves from \(d\) to the midpoint of \(bd\).
**Figure 2.8. \(P\) on the segment \(CM\).**
## 3. Upper bound
**Theorem 3.1**.: _There is a decomposition_
\[T\times T=E_{1}\sqcup E_{2}\sqcup E_{3}\sqcup E_{4}\sqcup E_{5}\]
_with a GMPR \(\phi_{i}\) on \(E_{i}\)._
Proof.: Let \(G_{P}\) denote the polygon associated to the point \(P\) sketched in red in any of the figures of Section 2. More precisely, one must, of course, use the formulas (2.4) to determine the vertices of the polygon, and if \(P\) is reflected across the line \(x=0\) in Figure 2.1, then one must modify the formulas to give the reflection of the polygon. If \(P\) is at a vertex, there are two choices for \(G_{P}\), either as in Figure 2.5 or 2.6. It doesn't matter, but let's choose 2.6.
The set \(E_{1}\) is the complement of the total cut locus of \(T\). It consists of pairs \((P,Q)\) such that \(Q\) is interior to the polygon \(G_{P}\), together with those for which \(Q\) is a vertex of \(T\), except for cases such as \((P,d)\) in Figure 2.6. (The only cases when a vertex \(V\) is in the cut locus of a point \(P\) is when \(P\) lies on a segment connecting the centroid of the face opposite \(V\) with one of the other vertices, including the centroid, but not the vertices.) Here \(\phi_{1}(P,Q)\) is the straight line from \(P\) to \(Q\) in our expanded cut locus diagram.
The set \(E_{2}\) consists of pairs \((P,Q)\) where \(P\) is not a vertex and \(Q\) lies in the interior of a cut-locus segment from a vertex \(V\) to a \(U\) or \(L\) point, excluding cases in which \(P\) lies on a segment from a vertex of face \(abc\) to the centroid \(C\) of \(abc\), and \(V=d\). We choose \(\phi_{2}(P,Q)\) to be the path from \(P\) to the appropriate point on the right side of the vector from \(P\) to \(V\). For example, in Figures 2.1 and 2.2, \(E_{2}\) contains \((P,Q)\) for all \(Q\) in the open segments \(aU\), \(bL\), \(cL\), and \(dU\) in 2.2, and in 2.1 we choose the segments connecting \(P\) with points on \(aU_{0}\), \(bL_{-}\), \(cL_{0}\), and \(dU_{+}\). To maintain continuity of \(\phi_{2}\), we had to exclude points \((P,Q)\) with \(P\) on the segment \(aC\) and \(Q\) on \(dU\) because shortest paths from the point \(P\) in Figure 2.1 to \(dU\) must pass through side \(ac\), whereas for points \(P\) on the left side of \(aC\) the diagram is reflected and the shortest paths from \(P\) to \(dU\) will pass through side \(ab\).
This requires some care because, for example, if \(P\) is in face \(abc\), the cut-locus line out from vertex \(d\) plays a different role than the others. Because we have excluded points with \(P\) on segments from a vertex to a centroid, we can consider the domain
of points \(P\) for which \(Q\) is on a cut-locus line from vertex \(d\) as three topologically disjoint sets \(aCbd\), \(adcC\), and \(bCcd\), as pictured in Figure 3.2.
**Figure 3.2. \(P\)-domains for lines through \(d\).**
The continuity of \(\phi_{2}\) on each of these domains should be fairly clear, but because of the different roles played by points in face \(abc\) and the other points, Figure 3.3 should make it clearer. What is pictured here is a breakdown of the region \(aCcd\) in Figure 3.2 into subregions together with, for each subregion, the endpoints of the cut-locus segments out of vertex \(d\) corresponding to points \(P\) in the subregion. For example, output region 2 is points \(U_{+}\) in Figure 2.1 corresponding to points in input region 2, and output region 6 is points \(U_{0}\) in a rotated version of Figure 2.1 corresponding to points in input region 6. The entire segment between input regions 5 and 6 maps to output point \(b\). The dashed boundary of output regions 5 and 6 are not in the image. We call the points \(Q_{\max}\) in Figure 3.3 because they are the \(Q\) farthest from \(d\) for a point \(P\).
**Figure 3.3. Largest \(Q\) for varying \(P\).**
The set \(E_{3}\) consists of points \((P,Q)\) of two types. Type (1) has \(P\) in sets \(\mathcal{I}\) defined as the interior of the set of points in a face which are closer to one vertex than to the others. For example, in Figure 2.1, one such region would be the interior of the quadrilateral in the upper third of triangle \(abc\). The points \(Q\) associated to \(P\) are the closed interval \(UL\). Type (2) has \(P\) all points on segments connecting a vertex \(V\) of a face \(abc\) with its centroid \(C\), including \(V\) but not \(C\), and \(Q\) in the closed segment connecting the other vertex \(d\) with the point \(L\) associated with \(P\) as in Figure 2.6. Note that this can be considered as a \(UL\) segment, too.
For \(P\in\mathcal{I}\) and \(Q\) in the closed interval \(UL\), we can choose \(\phi_{3}(P,Q)\) to be the appropriate point in \(U_{+}L_{+}\), using rotations of Figure 2.1. Then in Figure 2.6, we would choose as \(\phi_{3}(P,Q)\) the path that goes to the right from a point \(P\) on \(aC\) to the appropriate \(Q\) on \(dL\).
The rest is easy. Let \(E_{4}\) consist of pairs \((P,Q)\) such that \(P\) is a vertex and \(Q\) the centroid of the opposite face, or \(P\) is a centroid and \(Q\) the opposite vertex. Since this is a discrete set, \(\phi_{4}\) can be chosen arbitrarily.
Let \(E_{5}\) be the set of \((P,Q)\) such that \(P\) lies in one of six topologically disjoint sets, each of which is the union of lines from the midpoint \(M\) of an edge of \(T\) to the adjacent vertices and centroids, including \(M\) but not the vertices or centroids. See Figure 3.4. A unique point \(Q=B\) is associated to each point \(P\). Recall that when \(U=L\), we call it \(B\). These are points of multiplicity 4, as in Figures 2.5 and 2.8. As long as one chooses \(\phi_{5}\) continuously on a set such as Figure 3.4, it can be chosen arbitrarily.
**Figure 3.4. Typical set for \(E_{5}\).**
## 4. Lower bound
**Theorem 4.1**.: _The space \(T\times T\) cannot be partitioned as \(E_{1}\sqcup E_{2}\sqcup E_{3}\) with a GMPR on each \(E_{i}\)._
Proof.: Let \(M\) be the midpoint of \(ac\) in Figure 2.1, and \(P^{\prime}\) a point on the segment connecting \(M\) and \(P\) in that figure. The expanded cut locus for \(P^{\prime}\) is as in the figure, and as \(P^{\prime}\) approaches \(M\), \(L_{\pm}\) approaches \(U_{\pm}\), and they and \(U_{0}\) and \(L_{0}\) approach the midpoint of \(bd\). We call this point \(B\).
Suppose \((M,B)\in E_{1}\), and \(\phi_{1}(M,B)\) is the path which goes down (toward the limit of \(L_{0}\) in Figure 2.1). (Going up is handled similarly, reversing the roles of \(U\) and \(L\). We will consider later how to handle it when \(\phi_{1}(M,B)\) goes left or right.) We cannot have a sequence of \(P^{\prime}\) as in the figure with \(P^{\prime}\to M\) and \((P^{\prime},U_{P^{\prime}})\in E_{1}\) because that would imply \(\phi_{1}(P^{\prime},U_{P^{\prime}})\to\phi_{1}(M,B)\), which is impossible since \(\phi(P^{\prime},U_{P^{\prime}})\) must go either left, right, or up. There is a sequence of such \(P^{\prime}_{n}\) all in the same \(E_{i}\), which we call \(E_{2}\), and, restricting more, all \(\phi_{2}(P^{\prime}_{n},U_{P^{\prime}_{n}})\) going in the same direction, which we will suppose is left. We will consider later the minor modifications required if \(\phi_{2}(P^{\prime}_{n},U_{P^{\prime}_{n}})\) goes right or up.
For each such \(P^{\prime}_{n}\), there is an interval of \(Q\)'s in the cut locus of \(P^{\prime}_{n}\) abutting \(U_{P^{\prime}_{n}}\) along the segment from \(d\) to \(U_{P^{\prime}_{n}}\). (It is close to \(U_{0}\) and \(U_{+}\) in Figure 2.1.) There cannot be infinitely many of these with \((P^{\prime}_{n},Q)\in E_{2}\) since \(\phi(P^{\prime}_{n},Q)\) must go right or up, but \(\phi_{2}(P^{\prime}_{n},U_{P^{\prime}_{n}})\) goes left. If there were, for infinitely many \(n\), a sequence \(Q_{n,m}\) approaching \(U_{P^{\prime}_{n}}\) with \((P_{n},Q_{n,m})\in E_{1}\), then the sequence \((P_{n},Q_{n,n})\) would approach \((M,B)\), but \(\phi_{1}(P_{n},Q_{n,n})\) cannot approach \(\phi_{1}(M,B)\), since the possible directions differ. Thus there exist sequences \(Q_{n,m}\to U_{P^{\prime}_{n}}\) with \((P^{\prime}_{n},Q_{n,m})\) in a new set \(E_{3}\), and we may assume that \(\phi_{3}(P^{\prime}_{n},Q_{n,m})\) all have the same direction, which we may assume to be "up," i.e., toward the vicinity of \(U_{0}\).
For each \((n,m)\), there exists a sequence \(Q_{n,m,\ell}\to Q_{n,m}\) such that the unique minimal geodesic from \(P^{\prime}_{n}\) to \(Q_{n,m,\ell}\) goes to the right, i.e., in the vicinity of \(U_{+}\). For each \((n,m)\), there cannot be infinitely many \(\ell\) with \((P^{\prime}_{n},Q_{n,m,\ell})\in E_{3}\), since \(\phi_{3}(P^{\prime}_{n},Q_{n,m})\) and \(\phi(P^{\prime}_{n},Q_{n,m,\ell})\) have different directions. We restrict now to, for each \((n,m)\), an infinite sequence of \(\ell\) such that \((P^{\prime}_{n},Q_{n,m,\ell})\not\in E_{3}\). Taking a diagonal limit on \(m\) and \(\ell\), \((P^{\prime}_{n},Q_{n,m,\ell})\to(P^{\prime}_{n},U_{P^{\prime}_{n}})\); since \(\phi_{2}(P^{\prime}_{n},U_{P^{\prime}_{n}})\) and \(\phi(P^{\prime}_{n},Q_{n,m,\ell})\) have opposite
directions, \((P^{\prime}_{n},Q_{n,m,\ell})\not\in E_{2}\) for an infinite sequence of \(m\)'s and all \(\ell\geq L_{m}\) for an increasing sequence of integers \(L_{m}\). Now taking a diagonal limit over \(n\), \(m\), and \(\ell\), we approach \((M,B)\). Since the directions of \(\phi_{1}(M,B)\) and \(\phi(P^{\prime}_{n},Q_{n,m,\ell})\) differ, there must be an infinite sequence of \((P^{\prime}_{n},Q_{n,m,\ell})\) not in \(E_{1}\). So it requires a fourth set \(E_{4}\).
Now we discuss the minor changes for other cases to which we alluded above. If \(\phi_{2}(P^{\prime}_{n},U_{P^{\prime}_{n}})\) went right, instead of left, then the \(Q\)'s will be chosen on the segment from vertex \(a\) to \(U_{P^{\prime}_{n}}\), close to \(U_{0}\) and \(U_{-}\), and the rest of the argument proceeds similarly.
If instead of going down or up, \(\phi_{1}(M,B)\) goes left, then we consider \(P^{\prime}\) on a little segment going sharply down and left from \(M\). The expanded cut locus will be similar to that in Figure 2.1, but with \(U_{+}L_{+}\) and \(U_{0}\) interchanged (and moved slightly to the other side of line \(bdb\)), and similarly for \(U_{-}L_{-}\) and \(L_{0}\). These \(P^{\prime}\) have \(\phi(P^{\prime},U_{P^{\prime}})\) going up, down, or right, and an argument like the one above works. \(\blacksquare\)
|
2306.09616 | Probing the Anomalous Hall Transport and Magnetic Reversal of
Chiral-Lattice Antiferromagnet Co$_{1/3}$NbS$_2$ | Antiferromagnets exhibiting giant anomalous Hall effect (AHE) and anomalous
Nernst effect (ANE) have recently aroused broad interest, not only for their
potential applications in future electronic devices, but also because of the
rich physics arising from the Berry curvature near the Fermi level.
$\rm{Co_{1/3}NbS_2}$, by intercalating $\rm{Co^{2+}}$ ions between $\rm{NbS_2}$
layers, is a quasi-two-dimensional layered antiferromagnet with a chiral
lattice. A large AHE has been observed in $\rm{Co_{1/3}NbS_2}$, but its origin
is under debate. In this letter, we report the large AHE and ANE in exfoliated
$\rm{Co_{1/3}NbS_2}$ flakes. By analyzing the thermoelectric data via the Mott
relation, we determined that the observed large AHE and ANE primarily result
from the intrinsic Berry curvature. We also observed the magnetic domains in
$\rm{Co_{1/3}NbS_2}$ by reflective magnetic circular dichroism measurements.
Combined with electrical transport measurements, we confirmed that the magnetic
reversal in $\rm{Co_{1/3}NbS_2}$ is determined by domain wall motion, and the
critical field ($H_c$) exhibits a memory effect of consecutive magnetic sweeps.
Our work provides insight into the topological properties of
$\rm{Co_{1/3}NbS_2}$ and paves the way to studying the spin configuration and
magnetic domain dynamics in this fascinating antiferromagnet. | Pingfan Gu, Yuxuan Peng, Shiqi Yang, Huan Wang, Shenyong Ye, Hanwen Wang, Yanping Li, Tianlong Xia, Jinbo Yang, Yu Ye | 2023-06-16T04:09:24Z | http://arxiv.org/abs/2306.09616v1 | Probing the Anomalous Hall Transport and Magnetic Reversal of Chiral-Lattice Antiferromagnet Co\({}_{1/3}\)NbS\({}_{2}\)
###### Abstract
Antiferromagnets exhibiting giant anomalous Hall effect (AHE) and anomalous Nernst effect (ANE) have recently aroused broad interest, not only for their potential applications in future electronic devices, but also because of the rich physics arising from the Berry curvature near the Fermi level. Co\({}_{1/3}\)NbS\({}_{2}\), by intercalating Co\({}^{2+}\) ions between NbS\({}_{2}\) layers, is a quasi-two-dimensional layered antiferromagnet with a chiral lattice. A large AHE has been observed in Co\({}_{1/3}\)NbS\({}_{2}\), but its origin is under debate. In this letter, we report the large AHE and ANE in exfoliated Co\({}_{1/3}\)NbS\({}_{2}\) flakes. By analyzing the thermoelectric data _via_ the Mott relation, we determined that the observed large AHE and ANE primarily result from the intrinsic Berry curvature. We also observed the magnetic domains in Co\({}_{1/3}\)NbS\({}_{2}\) by reflective magnetic circular dichroism measurements. Combined with electrical transport measurements, we confirmed that the magnetic reversal in Co\({}_{1/3}\)NbS\({}_{2}\) is determined by domain wall motion, and the critical field (\(H_{c}\)) exhibits a memory effect of consecutive magnetic sweeps. Our work provides insight into the topological properties of Co\({}_{1/3}\)NbS\({}_{2}\) and paves the way to studying the spin configuration and magnetic domain dynamics in this fascinating antiferromagnet.
+
Footnote †: preprint: APS/123-QED
## I Introduction
The spontaneous Hall effect, one of the preferred methods for reading out spin polarization in metallic ferromagnets, has been known for over a century and is considered a hallmark of long-range ferromagnetism. However, it was only recently that scientists recognized the physical origin of the anomalous Hall signal, namely the Berry curvature and broken time-reversal symmetry[1]. This deeper understanding naturally predicts an anomalous Hall effect (AHE) in antiferromagnets with nontrivial spin textures[2; 3; 4; 5; 6], whose zero net magnetization is highly desirable for realizing next-generation ultra-compact spintronic devices. Soon after, AHE was observed experimentally in antiferromagnetic manganese compounds such as Mn\({}_{3}\)Sn[7], Mn\({}_{3}\)Ge[8], Mn\({}_{3}\)Ga[9] and Mn\({}_{5}\)Si\({}_{3}\)[10]. In these hexagonal materials, the cluster magnetic octupole serves as the order parameter that breaks time-reversal symmetry and stabilizes the Weyl fermions near the Fermi level[11], and, like a magnetic moment, it can be reversed by an external field.
Even more fascinating is the extension of AHE studies to two-dimensional (2D) antiferromagnets to explore topologically nontrivial energy bands[12] and spin textures[13]. Recently, intercalating \(3d\) magnetic atoms between the layers of van der Waals transition metal dichalcogenides (TMDCs) has opened up a new route to construct quasi-2D metals with diverse spin configurations and modified band structures. For example, a one-dimensional chiral magnetic soliton lattice has been reported in Cr\({}_{1/3}\)NbS\({}_{2}\)[14]. Electrical switching[15] and exchange bias[16] have been reported in Fe\({}_{1/3}\)NbS\({}_{2}\), where antiferromagnetic and frustrated spin-glass orders coexist and are evidenced to be coupled[16]. Among these intercalated compounds, Co\({}_{x}\)NbS\({}_{2}\) is well known for its unexpectedly large AHE value, approaching the quantized conductance value of \(e^{2}/h\) per layer[17; 18]. The intercalated Co\({}^{2+}\) cations serve to break the inversion symmetry and shift the Fermi level of NbS\({}_{2}\), resulting in extra electronic bands that possibly contribute to the AHE[19; 20; 21]. The anomalous transport behavior is, therefore, sensitively determined by the stoichiometric composition \(x\), with an idealized value of 1/3[22].
Despite extensive studies on this promising material, the spin configuration of Co\({}_{1/3}\)NbS\({}_{2}\) and the origin of AHE remain controversial. The earliest neutron diffraction results were fitted with multi-domain structures of collinear single \(q\), with six symmetry-related in-plane \(q\) sharing equal weights[23]. However, this collinear structure is generally incompatible with the large AHE observed afterward. To account for the AHE value, Zhang et al. attributed it to the large hidden Berry curvature due to the chiral Dirac-like fermions [24]. On the other hand, Smejkal et al. proposed a picture of the crystal Hall effect [25], where magnetic orbitals rather than ordered spins break time-reversal symmetry, giving rise to spontaneous Hall signals. Lu et al. further pointed out that the spins of Co\({}_{1/3}\)NbS\({}_{2}\) in the \(bc\)-plane are nearly anti-parallel, but the spins in adjacent \(ab\)-planes are alternately tilted[26], which can quantitatively reproduce the AHE value. Meanwhile, Tenasini et al. [18] validated the previous neutron scattering results but proposed a non-coplanar single-domain multi-\(q\) magnetic structure to fit the data. Uncompensated Berry curvature in the noncoplanar structure can explain the large AHE and is supported by first-principles calculations[27; 28]. Furthermore, Takagi et al. performed polarized neutron scattering experiments and determined an all-in-all-out type non-coplanar magnetic order, and the AHE can be explained in terms of the topological Hall effect originating from a fictitious magnetic field associated with the scalar spin chirality[29].
Taken together, the key factor hindering the determination of the actual magnetic order is the experimentally observed weak spontaneous net magnetization \(\Delta M\), which is permissible in both configurations. The presence of \(\Delta M\) is required for AHE to appear in a single-\(q\) multi-domain configuration, but \(\Delta M\) can theoretically be absent in a multi-\(q\) single-domain configuration. Consequently, it is difficult to explicitly understand the AHE in Co\({}_{1/3}\)NbS\({}_{2}\) only by unraveling the complicated spin texture. In this work, we set aside the specific spin configuration and demonstrate that suitably sensitive transport and optical measurements may provide even more crucial information. We performed magnetic, transport, thermoelectric, and reflective magnetic circular dichroism (RMCD) measurements on Co\({}_{1/3}\)NbS\({}_{2}\). By analyzing thermoelectric and transport data _via_ the Mott relation, we provide evidence that the large AHE in Co\({}_{1/3}\)NbS\({}_{2}\) results from the intrinsic Berry curvature. In exfoliated Co\({}_{1/3}\)NbS\({}_{2}\), a large RMCD signal comparable to that of ferromagnetic materials appears below the Neel temperature, verifying the large Berry curvature from the optical standpoint. Through RMCD mapping, we directly observed magnetic domains in Co\({}_{1/3}\)NbS\({}_{2}\), which will provide crucial information for the determination of the exact magnetic order. In combination with transport measurements, we confirmed that the magnetic reversal in Co\({}_{1/3}\)NbS\({}_{2}\) is dominated by domain wall motion, and the critical field (\(H_{c}\)) exhibits a memory effect over consecutive magnetic field sweeps. The underlying mechanism of this memory effect remains unclear but is certainly related to domain wall motion energy. Our work provides a more phenomenological and clearer understanding of the anomalous Hall effect and the magnetic reversal behavior in Co\({}_{1/3}\)NbS\({}_{2}\), which is essential for future applications and detailed studies of this enigmatic material.
## II Results and discussion
Co\({}_{1/3}\)NbS\({}_{2}\) single crystals were synthesized _via_ the chemical vapor transport (CVT) technique (see Methods), and Fig. 1a shows the crystal structure viewed along the axis perpendicular to the \(bc\) plane. The Co\({}^{2+}\) cations ideally reside only at the \(2c\) Wyckoff site, resulting in a \(\sqrt{3}\)-type superstructure lacking inversion symmetry with a space group of \(P6_{3}22\). Figure 1b shows
Figure 1: Basic characterizations of Co\({}_{1/3}\)NbS\({}_{2}\). (a), The crystal structure of Co\({}_{1/3}\)NbS\({}_{2}\) viewed along the axis perpendicular to the \(bc\) plane. (b), The measured temperature-dependent out-of-plane susceptibility of single crystal Co\({}_{1/3}\)NbS\({}_{2}\) bulk with ZFC, 0.1 T FC, and 3 T FC. (c), RMCD signal _versus_ temperature of an exfoliated Co\({}_{1/3}\)NbS\({}_{2}\) flake with a thickness of \(\sim\)130 nm. Red hatching highlights the phase transition, which is used to guide the eye. (d), Transverse resistivity \(\rho_{xy}\)_versus_ out-of-plane magnetic field of device 1 at different temperatures, where the anomalous and ordinary Hall coefficients can be extracted.
the out-of-plane magnetization of bulk crystals. The apparent inflections at 29 K signify the Neel temperature (\(T_{N}\)=29 K), consistent with previous reports[30, 17]. The magnetization of Co\({}_{1/3}\)NbS\({}_{2}\) below \(T_{N}\) is composed of a tiny rectangular hysteresis loop and a linear canting background (see Supplementary Information Fig. S1). The tiny ferromagnetic component along the \(c\)-axis results in the abrupt increase of susceptibility at \(T_{N}\) in the 0.1 T field cooling (FC) curve, while the linear background is responsible for nearly the same trend in the zero field cooling (ZFC) and 3 T FC curves (Fig. 1b). The measured magnetic moment is more than three orders of magnitude smaller than the spin moment of Co\({}^{2+}\) ions (3.87 \(\mu_{\rm B}\))[23], indicating that Co\({}_{1/3}\)NbS\({}_{2}\) establishes a long-range antiferromagnetic ground state below \(T_{N}\).
Considering that an order parameter other than the magnetic moment can break the time-reversal symmetry and induce a non-zero Berry curvature[31], we performed RMCD measurements on the exfoliated Co\({}_{1/3}\)NbS\({}_{2}\) flakes. After 6 T FC, the exfoliated flake exhibits a distinct RMCD signal of \(\sim\)0.12% below \(T_{N}\) (Fig. 1c), comparable to most ferromagnetic materials but with a negligible net magnetic moment. In general, the RMCD signal is proportional to the optical transverse conductivity, and thus to the Berry curvature[32; 33]. Such a large RMCD value optically verifies the large Berry curvature in Co\({}_{1/3}\)NbS\({}_{2}\) and may resolve hidden order parameters coupled to the Berry curvature. Figure 1d shows the temperature-dependent Hall measurements of an exfoliated Co\({}_{1/3}\)NbS\({}_{2}\) flake with a thickness of \(\sim\)125 nm, labeled as device 1. A large AHE can be observed, and the critical field \(H_{c}\) increases sharply with decreasing temperature, indicating an extremely large magnetic anisotropy. Both the anomalous and ordinary Hall coefficients can be obtained from the measurements and will be discussed in detail later.
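In practice, the two contributions can be separated by a linear fit to the saturated high-field branch of each \(\rho_{xy}(H)\) sweep. A minimal sketch of such a fit is shown below; the file name, field window, and single-band carrier-density estimate are illustrative assumptions rather than details of the actual experiment:

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # C

# One rho_xy(H) sweep at fixed temperature (placeholder file: H in T, rho_xy in Ohm*m).
H, rho_xy = np.loadtxt("rho_xy_sweep_25K.txt", unpack=True)

# Well above H_c the anomalous part is saturated, so rho_xy ~ R_0*H + rho_xy^AHE there.
window = H > 0.8 * H.max()                  # crude "saturated branch" selection; adjust per sweep
R_0, rho_AHE = np.polyfit(H[window], rho_xy[window], 1)

# Single-band estimate of the effective carrier density from the ordinary coefficient.
n_eff = 1.0 / (abs(R_0) * E_CHARGE)         # m^-3

print(f"R_0 = {R_0:.3e} m^3/C, rho_xy^AHE = {rho_AHE:.3e} Ohm*m, n_eff = {n_eff:.3e} m^-3")
```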
The temperature dependence of the longitudinal and transverse resistivity, \(\rho_{xx}\) and \(\rho_{xy}\), is shown in Fig. 2a. \(\rho_{xx}\) shows typical metallic behavior, as its value decreases with temperature and drops more sharply after an obvious kink at \(T_{N}\). This may be because electron scattering decreases in the long-range magnetically ordered state; in other words, the relaxation time \(\tau\) becomes longer. In contrast, \(\rho_{xy}\) exhibits a sharp increase upon establishing magnetic order and reaches a plateau afterward, showing no signs of decreasing with decreasing temperature. This is incompatible with the decrease of \(\rho_{xx}\) with temperature, because in a general sense, the transverse resistivity \(\rho_{xy}^{AF}\) scales with \(\rho_{xx}\) in a power law[1]:
\[\rho_{xy}^{AF}=\lambda M\rho_{xx}^{n} \tag{1}\]
where \(M\) is the magnetic moment, or the order parameter that induces Berry curvature, and \(\lambda\) is a temperature-independent scaling factor. \(n\) is the power constant depending on the AHE mechanism. The skew scattering mechanism, where \(\sigma_{xy}^{AF}\sim\tau\), leads to \(n=1\), while the intrinsic Berry curvature mechanism, where \(\sigma_{xy}^{AF}\) is independent of \(\tau\), leads to \(n=2\)[34]. Apparently, the resistivity of Co\({}_{1/3}\)NbS\({}_{2}\) fails to conform to this positive correlation at low temperatures. The only possibility to explain this discrepancy is that the integrated Berry curvature continues to increase after the formation of the antiferromagnetic order. Consequently, the extracted Hall angle \(\theta_{H}=|\frac{\sigma_{xy}}{\sigma_{xx}}|\) keeps increasing below \(T_{N}\) and reaches 6 mrad at 2 K. Similar behavior is observed in all devices we measured (see Supplementary Information Fig. S2 and Table S1), with a maximum Hall angle of \(\sim\)0.014, comparable to other 2D ferromagnetic materials[35, 11].
To elucidate the scaling behavior between \(\rho_{xx}\) and \(\rho_{xy}\) and the origin of AHE, we measured the thermoelectric coefficients of Co\({}_{1/3}\)NbS\({}_{2}\). Distinct from charge flow transport, thermoelectric signals detect the flow of entropy and thus serve as sensitive probes of the electronic properties of Fermi surfaces, especially the Berry curvature[36; 37]. The inset of Fig. 2b shows the optical image of device 1. To generate a lateral temperature gradient \(\nabla T\), a heater on one side of the device is driven with an a.c. current. The actual temperature and temperature gradient are calibrated by four-probe resistance measurements of the temperature sensors placed on both sides of the sample, and the Seebeck and Nernst signals are obtained by measuring the \(2\omega\) voltage signals of the source-drain and Hall electrodes of the sample. The detailed process of temperature calibration is discussed
Figure 2: Electrical transport and thermoelectric measurements of Co\({}_{1/3}\)NbS\({}_{2}\). (a), Temperature-dependent longitudinal and transverse resistivity. (b), AHE angle \(\theta_{H}=|\frac{\sigma_{xy}}{\sigma_{xx}}|\)_versus_ temperature, extracted from (a). The inset shows the optical image of the device used for electrical transport and thermoelectric measurements. The scale bar is 20 \(\mu\)m. (c), Temperature-dependent Seebeck and Nernst coefficients of Co\({}_{1/3}\)NbS\({}_{2}\). Open circles are experimental data, and the solid line represents the best fit of the Mott relation to the Nernst coefficient. The best-fit parameters give \(n=2.2\). (d), Extracted thermoelectric conductivity _versus_ temperature. The blue curve is the best fit using the Mott relation, and the red curve is the best fit with \(n=1\).
in Supplementary Information Fig. S3 and S4. As shown in Fig. 2c, the Nernst coefficient \(S_{xy}\) becomes non-zero below \(T_{N}\) and reaches as large as 0.3 \(\mu\)V/K. The Seebeck coefficient, \(S_{xx}\), is negative at high temperatures and becomes positive below \(\sim\)24 K, in good accordance with the measurements for bulk crystals[38] and reproducible in all samples we measured (Supplementary Information Fig. S2).
We note that the sign change of \(S_{xx}\) should not be attributed to any phonon-drag effect, since the temperature is much lower than the expected value of \(\theta_{D}\)/5[38, 39, 40] (\(\theta_{D}\) is the Debye temperature). Neither the \(\rho_{xy}\) nor the \(S_{xy}\) signals exhibit any abnormal changes around 24 K, ruling out the possibility of a complete change in carrier type. Actually, we can obtain the effective carrier concentration by a linear fit of the ordinary Hall effect extracted from Fig. 1d (see Supplementary Information Fig. S5). The ordinary Hall effect indicates \(p\)-type conduction and a hole concentration that decreases with decreasing temperature, which is consistent with previous reports[17, 18], but incompatible with the trend of \(S_{xx}\) turning from negative to positive. These observations lead to the conclusion that hole and electron carriers coexist in this material. The ordinary Hall effect is mainly contributed by holes, which are the majority carriers provided by the NbS\({}_{2}\) bands[19]. In contrast, the \(S_{xx}\) signal is dominated by the electron band near the Fermi level, which results from the intercalated Co\({}^{2+}\) cations. \(S_{xx}\) should approach zero at low temperatures as the entropy should vanish at \(T\to 0\), but the contribution of holes gradually constitutes a larger proportion and exceeds that of electrons, resulting in the sign change from negative to positive. Consequently, the sign change of \(S_{xx}\) may indicate a smooth electronic transition or a Fermi level shift, which is also manifested in the temperature-dependent carrier concentration (Supplementary Information Fig. S5). The overall hole concentration kinks between 20 K and 25 K and stops decreasing at lower temperatures, confirming our above conjecture. In contrast, given the absence of a sign change in \(\rho_{xy}\) and \(S_{xy}\), we can conclude that only one type of energy band, presumably the electron band, is non-trivial and contributes to the AHE and the anomalous Nernst effect (ANE). The sign change of \(S_{xx}\) and finite \(S_{xy}\) naturally lead to a divergent Nernst angle \(\theta_{N}=\frac{S_{yx}}{S_{xx}}\). As a comparison, the maximum Nernst angle at the lowest measurement temperature of the devices reaches 0.12, and the ratio of the anomalous Nernst coefficient to the spontaneous magnetization \(S_{yx}/M\) reaches \(10^{3}\) (\(\mu\)V \(\cdot\) K\({}^{-1}\cdot(\mu_{\rm{B}}\)f.u.\({}^{-1})^{-1}\)) (see Supplementary Information Table S1). These values are much larger than those of common FM metals[41, 42, 43] and comparable to those of other topological materials[44, 45, 46].
In addition to electrical transport, thermoelectric measurements provide us with another degree of freedom to explore the underlying mechanism of AHE. By definition, the Nernst signal \(S_{yx}\) is related to the other transport parameters by:
\[S_{yx}=\frac{1}{\sigma_{xx}}\left(\alpha_{yx}-\sigma_{yx}S_{xx}\right) \tag{2}\]
With the measured electrical conductivities and thermoelectric coefficients, we can obtain the transverse thermoelectric conductivity \(\alpha_{xy}\) from the above equation, as shown in Fig. 2d. Furthermore, the Mott relation describes the relationship between \(\alpha_{xy}\) and \(\sigma_{xy}\) by \(\alpha_{xy}=-\frac{\pi^{2}k_{B}^{2}T}{3e}\left(\frac{\partial\sigma_{xy}}{\partial\varepsilon}\right)\bigg{|}_{\varepsilon_{F}}\). Substituting Eq. 1 into the Mott relation and combining Eq. 2, we can eliminate the common factor (the order parameter \(M\)) and obtain the modified Mott relation containing only four transport parameters[47]:
\[S_{yx}=\frac{\rho_{xy}}{\rho_{xx}}\left(T\frac{\pi^{2}k_{B}^{2}}{3e}\frac{ \lambda^{\prime}}{\lambda}-(n-1)S_{xx}\right) \tag{3}\]
and
\[\alpha_{yx}=\frac{\rho_{xy}}{\rho_{xx}^{2}}\left(T\frac{\pi^{2}k_{B}^{2}}{3e} \frac{\lambda^{\prime}}{\lambda}-(n-2)S_{xx}\right) \tag{4}\]
where \(\lambda\) is the same parameter as in Eq. 1, and \(\lambda^{\prime}\) is the energy derivative of \(\lambda\). Both of them should be constants. These two equations verify the Mott relation and determine the exponent \(n\) without including the unknown temperature-dependent order parameter. The best-fit parameters give \(n=2.2\) (Fig. 2c and d), which is close to \(n=2\) rather than \(n=1\) (which deviates significantly from the experimental data, as shown by the red curve in Fig. 2d). This \(n\) value indicates that the anomalous Hall conductivity in Co\({}_{1/3}\)NbS\({}_{2}\) is independent of the relaxation time \(\tau\) and thus likely arises from the intrinsic Berry curvature, as suggested by the previous theories[24, 26, 27, 29]. More importantly, the fitting results verify that Eq. 1 remains valid in our samples, but there is a temperature-dependent term that causes the Berry curvature to increase at low temperatures, so although \(\rho_{xx}\) decreases, \(\rho_{xy}\) continues to increase. This term could arise from the large thermal fluctuations of the frustrated magnetic order, or from the aforementioned electronic transitions or Fermi level shifts. Generally, large magnetic anisotropy can lead to a rapid stabilization of the order parameter below \(T_{N}\), so the latter explanation seems more plausible in this system.
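For concreteness, the fit of Eq. (3) that yields \(n\) can be set up with \(\lambda^{\prime}/\lambda\) and \(n\) as the only free parameters. The sketch below is illustrative rather than the actual analysis code; the data file and initial guesses are placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 1.380649e-23          # J/K
E_CHARGE = 1.602176634e-19  # C

# Temperature-dependent transport data on a common grid (placeholder file):
# T (K), rho_xx (Ohm*m), rho_xy (Ohm*m), S_xx (V/K), S_yx (V/K)
T, rho_xx, rho_xy, S_xx, S_yx = np.loadtxt("transport_vs_T.txt", unpack=True)

def eq3(X, lam_ratio, n):
    """Eq. (3): S_yx predicted from rho_xy/rho_xx, T, and S_xx.
    lam_ratio = lambda'/lambda (units 1/J); n is the scaling exponent of Eq. (1)."""
    T, rho_xx, rho_xy, S_xx = X
    mott = np.pi**2 * K_B**2 * T / (3.0 * E_CHARGE) * lam_ratio
    return (rho_xy / rho_xx) * (mott - (n - 1.0) * S_xx)

p0 = [1.0 / (1.0 * E_CHARGE), 2.0]   # initial guess: lambda'/lambda ~ 1/(1 eV), n ~ 2
(lam_ratio, n), _ = curve_fit(eq3, (T, rho_xx, rho_xy, S_xx), S_yx, p0=p0)
print(f"lambda'/lambda = {lam_ratio:.3e} 1/J, n = {n:.2f}")
```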
After understanding the underlying mechanism of the anomalous transport, we turn to magnetic domains and magnetic reversal behavior in Co\({}_{1/3}\)NbS\({}_{2}\). We have demonstrated in Fig. 1c that the order parameter in Co\({}_{1/3}\)NbS\({}_{2}\) coupled with Berry curvature can be resolved by the RMCD signal. Therefore, by scanning the entire flake with a laser spot (\(\sim\)1.5 \(\mu\)m), we can obtain the spatially resolved RMCD signals and therefore detect the domain structure of the exfoliated samples, as shown in Fig. 3. Optical and atomic force microscopy
height images of the same exfoliated flake, together with the single-point RMCD signal _versus_ temperature after \(\pm 6\) T FC can be seen in Supplementary Information Fig. S6. The main observations are summarized below. First, after ZFC (Fig. 3a), the RMCD signal exhibits clear spatial variation between \(\pm 0.5\%\), in accordance with the RMCD value after FC (see Supplementary Information Fig. S6). This mapping unambiguously demonstrates a multi-domain structure with a domain size of several \(\mu\)m, which is also manifested in the initial magnetization in AHE measurements (see Supplementary Information Fig. S7). After 6 T or \(-6\) T field cooling, the whole sample exhibits a homogeneous but opposite RMCD signal (Fig. 3b-c), showing a single domain structure. The extremely large values at the edges of the sample are artifacts due to the protrusion of the exfoliated sample and do not contain useful information, as indicated in Supplementary Information Fig. S7b. We then performed another ZFC measurement and the domains were completely redistributed (Fig. 3d), implying that the domain structure was randomly formed upon each cooling from a higher temperature, rather than determined by a series of pinning sites. Line cuts from the four mappings are shown in Supplementary Information Fig. S6. The domains after different cooling processes exhibit similar RMCD values, confirming the reliability of our measurements.
The domains observed in Co\({}_{1/3}\)NbS\({}_{2}\) may arise from the spin chirality of spatially distributed all-in-all-out domains as described in Ref. [29], or simply from domains of a collinear structure with spins slightly canted towards the \(c\)-axis, as described in Ref. [26]. If the second scenario applies, there should also be three types of domains with symmetry-related in-plane \(q\) vectors and equal weights[18; 23], but unfortunately, this cannot be resolved by RMCD measurements. Nonetheless, direct observation of magnetic domains in micro-sized samples is a crucial step towards correctly determining spin configurations in Co\({}_{1/3}\)NbS\({}_{2}\), since the magnetic symmetry in individual domains can be further probed by second-harmonic generation signals[48; 49] or other sensitive techniques.
Additionally, the multi-domain structure also determines the magnetic reversal process in Co\({}_{1/3}\)NbS\({}_{2}\). To determine the magnetic reversal model, we measured the AHE hysteresis loops at different magnetic field angles, as shown in Fig. 4a. When the field is rotated from the \(z\) axis to an in-plane axis, the coercive field \(H_{c}\) increases monotonically and can be well fitted by the 1/cos\(\theta\) function (Fig. 4b), while the AHE value remains almost unchanged. The above phenomena indicate that the magnetic reversal in Co\({}_{1/3}\)NbS\({}_{2}\) follows the Kondorsky model[50] rather than the coherent Stoner-Wohlfarth model[51]. The energy of domain wall propagation is much lower than the magnetic anisotropy energy, so \(H_{c}\) is determined by the competition between the Zeeman energy and the domain wall energy. These observations are consistent with the sharp increase of \(H_{c}\) with decreasing temperature (Fig. 1d) and the multi-domain structure (Fig. 3). Moreover, the AHE reversal appears as a gradual ramp with increasing external field rather than a steep step, and the slope becomes lower when the field
Figure 3: RMCD mapping of exfoliated 100 nm thick Co\({}_{1/3}\)NbS\({}_{2}\) flake at 2 K. (a), RMCD mapping after the first ZFC. (b, c), RMCD mappings after 6 T (b) and \(-6\) T (c) FC. (d), RMCD mapping after the second ZFC.
Figure 4: Transport behavior determined by domain motion. (a), Anomalous resistivity \(\rho_{xy}^{A}\)_versus_\(\mu_{0}H\) under different field angles measured at 29 K. The sample lies in the \(x-y\) plane, and the magnetic field \(H\) is rotated from the \(z\) axis to an in-plane axis. \(\theta\) is the angle between \(H\) and the \(z\) axis. Each curve is shifted by 0.15 \(\mu\Omega\cdot\)cm relative to the curve below. (b), Coercive field \(H_{c}\) as a function of \(\theta\). \(H_{c}\) is defined as the intercept of \(\rho_{xy}^{A}\) on the \(H\) axis extracted from (a). The dashed red line represents the fit of the Kondorsky model. (c), \(\rho_{xy}^{A}\) during several consecutive magnetic field sweeps at 27.5 K. \(H_{c}\) decreases as the number of sweeps increases. The green dashed line plots the decreasing \(H_{c}\)_versus_ the number of sweeps \(N\), while the yellow dashed line plots the \(H_{c}\) extracted from seven initial sweeps after cooling down from 50 K. The data were obtained from a 120 nm thick Co\({}_{1/3}\)NbS\({}_{2}\) device, labeled as device 2.
rotates towards the in-plane axis, implying that the domain wall motion is almost steady and slow.
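The \(1/\cos\theta\) dependence quoted above is the hallmark of the Kondorsky (domain-wall depinning) picture, in which only the field projection along the easy axis drives the wall motion. The sketch below fits this form to placeholder \(H_{c}(\theta)\) values standing in for those extracted from Fig. 4a; the numbers are illustrative assumptions, not the measured coercive fields.

```python
import numpy as np
from scipy.optimize import curve_fit

# Kondorsky (domain-wall depinning) model: Hc(theta) = Hc(0) / cos(theta),
# where theta is the angle between the field and the easy (z) axis.
# The data points are placeholders standing in for the Hc of Fig. 4a.

theta_deg = np.array([0, 15, 30, 45, 60, 70])
Hc_data = np.array([0.20, 0.21, 0.23, 0.28, 0.40, 0.58])   # Tesla, placeholder

def kondorsky(theta_deg, Hc0):
    return Hc0 / np.cos(np.deg2rad(theta_deg))

Hc0_fit, _ = curve_fit(kondorsky, theta_deg, Hc_data, p0=[0.2])
print("Hc(0) =", Hc0_fit[0], "T")

# A coherent Stoner-Wohlfarth reversal would instead follow
#   Hc(theta) ~ (cos(theta)**(2/3) + sin(theta)**(2/3))**(-3/2),
# which does not increase monotonically with theta and therefore fails to
# reproduce the measured trend.
```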
The domain-determined magnetic reversal also leads to another manifestation, that is, the decrease of \(H_{c}\) with consecutive sweeps of the magnetic field. Figure 4c shows the 2D plot of \(\rho_{xy}^{A}\) as a function of magnetic field and sweep number \(N\) at 27.5 K. The green dashed line illustrates the extracted \(H_{c}\)_versus_\(N\), which drops sharply over the first few sweeps and gradually becomes steady. Seven consecutive sweeps reduce \(H_{c}\) in device 2 by 28%. However, \(H_{c}\) can be set back to its initial value by raising the temperature above \(T_{N}\) and cooling back to 27.5 K, which we call the initialization process. We raised the temperature to 50 K, cooled it back, and then directly carried out the AHE measurement. The initialization process was repeated seven times, and the seven extracted \(H_{c}\) are plotted as the yellow dashed line in Fig. 4c, exhibiting consistent values with excellent reproducibility. Naturally, after initialization, \(H_{c}\) follows the same decreasing trend upon consecutive field sweeps. In other words, the hysteresis loop can be manipulated by continuously sweeping the external field, and the \(H_{c}\) information is memorized but can be erased by warming above \(T_{N}\) and cooling down again. It is important to emphasize that this memory effect did not appear by chance in a single device, but was observed in all the devices we measured. We also repeated the same measurements in device 1 (see Supplementary Information Fig. S8), and the results were in good agreement with device 2, indicating that the manipulation of \(H_{c}\) is an intrinsic property of exfoliated Co\({}_{1/3}\)NbS\({}_{2}\) flakes. Based on the established knowledge that the domain structure redistributes after each cooling (Fig. 3a and d), and that the value of \(H_{c}\) is determined by the energy of domain wall motion (Fig. 4a-b), we can infer that the decrease in \(H_{c}\) upon consecutive sweeps is not a trivial consequence of a change in pinning sites, but rather reflects a modification of the domain wall propagation energy. Actually, the manipulation of \(H_{c}\) is likely to originate from the memory effect of chiral domain walls[52] or the intrinsic exchange bias effect[16] observed in similar systems, which requires confirmation by further experiments and theoretical calculations, and may provide a basis for the manipulation of non-collinear antiferromagnets.
## III Conclusion
In summary, we investigated the large AHE, ANE, and the magnetic domain-related behavior in exfoliated Co\({}_{1/3}\)NbS\({}_{2}\) flakes. The Mott relation between transport and thermoelectric coefficients is verified, and the fitting results unambiguously indicate an intrinsic large Berry curvature. A series of phenomena imply large magnetic fluctuations or electronic transitions at low temperatures. Furthermore, we observed a large RMCD signal and resolved magnetic domains by RMCD mapping, optically evidencing the large Berry curvature. Combined with transport measurements, we concluded that the magnetization reversal in Co\({}_{1/3}\)NbS\({}_{2}\) is dominated by domain wall motion. The coercive field \(H_{c}\), which is set by the domain wall propagation energy, decreases monotonically with consecutive field sweeps, probably indicating a memory effect of the domain walls. Our work probes the intrinsic Berry curvature and provides an explicit picture of the AHE and magnetic reversal mechanism in Co\({}_{1/3}\)NbS\({}_{2}\) without invoking its controversial magnetic order. A detailed understanding of this antiferromagnet will provide useful information for future spin caloritronics applications.
## IV Methods
**Crystal growth and magnetic characterizations**. High-quality Co\({}_{1/3}\)NbS\({}_{2}\) single crystals were grown by the chemical vapor transport method. Co, Nb, and S powders with a ratio of 1:3:6 were sealed in a quartz tube, then put into a furnace, heated to 800 \({}^{\circ}\)C, and kept for 5 days to prepare polycrystalline precursor. The resulting powders (1 g) and transport agent iodine (15 mg/cm\({}^{3}\)) were then sealed in a quartz tube to grow single crystals with a temperature gradient set between 950 \({}^{\circ}\)C (source) and 850 \({}^{\circ}\)C (products) for 10 days. Finally, hexagonal plate-shaped single crystals were obtained, which are easy to exfoliate. The thickness of the ultrathin samples was verified by the atomic force microscopy characterization using an Oxford Cypher S system in tapping mode. Magnetization measurements were performed by standard modules of a Quantum Design PPMS.
**Electrical and thermoelectric measurements**. Metal contact electrodes of Cr/Au (10/80 nm) were defined using electron beam lithography, electron beam evaporation, and lift-off processes on the exfoliated flakes. The devices were then loaded into a physical property measurement system (Cryomagnetics) with a magnetic field up to 14 T. A.c. voltage measurements were performed with Stanford Research SR830 lock-in amplifiers using the standard four-point method. \(\rho_{xx}\) and \(\rho_{xy}\) were measured under an a.c. current of 10 \(\mu\)A at 17.777 Hz, while \(S_{xx}\) and \(S_{xy}\) were measured under an a.c. heater current of 0.2 mA at 3.777 Hz. Temperature perturbation was calibrated by four-probe resistances of the thermocouples. Details of thermoelectric measurements and temperature calibration are presented in Supplementary Information Fig. S3-4 and the following discussions.
**RMCD measurements**. The RMCD measurements were performed based on the Attocube closed-cycle cryostat (attoDRY2100) down to 1.6 K and up to 9 T in the out-of-plane direction. The linearly polarized light of the 633 nm HeNe laser was modulated between left and right circular polarization by a photoelastic modulator (PEM) and focused onto the sample through a high numerical aperture (0.82) objective. The reflected
light was detected by a photomultiplier tube (THORLABS PMT1001/M). The magnetic reversal under the external magnetic field was detected by the RMCD signal determined by the ratio of the a.c. component of PEM at 50.052 kHz and the a.c. component of the chopper at 779 Hz (detected by a two-channel lock-in amplifier Zurich HF2LI). RMCD mapping was implemented by moving the piezo sample stage.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (No. 12241401 and No. 12250007), the National Key R&D Program of China (Grants No. 2022YFA1203902 and No. 2018YFA0306900), and Beijing Natural Science Foundation (Grant No. JQ21018). T. X. acknowledges support from the National Key R&D Program of China (Grant No. 2019YFA0308602), and the National Natural Science Foundation of China (Grant Nos. 12074425 and 11874422).
|
2304.08371 | Apparent universality of $1/f$ spectra as an artifact of finite-size
effects | Power spectral density scaling with frequency $f$ as $1/f^\beta$ and $\beta
\approx 1$ is widely found in natural and socio-economic systems. Consequently,
it has been suggested that such self-similar spectra reflect the universal
dynamics of complex phenomena. Here, we show that for a superposition of
uncorrelated pulses with a power-law distribution of duration times the
estimated scaling exponents $\bar{\beta}$ depend on the system size. We derive
a parametrized, closed-form expression for the power spectral density, and
demonstrate that for $\beta \in [0,2]$ the estimated scaling exponents have a
bias towards $\bar{\beta}=1$. For $\beta=0$ and $\beta=2$ the explicit
logarithmic corrections to frequency scaling are derived. The bias is
particularly strong when the scale invariance spans less than four decades in
frequency. Since this is the case for the majority of empirical data, the
boundedness of systems well described by the superposition of uncorrelated
pulses may contribute to overemphasizing the universality of $1/f$. | M. A. Korzeniowska, A. Theodorsen, M. Rypdal, O. E. Garcia | 2023-04-17T15:34:30Z | http://arxiv.org/abs/2304.08371v3 | # Apparent universality of \(1/f\) spectra as an artifact of finite-size effects
###### Abstract
Power spectral density scaling with frequency \(f\) as \(1/f^{\beta}\) and \(\beta\approx 1\) is widely found in natural and socio-economic systems. Consequently, it has been suggested that such self-similar spectra reflect the universal dynamics of complex phenomena. Here, we show that for a superposition of uncorrelated pulses with a power-law distribution of duration times the estimated scaling exponents \(\beta\) depend on the system size. We derive a parametrized, closed-form expression for the power spectral density, and demonstrate that for \(\beta\in[0,2]\) the estimated scaling exponents have a bias towards \(\beta=1\). For \(\beta=0\) and \(\beta=2\) the explicit logarithmic corrections to frequency scaling are derived. The bias is particularly strong when the scale invariance spans less than four decades in frequency. Since this is the case for the majority of empirical data, the boundedness of systems well described by the superposition of uncorrelated pulses may contribute to overemphasizing the universality of \(1/f\).
_Introduction.--_ A wide range of complex systems display spatial or temporal scale invariance, fractality, and long-range dependence (LRD) [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. In particular, the emergence of self-similar frequency power spectral density scaling \(1/f^{\beta}\) has been of interest since the discovery of a \(1/f\)-type noise in vacuum tubes almost a century ago [17; 18]. Reports of scaling exponents \(\beta\) close to unity in various systems have led to questions about universality. Theoretical ideas such as self-organized criticality (SOC) have been put forward [19]. However, identifying a general mechanism for the observed variety of self-similar behavior has proved difficult [20; 21; 22; 23; 24; 25].
In this paper, we demonstrate that an apparent \(1/f\) universality arises in a generalized filtered Poisson process subject to finite-size effects [26; 27]. The shot-noise approach is canonical for the phenomenological modeling of LRD statistics of fluctuating systems, from background noise to violent bursts [28; 29; 30; 31; 32; 33]. We derive a closed-form expression for the parametrized power spectral density of a finite-size system and explore its scale invariance while varying the self-similarity range and the exponent \(\beta\in[0,2]\). We assess the finite-size effects by comparing the asymptotic scaling relations with the effective scaling of the analytical power spectral density. Our results show that the observed scaling is always biased towards \(\beta=1\) in the presence of finite-size effects, and the bias is most substantial when the scaling range is narrow.
_Filtered Poisson process.--_ Let us first introduce the theoretical framework for our analysis. Consider a stochastic process given by a superposition of \(K\) uncorrelated, independent and identically distributed pulses \(\phi(\theta)\), occurring as a random sequence in a time interval of duration \(T\)[34],
\[\Phi_{K}(t)=\sum_{k=1}^{K(T)}A_{k}\phi\left(\frac{t-t_{k}}{s_{k}}\right). \tag{1}\]
Each pulse labeled \(k\) is characterized by an amplitude \(A_{k}\), a duration time \(s_{k}\), and an arrival time \(t_{k}\) distributed uniformly on the interval \(T\). The pulse-duration times are assumed to be randomly distributed with probability density \(P_{s}(s)\), and an average pulse-duration time \(\langle s\rangle=\int_{0}^{\infty}\mathrm{d}s\,s\,P_{s}(s)\). Given the distribution of pulse amplitudes \(P_{A}(A)\), we use Campbell's theorem to compute the moments and the autocorrelation function of the process (1) by averaging over all random variables for the case of exactly \(K\) pulses [34; 35], and subsequently averaging over the randomly distributed number of pulses \(K\). This yields the rigorous characteristics of the stationary process \(\Phi(t)\)[36]. The power spectral density follows directly as the Fourier transform of the autocorrelation function. For the standardized process \(\widetilde{\Phi}=(\Phi-\langle\Phi\rangle)/\Phi_{\mathrm{rms}}\), and with a normalized, dimensionless duration time \(\tau=s/\langle s\rangle\), the power spectral density is expressed in a non-dimensional form as
\[\Omega_{\widetilde{\Phi}}(\omega)=\int_{0}^{\infty}\mathrm{d}\tau\,\tau^{2}P_{\tau}(\tau)\,\varrho_{\phi}(\tau\omega), \tag{2}\]
where \(\omega=2\pi f\langle s\rangle\) denotes the dimensionless angular frequency, \(\varrho_{\phi}(\tau\omega)=\int_{-\infty}^{\infty}\mathrm{d}\theta\,\rho_{\phi}(\theta)\exp(-i\tau\omega\theta)\) is the Fourier transform of the normalized autocorrelation function \(\rho_{\phi}\) of the pulse function \(\phi\), and \(P_{\tau}(\tau)=\langle s\rangle\,P_{s}(s)\) is the normalized probability density function for pulse durations [34].
_Pareto distributed durations.--_ Equation (2) holds for an arbitrary finite-mean distribution \(P_{\mathrm{z}}(\tau)\) of pulse durations. In particular, it holds for a bounded Pareto distribution with exponent \(\alpha\) and a finite support \(\left[\tau_{\downarrow},\tau_{\uparrow}\right]\), normalized by a factor \(\eta(\tau_{\downarrow},\tau_{\uparrow},\alpha)\) such that \(\int_{0}^{\infty}\mathrm{d}\tau\,P_{\mathrm{z}}(\tau)=1\),
\[P_{\tau}(\tau;\tau_{\downarrow},\tau_{\uparrow},\alpha)=\begin{cases}\eta\,\tau^{-\alpha}&\text{if }\tau_{\downarrow}\leq\tau\leq\tau_{\uparrow},\\ 0&\text{otherwise}.\end{cases} \tag{3}\]
The normalization of \(P_{\tau}\) and the inherent property of a normalized-variable mean \(\langle\tau\rangle=\int_{\tau_{\downarrow}}^{\tau_{\uparrow}}\mathrm{d}\tau\,\tau P_{\tau}(\tau)=1\) put two constraints on the three parameters \(\left\{\tau_{\downarrow},\ \tau_{\uparrow},\ \alpha\right\}\) in Eq. (3). Defining a dimensionless ratio parameter \(\Delta=\tau_{\uparrow}/\tau_{\downarrow}\) and solving the resulting system of three constraints, we obtain \(\tau_{\downarrow},\ \tau_{\uparrow}\)
and \(\eta\) in terms of \(\alpha\) and \(\Delta\) as
\[\tau_{\downarrow}(\Delta,\alpha) =\frac{(\alpha-2)\left(1-\Delta^{1-\alpha}\right)}{(\alpha-1)\left(1-\Delta^{2-\alpha}\right)}, \tag{4a}\] \[\tau_{\uparrow}(\Delta,\alpha) =\Delta\tau_{\downarrow}, \tag{4b}\] \[\eta(\Delta,\alpha) =\frac{\alpha-1}{1-\Delta^{1-\alpha}}\,\tau_{\downarrow}^{\alpha-1}, \tag{4c}\]
with well-defined limits for \(\alpha\to 1\) and \(\alpha\to 2\). Given Eqs. (4), the probability distribution given by Eq. (3) is parametrized as \(P_{\tau}=P_{\tau}(\tau;\Delta,\alpha)\).
We note that a finite, nondivergent mean \(\langle\tau\rangle=1\) is a requirement for the stationarity of the process given by Eq. (1), and the well-defined normalization of the power spectral density given by Eq. (2). With the chosen parametrization \(P_{\tau}(\tau;\Delta,\alpha)\) and the condition \(\langle\tau\rangle=1\), the effect of the increase in \(\Delta\) on the boundaries \(\tau_{\downarrow}\) and \(\tau_{\uparrow}\) depends on the value of \(\alpha\). When \(\alpha<1\) the divergence \(\Delta\to\infty\) is driven by the decrease \(\tau_{\downarrow}\to 0\), rather than by the increase of \(\tau_{\uparrow}\), thus hindering long-range correlations. As \(\alpha\to 0\), \(P_{\tau}(\tau)\) given by Eq. (3) reduces to a uniform distribution, with finite mean and variance [34].
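As a concrete illustration of this parametrization, a short numerical sketch is given below: it evaluates \(\tau_{\downarrow}\), \(\tau_{\uparrow}\), and \(\eta\) from Eqs. (4) and draws pulse durations from Eq. (3) by inverse-CDF sampling, verifying that \(\langle\tau\rangle\simeq 1\). The sketch is written in Python/NumPy (an assumed tool, not part of the original work) and excludes the removable singularities of Eqs. (4) at \(\alpha=1\) and \(\alpha=2\).

```python
import numpy as np

def pareto_bounds(delta, alpha):
    """tau_down, tau_up and eta of Eq. (4) for given Delta and alpha (alpha != 1, 2)."""
    tau_down = ((alpha - 2.0) * (1.0 - delta**(1.0 - alpha))
                / ((alpha - 1.0) * (1.0 - delta**(2.0 - alpha))))
    tau_up = delta * tau_down
    eta = (alpha - 1.0) / (1.0 - delta**(1.0 - alpha)) * tau_down**(alpha - 1.0)
    return tau_down, tau_up, eta

def sample_durations(delta, alpha, size, rng):
    """Inverse-CDF sampling from the bounded Pareto distribution of Eq. (3)."""
    td, tu, _ = pareto_bounds(delta, alpha)
    u = rng.random(size)
    return (td**(1 - alpha) + u * (tu**(1 - alpha) - td**(1 - alpha)))**(1.0 / (1 - alpha))

rng = np.random.default_rng(1)
for alpha in (1.5, 2.5):
    tau = sample_durations(1e4, alpha, 10**6, rng)
    print(f"alpha = {alpha}: sample mean <tau> = {tau.mean():.3f} (should be close to 1)")
```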
_Scale invariance.--_ In the unbounded limit, \(P_{\tau}(\tau)\) defined by Eq. (3) displays self-similar scaling
\[\lim_{\begin{subarray}{c}\tau_{\downarrow}\to 0\\ \tau_{\uparrow}\to\infty\end{subarray}}P_{\tau}(\lambda\,\tau)=\lim_{ \begin{subarray}{c}\tau_{\downarrow}\to 0\\ \tau_{\uparrow}\to\infty\end{subarray}}\lambda^{-\alpha}P_{\tau}(\tau), \tag{5}\]
which together with Eq. (2) implies a power-law scaling relation for the power spectral density,
\[\lim_{\begin{subarray}{c}\tau_{\downarrow}\to 0\\ \tau_{\uparrow}\to\infty\end{subarray}}\Omega_{\widetilde{\Phi}}(\lambda\, \omega)=\lim_{\begin{subarray}{c}\tau_{\downarrow}\to 0\\ \tau_{\uparrow}\to\infty\end{subarray}}\lambda^{\alpha-3}\,\Omega_{ \widetilde{\Phi}}(\omega). \tag{6}\]
Equation (6) suggests the existence of a universal \(1/\omega^{\beta}\) self-similarity of the power spectral density given by Eq. (2), with \(\beta(\alpha)=3-\alpha\). Strictly, the probability distribution given by Eq. (3) is not well defined in the asymptotic limit, but bounding \(\tau_{\downarrow}\) at an arbitrarily small value results in a finite variance of the process for \(\alpha>3\), and an infinite variance otherwise. In order to ensure a finite pulse-duration mean in the asymptotic limit, \(\alpha\geq 1\) is required. Thus, we conjecture that if \(\Omega_{\widetilde{\Phi}}\) displays a power-law signature in the limit when \(\Delta\to\infty\), then it does so for Pareto exponents \(1\leq\alpha\leq 3\). The resulting power spectral density scaling exponents range within \(0\leq\beta(\alpha)\leq 2\). Exponents \(\alpha=1\), \(\alpha=2\), and \(\alpha=3\) characterize Brownian, pink, and white noise signatures with \(\beta=2\), \(\beta=1\), and \(\beta=0\), respectively.
The spectral scale invariance of a finite-size system is confined to the frequency range limited by the cutoff values \(\omega\tau_{\uparrow}=1\) and \(\omega\tau_{\downarrow}=1\), ranging over \(\log_{10}\Delta\) decades in frequency. Outside this range the power spectral density assumes the shape determined by the power spectra of the pulse function \(\phi\), following a broken power law with the associated break points to and from the \(1/\omega^{\beta}\) scaling.
_Power-law spectra.--_ The asymptotic scaling relation \(\beta=3-\alpha\) is verified for a one-sided exponential pulse function \(\phi\),
\[\phi(\theta)=\begin{cases}\exp(-\theta)&\text{if }\theta\geq 0,\\ 0&\text{otherwise},\end{cases} \tag{7}\]
whose power spectral density is the Lorentzian function \(\varrho_{\phi}(\vartheta)=2/(1+\vartheta^{2})\)[34]. For a constant pulse duration \(\tau=\langle\tau\rangle\) the power spectral density given by Eq. (2) inherits the Lorentzian shape \(\Omega_{\widetilde{\Phi}}(\omega)=2\langle\tau\rangle/(1+\langle\tau\rangle^{2}\omega^{2})\), which is flat at low frequencies and has a \(1/\omega^{2}\) tail at high frequencies, consistent with \(\beta\to 0\) and \(\beta\to 2\), respectively. For distributed pulse durations, Eqs. (2), (3) and (4) yield an explicit, closed-form expression for the frequency power spectral density parametrized by \(\Delta\) and \(\alpha\):
\[\Omega_{\widetilde{\Phi}}(\omega;\Delta,\alpha)=\begin{cases}\dfrac{1}{\ln\Delta\,\omega^{2}}\ln\left(\dfrac{(\Delta-1)^{2}+\Delta^{2}\ln^{2}\Delta\,\omega^{2}}{(\Delta-1)^{2}+\ln^{2}\Delta\,\omega^{2}}\right)&\text{if }\alpha=1,\\[2ex] \dfrac{2}{\ln\Delta\,\omega}\left[\arctan\left(\dfrac{(\Delta-1)\omega}{\ln\Delta}\right)-\arctan\left(\dfrac{(\Delta-1)\omega}{\Delta\ln\Delta}\right)\right]&\text{if }\alpha=2,\\[2ex] \dfrac{2}{(\Delta^{\alpha}-\Delta)\omega^{2}}\left[\Delta^{\alpha}\,{}_{2}F_{1}\left(1,\dfrac{\alpha-1}{2},\dfrac{\alpha+1}{2};-\dfrac{1}{\tau_{\downarrow}^{2}\omega^{2}}\right)-\Delta\,{}_{2}F_{1}\left(1,\dfrac{\alpha-1}{2},\dfrac{\alpha+1}{2};-\dfrac{1}{\tau_{\uparrow}^{2}\omega^{2}}\right)\right]&\text{otherwise},\end{cases} \tag{8}\]
where \({}_{2}F_{1}\) is a hypergeometric function defined by Gauss series [37]. The expected frequency scaling \(1/\omega^{3-\alpha}\) is manifested by considering the compensated spectra in the limit of an infinitely broad distribution of duration times. For several values of \(\alpha\) representing the LRD regime \(1\leq\alpha\leq 3\), the following Eqs. (9) present both the prefactors and the powers of \(\omega\) which together satisfy the compensation of the power spectral density \(\Omega_{\widetilde{\Phi}}(\omega;\Delta,\alpha)\) given by Eq. (8),
\[\lim_{\Delta\to\infty}\Omega_{\widetilde{\Phi}}(\omega;\Delta,1) \frac{\ln\Delta}{\ln\left(\omega^{2}\ln^{2}\Delta\right)} \omega^{2} =1, \tag{9a}\] \[\lim_{\Delta\to\infty}\Omega_{\widetilde{\Phi}}(\omega;\Delta,\frac {3}{2}) \frac{\sqrt{2}(\sqrt{\Delta}-1)}{\pi\sqrt[4]{\Delta}} |\omega|^{3/2} =1,\] (9b) \[\lim_{\Delta\to\infty}\Omega_{\widetilde{\Phi}}(\omega;\Delta,2) \frac{\ln\Delta}{\pi} |\omega| =1,\] (9c) \[\lim_{\Delta\to\infty}\Omega_{\widetilde{\Phi}}(\omega;\Delta,\frac{5} {2}) \frac{\sqrt{6}(\sqrt{\Delta}-1)}{\pi\sqrt{1+\sqrt{\Delta}+\Delta}} |\omega|^{1/2} =1,\] (9d) \[\lim_{\Delta\to\infty}\Omega_{\widetilde{\Phi}}(\omega;\Delta,3) 2\left[\ln\left(1+\frac{4}{\omega^{2}}\right)\right]^{-1} =1. \tag{9e}\]
Equation (9c) reveals the \(1/\omega\) signature of the pink noise, obtained for \(\alpha=2\). Logarithmic corrections to the theoretical frequency scaling are present at the LRD-regime boundaries, \(\alpha=1\) and \(\alpha=3\). Similar logarithmic corrections have been linked to phase transitions and critical behavior of certain statistical-mechanical systems [38; 39; 40], as well as demonstrated for a renewal process with power-law-distributed waiting times [41].
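Because the general-\(\alpha\) branch of Eq. (8) is easy to mistranscribe, it is useful to verify it numerically against a direct quadrature of Eqs. (2) and (3) for the Lorentzian pulse spectrum. The sketch below does this with SciPy's `hyp2f1` (assumed tooling); the choice of \(\Delta\), \(\alpha\), and frequencies is arbitrary, and agreement to within the quadrature tolerance is the expected outcome.

```python
import numpy as np
from scipy.special import hyp2f1
from scipy.integrate import quad

def bounds(delta, alpha):
    # Eq. (4): tau_down, tau_up and eta for given Delta and alpha (alpha != 1, 2)
    td = (alpha - 2) * (1 - delta**(1 - alpha)) / ((alpha - 1) * (1 - delta**(2 - alpha)))
    return td, delta * td, (alpha - 1) / (1 - delta**(1 - alpha)) * td**(alpha - 1)

def psd_closed(omega, delta, alpha):
    # general-alpha branch of Eq. (8)
    td, tu, _ = bounds(delta, alpha)
    b = (alpha - 1) / 2
    A = hyp2f1(1.0, b, b + 1.0, -1.0 / (td * omega) ** 2)
    B = hyp2f1(1.0, b, b + 1.0, -1.0 / (tu * omega) ** 2)
    return 2 * (delta**alpha * A - delta * B) / ((delta**alpha - delta) * omega**2)

def psd_quad(omega, delta, alpha):
    # direct integration of Eqs. (2)-(3) in log-space for the Lorentzian pulse spectrum
    td, tu, eta = bounds(delta, alpha)
    f = lambda x: eta * np.exp(x) ** (3 - alpha) * 2 / (1 + (np.exp(x) * omega) ** 2)
    return quad(f, np.log(td), np.log(tu), limit=500)[0]

delta, alpha = 1.0e6, 2.5
for omega in (1e-3, 1e-1, 1e1):
    print(omega, psd_closed(omega, delta, alpha), psd_quad(omega, delta, alpha))
```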
The parameters \(\alpha\) and \(\Delta\) represent two mechanisms shaping the power spectral density in the range of self-similarity: logarithmic corrections and boundedness. Figures 1(a) and 1(c) present plots of the power spectral density \(\Omega_{\widetilde{\Phi}}(\omega;\Delta,\alpha)\) given by Eq. (8) for multiple choices of \(\alpha\) and \(\Delta\), respectively. The corresponding compensated spectra are presented in Figs. 1(b) and 1(d). The chosen values of \(\alpha\) span the entire LRD regime, and are aligned to Eqs. (9). The selected values of \(\Delta\) allow for
examining the scaling behavior of \(\Omega_{\widetilde{\Phi}}(\omega;\Delta,\alpha)\) over different ranges of self-similarity. Compensated spectra aid the identification of the power-law scaling.
_Logarithmic corrections.--_ Figure 1(b) confirms the existence of power-law scaling for \(\alpha=\sfrac{3}{2}\), \(\alpha=2\), and \(\alpha=\sfrac{5}{2}\), as well as the logarithmic corrections to scaling at the boundaries of the LRD regime, \(\alpha=1\) and \(\alpha=3\). The curvature of the compensated spectra increases as \(\alpha\) moves away from the center of the LRD regime, \(\alpha=2\), causing gradual shortening of the power-law scaling ranges. The dashed colored lines in Fig. 1(b) reveal the shape of the compensated spectra for \(\alpha=2\pm\sfrac{6}{7}\) (\(\beta=1\mp\sfrac{6}{7}\)), equivalent to \(1/7\) away from the nearest LRD-regime boundary. These two cases demonstrate that the loss of power-law scaling occurs already inside the LRD regime, not only at its boundaries.
_Boundedness.--_ The theoretical boundaries of the power-law scaling ranges, given by Eq. (4), are marked with dots in Figs. 1(b) and 1(d). The broken power laws affect the spectral scaling in the vicinity of \(\omega\tau_{\uparrow}=1\) and \(\omega\tau_{\downarrow}=1\) by reducing the effective ranges of self-similarity. Figure 1(d) shows that in the center of the LRD regime, \(\alpha=2\), the reduction is by approximately one and a half frequency decades on each side of the self-similarity range, for any of the considered values of \(\Delta\). Power-law scaling does not emerge unless the underlying process is characterized by at least four decades (\(\Delta\geq 10^{4}\)) of scale invariance.
The empirical power spectral densities obtained for realizations of the stochastic process given by Eq. (1) expectedly match the corresponding analytical predictions given by Eq. (8). Examples for \(\alpha=2\) and different values of \(\Delta\) are shown in the inset in Fig. 1(c).
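For completeness, a Monte Carlo sketch of such a realization is outlined below: unit-amplitude, one-sided exponential pulses with Poisson-distributed arrivals and bounded-Pareto durations are accumulated on a time grid, and the periodogram is estimated with Welch's method. The time step, record length, pulse rate, and \((\Delta,\alpha)\) values are illustrative choices only; unit amplitudes are used since the amplitude distribution only affects the overall normalization of the standardized spectrum.

```python
import numpy as np
from scipy.signal import welch

# Monte Carlo realization of the process in Eq. (1): Poisson-distributed arrivals,
# unit amplitudes, one-sided exponential pulses, bounded-Pareto durations.
rng = np.random.default_rng(2)

def pareto_bounds(delta, alpha):
    td = (alpha - 2) * (1 - delta**(1 - alpha)) / ((alpha - 1) * (1 - delta**(2 - alpha)))
    return td, delta * td

def sample_durations(delta, alpha, size):
    td, tu = pareto_bounds(delta, alpha)
    u = rng.random(size)
    return (td**(1 - alpha) + u * (tu**(1 - alpha) - td**(1 - alpha)))**(1 / (1 - alpha))

dt, T, delta, alpha = 0.02, 2.0e4, 1.0e2, 2.5
n = int(T / dt)
K = rng.poisson(T)                        # mean waiting time = <s> = 1 (illustrative rate)
t_k = rng.uniform(0.0, T, K)              # arrival times
s_k = sample_durations(delta, alpha, K)   # duration times

signal = np.zeros(n)
for t0, s in zip(t_k, s_k):
    i0 = int(t0 / dt)
    m = min(n - i0, int(20 * s / dt) + 1)     # truncate each pulse after 20 e-foldings
    signal[i0:i0 + m] += np.exp(-np.arange(m) * dt / s)

f, psd = welch((signal - signal.mean()) / signal.std(), fs=1.0 / dt, nperseg=2**16)
# In the scaling range the periodogram should follow f**(alpha - 3), here f**-0.5.
```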
_Apparent universality.--_ The combined effect of the logarithmic corrections to frequency scaling and the boundedness of the self-similarity range is gauged by comparing the effective scaling of the analytical power spectral density \(\Omega_{\widetilde{\Phi}}(\omega;\Delta,\alpha)\) given by Eq. (8) for various combinations
Figure 1: Frequency power spectral density of the filtered Poisson process with one-sided exponential pulse shape and Pareto-distributed pulse-duration times. Legend color coding applies per row. **Top row**: Varied \(\alpha\) at fixed \(\Delta=10^{8}\). **Bottom row**: Varied \(\Delta\) at fixed \(\alpha=2\). **Left column**: Uncompensated spectra \(\Omega_{\widetilde{\Phi}}(\omega;\Delta,\alpha)\) given by Eq. (8). Dashed lines represent Lorentzian-function spectra. **Right column**: Compensated spectra \(\omega^{3-\alpha}\Omega_{\widetilde{\Phi}}(\omega;\Delta,\alpha)\). The horizontal dashed black lines spanning the entire \(\omega\) range mark the inverse of the compensating prefactors according to Eqs. (9). The regions where the dashed black lines overlap with the colored lines indicate the ranges of power-law scaling. Colored dots mark the theoretical boundaries of the self-similarity ranges, \(\omega\tau_{\uparrow}=1\) and \(\omega\tau_{\downarrow}=1\). **(a)** The inset presents the spectra at the boundaries of the LRD regime, \(\alpha=1\) (\(\beta=2\)) and \(\alpha=3\) (\(\beta=0\)), where logarithmic corrections to \(1/\omega^{\beta}\) scaling apply. The domain represented in the inset is shaded in the outer plot. **(b)** Two ancillary \(\alpha\) cases plotted with dashed colored lines showcase the reduction in the range of self-similarity when \(\alpha\) is \(1/7\) away from the nearest LRD boundary. **(c)** The inset presents the empirical power spectra obtained for realizations of the process, shifted vertically by a factor \(\sqrt{\Delta}\) to avoid overlapping. The color coding of the empirical spectra is aligned to the legend. The overlying solid black lines represent the corresponding analytical results. An additional empirical case \(\Delta=0\), representing a constant pulse duration, is plotted in black and overlaid by a dashed-white Lorentzian.
of the parameters \(\alpha\) and \(\Delta\), to the asymptotic scaling relation \(\lim_{\Delta\to\infty}\beta(\alpha)=3-\alpha\). In order to reduce the effect of the break-point curvature, half a decade is discarded on each side of the theoretical self-similarity range, shifting the boundaries of the power-law fitting range to \(\omega\tau_{\uparrow}=10^{1/2}\) and \(\omega\tau_{\downarrow}=10^{-1/2}\), respectively. Linear least-squares fits are made to logarithmically spaced points in double-logarithmic coordinates. The resulting estimates of the power-law scaling exponents \(\tilde{\beta}\) are presented in Fig. 2. As \(\alpha\) approaches either of the LRD-regime boundaries, the effective \(\tilde{\beta}(\alpha)\) relation diverges from the asymptotic relation \(\beta(\alpha)=3-\alpha\) towards the central value \(\tilde{\beta}=1\). The divergence is stronger for small \(\Delta\).
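The fitting procedure itself is straightforward to reproduce. The sketch below evaluates the closed-form spectrum (the general-\(\alpha\) branch of Eq. (8), cross-checked in the earlier sketch) for \(\alpha=5/2\), i.e., an asymptotic \(\beta=1/2\), restricts the frequency range as described above, and extracts \(\tilde{\beta}\) from a linear least-squares fit in double-logarithmic coordinates; the number of fit points and the \(\Delta\) values are arbitrary choices.

```python
import numpy as np
from scipy.special import hyp2f1

# Effective exponent beta_tilde from a least-squares fit of log10(PSD) vs log10(omega)
# over the trimmed range 10**0.5/tau_up < omega < 10**-0.5/tau_down.

def bounds(delta, alpha):
    td = (alpha - 2) * (1 - delta**(1 - alpha)) / ((alpha - 1) * (1 - delta**(2 - alpha)))
    return td, delta * td

def psd(omega, delta, alpha):
    td, tu = bounds(delta, alpha)
    b = (alpha - 1) / 2
    A = hyp2f1(1.0, b, b + 1.0, -1.0 / (td * omega) ** 2)
    B = hyp2f1(1.0, b, b + 1.0, -1.0 / (tu * omega) ** 2)
    return 2 * (delta**alpha * A - delta * B) / ((delta**alpha - delta) * omega**2)

alpha = 2.5                                   # asymptotic beta = 3 - alpha = 0.5
for delta in (1e2, 1e4, 1e6, 1e8):
    td, tu = bounds(delta, alpha)
    w = np.logspace(np.log10(10**0.5 / tu), np.log10(10**-0.5 / td), 200)
    beta_tilde = -np.polyfit(np.log10(w), np.log10(psd(w, delta, alpha)), 1)[0]
    print(f"Delta = {delta:.0e}: beta_tilde = {beta_tilde:.2f}")
```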
The colored sidebars in Fig. 2 mark the ranges of the estimated exponents \(\tilde{\beta}\) for different values of \(\Delta\). For \(\Delta=10^{8}\) the range is \(\tilde{\beta}\approx 1\pm 0.86\). We recall that Fig. 1(b) demonstrates a notable curvature of the compensated spectra for \(\Delta=10^{8}\) and \(\alpha=2\pm\nicefrac{{6}}{{7}}\) (\(\beta=1\mp 0.86\)). For \(\Delta=10^{2}\) and \(\Delta=10^{4}\) we further recall that even at the center of the LRD regime, \(\alpha=2\) (\(\beta=1\)), the compensated spectra in Fig. 1(d) reveal none, or very short power-law scaling ranges, respectively. The lack of power-law scaling does not affect the power-law fitting procedure. The estimated exponents range within \(\tilde{\beta}\approx 1\pm 0.56\) for \(\Delta=10^{2}\), and \(\tilde{\beta}\approx 1\pm 0.75\) for \(\Delta=10^{4}\).
The findings presented in Figs. 1 and 2 indicate that the effective spectral scaling is biased towards \(\tilde{\beta}=1\), and the bias increases with the decrease of \(\Delta\), or with \(\alpha\) approaching the LRD-regime boundaries. Specifically: (1) For the ranges of the underlying scale invariance shorter than approximately four decades (\(\Delta<10^{4}\)) the power spectral density does not display power-law scaling. (2) For the longer \(\Delta\) ranges the spectral power-law scaling is manifested only for a subrange of exponents centered around \(\alpha=2\) (\(\beta=1\)). (3) The extent of this sub-range increases with the increase of \(\Delta\), up to the asymptotic limit \(\alpha\in(1,3)\) [\(\beta\in(0,2)\)] when \(\Delta\to\infty\).
_Discussion.--_ The results presented in Fig. 2 are obtained under favorable conditions: Power-law fitting is made to logarithmically spaced data points following analytical curves, exact boundaries of the self-similarity ranges are known, and symmetric cutoffs are applied to reduce the effect of the break-point curvature. Despite these measures the effective \(\tilde{\beta}(\alpha)\) relation is biased towards \(\tilde{\beta}=1\) with respect to the asymptotic \(\lim_{\Delta\to\infty}\beta(\alpha)=3-\alpha\). Scaling exponents close to the LRD-regime boundaries, \(\tilde{\beta}=0\) and \(\tilde{\beta}=2\), are not observed for any of the investigated finite values of \(\Delta\).
The power spectral density of a one-sided exponential pulse has asymptotic scaling as \(1/\omega^{0}\) for low frequencies and \(1/\omega^{2}\) for high frequencies. The associated break points in the spectrum affect the self-similarity range, biasing the underlying \(1/\omega^{\beta}\) scaling towards \(\tilde{\beta}=1\). The wider the range for power-law fitting, the more weight is put on the break-point curvature. Experiments show that discarding significant margins on both sides of the fitting range reduces the bias, yielding more accurate scaling estimations when compared with the theoretical predictions. However, for relatively narrow ranges of scale invariance the break-point curvature affects the entire \(1/\omega^{\beta}\) range, inflicting a bias too extensive to retrieve the underlying \(1/\omega^{\beta}\) scaling. Consulting compensated spectra allows for scrutinizing the effective scale invariance.
Narrow ranges of scale invariance prone to the \(\tilde{\beta}\to 1\) bias may overemphasize the universality of \(1/f\)-type scaling. Observing long ranges of scale invariance demands both that the underlying process is long-range self-similar, and that it is measured with precision and scope satisfying the long-range extent [27]. Estimating power-law statistics of unequally sampled or merged data sets has been addressed in Refs. [42; 43].
If the exact boundaries of the self-similarity range are not known, the choice of the power-law fitting range is arbitrary, and possibly biased towards either low or high frequencies. Different methods of spectral scaling estimation may increase the bias, or compensate for it. The smoothness of the effective \(\tilde{\beta}(\alpha)\) relations presented in Fig. 2 suggests that knowing the boundaries of the self-similarity range might facilitate tracing back from the observed scaling to the underlying scaling of the studied process.
_Conclusions.--_ The results presented here demonstrate that the estimated spectral scaling of long-range dependent processes may be biased towards \(1/f\) in the presence of finite-size effects. This bias results from the curvature in the spectra due to broken power-law scaling, as well as the logarithmic corrections associated with long-range dependence. Identification of the true power-law scaling requires scale invariance over several decades in frequency in the underlying process, as shown in Fig. 1(d). Empirical data seldom display accordingly broad ranges of self-similarity [7; 8; 9; 10; 11; 12], suggesting a spectral scaling bias at least in the case of processes that are well described by a superposition of uncorrelated pulses. Considering that a variety of physical phenomena has been canonically
Figure 2: Estimated power-law scaling exponents \(\tilde{\beta}\) of the analytical power spectral density curves \(\Omega_{\tilde{\Phi}}(\omega;\Delta,\alpha)\) given by Eq. (8) for various ranges \(\Delta\) of the underlying scale invariance, and in the entire LRD regime \(1\leq\alpha\leq 3\). The dashed gray line marks the asymptotic scaling relation \(\lim_{\Delta\to\infty}\beta(\alpha)=3-\alpha\). The solid gray line marks \(\tilde{\beta}=1\) representative of the \(1/f\) noise. The colorful vertical sidebars mark the range of \(\tilde{\beta}\) observed for different values of \(\Delta\). Legend color coding is aligned to Fig. 1(d).
modeled in this way [28; 29; 30; 31; 32; 33], the observed \(1/f\) universality may be overstated. Whether a similar bias is present for other complex-dynamics systems requires further investigation.
###### Acknowledgements.
This work was supported by the UiT Aurora Centre Program, UiT The Arctic University of Norway (2020). A. T. was supported by Tromso Research Foundation under Grant No. 19_SG_AT.
|
2307.11370 | Plasmon Excitations Across the Charge-Density-Wave Transition in
Single-Layer TiSe$_2$ | $1T$-TiSe$_2$ is believed to posses a soft electronic mode, i.e., plasmon or
exciton, that might be responsible for the exciton condensation and
charge-density-wave (CDW) transition. Here, we explore collective electronic
excitations in single-layer $1T$-TiSe$_2$ by using the ab-initio
electromagnetic linear response and unveil intricate scattering pathways of
two-dimensional (2D) plasmon mode near the CDW phase. We found the dominant
role of plasmon-phonon scattering, which in combination with the CDW gap
excitations leads to the anomalous temperature dependence of the plasmon
linewidth across the CDW transition. Below the transition temperature $T_{\rm
CDW}$ a strong hybridization between 2D plasmon and CDW excitations is
obtained. These optical features are highly tunable due to
temperature-dependent CDW-related modifications of electronic structure and
electron-phonon coupling and make CDW-bearing systems potentially interesting
for applications in optoelectronics and low-loss plasmonics. | Zahra Torbatian, Dino Novko | 2023-07-21T05:48:18Z | http://arxiv.org/abs/2307.11370v2 | # Plasmon excitations across the charge-density-wave transition in single layer TiSe\({}_{2}\)
###### Abstract
1\(T\)-TiSe\({}_{2}\) is believed to possess a soft electronic mode, i.e., plasmon or exciton, that might be responsible for the exciton condensation and charge-density-wave (CDW) transition. Here, we explore collective electronic excitations in single-layer 1\(T\)-TiSe\({}_{2}\) by using the _ab-initio_ electromagnetic linear response and unveil intricate scattering pathways of the two-dimensional (2D) plasmon mode. We find a dominant role of plasmon-phonon scattering, which in combination with the CDW gap excitations leads to the anomalous temperature dependence of the plasmon linewidth across the CDW transition. Below the transition temperature \(T_{\text{CDW}}\) a strong hybridization between 2D plasmon and CDW excitations is obtained. These optical features are highly tunable due to temperature-dependent CDW gap modifications and are argued to be universal for CDW-bearing 2D materials.
Transition metal dichalcogenides (TMDs) host a variety of correlated ordered states, which makes them an ideal playground for exploring and manipulating different fundamental interactions in condensed matter [1; 2]. Quasi-two-dimensional 1\(T\)-TiSe\({}_{2}\) belongs to this category, having a rich phase diagram, including an unconventional charge density wave (CDW) [3] and superconductivity [4], tunable with temperature [3], pressure [5], and doping [4; 6]. The origin of the CDW order in TiSe\({}_{2}\) is still actively debated, with proposed underlying mechanisms ranging from purely electronic [3; 7; 8] and purely phononic [9] to a combination of both [10; 11; 12; 13; 14; 15]. In the former case, the ordered state is stabilized by the soft electronic mode [8], i.e., exciton or plasmon, which constitutes the intriguing excitonic insulator scenario [16]. However, despite being observed in TiSe\({}_{2}\), the role of the soft plasmon in the CDW formation is unclear, especially since it is strongly Landau damped in the relevant momentum region [17].
The interplay of plasmons and correlated states can result in some unexpected and desirable optoelectronic properties [18], like tunable, long-lived and flat correlated plasmon modes [19; 20]. In bulk 2\(H\) TMDs, such as TaSe\({}_{2}\), TaS\({}_{2}\), and NbSe\({}_{2}\), the experimentally observed negative plasmon dispersion was explained in terms of coupling with the CDW state [21; 22]. Furthermore, the plasmon mode in bulk TiSe\({}_{2}\) was shown to be highly modified across the CDW transition due to the CDW gap excitations [8; 17; 23; 24]. The two-dimensional (2D) plasmon modes in atomically thin TMDs [25; 26; 27] are characterized by long-wavelength gapless dispersion, low losses, and a broad spectral range, opening further possibilities for the CDW-plasmon coupling. For instance, optical measurements of TaSe\({}_{2}\) thin films below the transition temperature \(T_{\text{CDW}}\) found an intricate hybrid mode consisting of 2D plasmon and CDW excitations showing anomalous broadening [28]. It is therefore clear that, besides the doping [29; 30] and dielectric environment [31; 32], controlling the CDW order is a highly appealing pathway for tuning the plasmonic properties, and that the CDW-bearing 2D materials are potentially desirable for applications in plasmonics and optoelectronics. Hence, the corresponding microscopic description of the coupling between plasmon and CDW is not only essential for unveiling the origin of the CDW formation, but also for boosting the optical properties of TMDs.
Here, we investigate the influence of the CDW transition on the electron excitation spectra of the TiSe\({}_{2}\) monolayer by means of the density functional perturbation theory (DFPT) [33] and the current-current linear response formalism [34; 30]. The present approach is able to disentangle the relevant plasmon damping channels, such as electron-phonon scattering and Landau damping due to CDW gap excitations. The results show how the CDW-related structural distortions impact the electronic bands and phonon energies, which in turn lead to remarkable modifications of the 2D plasmon. We show that the coupling of the plasmon and phonons (in particular, the soft CDW phonon) is responsible for the drastic increase of the plasmon decay rate for \(T>T_{\text{CDW}}\), while the closing of the CDW electronic gap accounts for the increase of the plasmon energy. In fact, in accordance with the optical conductivity measurements [23], we show that the CDW interband excitations are suppressed above \(T_{\text{CDW}}\), which in combination with the plasmon-phonon scattering channel leads to the anomalous temperature dependence of the plasmon linewidth. Interestingly, below \(T_{\text{CDW}}\) these CDW excitations are more pronounced and are coupled to the 2D plasmon, forming a hybrid CDW-plasmon mode in close resemblance to the coupled mode recently found in TaSe\({}_{2}\)[28].
The ground-state electronic structure calculations are performed by means of the plane-wave density-functional-theory (DFT) code Quantum Espresso [35], using the semi-local exchange-correlation PBE functional. Phonon dynamics and electron-phonon coupling (EPC), which are found to be crucial for the CDW properties in TiSe\({}_{2}\)[12; 36], are obtained within the DFPT [33] framework, and then interpolated with maximally-localized Wannier functions [37] and the EPW code [38]. The plasmon polariton dispersion and optical conductivity are calculated with an _ab-initio_ current-current linear response method suitable for simulating the transverse and longitudinal optical response of 2D materials [34], which additionally includes higher-order electron-phonon scattering contributions [30]. This allows for an accurate assessment of the plasmon damping and energy renormalization due to EPC [39; 40; 30]. Further computational details can be found in the Supplemental Material (SM) [41].
The CDW transition in TiSe\({}_{2}\) occurs at \(T_{\rm CDW}\sim 200\) K [3; 42], where the standard \(1\times 1\) unit cell is modified into a commensurate \(2\times 2\) structure with periodic lattice distortions below \(T_{\rm CDW}\)[41]. As shown in Figs. 1(a) and 1(b), this structural transition is accompanied by strong modifications of the electronic band structure near the Fermi level. Namely, the Ti-\(3d\) electron-like states that appear at the M point of the Brillouin zone (BZ) of the \(1\times 1\) cell are folded back to the \(\Gamma\) point in the CDW \(2\times 2\) structure, where they interact strongly with the Se-\(4p\) hole-like states. This interaction leads to the opening of the CDW gap between the two Se-\(4p\) valence states (denoted v\({}_{1}\)) and two out of the three Ti-\(3d\) conduction bands (denoted c\({}_{2}\)), while the third one remains intact. The temperature dependence of this CDW gap and the total density of states (DOS) are depicted in Figs. 1(c) and 1(d), where the CDW gap behaves in accordance with the mean-field theory result for a second-order transition, i.e., as \(\Delta E_{c_{2}-v_{1}}\propto\tanh(a\sqrt{T_{\rm CDW}/T-1})\)[42; 43]. Since the present results are obtained with the PBE exchange-correlation DFT functional, the transition temperature is overestimated, \(T_{\rm CDW}^{\rm PBE}\approx 1100\) K; a more accurate result could be obtained with a proper inclusion of electron correlations [36; 15] and anharmonic effects [44]. Despite that, the closing of the gap and the relative difference between the low-temperature \(T\ll T_{\rm CDW}\) and high-temperature \(T>T_{\rm CDW}\) values are in good agreement with the experimental results as obtained with angle-resolved photoemission spectroscopy [45] and resonant inelastic x-ray scattering (RIXS) [46]. Further, due to the gap opening and modifications of the electronic bands, the DOS is significantly increased at certain energies (e.g., at the gap edges), in accordance with the tunneling spectroscopy studies [47; 48], which, as we will show below, leads to the formation of well-defined (interband) and temperature-dependent absorption peaks in the optical conductivity [see the green and purple arrows in Figs. 1(b) and 1(d)].
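As a side note, the mean-field form quoted above is simple to fit; the sketch below (Python/SciPy, assumed tooling) recovers \((\Delta_{0},a,T_{\rm CDW})\) from a set of gap values, where the listed points are placeholders rather than the calculated gaps of Fig. 1(c).

```python
import numpy as np
from scipy.optimize import curve_fit

# Mean-field form of the CDW gap: Delta_E(T) = Delta0 * tanh(a * sqrt(Tcdw/T - 1)).
# The data points below are placeholders standing in for the calculated gap of Fig. 1(c).

T = np.array([100, 300, 500, 700, 900, 1000, 1050])          # K
gap = np.array([0.48, 0.47, 0.44, 0.38, 0.27, 0.18, 0.12])   # eV, placeholder

def mf_gap(T, Delta0, a, Tcdw):
    x = np.clip(Tcdw / T - 1.0, 0.0, None)     # the gap closes for T >= Tcdw
    return Delta0 * np.tanh(a * np.sqrt(x))

popt, _ = curve_fit(mf_gap, T, gap, p0=(0.5, 1.0, 1100.0))
print("Delta0 = %.2f eV, a = %.2f, Tcdw = %.0f K" % tuple(popt))
```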
The Se-\(4p\) states at \(\Gamma\) and the Ti-\(3d\) states around the M point are strongly coupled with the acoustic phonon mode at \(\mathbf{q}=\) M, which in turn results in a significant softening of the latter mode as a function of temperature [see Fig. 1(e)]. In our calculations, the soft CDW phonon becomes unstable below \(T_{\rm CDW}^{\rm PBE}\approx 1100\) K, driving the system into the new stable \(2\times 2\) configuration with a distorted lattice. The CDW \(2\times 2\) phase has no soft unstable phonons at low temperatures, as shown in Fig. 1(f). As a consequence of this temperature-dependent electronic structure (e.g., opening of the gap) and phonon dynamics (e.g., the sensitive soft phonon), TiSe\({}_{2}\) is characterized by highly tunable EPC and optical properties.
In Fig. 2(a) we show the results for the electron-phonon (Eliashberg) spectral function \(\alpha^{2}F(\omega)\) for representative temperatures across the CDW transition. This spectral function quantifies the EPC strength at a given phonon energy \(\omega\). The overall coupling strength turns out to be quite small for the \(2\times 2\) CDW phase below \(T_{\rm CDW}\), mostly due to the gap opening and the concomitant small DOS at the Fermi level [see Fig. 1(d)]. When the gap is partially closed for \(T\lesssim T_{\rm CDW}\) or fully closed for \(T>T_{\rm CDW}\), \(\alpha^{2}F(\omega)\) is significantly increased over the whole phonon-energy range. A particularly strong contribution to the Eliashberg function \(\alpha^{2}F(\omega)\) (i.e., the low-energy peak) comes from the soft CDW phonon mode around the \(\mathbf{q}=\) M point of the Brillouin zone [49]. Note how the phonon dispersion modifications of the soft phonon in Fig. 1(e) are reflected in \(\alpha^{2}F(\omega)\) as a shift of the main peak towards higher energies. Figure 2(b) shows the corresponding results for the electron-hole (optical) scattering rate due to EPC, \(1/\tau_{\rm ep}\)[41; 50], calculated in the high-energy limit (i.e., for \(\omega\) larger than the highest phonon energy). This decay rate can be obtained from \(\alpha^{2}F(\omega)\) and it is an integral part of the Drude optical conductivity [51; 52], describing the scattering processes between the screened intraband electron-hole excitations (e.g., plasmons) and phonons [30]. Following the temperature behavior of \(\alpha^{2}F(\omega)\), the calculated scattering rate \(1/\tau_{\rm ep}\) shows a dramatic transition from the low values
Figure 1: Electronic band structure of \(1T\)-TiSe\({}_{2}\) along high-symmetry points of the Brillouin zone for (a) \(1\times 1\) phase at high-temperatures \(T>T_{\rm CDW}\) and (b) \(2\times 2\) CDW structure with periodic lattice distortions at \(T\ll T_{\rm CDW}\). The asterisk sign for high-symmetry points denotes the reconstructed \(2\times 2\) Brillouin zone. The black arrow denotes the CDW gap opening at k = \(\Gamma^{*}\), while purple and green arrows show electronic transitions with highest density of states. (c) Temperature modification of the CDW gap at k = \(\Gamma^{*}\) as obtained with DFT and experiments [45; 46]. Note the different transition temperature \(T_{\rm CDW}\) obtained with theory and experiment. (d) Electronic density of states (DOS), showing the full gap opening as a function of temperature. (e) The phonon dispersion for the \(1\times 1\) phase obtained with \(T=1600\) K, \(T=1300\) K, and \(T=1150\) K (from red to blue). (f) The phonon bands for the CDW \(2\times 2\) structure obtained at \(T=300\) K.
(below \(10\,\mathrm{meV}\)) at \(T<T_{\mathrm{CDW}}\) to the large-damping region for \(T\gtrsim T_{\mathrm{CDW}}\). This result agrees fairly well with the decay rate for the bulk plasmon as obtained with infrared optical measurements [23] and electron energy loss spectroscopy [17]. In Ref. [23] a drastic decrease of \(1/\tau\) below \(T_{\mathrm{CDW}}\) was explained in terms of a reduced scattering phase space due to the opening of the CDW gap, while no discussion was provided on the relevant scattering channel. The same effect was elaborated further in Ref. [17], where the strong modification of \(1/\tau\) was attributed to suppression (enhancement) of the Landau damping due to interband excitations over the CDW gap below (above) \(T_{\mathrm{CDW}}\). Note, however, that they use a phenomenological model to introduce the CDW interband excitations into the total response function. The results presented in Fig. 2 suggest, on the other hand, that the alterations of \(1/\tau\) dominantly come from the EPC. We will corroborate this further by performing the full calculation of the plasmon dispersion and damping. Note in passing that these results might also provide some answers regarding the anomalous temperature dependence of the resistivity \(\rho\) in TiSe\({}_{2}\)[53, 3], characterized by a peak around \(T_{\mathrm{CDW}}\), since \(\rho\propto 1/\tau\).
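For orientation, the connection between \(\alpha^{2}F(\omega)\) and the high-energy optical scattering rate can be sketched with a standard Allen/Shulga-type expression, \(\hbar/\tau_{\rm ep}(T)=2\pi\int\mathrm{d}\Omega\,\alpha^{2}F(\Omega)\coth(\Omega/2k_{B}T)\); whether this is precisely the formula employed in Refs. [30; 50] is an assumption here, and the two-peak \(\alpha^{2}F\) used below is a toy model rather than the calculated spectral function.

```python
import numpy as np
from scipy.integrate import trapezoid

# High-frequency (omega >> phonon energies) optical scattering rate from the
# Eliashberg function, using an Allen/Shulga-type expression (assumed form):
#   hbar/tau_ep(T) = 2*pi * Int dW a2F(W) * coth(W / 2 k_B T).
# a2F below is a toy two-peak model, not the calculated spectral function.

kB = 8.617e-5                                     # eV/K
W = np.linspace(1e-4, 0.040, 2000)                # phonon energy grid (eV)
a2F = (0.3 * np.exp(-0.5 * ((W - 0.008) / 0.002) ** 2)
       + 0.2 * np.exp(-0.5 * ((W - 0.030) / 0.004) ** 2))

def inv_tau_ep(T):
    return 2 * np.pi * trapezoid(a2F / np.tanh(W / (2 * kB * T)), W)   # in eV

for T in (300, 600, 900, 1200):
    print(f"T = {T:4d} K: 1/tau_ep = {1000 * inv_tau_ep(T):.1f} meV")
```

In this form the strong temperature dependence enters both through the thermal factor and, in the actual calculation, through the temperature-dependent \(\alpha^{2}F(\omega)\) itself.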
The calculated optical excitation properties of single-layer TiSe\({}_{2}\) are presented in Fig. 3. The low-energy electron excitation spectra \(A(q,\omega)\) as a function of momentum \(q\) and for various temperatures near \(T_{\mathrm{CDW}}\) are depicted in panels (a)-(d). The spectra are dominated by the 2D plasmon mode characterized by a \(\sqrt{q}\) dispersion. As the temperature approaches \(T_{\mathrm{CDW}}\) the plasmon energy increases (for all momenta \(q\)), which agrees well with the behavior of the bulk plasmon in TiSe\({}_{2}\)[17, 23]. This effect comes from the abrupt increase of the electronic DOS at the Fermi level and, consequently, the increase of the Drude weight and effective number of carriers [49] once the gap is closed [see Fig. 1(d)]. Slightly below \(T_{\mathrm{CDW}}\) the plasmon mode interacts with the CDW interband excitations, forming a hybrid mode. For \(T=900\,\mathrm{K}\) this coupling is manifested as a shoulder close to the plasmon energy, while for \(T=1000\,\mathrm{K}\) an actual avoided crossing is observed between the two modes [see the black arrows in Figs. 3(a) and 3(b)]. The spectral lineshape of this hybrid CDW-plasmon mode is demonstrated further in Figs. 3(e) and 3(f), where we show \(A(q,\omega)\) near the hybridization energy. The strong deviation from the non-interacting Lorentzian lineshape is evident for both \(T=900\,\mathrm{K}\) and \(T=1000\,\mathrm{K}\), with even a two-peak structure observed in the latter case. A strikingly similar hybrid CDW-plasmon mode was recently observed in \(2H\)-TaSe\({}_{2}\) van der Waals thin films by means of Fourier-transform infrared spectroscopy [28]. This agreement shows that the hybrid CDW-plasmon modes might be universal in CDW-bearing quasi-2D materials, since the 2D plasmon modes are usually well-defined in the low-energy region where the CDW excitations are possible. On the other hand, bulk plasmons are less dispersive and have finite energy at long wavelengths, and therefore the direct CDW-plasmon coupling is less probable in the bulk CDW materials [17].
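The \(\sqrt{q}\) dispersion and its sensitivity to the carrier density can be illustrated with the textbook long-wavelength expression for an unscreened two-dimensional electron gas, \(\omega_{\rm pl}(q)=\sqrt{n_{2D}e^{2}q/2\varepsilon_{0}m^{*}}\). The sketch below is not the ab-initio dispersion of Fig. 3; the densities and effective mass are illustrative stand-ins for the Drude-weight increase upon gap closing.

```python
import numpy as np

# Textbook long-wavelength 2D plasmon dispersion (SI units, vacuum screening):
#   omega_pl(q) = sqrt(n2d * e^2 * q / (2 * eps0 * m_eff)),
# illustrating the sqrt(q) behaviour and the blueshift with carrier density.
# All numbers are illustrative only (not the ab-initio values of the paper).

e, eps0, me, hbar = 1.602e-19, 8.854e-12, 9.109e-31, 1.055e-34
m_eff = 0.5 * me                               # assumed effective mass

q = np.linspace(1e7, 5e8, 50)                  # wave vector (1/m)
for n2d in (5e16, 2e17):                       # carrier density below / above T_CDW (illustrative)
    w = np.sqrt(n2d * e**2 * q / (2 * eps0 * m_eff))
    print(f"n = {n2d:.0e} m^-2: plasmon energy at q = {q[-1]:.1e} 1/m -> {hbar * w[-1] / e:.2f} eV")
```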
To comprehend the CDW-plasmon coupling more closely, we calculate the real part of the interband optical conductivity \(\sigma_{1}^{\mathrm{inter}}(\omega)\) for several relevant temperatures [Fig. 3(g)]. The low-temperature result (i.e., \(600\,\mathrm{K}\)) is characterized by two well-defined interband excitation peaks (at \(\sim 2\,\mathrm{eV}\) and \(4\,\mathrm{eV}\)), which correspond to the CDW gap excitations pointed out in Fig. 1 with the green and purple arrows. The energy position and shape of these interband peaks are in good agreement with the low-temperature measurements as obtained with infrared spectroscopy [23] and RIXS [46]. As the temperature increases the two peaks shift to lower energies and are finally suppressed around and above \(T_{\mathrm{CDW}}\), since the CDW gap is closed. This observation is contrary to the claims made in Ref. [17], while in line with the above-mentioned experiments [23, 46]. Further, from the results of \(\sigma_{1}^{\mathrm{inter}}(\omega)\) it is clear that, depending on the temperature, the 2D plasmon mode interacts with one of these two CDW excitations. Namely, for \(T=900\,\mathrm{K}\) the plasmon is coupled to the low-energy excitation, while for \(T=1000\,\mathrm{K}\) to the high-energy one.
We are now in a position to inspect the total contribution to the plasmon damping \(\Gamma_{\mathrm{pl}}\), which consists of both interband Landau damping and plasmon-phonon parts, and analyze its temperature dependence. From Fig. 3(h) we see that the total plasmon damping increases with increasing energy (and hence momentum \(q\)), which was also observed for the 2D plasmon in TaSe\({}_{2}\)[28], and comes from the 2D nature of Coulomb screening, for which \(\Gamma_{\mathrm{pl}}\propto q\)[41]. Interestingly, for a selected plasmon energy \(\omega_{\mathrm{pl}}\) (i.e., momentum \(q\)), \(\Gamma_{\mathrm{pl}}\) shows an unconventional temperature dependence, i.e., it decreases with increasing temperature below \(T_{\mathrm{CDW}}\) and rapidly increases around \(T_{\mathrm{CDW}}\). We explain this anomalous dependence as a consequence of the interplay between the Landau damping due to CDW gap excitations and plasmon-phonon scattering. From Fig. 2(b) we see that the scattering rate due to EPC abruptly increases around \(T_{\mathrm{CDW}}\), while the purely interband contribution to \(\Gamma_{\mathrm{pl}}\) is gradually
Figure 2: (a) Eliashberg electron-phonon spectral function \(\alpha^{2}F(\omega)\) as a function of the phonon energy and calculated for different temperatures across the CDW transition. The first two light blue lines are for the CDW \(2\times 2\) phase, while the rest is calculated for the normal \(1\times 1\) structure. (b) The corresponding electron-hole (or optical) scattering rate due to EPC \(1/\tau_{\mathrm{ep}}\) as a function of temperature (blue dots). \(1/\tau_{\mathrm{ep}}\) is calculated here in the high-energy regime. Orange squares and green diamonds show the results for \(1/\tau\) as extracted from the infrared optical measurements [23] and electron energy loss spectroscopy [17], respectively. Note again the different transition temperature \(T_{\mathrm{CDW}}\) obtained with theory and experiment.
suppressed when temperature is increasing towards \(T_{\rm CDW}\) [see Fig. 3(i)]. Note that the EPC contribution to \(\Gamma_{\rm pl}\) is much larger than the Landau damping coming from the CDW gap excitations. Considering the general importance of the EPC in TiSe\({}_{2}\), e.g., for the formation of CDW and superconductivity [10; 12], it is not entirely surprising that it plays an important role in plasmon dynamics. We point out that the same mechanism might also be behind the anomalous temperature dependence of 2D plasmon damping in the few-layer TaSe\({}_{2}\)[28].
As for the bulk plasmon in TiSe\({}_{2}\), note that its energy ranges from \(50\,\)meV to \(150\,\)meV [8; 13; 17; 23], which is well below the main CDW excitation peaks [23; 46], and it follows from Fig. 3(g) that the corresponding interband Landau damping will be small both below and above \(T_{\rm CDW}\). This suggests that the CDW gap excitations are unlikely to explain the sudden increase of the plasmon linewidth above \(T_{\rm CDW}\) and that the EPC is the dominant plasmon decay channel also in bulk TiSe\({}_{2}\), as already indicated in Fig. 2(b).
The present study demonstrates the great potential of correlated TMDs as highly tunable plasmonic materials with unconventional optical features induced by the charge-ordered states. Given the rich phase diagram of TiSe\({}_{2}\), it is expected that the CDW-plasmon coupling could be further controlled with carrier doping [4; 6] and pressure [5], or that additional hybrid modes might be activated as a consequence of interactions with superconductivity-related excitations [54]. Note that similar features are expected in other 2D semi-metallic and metallic TMDs hosting a CDW order, such as ZrTe\({}_{2}\)[55], TiTe\({}_{2}\)[56], NbSe\({}_{2}\)[57], or WTe\({}_{2}\)[58]. Finally, our work supports the idea that plasmon modes could be utilized as a gauge for tracking ordered phases and the corresponding intrinsic excitation mechanisms in correlated systems [18; 19; 20; 28; 54].
D.N. acknowledges financial support from the Croatian Science Foundation (Grant no. UIP-2019-04-6869) and from the European Regional Development Fund for the "Center of Excellence for Advanced Materials and Sensing Devices" (Grant No. KK.01.1.1.01.0001). Z.T. acknowledges financial support from the Iran Science Elites Federation. Part of the computational resources were provided by the DIPC computing center.
|
2309.00909 | There is power in general equilibrium | The article develops a general equilibrium model where power relations are
central in the determination of unemployment, profitability, and income
distribution. The paper contributes to the market forces versus institutions
debate by providing a unified model capable of identifying key interrelations
between technical and institutional changes in the economy. Empirically, the
model is used to gauge the relative roles of technology and institutions in the
behavior of the labor share, the unemployment rate, the capital-output ratio,
and business profitability and demonstrates how they complement each other in
providing an adequate narrative to the structural changes of the US economy. | Juan Jacobo | 2023-09-02T11:14:35Z | http://arxiv.org/abs/2309.00909v1 | # There is power in general equilibrium
###### Abstract.
The article develops a general equilibrium model where power relations are central in the determination of unemployment, profitability, and income distribution. The paper contributes to the "market forces versus institutions" debate by providing a unified model capable of identifying key interrelations between technical and institutional changes in the economy. Empirically, the model is used to gauge the relative roles of technology and institutions in the behavior of the labor share, the unemployment rate, the capital-output ratio, and business profitability and demonstrates how they complement each other in providing an adequate narrative to the structural changes of the US economy.
_Keywords._ Power relations, unemployment, automation, labor institutions.
_JEL Classification._ C78, D24, D33, E11, J64, J65, O33, P16
* Department of Economics, Externado University of Colombia, [email protected]
## I Introduction
Over the past 70 years, the US economy has seen dramatic changes in income distribution, technology adoption, corporate profitability, and unemployment rates. The years from the late 1940s to the mid-1970s marked a period with a considerable reduction in income inequality and a slightly increasing labor share, albeit with a higher ratio of capital to value added, a surge in the rate of unemployment, and a deteriorated profitability of businesses. Most of these patterns reverted in the early 1980s and led to a new era with a sharply uneven distribution in favor of upper income groups.
While there have been many discussions about the causes of these macro patterns, there is not a fully compelling explanation. The prevailing theories can be divided into
market-driven versus institution-driven stories. The market-driven approach posits that technical change (particularly automation), globalization, and industrial concentration have created a bias in favor of high-skilled labor and the owners of capital, who are commonly in the top percentiles of the distribution of income (see, e.g., Autor, Dorn, Katz, Patterson, and Van Reenen (2020); Hemous and Olsen (2022); Moll, Rachel, and Restrepo (2022)). The two main problems with this approach are that it cannot account for the fact that not all nations subject to similar technological forces have seen an equal rise of top income shares, and that it is hard to reconcile with the behavior of key macro trends like the rate of unemployment and several measures of corporate profitability during the postwar period (Stansbury and Summers, 2020).
The institution-driven stories postulate that union membership, minimum wages, tax policy, preferences for redistribution, and broadly defined organizational practices in the labor market have had a major role in macroeconomic outcomes and the evolution of income inequality (see, e.g., Piketty, Saez, and Stantcheva (2014); Stansbury and Summers (2020); Farber, Herbst, Kuziemko, and Naidu (2021)). The difficulty is that it is generally challenging to represent the multidimensional character of labor institutions in a tractable model that highlights the relative role of each specific factor.
The first goal of this paper is to present a comprehensive general equilibrium model capturing key aspects of the market-driven and institution-driven narratives to assess their relative roles in the evolution of inequality and macroeconomic outcomes. To do so, I merge the task-based formalism of Acemoglu and Restrepo (2018) with the search and matching models of equilibrium unemployment, while relaxing the unrealistic assumption that firms can and do include the "required" rate of return as a cost of production. This presents a more realistic model of capitalist economies by explicitly revealing how corporate profitability is determined by power relations between workers and firms, and how these power relations are endogenously formed by norms and organizational practices defining the bargaining protocol of wages. Furthermore, the
model explores the dynamic interrelation between technical and institutional changes, and provides a clear and tractable framework illustrating how unemployment and the functional distribution of income are affected by automation, labor productivity growth, and specific labor institutions like union membership and real minimum wages.
The second goal of the paper is to gauge the relative roles that technical and institutional changes had in the US economy over the postwar period by comparing the predicted paths of the model with their empirical counterparts. I consider macro-level time series of economic, political, and institutional data.
Basing the initial analysis on economic time series evidence, and employing a parsimonious calibration strategy in which the only directly estimated parameters are the measure of automation and the bargaining power of labor (both recovered from the theoretical equilibrium conditions), the model reaches two main results. First, the rise and fall of worker power before and after the mid-1970s is probably the major structural change responsible for the behavior of the labor share, corporate profitability, and the unemployment rate. This suggests that an adequate understanding of macroeconomic trends requires a careful study of the institutional and politico-economic variables determining the bargaining power of labor. Second, technical change (particularly automation) is nonetheless a key factor determining the behavior of the labor share and the ratio of capital to value added. Altogether, by studying a wide array of macroeconomic variables over the entire postwar period, the evidence shows that the market-driven and institution-driven stories likely complement, rather than substitute for, each other in providing a consistent narrative for the main events of the US economy.
The time series on labor institutions are used to supplement the previous results in two ways. First, they illustrate that the predicted paths of worker power derived from the calibration strategy of the model are consistent with the observed variations in labor institutions in the US. Specifically, worker power increased between the 1940s to the late 1970s when the institutional support to labor was generally rising, and decreased
steadily thereafter when unions, minimum wages, and top marginal income tax rates simultaneously declined. Second, the data exhibits a clear association between the rise and fall of the institutional support to labor with the "Communist threat", which refers to the class compromise between capital and labor induced by the fear that communism could replace the foundations of capitalism (Gerstle, 2022). This presents a plausible story explaining why Democrat and Republican governments alike supported the construction of a welfare state in the US before the mid-1970s, but dismantled some of its foundations afterwards.
Combining these empirical results, the model sheds new light on widely studied phenomena like the wage-premium and the association of corporate markups with market concentration. The evidence shows that corporate profitability is highly correlated with the wage-premium since the 1950s, suggesting that similar mechanisms driving up the rate of return of capital are also raising the relative wage of high-skilled labor. The data also indicates that the behavior of corporate markups is only consistent with the trends of market concentration after the early 1980s (Kwon, Ma, and Zimmermann, 2023), while it is generally well aligned with the behavior of worker power throughout the postwar period. Thus, given the centrality of labor power in explaining the behavior of business profitability, it is likely that the relations between capital and labor have been key actors shaping the behavior of the wage-premium and corporate markups in the US.
To the best of my knowledge, this is the first paper to connect--both theoretically and empirically--the growing literature on the political economy of income distribution, labor institutions, and political preferences (see, e.g., Piketty and Saez (2003); Piketty, Saez, and Stantcheva (2014); Farber, Herbst, Kuziemko, and Naidu (2021)) with the numerous studies on the trends in the labor share, the unemployment rate, and the capital-output ratio. Similar to Stansbury and Summers (2020), DiNardo, Hallock, and Pischke (2000), Krueger and Ashenfelter (2022), Taschereau-Dumouchel (2020),
and Acemoglu, He, and le Maire (2022), the paper establishes an explicit connection between worker power, the distribution of income, and the rents transferred from labor to capital. However, unlike the cited literature, the model corrects for possible confounding factors by developing a methodology that explicitly distinguishes the relative roles of technological and institutional changes in economic dynamics.
The paper also contributes to the growing literature on the effects of technical progress and automation on labor demand and income distribution (Aghion and Howitt, 1994; Acemoglu and Restrepo, 2018; Hemous and Olsen, 2022; Moll, Rachel, and Restrepo, 2022). Relative to these papers, I show how technical change is explicitly associated with technological unemployment in a dynamic setting, and why the effects of automation always depend on the specific institutional arrangements defining the bargaining power of labor. Furthermore, the model establishes the conditions for a balanced growth path (BGP) with positive growth and reveals how they are associated with the institutions enabling the existence of sufficiently large profits for firms.
Finally, this work extends the literature attempting to explain the trends of key macroeconomic variables in the US economy (Goldin and Katz, 2010; Karabarbounis and Neiman, 2014; Farhi and Gourio, 2018; Autor, Dorn, Katz, Patterson, and Van Reenen, 2020; Barkai, 2020; Stansbury and Summers, 2020). Similar to Stansbury and Summers (2020), the paper identifies worker power as a major source of the structural changes in the US over the postwar period. However, by revealing the links between technical and institutional changes, the model also supports the findings in Bergholt, Furlanetto, and Maffei-Faccioli (2022) and Moll, Rachel, and Restrepo (2022) by showing that automation contributed to the fall of the labor share in the mid-1970s and in the early 2000s, and to the rise of the capital-output ratio since the late 1960s.
The next section describes the basic environment of the model. Section III defines the bargaining protocol of wages and its connection with the equilibrium rate of return
of capital. Section IV reveals the conditions for a general equilibrium with positive growth and derives the key results on transitional dynamics. Section V presents an approximate calibration to the model and evaluates the roles of technology and institutions in the structural changes of the US economy. Section VI shows some channels through which worker power is associated with the wage-premium and disentangles the extent to which business markups are related to market concentration. Section VII concludes. The main Appendix generalizes the model in Section II and complements the theoretical results in Sections III and IV. The online Appendix presents all the relevant proofs and derivations of the paper, the details of the calibration exercise, and the description of the data along with additional robustness tests.
## II Model
This section presents the technology and price structure of the model, describes the matching function and the dynamics of aggregate employment and capital with the automation and creation of new tasks, and characterizes the value functions of capitalists and workers.
### Environment
The description of the production process follows the formalism of Acemoglu and Restrepo (2018) by emphasizing the role of capital and labor in the production of tasks \(j\) indexed over a normalized space \([M_{t}-1,M_{t}]\). Tasks with \(j\in(J_{t},M_{t}]\) are produced with labor, and have an effective unit cost \(W_{t}/A_{t}^{l}(j)\)--\(W_{t}\) is the nominal wage per worker and \(A_{t}^{l}(j)\) is the task-specific labor-augmenting technology. Respectively, tasks \(j\in[M_{t}-1,J_{t}]\) are produced with capital at an effective unit cost \(\delta P_{t}^{k}/A_{t}^{k}(j)\), where \(\delta\in(0,1)\) is the depreciation rate, \(P_{t}^{k}\) is the price of capital, and \(A_{t}^{k}(j)\) is the capital-augmenting technology.
Throughout, the factor augmenting technologies are represented by:
**Assumption 1**: \(A_{t}^{k}(j)=A^{k}>0\) _and \(A_{t}^{l}(j)=e^{\alpha j}\), with \(\alpha>0\)._
Assumption 1 says that labor has a comparative advantage in higher-indexed tasks and guarantees the existence of a threshold \(\tilde{J}_{t}\) such that
\[e^{\alpha\tilde{J}_{t}}=\frac{W_{t}A^{k}}{\delta P_{t}^{k}}.\]
When \(j\leq\tilde{J}_{t}\), tasks are produced with capital since it has a lower effective cost than labor. If \(j>\tilde{J}_{t}\), the production of tasks is bounded by the existing technology and firms will only be able to automatize up to \(J_{t}\). The unique threshold defining the assignment of tasks is consequently \(J_{t}^{*}=\min\{J_{t},\tilde{J}_{t}\}\).
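To make the assignment rule concrete, here is a minimal Python sketch of the threshold implied by Assumption 1; all function names and numerical values are hypothetical and chosen only for illustration.

```python
from math import log

def task_assignment_threshold(W, Pk, delta, Ak, alpha, J):
    """Return J* = min(J, J~), where exp(alpha * J~) = W * Ak / (delta * Pk)."""
    J_tilde = log(W * Ak / (delta * Pk)) / alpha
    return min(J, J_tilde)

# Hypothetical, unit-free numbers: capital is cheap enough that firms would like to
# automate beyond the technological frontier J, so automation is bounded by J* = J.
J_star = task_assignment_threshold(W=3.0, Pk=1.0, delta=0.07, Ak=0.1, alpha=1.4, J=0.6)
```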
Appendix A.1 shows that, in this setup, the equilibrium output can be expressed as an aggregate production function
\[Y_{t}=\left[(1-m_{t}^{*})^{1/\sigma}\big{(}A^{k}\ K_{t}\big{)}^{\frac{\sigma-1 }{\sigma}}+\Big{(}\int_{0}^{m_{t}^{*}}e^{\alpha j}\ \mathrm{d}j\Big{)}^{1/\sigma}\Big{(}e^{\alpha J_{t}^{*}}L_{t}\Big{)}^{\frac{ \sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}}, \tag{1}\]
where \(K_{t}\) is the aggregate capital stock, \(L_{t}\) is aggregate employment, \(P_{t}^{c}\) is the price index of costs of production satisfying the _ideal price index condition_, and \(m_{t}^{*}=M_{t}-J_{t}^{*}\) is the equilibrium measure of automation.
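The aggregate production function (1) can be evaluated directly using the closed form \(\int_{0}^{m}e^{\alpha j}\,\mathrm{d}j=(e^{\alpha m}-1)/\alpha\). The sketch below is illustrative only and assumes \(\sigma\neq 1\).

```python
from math import exp

def aggregate_output(K, L, m, J_star, Ak, alpha, sigma):
    """Equilibrium output, eq. (1), using the closed-form task integral; sigma != 1."""
    s = (sigma - 1.0) / sigma
    capital_term = (1.0 - m) ** (1.0 / sigma) * (Ak * K) ** s
    labor_weight = ((exp(alpha * m) - 1.0) / alpha) ** (1.0 / sigma)
    labor_term = labor_weight * (exp(alpha * J_star) * L) ** s
    return (capital_term + labor_term) ** (1.0 / s)
```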
#### ii.a.1. Prices and Growth
The economy-wide price of the final output is given by
\[P_{t}=(1+\mu_{t})P_{t}^{c}. \tag{2}\]
The key characteristic of (2) is that firms only realize a profit _after_ a commodity is produced and sold, meaning that the rate of return of capital, \(\mu_{t}\), cannot be included as a cost of production.
In the text, the exposition is simplified by assuming that the economy can convert one unit of output into \(q_{t}=q\) units of capital, so that \(P_{t}^{k}/P_{t}=q^{-1}\) at any time \(t\). This special case of an economy with _investment-specific technological change_ allows the
existence of a BGP without the introduction of human capital accumulation or further discussions on the so-called "capital-skill" complementarity.1
Footnote 1: Appendix A.1 presents a generalized model showing how investment-specific technological change can be incorporated to the analysis.
Denoting the growth rate of any variable \(X\) as \(g_{X}\), the next lemma specifies the conditions for a BGP in the economy described above.
Lemma 1-- Suppose that Assumption 1 holds. Then in any BGP:
\[g_{K}=g_{Y}=g_{C}=g=\alpha\dot{M}\]
Lemma 1 is a simplified version of Lemma A1 in Appendix A.1, used below to study how changes in the rate of automation or in the pace of labor-augmenting technological progress affect the economy; see Proposition 5.
### Matching and State Dynamics
Society is made of a unit measure of risk-neutral workers and a continuum of potential firms (capitalists) with a common discount rate \(\rho\). Lower-case letters represent real stationary variables, whereas stationary per-capita variables are denoted by \(\hat{x}_{t}\).2
Footnote 2: For example, \(w_{t}=W_{t}/\big{(}P_{t}e^{\alpha(M_{t}-m_{t}^{*})}\big{)}\) and \(\hat{y}_{t}=Y_{t}/\big{(}L_{t}e^{\alpha(M_{t}-m_{t}^{*})}\big{)}\).
Employed workers are denoted by \(L_{t}\) and the remaining \(U_{t}=1-L_{t}\) are the unemployed. Vacancies are filled via a matching function \(G(U_{t},V_{t})\) which exhibits constant returns to scale in \((U_{t},V_{t})\) and decreasing returns to scale in \(V_{t}\) or \(U_{t}\) separately. Labor market tightness is defined as the vacancy-unemployment ratio \(\theta_{t}=V_{t}/U_{t}\), the probability of filling a vacancy is \(q(\theta_{t})=G(U_{t},V_{t})/V_{t}\), and the job-finding probability per unit of time is \(f(\theta_{t})=G(U_{t},V_{t})/U_{t}\).
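For concreteness, a minimal sketch of the two matching rates is given below. The CES (den Haan-Ramey-Watson type) functional form and the default value of \(\iota\) are assumptions used only for illustration; the text itself requires only constant returns to scale in \((U_{t},V_{t})\).

```python
def matching_rates(theta, iota=1.25):
    """Vacancy-filling rate q(theta) and job-finding rate f(theta) for the assumed
    CES matching function G(U, V) = U * V / (U**iota + V**iota)**(1 / iota)."""
    q = (1.0 + theta ** iota) ** (-1.0 / iota)
    return q, theta * q
```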
Introducing changes in the automation and the creation of new tasks, the evolution of employment can be described by
\[L_{t+\mathrm{d}t}=(1-\lambda_{0})L_{t}+q(\theta_{t})V_{t}-\overbrace{\Big{[} \underbrace{\int_{J_{t}^{*}}^{J_{t+\mathrm{d}t}}l_{t}(j)dj}_{\text{displacement effect}}-\underbrace{\int_{M_{t}}^{M_{t+\mathrm{d}t}}l_{t}(j)dj}_{\text{ reinstatement effect}}\Big{]}}^{U_{t}^{A}=\text{technological unemployment}}.\]
As usual, \(\lambda_{0}\) is the exogenous job-separation rate. An important feature of the employment dynamics is that the displacement and reinstatement effects of the automation and creation of new tasks give rise to a _technological unemployment_ component. Essentially, technological change creates a displacement effect by replacing labor for capital, and a reinstatement effect by expanding the number of tasks on which labor has a comparative advantage.
In the limit when \(\mathrm{d}t\to 0\), the employment dynamics equation becomes
\[\dot{L}_{t}=q(\theta_{t})V_{t}-\lambda_{t}L_{t} \tag{3}\]
with \(\lambda_{t}=\lambda_{0}+\partial U_{t}^{A}/\partial L_{t}\). The intuition of how technological change affects employment is well captured in the following lemma.
**Lemma 2**: _Suppose that Assumption 1 holds. Then technological unemployment is equal to_
\[U_{t}^{A}=L_{t}\Big{(}1-e^{\alpha(\sigma-1)(\dot{M}_{t}-\dot{m}_{t}^{*})}\ \frac{e^{\alpha(\sigma-1)(m_{t}^{*}+\dot{m}_{t}^{*})}-1}{e^{\alpha(\sigma-1)m_{ t}^{*}}-1}\Big{)}, \tag{4}\]
_and satisfies the relations in Table 1 in the steady-state._
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \(\frac{\partial U_{t}^{A}}{\partial L_{t}}\) (if \(\dot{M}_{t}>0\)) & \(\frac{\partial U_{t}^{A}}{\partial L_{t}}\) (if \(\dot{M}_{t}<0\)) & \(\partial U_{L_{t}}^{A}/\partial\dot{M}_{t}\) & \(\frac{\partial U_{L_{t}}^{A}}{\partial\dot{m}_{t}^{*}}\) (if \(m_{t}^{*}=m_{t}\)) & \(\frac{\partial U_{L_{t}}^{A}}{\partial\dot{m}_{t}^{*}}\) \\ \cline{2-6} \(\sigma>1\) & \(<0\) & \(>0\) & \(<0\) & \(<0\) & \(=0\) \\ \(\sigma\in(0,1)\) & \(>0\) & \(<0\) & \(>0\) & \(<0\) & \(=0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Scenarios of technological unemployment.
The bottom line in Lemma 2 is that the rate of technological unemployment will decrease in an expanding economy when \(\sigma>1\) and will increase with a higher rate of automation if mechanizing tasks is economically feasible, regardless of the value of \(\sigma\).
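A direct implementation of eq. (4) helps to verify these comparative statics numerically; the sketch below assumes \(\sigma\neq 1\), \(m_{t}^{*}>0\), and treats all inputs as hypothetical.

```python
from math import exp

def tech_unemployment(L, m_star, dm_star, dM, alpha, sigma):
    """Technological unemployment U^A from eq. (4); displacement and reinstatement
    effects enter through dm_star and dM. Assumes sigma != 1 and m_star > 0."""
    a = alpha * (sigma - 1.0)
    ratio = (exp(a * (m_star + dm_star)) - 1.0) / (exp(a * m_star) - 1.0)
    return L * (1.0 - exp(a * (dM - dm_star)) * ratio)
```

As a simple check, setting \(\dot{M}_{t}=\dot{m}_{t}^{*}=0\) returns zero, as expected when there is no technological change.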
Analogous to the evolution of employment, the dynamics of aggregate capital with task automation can be expressed as
\[\dot{K}_{t}=I_{t}-\Big{(}\delta+\frac{\dot{m}_{t}^{*}}{1-m_{t}^{*}}\Big{)}K_{t}= I_{t}-\delta_{t}K_{t}, \tag{5}\]
where \(\delta_{t}\) is the total depreciation rate of capital. In the steady-state, when \(\dot{m}_{t}^{*}=0\), \(\delta_{t}=\delta\).
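For completeness, a one-function sketch of the law of motion (5), with illustrative argument names:

```python
def capital_flow(K, I, delta, m_star, dm_star):
    """Right-hand side of eq. (5): dK/dt = I - (delta + dm_star / (1 - m_star)) * K."""
    return I - (delta + dm_star / (1.0 - m_star)) * K
```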
### Value Functions
The value function of an unemployed worker satisfies3
Footnote 3: With the exception of time, partial derivatives are denoted by subscripts. For instance, \(y_{L_{t}}\) is the partial derivative of stationary output with respect to labor in period \(t\).
\[(\rho+\alpha\ \dot{m}_{t}^{*}-g)\phi_{U_{t}}-\dot{\phi}_{U_{t}}=b_{t}+f( \theta_{t})\big{(}\phi_{L_{t}}-\phi_{U_{t}}\big{)}. \tag{6}\]
In equation (6), the unemployed receive flow utility \(b_{t}\) and transition to employment with a rate \(f(\theta_{t})\), in which case they receive a payoff \(\phi_{L_{t}}\) satisfying
\[(\rho+\alpha\ \dot{m}_{t}^{*}-g)\phi_{L_{t}}-\dot{\phi}_{L_{t}}=\lambda_{t}( \phi_{U_{t}}-\phi_{L_{t}})+w_{t}. \tag{7}\]
The employed worker receives flow utility from real wages, and at rate \(\lambda_{t}\) the job is dissolved. An important feature of equation (7) is that the job separation rate is partly determined by technological progress, meaning that firms can reduce the worth of a job to a worker by increasing technological unemployment. In addition, the effective discount rate is the sum of two components: (i) the common time-preference parameter \(\rho\); and (ii) variations in the automation and creation of new tasks (which affect \(\dot{m}_{t}\) and \(g\), respectively).
The value of a vacancy for the firm is represented by
\[(\rho+\alpha\ \dot{m}_{t}^{*}-g)\pi_{V_{t}}-\dot{\pi}_{V_{t}}=q(\theta_{t})\big{(}\pi_{L_{t}}-\pi_{V_{t}}\big{)}-\xi_{t} \tag{8}\]
Here the firm pays the flow cost of opening a vacancy, \(\xi_{t}\), and matches with a worker at a rate \(q(\theta_{t})\). Correspondingly, the value of a filled job for the firm satisfies
\[(\rho+\alpha\ \dot{m}_{t}^{*}-g)\pi_{L_{t}}-\dot{\pi}_{L_{t}}=\lambda_{t}(\pi_{V_{t }}-\pi_{L_{t}})+\hat{y}_{t}-\hat{k}_{t}\hat{y}_{\hat{k}_{t}}-w_{t}, \tag{9}\]
where \(y_{L_{t}}-w_{t}=\hat{y}_{t}-\hat{k}_{t}\hat{y}_{\hat{k}_{t}}-w_{t}\) is the flow utility earned by the firm.
## III Wage Bargaining and the Return of Capital
This section presents the core of the paper by showing how aggregate employment and rate of return of capital are simultaneously determined by the bargaining protocol of wages.
### Bargaining Protocol
The bargaining model is summarized in Figure 1 by dividing wage outcomes in terms of two competing organizational practices. On one side we find the individual bargaining protocol, characterized by allowing employee and employer competition in the determination of wages. The competition process is represented by introducing a minimum time delay affecting the probability that firms and workers will each find new bargaining partners to restart the negotiation of wages. The minimum time delay is proportional to a parameter \(T^{w}\), which plays a key role in the model by capturing the firms' _relative_ capacity of finding new workers willing to compete for lower wages. For instance, \(T^{w}\) can increase as a result of policies or economic conditions which effectively reduce the employment options and the mobility of workers, as is the case with non-poaching and non-compete clauses or with a higher monopsony power of firms (Krueger and Ashenfelter, 2022; Azar, Marinescu, Steinbaum, and Taska, 2020).4 Similarly, \(T^{w}\) can decrease through legislative action which mitigates the capacity of firms to lower wages through competition, as can be expected from setting higher minimum wages (Naidu, 2022, p. 18).
In the left-hand side of Figure 1, the model introduces the possibility that workers will choose a collective bargaining process when negotiating wages.
#### iii.a.1. Individual Bargaining
The individual bargaining model has the following structure, shown as an extensive-form game in Figure 1.
* The first node in the right-hand side of Figure 1 is a chance node defining the type of competitive process between workers and firms. Each worker takes a
random sample \(T(\theta)=\min\Bigl{\{}T^{w}(\theta)\sim\mathcal{E}\bigl{(}T^{w}/f(\theta)\bigr{)},T^{F}(\theta)\sim\mathcal{E}\bigl{(}1/q(\theta)\bigr{)}\Bigr{\}}\).5 If \(T(\theta)=T^{F}(\theta)\), the firm will be the first to find a new partner to start bargaining after a time delay \(\Delta T^{F}(\theta)\), measured by the average duration of a vacant job. The contrary occurs when \(T(\theta)=T^{w}(\theta)\), in which case the average time delay is given by the mean duration of unemployment, \(1/f(\theta)\), multiplied by the hiring capacity of firms, \(T^{w}\). Footnote 5: Here \(\mathcal{E}\bigl{(}T^{w}/f(\theta)\bigr{)}\) is an exponential distribution with mean \(T^{w}/f(\theta)\).
* Given the law of large numbers, \(T(\theta)=T^{F}(\theta)\) with probability \(T^{w}/(T^{w}+\theta)\) and \(T(\theta)=T^{w}(\theta)\) with probability \(\theta/(\theta+T^{w})\).
* If \(T(\theta)=T^{F}(\theta)\), the game follows the steps described by Shaked and Sutton (1984), which is depicted in the rightmost branch of Figure 1. However, if \(T(\theta)=T^{w}(\theta)\), the game replicates the alternating offers model of Rubinstein (1982) since firms are identical by assumption.
The following proposition summarizes the main results of the individual bargaining protocol.
Proposition 1-- Suppose that firms always make the first offer, that \(\Delta\) tends to zero, and that the capitalists' response time is \(\Delta_{f}=\gamma^{f}\Delta\), with \(\gamma^{f}>0\). Applying the law of large numbers,
* if \(T(\theta)=T^{w}/f(\theta_{t})\), \(w_{t}^{na}=b_{t}+\Psi_{t}^{na}\bigl{(}y_{L_{t}}-b_{t}\bigr{)}\), with \[\Psi_{t}^{na}=\frac{\Gamma^{na}[\rho+\alpha\;\dot{m}_{t}^{*}-g+\lambda_{t}+f( \theta_{t})]}{\rho+\alpha\;\dot{m}_{t}^{*}-g+\lambda_{t}+\Gamma^{na}f(\theta_{ t})},\;\;\Gamma^{na}=\frac{\gamma^{f}}{1+\gamma^{f}}.\]
* If \(T(\theta)=1/q(\theta_{t})\), \(w_{t}^{nb}=b_{t}+\Psi_{t}^{nb}\bigl{(}y_{L_{t}}-b_{t}\bigr{)}\), with \[\Psi_{t}^{nb}=\frac{\Gamma_{t}^{nb}[\rho+\alpha\;\dot{m}_{t}^{*}-g+\lambda_{t} +f(\theta_{t})]}{\rho+\alpha\;\dot{m}_{t}^{*}-g+\lambda_{t}+\Gamma_{t}^{nb}f( \theta_{t})},\;\Gamma_{t}^{nb}=\frac{\gamma^{f}(1-q(\theta_{t}))}{1+\gamma^{f} +q(\theta_{t})(1-\gamma^{f})}.\]
* The average wage rate from individual bargaining is \[w_{t}^{n}=b_{t}+\Psi_{t}^{n}\big{(}y_{L_{t}}-b_{t}\big{)},\quad\text{with }\Psi_{t}^{n}=\frac{T^{w}\ \Psi_{t}^{nb}+\theta_{t}\ \Psi_{t}^{na}}{T^{w}+\theta_{t}}.\] (10)
In all cases, \(\Gamma_{t}^{(\cdot)}\) and \(\Psi_{t}^{(\cdot)}\) depict the _intrinsic_ and the _actual_ bargaining power of labor, with \(\Psi_{t}^{na}\geq\Psi_{t}^{n}\geq\Psi_{t}^{nb}\) for all \(\theta\geq 0\). The importance of Proposition 1 can be well understood by studying how worker power changes with variations in the labor market, the relative mobility of labor (\(T^{w}\)), the pace of automation, and the labor-augmenting technical progress. This is summarized in the following corollary.
Corollary 1-- Suppose that the assumptions in Proposition 1 hold.
* (_Loose labor market_) If \(\theta\to 0\), then \[\begin{cases}\Psi_{t}^{na}\to\Psi_{t}^{n}\to\Psi_{t}^{nb}\to 0.\\ \Gamma^{na}\to\Gamma_{t}^{nb}\to 0.\end{cases}\]
* (_Tight labor market_) If \(\theta\to\infty\), then \[\begin{cases}\Psi_{t}^{nb}\to\Psi_{t}^{n}\to\Psi_{t}^{na}\to 1.\\ \Gamma_{t}^{nb}\to\Gamma^{na}<0.5\end{cases}\]
* (_Relative mobility of labor_) A lower relative mobility of labor (\(T^{w}\uparrow\)) reduces the power of workers. That is, \[\frac{\partial\Psi_{t}^{n}}{\partial T^{w}}=\frac{1}{T^{w}+\theta}\ \big{[}\Psi_{t}^{nb}-\Psi_{t}^{n}\big{]}\leq 0 \quad\text{for all }\theta\geq 0.\]
* (_Automation_) Suppose that mechanizing tasks is feasible. Then \[\frac{\partial\Psi_{t}^{n}}{\partial\dot{m}_{t}^{*}}>0\quad\text{ if }\left|\frac{\partial\lambda_{t}}{\partial\dot{m}_{t}^{*}}\right|>\alpha.\]
* (_Labor-augmenting technical progress_) A higher equilibrium rate of growth always increases the bargaining power of labor if \(\dot{M}_{t}>0\), i.e., \(\partial\Psi_{t}^{n}/\partial\dot{M}_{t}>0\) for all \(\sigma>0\) and \(\dot{M}_{t}>0\). Particularly, the following is true: \[\left.\frac{\partial\Psi_{t}^{n}}{\partial\dot{M}_{t}}\right|_{\sigma>1}>\ \left.\frac{\partial\Psi_{t}^{n}}{\partial\dot{M}_{t}}\right|_{\sigma\in(0,1) }>\ \left.\frac{\partial\Psi_{t}^{n}}{\partial\dot{M}_{t}}\right|_{\sigma\in(0,1),\dot{M}_{t}<0}\ \stackrel{{\leq}}{{\geq}}\ 0.\]
The results in Corollary 1 are quite intuitive and easy to understand. For instance, the model makes it clear that loose labor markets work as endogenous mechanisms that reduce the bargaining power of labor. Conversely, a tight labor market empowers workers, though it has a limited impact on \(\Gamma^{na}\) and \(\Gamma_{t}^{nb}\) if firms always make the first offer. A relative reduction in the mobility of labor lowers \(\Psi_{t}^{n}\) by increasing the probability that workers will have to compete for each available vacancy. Finally, extending the results of Aghion and Howitt (1994) and Acemoglu and Restrepo (2018), technology can have two opposing effects on worker power. On one hand, higher automation is expected to weaken workers when the increase in technological unemployment surpasses the reduction in the effective discount rate generated by the rise in the value of capital per unit of time. On the other hand, labor power will generally benefit from higher productivity growth through the well-known _capitalization effect_.
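The mapping from intrinsic to actual bargaining power in Proposition 1, and the weighting in eq. (10), can be illustrated with the following sketch. The matching function is the assumed CES form from the earlier sketch, and all names and default values are for exposition only.

```python
def matching_rates(theta, iota=1.25):
    """Assumed CES matching function, as in the earlier sketch."""
    q = (1.0 + theta ** iota) ** (-1.0 / iota)
    return q, theta * q

def individual_bargaining_power(theta, Tw, gamma_f, rho, alpha, dm_star, g, lam, iota=1.25):
    """Average actual bargaining power Psi^n, eq. (10), as the weighted mean of the
    two competition regimes in Proposition 1."""
    q, f = matching_rates(theta, iota)
    rho_eff = rho + alpha * dm_star - g                    # effective discount rate
    gamma_na = gamma_f / (1.0 + gamma_f)                   # intrinsic power, case (a)
    gamma_nb = gamma_f * (1.0 - q) / (1.0 + gamma_f + q * (1.0 - gamma_f))  # case (b)
    psi = lambda G: G * (rho_eff + lam + f) / (rho_eff + lam + G * f)
    return (Tw * psi(gamma_nb) + theta * psi(gamma_na)) / (Tw + theta)
```

Consistent with Corollary 1(c), raising \(T^{w}\) shifts the weight toward \(\Psi_{t}^{nb}\) and lowers the value of \(\Psi_{t}^{n}\) returned by the sketch.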
#### iii.a.2 Collective Bargaining
Similar to Taschereau-Dumouchel (2020), collective bargaining is modeled as a Nash bargaining problem between the firm and all its workers. If an agreement is reached, workers receive the net reward from employment and the firm receives the corresponding equilibrium value derived from the Hamilton-Jacobi-Bellman equation. Otherwise, the firm loses all its workers and has to rehire its entire workforce the following period.
The next proposition presents the solution of the Nash bargaining problem in the left-hand side of Figure 1.
**Proposition 2**.: _The real wage under collective bargaining is given by_
\[w_{t}^{u}=b_{t}+\Psi_{t}^{u}\Big{[}y_{L_{t}}-b_{t}+\frac{\rho+\alpha\;\dot{m}_{t}^{* }-g+\lambda_{t}}{\rho+\alpha\;\dot{m}_{t}^{*}-g}\big{(}\hat{y}_{t}-y_{L_{t}} \big{)}\Big{]} \tag{11}\]
with \(\Psi_{t}^{u}=\frac{\Gamma^{u}[\rho+\alpha\;\dot{m}_{t}^{*}-g+\lambda_{t}+f( \theta_{t})]}{\rho+\alpha\;\dot{m}_{t}^{*}-g+\lambda_{t}+\Gamma^{u}f(\theta_{t})}\).
The solution in equation (11) is similar to the real wage under individual bargaining, with the notable difference that the former introduces an additional component representing the benefit that workers can extract from the increase in the aggregate surplus.
### Labor Market Equilibrium
Appendix B.1 presents a game-theoretic model determining the probability \(P(\mathcal{U}=1|\cdot)\) that workers will choose a collective bargaining strategy in the first node of Figure 1. This probability is a function of the perceptions, attitudes, and biases that workers have when sharing economic outcomes, and the preferences for political support of the government. In the main text, however, \(P(\mathcal{U}_{t}=1|\cdot)\) is a known datum, so that the aggregate wage can be expressed as
\[w_{t}=w_{t}^{n}+P(\mathcal{U}_{t}=1|\cdot)\big{(}w_{t}^{u}-w_{t}^{n}\big{)}. \tag{12}\]
This is an average of the individual and collective bargaining solution, weighted by the relative advantages of each bargaining protocol and the social factors influencing the workers' perceptions, attitudes and biases.
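A compact sketch of the three wage equations (10)-(12), treating the bargaining powers and productivity terms as given inputs (all names are illustrative; \(\Psi_{t}^{u}\) would come from Proposition 2):

```python
def bargained_wages(b, yL, yhat, Psi_n, Psi_u, rho_eff, lam, P_union):
    """Individual (10), collective (11), and aggregate (12) real wages, with
    rho_eff = rho + alpha * dm_star - g and yL the marginal product of labor."""
    w_n = b + Psi_n * (yL - b)
    w_u = b + Psi_u * (yL - b + (rho_eff + lam) / rho_eff * (yhat - yL))
    return w_n + P_union * (w_u - w_n)
```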
Combining (12) with equations (2) and (9), we reach the main result of the section.
Proposition 3-- Suppose that Assumption 1 holds. If firms reserve the right to manage and aggregate wages satisfy (12), then there exists a unique pair \((\mu_{t}^{*},\theta_{t}^{*})\) resulting from the labor market equilibrium.
The logic behind Proposition 3 is captured in Figure 3. First, given the model in Appendix B.1, workers combine their preferences and political views with the relative advantages of collective bargaining, and decide on a vote share \(P(\mathcal{U}=1|\cdot)\). From this, the aggregate wage and labor market tightness are determined in Panel B using equations
(9) and (12). Lastly, given the equilibrium in the labor market, equations (1) and (2) determine \(\mu^{*}\) and \(\hat{k}^{*}\) simultaneously.
The next corollary presents a simple expression of the labor share in terms of the technology and the institutions determining the equilibrium rate of return.
**Corollary 2**.: _In equilibrium, the labor share satisfies_
\[\Omega_{t}^{*}=\frac{1}{1+\mu_{t}^{*}}\times\left[1+\left(\frac{(1-m_{t}^{*}) \alpha(\sigma-1)}{e^{\alpha(\sigma-1)m_{t}^{*}}-1}\right)^{1/\sigma}\left(\hat {k}_{t}^{*}\right)^{\frac{\sigma-1}{\sigma}}\right]^{-1} \tag{13}\]
Given the results in Proposition 3, the first term on the right-hand side of (13) links nonmarket mechanisms such as labor institutions and political preferences to worker power, and worker power to the rate of return of capital. The second component on the right-hand side of (13) is similar to the expression for the wage share obtained by Acemoglu and Restrepo (2018), the difference being explained by the fact that here the rate of return is not a cost of production. Altogether, equation (13) can reconcile the literature on labor institutions and technological change by showing how each component can potentially affect the labor share over time.
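Equation (13) is straightforward to evaluate; the sketch below separates the distributive term \(1/(1+\mu_{t}^{*})\) from the technology term and assumes \(\sigma\neq 1\) (names are illustrative).

```python
from math import exp

def labor_share(mu, m_star, k_hat, alpha, sigma):
    """Equilibrium labor share, eq. (13): a distributive term 1/(1 + mu) times a
    technology term governed by automation and capital intensity. Assumes sigma != 1."""
    a = alpha * (sigma - 1.0)
    tech = ((1.0 - m_star) * a / (exp(a * m_star) - 1.0)) ** (1.0 / sigma)
    return 1.0 / ((1.0 + mu) * (1.0 + tech * k_hat ** ((sigma - 1.0) / sigma)))
```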
Figure 3. LABOR MARKET EQUILIBRIUM.
## IV Equilibrium and Dynamics
This section presents the equilibrium conditions and the dynamic properties of the model.
### Equilibrium Analysis
Assuming that \(\dot{m}_{t}^{*}\) is determined exogenously, Online Appendix A.3.1 shows that the equilibrium can be characterized by a system of four differential equations (in terms of \(\{L_{t},\theta_{t},\hat{k}_{t},\hat{c}_{t}\}\)) consistent with a BGP with positive growth. This is summarized in the following result.
Proposition 4-- Suppose that Assumption 1 holds. The economy admits a unique and locally stable equilibrium BGP with positive growth6
Footnote 6: Figure C1 in online Appendix C shows that—with the exception of the early 1980s— equation (15) is satisfied in the US.
\[g=s_{t}^{*}(r_{t}^{*}-\chi_{t}^{*}) \tag{14}\]
if
\[\mu_{t}^{*}>\frac{g}{\delta}>\mu_{t}^{\text{min}}. \tag{15}\]
where \(r^{*}=q\hat{y}^{*}\mu^{*}/(\hat{k}^{*}(1+\mu^{*}))\) is the equilibrium rate of profit, \(\chi^{*}=q(\hat{\xi}^{*}\)\(V^{*}+\hat{\tau}^{*})/\hat{k}^{*}\) is the equilibrium sum of stationary taxes and vacancy expenses per unit of capital, \(s^{*}\in(0,1)\) is the equilibrium savings rate, and \(\mu^{\text{min}}\) is the rate of return of capital for which \(\hat{c}=0\) (see equation (A13) in Online Appendix A.3.1).
The expression in (14) is analogous to Solow's fundamental equation under the assumption that all savings are made by firms and that capitalists have to pay taxes and vacancy expenses. The novelty in Proposition 4 is that--because the return of capital is a surplus over costs of production--a BGP equilibrium with positive growth requires specific social and institutional arrangements allowing the existence of sufficiently large profits. For instance, given the structure in Appendix B.2.1, Figure 4
shows that if \(\mu<\mu^{\text{min}}\), capitalists will be incapable of _continuously_ increasing capital outlays at a rate consistent with a BGP, paying taxes and vacancy expenses, and having a remnant for their own consumption, i.e., it is an _economically unfeasible growth path_. From a political economy perspective, this implies that the support to workers is partly limited by the growth requirements of the system: very high growth probably requires a weak bargaining power of labor.7
Footnote 7: The causal relation need not hold in reverse order: low bargaining power of labor need not lead to high growth because, in a low productivity environment, the increase in the aggregate surplus acquired by capitalists will find little demand for additional units of productive capital.
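A minimal sketch of the objects entering Proposition 4; the savings rate \(s^{*}\) and the tax-and-vacancy term \(\chi^{*}\) that close eq. (14) would be supplied separately, and all names are illustrative.

```python
def profit_rate(mu, y_hat, k_hat, q):
    """Equilibrium rate of profit r* = q * y_hat * mu / (k_hat * (1 + mu)), eq. (14)."""
    return q * y_hat * mu / (k_hat * (1.0 + mu))

def feasible_bgp(mu_star, mu_min, g, delta):
    """Condition (15) for a BGP with positive growth: mu* > g / delta > mu_min."""
    return mu_star > g / delta > mu_min
```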
### Transitional Dynamics
This subsection studies the interrelations and dynamic implications of unanticipated and permanent changes in parameters related to technology and labor institutions.
The first important result is presented in Figure 5, which depicts the main findings of Lemma B1 in Appendix B.2.2. Similar to Acemoglu and Restrepo (2018), the objective is to illustrate how the effects of automation depend on the parameter space defining the behavior of the relative costs of labor and capital. The three regions in Figure 5 are determined by a critical value of the relative price of capital, \(\bar{q}(\mu^{*})\), which is itself
a function of the equilibrium rate of return. To the left of \(\bar{q}(\cdot)\), there is a decreasing curve \(\bar{m}(q)\) defined over \([q^{\text{min}},\bar{q}(\cdot)]\) with \(\bar{m}(\bar{q})=0\) and \(\bar{m}(q^{\text{min}})=1\). Region 1 is the area of values where labor is relatively cheap, meaning that not all automated tasks will be produced with capital. Correspondingly, there is an increasing curve \(\tilde{m}(q)\) defined over \([\bar{q}(\cdot),q^{\text{max}}]\) with \(\tilde{m}(\bar{q})=0\) and \(\tilde{m}(q^{\text{max}})=1\). In the area of values with \(m<\tilde{m}(q)\) we have that \(w_{M}(m)>\delta/(A^{k}q)\), which implies that new tasks would not be adopted because they result in a reduction of aggregate output. Finally, region 2 is the space where \(m>\text{max}\big{\{}\bar{m}(q),\tilde{m}(q)\big{\}}\), meaning that new tasks will raise aggregate output and will be immediately produced with capital.
To understand the implications of this setting, consider the following three scenarios.8
Footnote 8: The notation \(\partial m\to\partial\mu\to\partial m\) reads: changes in the share of automation lead to changes in the rate of return and these lead to changes in the automation regions.
Figure 5. Automation Regions.
1. (\(\partial\mu\to\partial m\)) Suppose the economy is initially in point (a) of Figure 5 and encounters policy changes lowering the power of workers. Given Proposition 3, this raises the rate of return to \(\mu^{(a)^{*}}>\mu^{(a)}\) and the critical relative price of capital to \(\bar{q}(\mu^{(a)*})>\bar{q}(\mu^{(a)})\).9 As a result, the automation regions shift to the right (dotted lines) and we reach a new equilibrium where the weakening power of labor made machinery relatively superfluous. In this case not all tasks would be produced with capital since \(w_{J}(m)<\delta/(A^{k}q)\).10 Footnote 9: This is true because the relative price of capital when \(m=0\) is \(\bar{q}(\mu^{*})=\delta(1+\mu^{*})/A^{k}\); see equation (A1) in the main Appendix.
2. (\(\partial m\to\partial\mu\to\partial m\)) Suppose the economy is initially in point (a) and moves to point (b) in Figure 5. If the rise in \(m\) is large enough, Proposition 5 says that \(\mu^{(b)}\) can decrease so much that \(\mu^{(b)}<\mu^{\text{min}}\), meaning that the system can become unsustainable by an inadequate adoption of machinery.
3. (\(\partial m\to\partial\mu\to\partial m\)) Suppose that the solid lines are now associated with point (b) and that there is an exogenous reduction in \(m\) taking the system to point (a) in Figure 5. By Proposition 5, this shifts the automation regions to the right (see dotted lines) by increasing the rate of return of capital. Thus, automation can lead to the paradoxical result of making machinery relatively redundant by effectively reducing the relative cost of labor.
Given the conclusions derived from Figure 5, the next proposition characterizes the economic implications of small unexpected changes in technology and labor institutions.
**Proposition 5--** Suppose that Assumption 1 holds and that the economy is initially in a BGP with positive growth satisfying (15). Then, the dynamic equilibrium path
converges in finite time to a new BGP when there are small unexpected changes in technology and labor institutions. Particularly:
* (_Automation_) for \(m>\text{max}\{\bar{m}(q),\tilde{m}(q)\}\) and \(|\partial\lambda_{t}/\partial\dot{m}_{t}^{*}|>\alpha\), a small decrease in \(m\) induces a two-stage transition.11 First, there is an initial shock \(\dot{m}_{t}^{*}<0\) leading to a rise in \(U_{t}\) and \(\mu_{t}\), a decrease in \(\hat{k}_{t}/(\hat{y}_{t}q)\) and \(\Omega_{t}\), and ambiguous effects on \(\theta_{t}\) and \(V_{t}\). Before the new steady-state is reached, the economy moves to a new equilibrium with \(m^{\prime}<m\) and \(\dot{m}_{t}=0\). In the new BGP, \(V\), \(\theta\) and \(\Omega\) are lower, whereas \(\mu\), \(U\) and \(\hat{k}/(\hat{y}q)\) are higher for all \(\sigma>0\). Footnote 11: The case where \(|\partial\lambda_{t}/\partial\dot{m}_{t}^{*}|<\alpha\) is studied in Online Appendix A.3.2 and illustrated in Figure 6. The case where \(m\) is either in region 1 or 3 in Figure 5 is studied in Acemoglu and Restrepo (2018).
* (_Labor-augmenting technical change_) a small increase in \(\dot{M}\) lowers the asymptotic value of \(\mu\), and raises the equilibrium labor share and capital-output ratio. If \(\theta\) stays relatively constant, a small increase in \(\dot{M}\) raises the asymptotic values of \(U\) and \(V\) when \(\sigma\in(0,1)\), and lowers the values of \(U\) and \(V\) when \(\sigma>1\).
* (_Labor institutions_) a permanent reduction in the support to labor--represented by, e.g., a higher \(T^{w}\)-- induces a new BGP with lower asymptotic values of \(\Omega\), \(\hat{k}/(q\hat{y})\) and \(U\), and higher values of \(\mu\), \(\theta\) and \(V\), for all \(\sigma>0\).
Figure 6 illustrates the dynamic responses associated with the three shocks in Proposition 5.12 Starting with Figure 6, Panel A, the initial stage of the transition--represented over the interval \([t^{\prime},t^{\prime\prime}]\)--features a decrease in \(\dot{m}_{t}^{*}\) that gives rise to a higher rate of unemployment and an ambiguous effect on vacancies. The intuition is that the automation shock moves labor demand (9) and labor supply (12) in the same direction by lowering the effective discount rate and by raising the Poisson probability of unemployment. As a consequence, though it is generally not possible to determine how \(\theta\) will change, it can be deduced that the rate of unemployment will increase given that the Beveridge curve moves outwards with the rise of technological unemployment; see Lemma 2.
If \(|\partial\lambda_{t}/\partial\dot{m}_{t}^{*}|>\alpha\), the increase in \(U_{L_{t}}^{A}\) will outweigh the capitalization effect, which will move the labor demand and labor supply schedules downwards, and lead to an immediate increase in the equilibrium rate of return by Proposition 3. Using (13), this translates into a lower labor share, as depicted by the green solid line in the lower panel of Figure 6. The polar case is obtained when \(|\partial\lambda_{t}/\partial\dot{m}_{t}^{*}|<\alpha\), in which case the dominance of the capitalization effect moves the labor share upwards as represented by the orange dashdotted line in Figure 6, Panel A. The resulting variations in the equilibrium rate of return explain the different trajectories of the capital-output ratio over \([t^{\prime},t^{\prime\prime}]\) in Figure 6, Panel A, since \(\hat{k}/q\hat{y}\) will tend to move in the opposite direction of \(\mu\) given the principle of diminishing marginal returns.
At \(t^{\prime\prime}\), the effects of \(\dot{m}_{t}^{*}\) disappear and the economy moves to a new equilibrium with a lower \(m\). Similar to Acemoglu and Restrepo (2018), this reduces the effective wage paid in the least complex task produced with labor and lowers the vacancy-unemployment ratio. In time, the reduction in \(m\) moves the capital-output ratio upwards because, by assumption, automated tasks raise aggregate output and are immediately produced with capital. Moreover, the negative shock on wages is such that in the long-run the labor share always decreases regardless on the value of \(\sigma\) and the strength of the initial capitalization effect.
The effects of a reduction in the support to labor and of a permanent rise in productivity growth are shown in Panel B of Figure 6. Focusing first on the labor-augmenting technological change, we find that--thanks to the capitalization effect--the labor share increases over time for any \(\sigma>0\). Similarly, since higher effective wages reduce the equilibrium rate of return of capital, higher labor productivity growth also raises the capital-output ratio. The result on vacancies and unemployment is ambiguous and depends on the elasticity of substitution parameter. Particularly, if \(\theta\) remains more or less constant with an increase in \(g\), the effects of a higher growth rate are entirely determined by the relation of technological unemployment with \(\dot{M}\). As shown in Lemma 2,
higher growth reduces \(U_{L_{t}}^{A}\) if \(\sigma>1\), which explains the behavior of \(U\) and \(V\) depicted by the red solid lines in Figure 6. The opposite is expected to happen when \(\sigma\in(0,1)\), since in this case \(\partial U_{L_{t}}^{A}/\partial\dot{M}_{t}>0\).
Finally, lower support to workers moves the wage-curve in (12) downwards. As a result, there is a simultaneous increase in the vacancy-unemployment ratio and in the equilibrium rate of return of capital, which reduces the labor share and lowers the capital-output ratio over time.
Figure 6. Transitional dynamics.
## V Empirical Analysis
This section evaluates some of the different channels through which technology and labor institutions have impacted the US economy. To do so, Section V.A applies an approximate calibration of the model and compares the predicted paths with their empirical counterparts. Section V.B extends the analysis and presents a cross-validation exercise that examines the consistency of the rolling estimates of the model with historical information of labor institutions in the US.
### Approximate calibration to the US economy
To get a sense of the effects of power relations and technical change in the US economy, I employ a parsimonious calibration strategy where \(T^{w}\) and \(m_{t}^{*}\) are the only parameters targeting specific data. The relative mobility of workers is set to match the efficient unemployment rate of Michaillat and Saez (2021), which is the amount minimizing the nonproductive labor time used in jobseeking and recruiting.13 The automation measure is estimated using equation (A1) by solving
Footnote 13: Particularly, I employ \(U^{*}=\sqrt{U_{t}V_{t}}\). Online Appendix C shows that similar results are obtained by employing the NAIRU as the equilibrium rate of unemployment.
\[1-m_{t}^{*}=\frac{K_{t}}{qY_{t}}A^{k}q^{1-\sigma}\Big{(}\delta^{\text{BEA-BLS}}( 1+\mu_{t}^{\text{BEA-BLS}})\Big{)}^{\sigma}. \tag{16}\]
Here \(q\), \(\sigma\), and \(A^{k}\) are set as in Table 2, and \(\mu_{t}^{\text{BEA-BLS}}\) and \(\delta^{\text{BEA-BLS}}\) are obtained from the BEA-BLS integrated data; see Online Appendix B for details. All other parameters are either calibrated to roughly describe some basic facts of the US or are directly obtained from macro data.
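For reference, a one-function sketch of the estimator in eq. (16), with the depreciation rate and rate of return treated as inputs from the BEA-BLS series (names are illustrative).

```python
def automation_measure(K, Y, q, Ak, sigma, delta_bls, mu_bls):
    """Automation share m_t* implied by eq. (16), given the BEA-BLS depreciation
    rate and rate of return for the same period."""
    one_minus_m = (K / (q * Y)) * Ak * q ** (1.0 - sigma) * (delta_bls * (1.0 + mu_bls)) ** sigma
    return 1.0 - one_minus_m
```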
The first block of numbers in Table 2 presents the time-varying values in the calibration obtained from direct data sources. The probability of collective bargaining is measured using union membership data from Farber, Herbst, Kuziemko, and Naidu (2021), the growth rate of average labor productivity is obtained from the Penn World
Table (Feenstra, Inklaar, and Timmer, 2015), and the opportunity cost of employment is calculated based on equation (20) of Chodorow-Reich and Karabarbounis (2016).
In the second block, I set \(\delta\) close to the average of the time-varying depreciation rate in Barkai (2020). The elasticity of substitution parameter follows the literature and is set at \(\sigma=0.6\); Figure C4 in online Appendix C presents the results with \(\sigma=1.2\) and shows that the conclusions are roughly equal. Consistent with Moll, Rachel, and Restrepo (2022), \(A^{k}\) is calibrated so that labor is about 50\(\%\) more costly than capital in automated tasks.14 The relative price of capital is fixed at 0.35 so that the equilibrium annual capital-output ratio is on average close to 1.5, which is close to the average in Figure 7 and Figure B5 in online Appendix B.15
Footnote 14: Moll, Rachel, and Restrepo (2022) set labor 30\(\%\) more costly than capital. The difference is explained by the fact that they include the rate of profit as a cost of production.
Footnote 15: Using Lemma B1, Figure C2 in online Appendix C shows that the automation measure in Table 2 is in Region 2 of Figure 5 and \(q<\bar{q}(\mu_{t}^{*})\), meaning that automated tasks always raise aggregate output and are immediately produced with capital.
The monthly subjective discount rate is consistent with the experimental data of Andreoni and Sprenger (2012), who find annual rates between 0.2 and 0.4. The matching
\begin{table}
\begin{tabular}{l c l l} \hline \hline Parameter & Average & Description & Target/source \\ \hline Time-varying values & & & \\ \(P(\mathcal{U}=1|.)\) & 0.25 & Union membership: Gallup+BLS & Farber, Herbst, Kuziembko, and Naidu (2021) \\ \(g\) & 0.17\(\%\) & Labor productivity growth & 2\(\%\) annual rate/Feenstra, Inklaar, and Timmer (2015) \\ \(b\) & 0.06 & Opportunity cost of employment & Chodorow-Reich and Karabarbounis (2016) \\ \(1-m^{*}\) & 0.12 & Automation measure & Equation (16)/ BEA-BLS integrated data \\ Technology & & & \\ \(\delta\) & 0.056\(\%\) & Depreciation rate & 7\(\%\) annual rate/Barkai (2020) \\ \(\sigma\) & 0.6 & Elasticity of substitution & Standard calibration \\ \(A^{k}\) & 0.022 & Capital-augmenting technology & \(w\approx 1.5(\delta/(qA^{k}))/\) Moll, Rachel, and Restrepo (2022) \\ \(\alpha\) & 1.4 & Labor-augmenting parameter & \(\Omega\approx 0.63\)/ Standard calibration \\ \(q\) & 0.35 & Relative price of capital & Annual \(K/(qY)\approx 1.5\)/ BEA-BLS integrated data \\ Preferences & & & \\ \(\rho\) & 2.22\(\%\) & Discount rate & 30\(\%\) annual rate/ Andreoni and Sprenger (2012, p. 3346) \\ \(\gamma^{f}\) & 0.45 & Response time of firms & \(\Gamma^{\text{max}}\approx 0.31\)/ Within standard calibrations \\ Search and matching & & & \\ \(\iota\) & 1.25 & Matching function parameter & Petrosky-Nadeau and Zhang (2021) \\ \(\lambda_{0}\) & 0.02 & Separation rate & \(V\approx 3\%\) \\ \(\xi\) & 8 & Vacancy costs & Merz and Yashiv (2007) \\ \hline \end{tabular} _Notes— All parameters are calibrated at a monthly frequency._
\end{table}
Table 2: Baseline calibration
parameter \(\iota\) is set as in Petrosky-Nadeau, Zhang, and Kuehn (2018). The job separation rate is between the estimates of Shimer (2005) and Hobijn and Sahin (2009), and is consistent with an average vacancy rate of about 3\(\%\). Lastly, the value of \(\xi\) implies that vacancy costs are about 2 quarters of wage payments, similar to Merz and Yashiv (2007).
_Results._ Figure 7 depicts the predicted paths of the labor share, capital profitability, the capital-output ratio, and the measures of automation along with their empirical counterparts.16 Figure 7, Panel A shows that the predictions of the technical-change and institutions-driven stories match different measures of the labor share remarkably well: the path predicted by technical change matches the Penn World Table data, while the predictions based on changes in labor institutions closely follow the BEA-BLS data. Panel B, however, demonstrates that the technical change hypothesis cannot account for the fall in the rate of return before the 1980s and its steady recovery afterwards. Similarly, it shows that the institutions hypothesis alone underestimates the fall in the rate of profit from the 1950s to the late 1970s. These results--as illustrated by the magenta lines in Figure 7, Panel B--suggest that an adequate understanding of the behavior of capital profitability requires combining the technical-change and the institutions-driven stories.
Footnote 16: The model is solved using Julia’s NLboxsolve.jl. The code is in the Supplementary Material.
The data of the capital-output ratio in Figure 7, Panel C, is matched completely by introducing changes in the automation of tasks, but is inconsistent with the predictions of the institutions-driven hypothesis. This conclusion is supported in Figure 7, Panel D, by noting that the estimated value of the automation measure based on (16) is well aligned with the time series of the automation share constructed by Dechezlepretre, Hemous, Olsen, and Zanella (2019) and Mann and Puttmann (2021) using US patent data.
Figure 8, in turn, reveals that the variations in the labor market cannot be matched by changes in the rate of automation or the rate of productivity growth. By contrast, the predicted paths associated with changes in labor institutions are perfectly consistent with the behavior of the efficient unemployment rate (by construction), and with the time series of the vacancy rate and labor market tightness.
In sum, the calibration exercise shows that, while it is unlikely that the trends in the US economy can all be adequately explained by relying on one hypothesis alone, the fluctuations in worker power induced by variations in labor institutions are probably the major structural changes given their capacity to explain the behavior of the labor share, the rate of return of capital, and the dynamics of the labor market throughout the postwar period. This conclusion finds additional support in the following subsection by showing that the predicted paths of worker power are consistent with the behavior of important labor institutions in the US.
Figure 7. EQUILIBRIUM PATHS.
### Worker Power and Labor Institutions
Figure 9 compares the inferred time series of \(T_{t}^{w}\) with popular measures of the institutional support to labor.17 The rolling estimates of the relative mobility of labor indicate that during the period of the New Deal Order capitalists probably lost power over labor given the increasing difficulty of finding new workers willing to accept lower wages.18 The rise of the federal real minimum wage and the high levels of union membership over this period are some of the institutional changes which support this hypothesis, given that--by legislative and
Figure 8: Labor market equilibrium paths
political action--they helped mitigate the capacity of firms to lower wages through the force of competition.
By the mid-1970s, the political order supporting labor lost momentum and the US found itself in a new era with declining real minimum wages, lower union memberships, and falling top marginal income tax rates.19 These institutional changes coincide with the fall of the relative mobility of labor, which can account for the decline of the labor share in the mid-1970s, the fall of the equilibrium rate of unemployment, and the steady (or even rising) vacancy rates over the 1970s and 1980s.
Figure 9: Worker power and labor institutions.
But, what explains the rise and fall of the institutional support to labor? And why is worker power partly captured by \(T^{w}\)? The answer to the second question is that \(T^{w}\) defines the probability that firms will match with a new worker in the bargaining process of wages; see Proposition 1 above. Thus, as \(T^{w}\) gets bigger, firms gain a hiring advantage by increasing the competition among workers for each available vacancy. In this respect, it is reasonable that \(T^{w}\) will decrease with institutional changes like higher real minimum wages or higher union memberships given that these restrict the capacity of firms to lower wages through competition.
A tentative answer to the first question is found in Figure 9 by following Gerstle's (2022) argument that much of the changes in the institutional support to labor can be attributed to the Communist threat--which refers to the class compromise between capital and labor induced by the fear that communism could challenge capitalism as the dominant economic system. By this logic, it was in the interest of capitalists and the government to compromise by enhancing social programs for the poor, putting forward legislative actions favoring a bigger welfare state, and addressing the international embarrassment of white supremacy in the southern states.20 In the mid-1970s, however, the political pressure to comply with the requirements of a strong welfare state vanished with the decay of the Soviet Union's economy, as illustrated by the simultaneous decline of the Communist threat and the institutional support to labor in the US.21
## VI Further Economic Implications
Building on the results of the previous section, I next connect the worker power hypothesis with the wage-premium and with the association between market power and increasing markups.
### Institutions, profitability, and the wage-premium
Figure 10 depicts two important findings which highlight the predictive capacity of the worker power hypothesis. The first is that the equilibrium path of the rate of return of capital obtained by allowing changes in labor institutions matches remarkably well the behavior of the average markup in the US. Particularly, both the model and the data show a declining trend in business profitability between the 1960s and the late 1970s, and a steady recovery since the early 1980s (see Figures B6 and B7 in Online Appendix B for additional evidence). This contrasts with the predicted path obtained by only allowing changes in technology, where the model and the data move in polar directions. Thus, Figures 9 and 10 present clear evidence--based on solid theoretical foundations--showing that there is a redistribution of the production surplus from labor to capital with a weakening power of workers.
The second important finding in Figure 10 is that the wage-premium is positively correlated with business profitability. A possible interpretation of this result is that the growing surplus going from labor to capital can filtrate to different types of workers depending on the role they play in the production process. For example, managers and executives--who are high up in the scale of skilled workers (Autor, 2015, p. 18)--have profited from the decline of union membership by removing the influence of production workers on executive pay (DiNardo, Hallock, and Pischke, 2000; Rosenfeld, 2006). Additionally, they have probably benefited from lower minimum wages and declining top marginal income tax rates given that part of their pay is directly tied to bonuses and stock options--both of which are not necessarily influenced by their own
performance, but are rather determined by external circumstances related to the profitability of businesses (Piketty and Saez, 2003; Piketty, Saez, and Stantcheva, 2014; Acemoglu, He, and le Maire, 2022).22
Footnote 22: It goes without saying that the association of worker power with the wage-premium does not rule out the possibility that skills and education play an important part in the determination of wages. However, given that the demand for high-skilled labor is probably associated with business profitability, it is unlikely that the bias in skilled labor is an exogenous factor causing the sharp increase in the wage-premium.
### Concentration and markups
Many works attempting to explain the fall in the labor share since the 1980s base their analyses on the principle that large firms can pay workers below their marginal productivity, such that (in the text's notation):
\[y_{L_{t}}=w_{t}(1+\mu_{t}) \tag{17}\]
Figure 10. PROFITABILITY AND WAGE-PREMIUM.
The problem, as previously noted by Stansbury and Summers (2020), is that there is essentially no way to tell from equation (17) alone whether the rise in \(\mu\) results from increasing market concentration or from a fall in worker power.
Figure 11 helps solve this identification problem by directly comparing different measures of capital profitability with the concentration of markets on large firms; Figure B7 in online Appendix B presents additional evidence. The key takeaway is that the association between market concentration and higher markups is only clear after 1982, which is the period commonly studied in the papers defending the market power hypothesis (e.g., Autor, Dorn, Katz, Patterson, and Van Reenen, 2020; Barkai, 2020). Between the 1950s and the late 1970s, by contrast, market concentration and business profitability move in polar directions, while--as shown in Figure 10--the latter is always consistent with the behavior of worker power induced by the institutional changes in the US.
Figure 11. profitability and market concentration.
## VII Conclusions
The article has proposed a novel approach showing how politico-economic variables can intervene in macroeconomic outcomes by directly affecting the power of labor. In this environment, labor institutions define the "playing field" in the bargaining process of wages, which is instrumental for determining the equilibrium rate of unemployment and the rate of return of capital. Moreover, the surplus realized by capitalists in the bargaining process is central in the model by establishing the funds for a continuous reproduction of the economy at an increasing scale, and by defining the regions for which it is profitable for firms to substitute capital for labor.
Empirically, the model offers a plausible explanation for the long-run behavior of the labor share, capital profitability, the capital-output ratio, the rate of unemployment, and the vacancy rate, based on a combination of institutional and technological changes over the postwar period. In addition, the analysis helps narrow down the multidimensionality of institution-driven stories of the fall in the labor share over the past half-century to specific policy changes which include--but are not necessarily limited to--the variations in union membership, unemployment benefits, real minimum wages, and geopolitical threats. In this respect, the model opens up the traditional framework by showing how the political economy of income distribution, labor institutions, and political preferences is not a mere complement to, but rather a vital part of, macroeconomic analysis.
## Appendix A Main Appendix
### Model with investment-specific technological change
The analysis in the text was carried out under Assumption 1 and the principle that \(q_{t}=q\) for any \(t\). This section introduces a generalization of the model in the text by replacing Assumption 1 with
Assumption A1--\(A_{t}^{k}=A^{k}D(h_{t})^{-a_{0}}\) and \(A_{t}^{l}(j)=e^{\alpha j}D(h_{t})^{a_{1}}\), with \(D^{\prime}(h_{t})>0\) and \(a_{0},a_{1}>0\).
Assumption A1 follows Grossman, Helpman, Oberfield, and Sampson (2017) by positing a relation between the management effort of firms--here denoted as \(h_{t}\)--and the disembodied technology functions. Intuitively, the assumption says that firms can raise the productivity of labor at the expense of increasing the relative supply of effective capital, which tilts the unit isoquants and leads to a technological change which is both labor saving and capital using.
Using Assumption A1 and following similar steps as those outlined in Acemoglu and Restrepo (2018), the aggregate output of the economy can be written as
\[Y_{t}=\Bigg{[}(1-m_{t}^{*})^{1/\sigma}\big{(}K_{t}A^{k}D(h_{t})^{-a_{0}}\big{)}^{\frac{\sigma-1}{\sigma}}+\Big{(}\frac{e^{\alpha(\sigma-1)m_{t}^{*}}-1}{\alpha(\sigma-1)}\Big{)}^{1/\sigma}\Big{(}A_{t}^{l}(J^{*})L_{t}\Big{)}^{\frac{\sigma-1}{\sigma}}\Bigg{]}^{\frac{\sigma}{\sigma-1}}.\]
Given the ideal price index condition, the partial derivatives of \(Y_{t}\) with respect to \(K_{t}\) and \(L_{t}\) satisfy
\[\begin{split} Y_{K_{t}}&=\Bigg{(}\frac{Y_{t}}{K_{t}}\Bigg{)}^{1/\sigma}(1-m_{t}^{*})^{1/\sigma}\big{(}A^{k}\big{)}^{\frac{\sigma-1}{\sigma}}D(h_{t})^{-a_{0}\frac{(\sigma-1)}{\sigma}}=\frac{\delta P_{t}^{k}}{P_{t}^{c}}\\ Y_{L_{t}}&=\Bigg{(}\frac{Y_{t}}{e^{\alpha J_{t}^{*}}D(h_{t})^{a_{1}}L_{t}}\Bigg{)}^{1/\sigma}\Big{(}\frac{e^{\alpha(\sigma-1)m_{t}^{*}}-1}{\alpha(\sigma-1)}\Big{)}^{1/\sigma}e^{\alpha J_{t}^{*}}D(h_{t})^{a_{1}}=\frac{W_{t}}{P_{t}^{c}}\end{split}\] (A1)
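To see how the marginal products in (A1) follow from the aggregate production function, the short Python sketch below evaluates \(Y_{t}\) for arbitrary placeholder parameter values (they are not the paper's calibration) and checks the closed-form expressions for \(Y_{K_{t}}\) and \(Y_{L_{t}}\) against finite differences.

```python
import math

# Placeholder parameter values for illustration only (not the calibration).
sigma, alpha, a0, a1 = 0.6, 1.4, 0.3, 0.2
A_k, D_h = 0.022, 1.1
m_star, J_star = 0.12, 0.5

A_l = math.exp(alpha * J_star) * D_h ** a1                      # A_t^l(J*)
B = (math.exp(alpha * (sigma - 1) * m_star) - 1) / (alpha * (sigma - 1))

def Y(K, L):
    """Aggregate output under Assumption A1 (CES over capital and labor tasks)."""
    s = (sigma - 1) / sigma
    capital_part = (1 - m_star) ** (1 / sigma) * (K * A_k * D_h ** (-a0)) ** s
    labor_part = B ** (1 / sigma) * (A_l * L) ** s
    return (capital_part + labor_part) ** (1 / s)

K, L, eps = 10.0, 1.0, 1e-6
Y0 = Y(K, L)

# Closed-form marginal products from (A1).
Y_K = (Y0 / K) ** (1 / sigma) * (1 - m_star) ** (1 / sigma) \
      * (A_k * D_h ** (-a0)) ** ((sigma - 1) / sigma)
Y_L = (Y0 / (A_l * L)) ** (1 / sigma) * B ** (1 / sigma) * A_l

# Finite-difference checks: these should closely match the closed forms.
print(Y_K, (Y(K + eps, L) - Y(K - eps, L)) / (2 * eps))
print(Y_L, (Y(K, L + eps) - Y(K, L - eps)) / (2 * eps))
```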
To further simplify the analysis, let \(Y_{t}\) be expressed as
\[Y_{t}=e^{\alpha J_{t}^{*}}D(h_{t})^{a_{1}}L_{t}\Bigg{[}(1-m_{t}^{*})^{1/\sigma }Z_{t}+\Bigg{(}\frac{e^{\alpha(\sigma-1)m_{t}^{*}}-1}{\alpha(\sigma-1)}\Bigg{)} ^{1/\sigma}\Bigg{]}^{\frac{\sigma}{\sigma-1}}.\]
Where \(Z_{t}=\Big{(}(K_{t}/L_{t})D(h_{t})^{-(a_{0}+a_{1})}e^{-\alpha J_{t}^{*}} \Big{)}^{\frac{\sigma-1}{\sigma}}\). Denoting \(A=a_{1}/(a_{0}+a_{1})\), the aggregate production function can be expressed as
\[Y_{t}=\left(L_{t}e^{\alpha J_{t}^{*}}\right)^{1-A}K_{t}^{A}Z_{t}^{\frac{-A\sigma} {\sigma-1}}\Bigg{[}(1-m_{t}^{*})^{1/\sigma}Z_{t}+\left(\frac{e^{\alpha(\sigma-1 )m_{t}^{*}}-1}{\alpha(\sigma-1)}\right)^{1/\sigma}\Bigg{]}^{\frac{\sigma}{ \sigma-1}}\]
which is a Cobb-Douglas function with possible shifts in the factor share parameters. The next lemma presents a generalization of Lemma 1 in the text.
Lemma A1-- Suppose that Assumption A1 holds. If firms choose the management effort to maximize output, then in any BGP:
* \(g_{K}=g_{Y}+g_{q}\).
* \(g_{Y}=g_{C}=g=\alpha\dot{M}+a_{1}\ g_{q}/a_{0}\).
* \(\frac{D^{\prime}(h)}{D(h)}\dot{h}=g_{q}/a_{0}\).
The proof of Lemma A1 is shown in Online Appendix A. For now, the main argument is that the model in the text can be easily generalized to incorporate investment-specific technological change.
## Appendix B Auxiliary results
### Decision over bargaining strategies
Here I propose a game-theoretic model determining the probability that workers will choose a collective bargaining strategy in Figure 1. The multidimensionality in the preferences of workers under collective bargaining is expressed as:
\[U_{W}^{i,1}=\omega_{i0}+\omega_{1}L^{u}+\omega_{2}w^{u}-\omega_{3}(\mathcal{R}-\bar{\mathcal{R}}_{i})^{2}-\omega_{4}(\mathcal{Q}-\bar{\mathcal{Q}})^{2}\]
with \(\omega_{j}\geq 0\) for \(j\in\{i0,1,2,3,4\}\), \(\omega_{10}>\omega_{20}\) and \(\bar{\mathcal{R}}_{1}<\bar{\mathcal{R}}_{2}\). The first term \(\omega_{i0}\) is a proxy for the government's support to labor. The second term is a Stone-Geary type utility function describing the wage-employment gains associated with participating in a collective bargaining protocol (Lee, Roemer, and Van der Straeten, 2006). The third term represents the workers' view on identity issues. For example, a higher \(\mathcal{R}\) can
represent a higher degree of racism among workers; whereas a lower \(\bar{\mathcal{R}}\) may represent a greater government support to minorities. The last term is meant to represent the workers' view on "social justice," where \(\mathcal{Q}\) is a measure of economic equality and \(\bar{\mathcal{Q}}\) is the perceived ideal level of inequality by the typical worker (Alesina and Giuliano, 2011). The utility of workers under individual bargaining is simply \(U_{W}^{i,2}=\omega_{1}L^{n}+\omega_{2}w^{n}\) for \(i\in\{1,2\}\).
For conceptual simplicity, I assume that the government is exclusively interested in maximizing its vote share. In each scenario, the government gets
\[U_{G}^{1,j} =\mathcal{V}_{1,j}+\mathcal{V}_{3}\varphi,\ \ \text{with}\ \mathcal{V}_{3}>0.\] \[U_{G}^{2,j} =\mathcal{V}_{2,j},\ \ \text{with}\ j\in\{1,2\}.\]
Here \(\varphi\) is the measure of the "Communist threat" and \(\mathcal{V}_{i,j}\) is an autonomous component capturing the public's preference in each possible scenario. Surely this is overly simplistic, but it helps illustrate how the Communist threat can induce the government to favor a bigger welfare state to avoid losing public support.23
Footnote 23: This is well represented in a letter of president Eisenhower to his brother in the early 1950s, where he stated: “Should any political party attempt to abolish social security, unemployment insurance, and eliminate labor laws... you would not hear of that party again in our political history” (Gerstle, 2022, p. 45).
If workers and the government maximize the expected payoff associated with each strategy in Table 3, subject to an entropy constraint, there will exist (given the appropriate regularity conditions) a unique Nash equilibrium with mixed strategies (Mackowiak, Matejka, and Wiederholt, 2023)
\begin{table}
\begin{tabular}{l|c|c} \hline \hline & Collective bargaining & Individual bargaining \\ \hline High political support & \(U_{W}^{1,1}\), \(U_{G}^{1,1}\) & \(U_{W}^{1,2}\), \(U_{G}^{1,2}\) \\ \hline Low political support & \(U_{W}^{2,1}\), \(U_{G}^{2,1}\) & \(U_{W}^{2,2}\), \(U_{G}^{2,2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3. Payoff table.
\[P(\mathcal{U}=i|\cdot)=\frac{e^{\lambda^{W}\sum_{j=1}^{2}P^{G}(S=j|\cdot)U_{W}^{j,i}}}{\sum_{i^{\prime}=1}^{2}e^{\lambda^{W}\sum_{j=1}^{2}P^{G}(S=j|\cdot)U_{W}^{j,i^{\prime}}}}\] (B1)
and
\[P^{G}(S=j|\cdot)=\frac{e^{\lambda^{G}\sum_{i=1}^{2}P(\mathcal{U}=i|\cdot)U_{G}^{j,i}}}{\sum_{j^{\prime}=1}^{2}e^{\lambda^{G}\sum_{i=1}^{2}P(\mathcal{U}=i|\cdot)U_{G}^{j^{\prime},i}}}\] (B2)
Here \(P(\mathcal{U}=1|\cdot)\) denotes the probability of collective bargaining and \(P^{G}(S=1|\cdot)\) is the probability that the government provides high institutional support to labor. The key feature of (B1) and (B2) is that by introducing some "randomness" in the behavior of workers and the government (represented by \(\lambda^{W}\) and \(\lambda^{G}\)), both equations capture the complexity of aggregating over heterogeneous individuals with limited information-processing capacities.
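To illustrate how (B1) and (B2) jointly determine the mixed-strategy equilibrium, the sketch below iterates the two logit response maps to an (approximate) fixed point; the payoff numbers and the values of \(\lambda^{W}\) and \(\lambda^{G}\) are arbitrary placeholders rather than the paper's calibration.

```python
import math

# Placeholder payoffs U[(j, i)]: j indexes the government's support (1 = high, 2 = low),
# i indexes the workers' strategy (1 = collective, 2 = individual). Arbitrary values.
U_W = {(1, 1): 2.0, (1, 2): 1.0, (2, 1): 0.5, (2, 2): 1.2}
U_G = {(1, 1): 1.5, (1, 2): 0.8, (2, 1): 0.6, (2, 2): 1.0}
lam_W, lam_G = 2.0, 2.0          # lower values mean "noisier" (more random) behavior

def logit(v1, v2, lam):
    """Probability of the first option under a two-option logit rule."""
    e1, e2 = math.exp(lam * v1), math.exp(lam * v2)
    return e1 / (e1 + e2)

p_collective, p_high = 0.5, 0.5  # initial guesses for P(U=1|.) and P^G(S=1|.)
for _ in range(200):
    # Workers' response (B1): expected payoff of each strategy given the government's mix.
    ev_col = p_high * U_W[(1, 1)] + (1 - p_high) * U_W[(2, 1)]
    ev_ind = p_high * U_W[(1, 2)] + (1 - p_high) * U_W[(2, 2)]
    p_collective = logit(ev_col, ev_ind, lam_W)
    # Government's response (B2): expected payoff of each support level given the workers' mix.
    ev_high = p_collective * U_G[(1, 1)] + (1 - p_collective) * U_G[(1, 2)]
    ev_low = p_collective * U_G[(2, 1)] + (1 - p_collective) * U_G[(2, 2)]
    p_high = logit(ev_high, ev_low, lam_G)

print(f"P(collective bargaining) = {p_collective:.3f}, P(high support) = {p_high:.3f}")
```

With these placeholder numbers, raising the government's payoff to high support (for instance through a larger \(\mathcal{V}_{3}\varphi\) term) moves the fixed point toward more collective bargaining, which mirrors the mechanism illustrated in Figure 12.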
Figure 12 illustrates the basic argument of the decision model by associating each equilibrium outcome with the proxy of the Communist threat. For instance, the model
shows that the probability of an equilibrium with high institutional support to labor and high collective bargaining increases with a rise in \(\varphi\)--as illustrated in the data of Figure 9, where the surge in the relative real GDP per capita of the Soviet Union was accompanied by a rise in the institutional support to labor in the US. Correspondingly, Figure 12 shows that a decrease in \(\varphi\) can raise the probability of an equilibrium with low institutional support to labor and a higher density of individual bargaining, as it happened in the US following the mid-1970s.
These results do not substitute, but rather complement the existing studies associating factors like racism and the "American exceptionalism" with the public's support to welfare (Lee, Roemer, and Van der Straeten, 2006; Alesina and Giuliano, 2011). In fact, this is a potentially fruitful area for future research since it can help disentangle the causes determining the political state of society and thus the factors which shape the power of labor.
### Auxiliary results to Section IV
This subsection presents the theoretical structure for Figures 4 and 5.
#### b.2.1. Arbitrage Condition
Assume the existence of a representative capitalist consumer looking to maximize24
Footnote 24: To save notation I assume that \(q_{t}=q\) and \(a_{0}=a_{1}=0\), as in the text.
\[\int_{0}^{\infty}e^{-(\mu_{t}^{*}\delta_{t}-\epsilon g_{t})t}\ \frac{C_{t}^{1-\epsilon}-1}{1-\epsilon}\mathrm{d}t\quad\text{s.t. the capital accumulation constraint.}\]
The discount rate combines two elements. The first element \((\mu_{t}^{*}\delta_{t})\) captures the equilibrium return that the capitalist expects to receive from an additional unit
of capital.25 The second element \((\epsilon g_{t})\) states that regardless of the activity chosen by the capitalist, it will always expect a diminishing marginal utility of consumption resulting from the expansion of the economy.
Footnote 25: Intuitively, \(\mu_{t}^{*}\delta_{t}\) can be interpreted as the equilibrium return that a typical capitalist can expect to receive in a competitive environment from an additional unit of productive capital. This takes the place of the _required rate of return_ commonly used in the literature.
Expressing the results using stationary per-capita variables:
\[\frac{\dot{\hat{c}}_{t}}{\hat{c}_{t}}=\frac{1}{\epsilon}\Big{[}\hat{y}_{\hat{k }_{t}}q-\delta_{t}(1+\mu_{t}^{*})\Big{]}\] (B3)
\[\lim_{t\rightarrow\infty}\hat{k}_{t}\;e^{-\int_{0}^{t}(\mu_{t^{\prime}}\delta_ {t^{\prime}}-g_{t^{\prime}})\mathrm{d}t^{\prime}}=0.\] (B4)
Equation (B3) is meant to create an _analogy_ of the social conditions of arbitrage characterizing the tendency towards an equilibrium rate of return. This is clear if we use equation (A1), in which case (B3) is reduced to \(\dot{\hat{c}}_{t}/\hat{c}_{t}=\delta(\mu_{t}-\mu_{t}^{*})/\epsilon\). By this logic, there is a flat consumption profile, \(\dot{\hat{c}}=0\), when \(\mu_{t}=\mu_{t}^{*}\), indicating that there are no net advantages for changes in the use of capital. However, if \(\mu_{t}>\mu_{t}^{*}\), capitalists will be willing to sacrifice some consumption today for consumption tomorrow given that current capital inflows will be rewarded above its equilibrium level.
Equation (B4) shows that in a dynamically efficient equilibrium the marginal net return per unit of capital must be greater than the equilibrium growth rate of the economy.
#### b.2.2. Automation Regions
The next lemma is a modified version of Lemma A2 in Acemoglu and Restrepo (2018).
Lemma B1-- Suppose that Assumption A1 holds and that the economy is initially in a BGP with positive growth satisfying (15). Then, for a given \(\mu^{*}\), there exist \(q^{\text{min}}<\bar{q}<q^{\text{max}}\) such that:
1. If \(q\in[q^{\text{min}},\bar{q}]\), there is a decreasing function \(\bar{m}(q):q\in[q^{\text{min}},\bar{q}]\to(0,1)\) such that for all \(m>\bar{m}(q)\), we have \(w_{J}(m)>D(h_{t})^{a_{0}}\delta/(A^{k}q)<w_{M}(m)\) and \(D(h_{t})^{a_{0}}\delta/(A^{k}q)=w_{M}(\bar{m}(q))\). Moreover, \(\bar{m}(q^{\text{min}})=1\) and \(\bar{m}(\bar{q})=0\).
2. If \(q\in[\bar{q},q^{\text{max}}]\), there is an increasing function \(\tilde{m}(q):q\in[\bar{q},q^{\text{max}}]\to(0,1)\) such that for all \(m>\tilde{m}(q)\), we have \(w_{J}(m)>D(h_{t})^{a_{0}}\delta/(A^{k}q)<w_{M}(m)\) and \(D(h_{t})^{a_{0}}\delta/(A^{k}q)=w_{J}(\tilde{m}(q))\). Moreover, \(\tilde{m}(q^{\text{max}})=1\) and \(\tilde{m}(\bar{q})=0\).
The cases where \(q>q^{\text{max}}\) or \(q<q^{\text{min}}\) are analogous to cases (iii) and (iv) in Acemoglu and Restrepo (2018, p. 1531).
Proof. See Online Appendix A.
|
2308.06089 | An Autoethnographic Exploration of XAI in Algorithmic Composition | Machine Learning models are capable of generating complex music across a
range of genres from folk to classical music. However, current generative music
AI models are typically difficult to understand and control in meaningful ways.
Whilst research has started to explore how explainable AI (XAI) generative
models might be created for music, no generative XAI models have been studied
in music making practice. This paper introduces an autoethnographic study of
the use of the MeasureVAE generative music XAI model with interpretable latent
dimensions trained on Irish folk music. Findings suggest that the exploratory
nature of the music-making workflow foregrounds musical features of the
training dataset rather than features of the generative model itself. The
appropriation of an XAI model within an iterative workflow highlights the
potential of XAI models to form part of a richer and more complex workflow than
they were initially designed for. | Ashley Noel-Hirst, Nick Bryan-Kinns | 2023-08-11T12:03:17Z | http://arxiv.org/abs/2308.06089v1 | # An Autoethnographic Exploration of XAI in Algorithmic Composition
###### Abstract.
Machine Learning models are capable of generating complex music across a range of genres from folk to classical music. However, current generative music AI models are typically difficult to understand and control in meaningful ways. Whilst research has started to explore how explainable AI (XAI) generative models might be created for music, no generative XAI models have been studied in music making practice. This paper introduces an autoethnographic study of the use of the MeasureVAE generative music XAI model with interpretable latent dimensions trained on Irish folk music. Findings suggest that the exploratory nature of the music-making workflow foregrounds musical features of the training dataset rather than features of the generative model itself. The appropriation of an XAI model within an iterative workflow highlights the potential of XAI models to form part of a richer and more complex workflow than they were initially designed for.
**Human-centered computing** \(\rightarrow\) **Interactive systems and tools**; _Graphical user interfaces_; **Applied computing** \(\rightarrow\) **Sound and music computing**.
**Additional Key Words and Phrases:** ethnographic artificial intelligence, music generation, explainable artificial intelligence
**ACM Reference Format:**
Ashley Noel-Hirst and Nick Bryan-Kinns. 2023. An Autoethnographic Exploration of XAI in Algorithmic Composition. In _The 1st International Workshop on Explainable AI for the Arts (XAIxArts)_, ACM Creativity and Cognition (C&C) 2023. Online, 3 pages. [https://xiaarts.github.io](https://xiaarts.github.io)
## 1. Introduction
In recent years, computational approaches to music generation and creativity support have prompted and adopted a range of Artificial Intelligence (AI) techniques, especially Machine Learning. Composers who utilize novel musical representations, timbres, and pitch and time divisions go so far as to create new software for their works to be realized, often using AI tools to support this process (Bryan and Krain, 2017). These approaches broadly fall into two paradigms: the appropriation of materials from existing deep learning models (Bryan and Krain, 2017), and the development of novel models e.g. (Bryan and Krain, 2017). However, regardless of the paradigm or application of AI for music, the underlying processes are typically hard to understand or control in meaningful ways (Bryan and Krain, 2017). Whilst explainable AI (XAI) (Krain, 2017) and Human-Centred AI (HCAI) (Krain, 2017) aim to make AI models more understandable to humans, there have been no studies of how XAI approaches might be used in artistic practice.
## 2. Exploring an Explainable Variational Autoencoder
To explore the use of XAI in creative practice we report on an autoethnographic exploration of a generative music XAI system - MeasureVAE (Bryan and Krain, 2017) implemented as a Max4Live plugin (Bryan and Krain, 2017) - as an illustrative case study of how such XAI models might be appropriated in music-making practice. The autoethnographic exploration is reported in first person by the first author, whose music making practice involves composing interactive and generative musical systems (Bryan and Krain, 2017), which are then rendered as fixed electronic works.
In my practice, I utilize a range of rule-based composition systems with other AI methods. These are typically focused on the evolution of rhythms over time, and integrate a number of Max/MSP tools - such as Euclidean sequencers, which generate complex rhythms from a few parameters. These rhythms are found in a number of folk traditions, but interestingly they are not represented in the Irish folk data that MeasureVAE was trained on. To explore this dissonance
between musical practice and AI training data I reflexively iterated through several approaches to music making. Here I report on one approach informed by my music making practice in which Euclidean rhythms are used to drive and explore the generation of melodies with MeasureVAE - as illustrated by the workflow in figure 1.
Euclidean rhythms are lists of pulses [x] and rests [.]. These lists, expressed as E(i,j,k), describe i pulses dividing the j total beats as evenly as possible, displaced to the right by k positions. For example, the phrase [x..x..x..x..] is the Euclidean Rhythm expressed by E(5,13,0). Toussaint [11] outlined how this algorithm can be used to generate, analyse and permute a range of rhythmic patterns from across the world. It can also be found in use in minimalist composition [6] as well as being used by live coders and modular synth users [9].
In my own practice, I typically use a range of parameterised sequences in parallel to drive the creation of new musical phrases. For this study I created a number of Euclidean sequences: E(3,7,2) [. X. X. X]; E(4,16,0) [X...]; and E(2,5,2) [. X. X]. I then applied each rhythm to a note from a C minor chord, resulting in the poly-rhythm-based melody illustrated in figure 2(a). Since MeasureVAE needs monophonic input, monophonic melodies were extracted using a lowest note policy - restraint brings tension to the infrequent high notes. When applied to the pattern in figure 2(a), this produces the legato melody in figure 2(b). This monophonic melody is then fed into MeasureVAE which generates an output illustrated in figure 2(c). Permuting the parameters of each Euclidean rhythm produces a number of similar melodies with comparable divergence from the model. I then explored modulations of my Euclidean system, composing 'into' the fixed MeasureVAE to produce new melodies as illustrated in figure 2(d). Through this process I found musical artifacts in the MeasureVAE - encoded elements which reflected the underlying training data (or lack thereof) more than the input itself.
Figure 1: Musical workflow for generating musical measures from Euclidean Rhythms.
Figure 2: a) Three Euclidean rhythms are played in parallel using notes from a C minor chord; b) This is made monophonic, generating a melody; c) Its encoding without any modification; d) Its encoding after the Euclidean parameters were altered.
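A minimal Python sketch of this workflow is shown below. It is a reconstruction rather than the actual Max/MSP patch used in the study: the Euclidean construction is a common Bresenham-style one (equivalent to Bjorklund's algorithm up to rotation), and the 16-step bar length and MIDI voicing of the C minor chord are assumptions.

```python
def euclid(pulses: int, steps: int, rotation: int = 0):
    """E(pulses, steps, rotation): a simple Bresenham-style construction,
    which agrees with Bjorklund's algorithm up to rotation."""
    pattern = [(i * pulses) % steps < pulses for i in range(steps)]
    return pattern[-rotation:] + pattern[:-rotation] if rotation else pattern

# The three rhythms used in the study, each assigned one note of a C minor
# chord (the MIDI voicing below is an assumption, not taken from the paper).
voices = [
    (euclid(3, 7, 2), 60),    # E(3,7,2)  -> C4
    (euclid(4, 16, 0), 63),   # E(4,16,0) -> Eb4
    (euclid(2, 5, 2), 67),    # E(2,5,2)  -> G4
]

# Run the rhythms in parallel over one 16-step bar (shorter patterns loop),
# then apply the lowest-note policy to obtain a monophonic, legato melody.
melody, current = [], None
for step in range(16):
    sounding = [note for pattern, note in voices if pattern[step % len(pattern)]]
    if sounding:
        current = min(sounding)   # lowest-note policy
    melody.append(current)        # hold the previous note through rests (legato)

print(melody)
```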
### Feedback and Explainability
Through exploration, I found that employing MeasureVAE in tandem with other computational composition systems extended MeasureVAE's explainability beyond its intended scope. First, we gain new avenues for explainability. MeasureVAE provides a navigable heatmap of the latent space, highlighting areas of the source distribution which are more or less dense in the original dataset. In the denser areas, regularisation makes for useful 'contrastive feature' explanation and navigation. By feeding a series of permuting Euclidean sequences into MeasureVAE, I was able to explore the under-defined contours of the latent space through example. My parametrically-defined melodies would be approximated within the distribution of the original dataset, and the artifacts which arose during this process gave some idea of the extent to which my melody was congruent with the distribution of the training set. Conversely, this could also be thought of as an evaluation of the appropriateness of the original dataset for my task, or a qualitative understanding of the high dimensional topology of the latent space, going beyond the 4 regularised dimensions originally explored (Beng et al., 2020).
Second, there is some appropriation of the explainability of one system to another. For example, Euclidean systems are difficult to navigate, and evaluative feedback from MeasureVAE in the music making process helped to inform the tweaking of my Euclidean system. Specifically, the MeasureVAE encoder gave me feedback about the extent to which my melody caused activation in its regularised dimensions. With this, I was able to more fluidly modify my Euclidean parameters so as to create melodies which were encoded as being less dense or complex, e.g. figure 2(d).
## 3. Conclusions
Explainable AI music systems promise increased control and transparency to the computer music composer. As illustrated in this paper, the location of the explainability in a musical workflow has implications for the types of feedback it can give - in our case, exploring the features of the training set, and appropriating explainability from an XAI to an algorithmic composition process. Future work includes exploring how we can leverage such re-contextualised feedback for algorithmic surprise, qualitative evaluation of datasets, and cross-genre adaptation of pre-trained models.
|
2307.01163 | Improving Language Plasticity via Pretraining with Active Forgetting | Pretrained language models (PLMs) are today the primary model for natural
language processing. Despite their impressive downstream performance, it can be
difficult to apply PLMs to new languages, a barrier to making their
capabilities universally accessible. While prior work has shown it possible to
address this issue by learning a new embedding layer for the new language,
doing so is both data and compute inefficient. We propose to use an active
forgetting mechanism during pretraining, as a simple way of creating PLMs that
can quickly adapt to new languages. Concretely, by resetting the embedding
layer every K updates during pretraining, we encourage the PLM to improve its
ability of learning new embeddings within a limited number of updates, similar
to a meta-learning effect. Experiments with RoBERTa show that models pretrained
with our forgetting mechanism not only demonstrate faster convergence during
language adaptation but also outperform standard ones in a low-data regime,
particularly for languages that are distant from English. | Yihong Chen, Kelly Marchisio, Roberta Raileanu, David Ifeoluwa Adelani, Pontus Stenetorp, Sebastian Riedel, Mikel Artetxe | 2023-07-03T17:12:44Z | http://arxiv.org/abs/2307.01163v3 | # Improving Language Plasticity via Pretraining with Active Forgetting
###### Abstract
Pretrained language models (PLMs) are today the primary model for natural language processing. Despite their impressive downstream performance, it can be difficult to apply PLMs to new languages, a barrier to making their capabilities universally accessible. While prior work has shown it possible to address this issue by learning a new embedding layer for the new language, doing so is both data and compute inefficient. We propose to use an _active forgetting_ mechanism during pretraining, as a simple way of creating PLMs that can quickly adapt to new languages. Concretely, by resetting the embedding layer every \(K\) updates during pretraining, we encourage the PLM to improve its ability of learning new embeddings within limited number of updates, similar to a meta-learning effect. Experiments with RoBERTa show that models pretrained with our forgetting mechanism not only demonstrate faster convergence during language adaptation, but also outperform standard ones in a low-data regime, particularly for languages that are distant from English.
## 1 Introduction
Pretrained language models (PLMs) have been swiftly reshaping the landscape of natural language processing (NLP) by improving upon standardized benchmarks across the board (Radford and Narasimhan, 2018; Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020). At their core, they acquire knowledge by ingesting large datasets and store this knowledge in their parameters during pretraining. Using finetuning or prompting (Brown et al., 2020), such knowledge can then be applied to downstream applications, such as semantic analysis, question answering, and others.
Despite their success, PLMs still have a number of shortcomings (Weidinger et al., 2021, 2022). In particular, it requires massive data and computation to pretrain them (Gururangan et al., 2020; Kaplan et al., 2020; Hernandez et al., 2021; Hu et al., 2021; Touvron et al., 2023). Naively retraining a new PLM to accommodate every lingual space shift 1 would be prohibitively expensive. Thus, it is a highly relevant research target to create PLMs that can be efficiently adapted to new lingual spaces.
Footnote 1: We use the term _lingual space shift_ to describe changes in language usage between pretraining and the target downstream application, caused by factors such as language change, time evolution, or domain switch. A model with high _language plasticity_ would quickly adapt to these shifts.
While forgetting in the context of both human and machine learning is often perceived as something negative (for instance catastrophic forgetting (McCloskey and Cohen, 1989; Ratcliff, 1990; Kirkpatrick et al., 2017)), recent works have shown that for artificial neural networks forgetting can also play a positive role in increasing their "plasticity", such as improving generalization to
unseen data (Zhou et al., 2022; Chen et al., 2022; Igl et al., 2021), enabling learning in low-data regimes (Alabdulmohsin et al., 2021; Taha et al., 2021), or counteracting primacy bias (Nikishin et al., 2022; D'Oro et al., 2023). Given these developments, in this work we ask whether we can draw upon forgetting as a mechanism to improve _pretraining_ and imbue PLMs with similar benefits.
It is well established in the NLP community that models struggle to generalize across languages without substantial intervention (Conneau et al., 2020; Pfeiffer et al., 2020; Ansell et al., 2022; Pfeiffer et al., 2020), which is especially true for low-resource languages. We thus see this as a promising testing ground for forgetting techniques. Our focus is on the input layer of the PLM, the _token embedding layer_, as learning it has been shown to be highly effective when adapting between languages (Artetxe et al., 2020).
Concretely, we introduce a simple _active forgetting_ mechanism that resets the token embeddings at regular intervals while leaving all other parameters untouched, and study whether our approach creates a PLM that can be more easily _rewired_ (Figure 1) to an unseen (possibly distant) language.
Our zero-shot evaluations on several cross-lingual transfer benchmarks show that for cases where the unlabeled adaptation corpus for the unseen language has as few as 5 million tokens (simulating a low-data regime), forgetting PLMs outperform the baseline by large margins: average gains of \(+21.2\%\) on XNLI, \(+33.8\%\) on MLQA, and \(+60.9\%\) on XQUAD. In addition, models pretrained using active forgetting converge significantly faster during language adaptation. Finally, we find that forgetting is especially beneficial for languages that _are distant from_ English, such as Arabic, Hindi, Thai, and Turkish.
## 2 Rewire PLMs for New Languages
Using unlabeled data, Artetxe et al. (2020) demonstrates the possibility of rewiring a monolingual PLM for a new language; they propose to relearn the embedding layer for the new language while keeping all the other parameters frozen. The underlying assumption is that the token embedding layer and the transformer body (the non-token-embedding parameters) divide up the responsibility in a way that the former handles language-specific lexical meanings, while the latter deals with high-level general reasoning. Hence, rewiring an English PLM for a new language boils down to separately adapting the former with unlabeled data in the new language and the latter with English task data. The procedure can be summarized as follows:
1. Pretrain: A transformer-based model is pretrained on an _English_ corpus. In our experiments, we choose to pretrain RoBERTa-base Liu et al. (2019), a 12-layer transformer-based model, on English CC100 (Conneau et al., 2020).
2. Language Adapt: The token embedding layer is finetuned using unlabelled data in the new language, while the transformer body is frozen.
3. Task Adapt: The transformer body is finetuned using downstream task data in English, while the token embedding layer is frozen.
4. Assemble: The final model is assembled by taking the adapted token embedding layer from stage 2 and the adapted transformer body from stage 3.
Figure 1: _Rewiring_ via relearning token embeddings: where the transformer body (the purple part) is “frozen” and reused for a new language, but the token embeddings are relearned to suit the new language.
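To make the division of labor between the stages concrete, here is a minimal PyTorch sketch of stages 2-4 on a toy stand-in for the PLM. The real experiments use RoBERTa-base with a masked-language-modeling objective, so the tiny model, random batches, and placeholder losses below are assumptions meant only to show which parameters are trained or frozen at each stage.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyPLM(nn.Module):
    """Toy stand-in for a PLM: a token-embedding layer plus a transformer 'body'."""
    def __init__(self, vocab=100, dim=16, num_labels=3):
        super().__init__()
        self.embeddings = nn.Embedding(vocab, dim)
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, num_labels))

    def forward(self, tokens):                     # tokens: (batch, seq_len)
        return self.body(self.embeddings(tokens).mean(dim=1))

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

pretrained = TinyPLM()                             # stage 1: the (toy) English PLM

# Stage 2 - language adapt: tune only the token embeddings on unlabeled text
# in the new language (MLM in the real setup; a placeholder loss here).
lang_model = copy.deepcopy(pretrained)
set_trainable(lang_model.body, False)
opt = torch.optim.Adam(lang_model.embeddings.parameters(), lr=1e-3)
for _ in range(10):
    tokens = torch.randint(0, 100, (8, 12))        # placeholder new-language batch
    loss = lang_model(tokens).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 3 - task adapt: tune only the body on English task data,
# keeping the original English embeddings frozen.
task_model = copy.deepcopy(pretrained)
set_trainable(task_model.embeddings, False)
opt = torch.optim.Adam(task_model.body.parameters(), lr=1e-3)
for _ in range(10):
    tokens = torch.randint(0, 100, (8, 12))        # placeholder English task batch
    labels = torch.randint(0, 3, (8,))
    loss = nn.functional.cross_entropy(task_model(tokens), labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 4 - assemble: new-language embeddings + task-adapted body,
# which is then evaluated zero-shot on the new language.
task_model.embeddings.load_state_dict(lang_model.embeddings.state_dict())
```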
### On The Difficulty of Rewiring PLMs via Relearning the Token Embeddings
While the above procedure [Artetxe et al., 2020] offers a general framework for rewiring a monolingual PLM with unlabelled data in the new language, it is unclear how efficient such rewiring can be, including both sample efficiency and computational efficiency. To better understand the difficulty of rewiring PLMs via relearning the token embeddings, we design an experiment where we relearn the token embedding layer using varying amounts of adaptation data. For illustration purposes, we pick English as the pseudo "adaptation language" because the English dataset is large enough to bootstrap a series of sub-datasets of varying sizes. We create sub-datasets with \([1\text{K},10\text{K},100\text{K},1\text{M},5\text{M},10\text{M},100\text{M}, 1\text{B},10\text{B}]\) tokens and relearn the English embeddings while keeping the transformer body frozen.
The dashed blue line in Figure 3 summarizes the influence of the adaptation data quantity on the quality of the rewired PLMs (relearned embeddings assembled with the English NLI task body). We can see that the standard PLMs are easy to rewire if there is enough adaptation data. However, if the adaptation corpus contains fewer than 10 million tokens, the rewiring performance of the standard PLMs (the blue dashed line in the figure) drops drastically as the adaptation data quantity goes down, from near \(80\) to around \(35\), near random-guessing performance for the NLI task. This motivates us to develop a new method for addressing the sample efficiency of relearning the token embeddings and create more rewirable PLMs.
Figure 3: The rewiring performance for standard PLMs (blue dashed line) drops drastically if the adaptation tokens \(\leq 10\)M.
Figure 2: Unsupervised zero-shot cross-lingual transfer. **Left**: in the pretrain stage, we compare standard pretraining with forgetting pretraining, where the token embeddings are actively forgotten at a regular interval while the transformer body is learned as in standard pretraining. **Middle**: the task adapt stage and the language adapt stage separately adapt the transformer body using English task data and the token embeddings using unlabeled data in the new language. **Right**: the assemble stage reassembles the adapted body and token embedding layer into a usable PLM.
## 3 Enhance Rewirability via Pretraining with Active Forgetting
Recent works have shown that incorporating forgetting through iterative weight resetting can increase the "plasticity" of neural networks, enabling them to learn from small data and generalize better to unseen data in supervised learning (Alabdulmohsin et al., 2021; Taha et al., 2021; Zhou et al., 2022). Building on these efforts, we wonder if we can bring such forgetting into the pretrain stage so that the resulting PLM would have more rewirability, allowing easier adaptation to new languages.
Our Hypothesis. In effect, when Artetxe et al. (2020) relearned the token embedding layer, the reinitialization of the embeddings can be seen as forgetting applied _once_ at the start of the language adapt stage. However, the PLM (specifically the transformer body) has never encountered forgetting before this stage and may struggle to handle this new situation. Without early exposure to forgetting, the PLM might suffer from slow recovery caused by forgetting before eventually benefiting from it. The learning of a new lexical embedding layer in a PLM therefore consumes lots of data in new languages along with long training horizons, as shown in Section 2.1. In this paper, to ensure swift learning of the new languages with both high sample efficiency and convergence rate, we argue that the PLM must be exposed to forgetting during pretraining, allowing it to maximize the positive impact of forgetting and minimize the cost of recovery.
Our Method. With this hypothesis in mind, we propose to add an _active forgetting_ mechanism to the pretraining procedure, which simply resets the token embedding layer periodically as described in Algorithm 1. Concretely, the forgetting mechanism operates by intentionally clearing the weights of the embedding layer, which stores the static representations for all tokens, and reinitializing them to a new set of random values every \(K\) gradient updates. Since pretraining involves advanced training strategies, like optimizers with states and learning rate schedulers, we also reset them along with the token embedding layer. We refer to language models pretrained with such an active forgetting mechanism as _forgetting PLMs_, in contrast to _standard PLMs_ which are pretrained in a standard way, i.e. without active forgetting.
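A minimal sketch of the mechanism, in the spirit of Algorithm 1 but not the fairseq implementation, is shown below on a toy model; the model, data, and loss are placeholders (the real setup pretrains RoBERTa-base with a masked-language-modeling objective), and the learning-rate schedule would also be reset in the full setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
K = 1000                    # forgetting frequency: reset embeddings every K updates
TOTAL_UPDATES = 5000        # toy horizon (pretraining in the paper runs 125K updates)

# Toy stand-in for the PLM: a token embedding layer plus a transformer "body".
model = nn.ModuleDict({
    "embeddings": nn.Embedding(100, 16),
    "body": nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 100)),
})
optimizer = torch.optim.Adam(model.parameters(), lr=7e-4)

def active_forget():
    """Re-initialize the token embeddings and clear the optimizer state so that
    stale moment estimates do not leak across resets."""
    nn.init.normal_(model["embeddings"].weight, mean=0.0, std=0.02)
    optimizer.state.clear()

for step in range(1, TOTAL_UPDATES + 1):
    tokens = torch.randint(0, 100, (8, 12))                  # placeholder batch
    logits = model["body"](model["embeddings"](tokens))
    # Placeholder reconstruction-style objective (the real setup uses masked LM).
    loss = nn.functional.cross_entropy(logits.view(-1, 100), tokens.view(-1))
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)  # clip-norm from the setup
    optimizer.step()

    if step % K == 0:        # the body keeps learning; the embeddings are forgotten
        active_forget()
```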
Research Questions. We study _forgetting PLMs_ and _standard PLMs_ across the axes of sample efficiency and convergence speed during the language adapt stage. To be precise, we are interested in answering the following research questions:
* RQ1: For most low-resource languages in the real world, there is little data for the language adapt stage. Does pretraining with active forgetting help forgetting PLMs learn the new language in such _low-data_ regimes?
* RQ2: When practitioners deploy PLMs for their own languages, they might face the challenge of low-compute. Can we use forgetting PLMs to reduce the computation time in the language adapt stage for such _low-compute_ scenarios?
* RQ3: The new language can be either very similar to the pretraining language or very distant from it. Does such similarity or distance impact forgetting PLMs' relative benefit over standard PLMs?
## 4 Evaluate Forgetting PLMs
To assess the effectiveness of forgetting PLMs and answer RQ1-RQ3, we perform experiments on several cross-lingual transfer benchmarks.
### Experimental Setup
In our work, we closely follow the setup in Artetxe et al. (2020) and Marchisio et al. (2022). Our pretraining model is RoBERTa-base, a standard \(12\)-layer transformer-based language model. We trained language-specific sentencepiece tokenizers (Kudo and Richardson, 2018) with a vocabulary size of \(50\)K over the corresponding data subsets in CC100. The model was pretrained with the English subset of the CC-100 dataset. The pretraining process consists of \(125\)K updates, with a batch size of 2048. We used a learning rate scheduler with linear decay and an initial learning rate of \(7\)e \(-4\), with \(10\)K warm-up updates. Checkpoints were saved every \(500\) updates and we always choose the last
pretraining checkpoint where possible for optimal performance. For forgetting pretraining, we chose the checkpoint corresponding to the best validation perplexity since the last checkpoint might have token embeddings reset. We set the frequency of forgetting \(\text{K}=1000\) and used a clip-norm of \(0.5\).
During the language adapt stage, we kept most of the hyper-parameters the same as for pretraining. We finetuned the token embedding layer while keeping the others frozen, as described in Section 2. Note that _no_ forgetting happens during this stage because we want the models to learn the new languages as well as possible. In the task adapt stage, both models were finetuned for \(10\) epochs on the English task data, specifically MultiNLI (Williams et al., 2018) for the NLI task and SQUAD Rajpurkar et al. (2016) for the QA task. After the assemble stage, we evaluate the zero-shot performance of the assembled model on XNLI (Conneau et al., 2018), a cross-lingual NLI task, along with XQUAD (Artetxe et al., 2020) and MLQA (Lewis et al., 2020), two cross-lingual QA tasks. We report the NLI accuracy and QA F1 on the test sets.
Our experiments were implemented using fairseq (Ott et al., 2019). The pretraining and language adaptation experiments were conducted on \(32\) Tesla V100 GPUs (each with \(32\) GB memory) and took approximately \(24\)-\(36\) hours to complete. The time taken for both stages were quite close to each other even though the latter only involved tuning the embeddings. This demonstrates the importance of reducing the computational cost of the language adaptation stage.
Differing from prior work (Artetxe et al., 2020, Marchisio et al., 2022), we focus on language adapt in low-data regimes. We simulate low-resources scenarios by limiting the adaptation data for each downstream language to only \(5\)M subword tokens from CC100. This is in contrast with conventional setups, where all the tokens in the corresponding languages in CC100 are used for language adaptation. As Table 5 shows, such setups consume several orders of magnitude more data than our \(5\)M-token setup; for instance, the Swahili CC100 subset contains \(345\)M tokens, roughly \(69\) times larger than our corpus, and the Russian subset contains \(34.9\)B tokens, roughly \(6,980\) times larger. Therefore, PLMs that can successfully learn new languages with rich data under traditional setups may struggle to do so with our limited \(5\)M-token corpus.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c c c|c|c} \hline \hline
**Method** & **i** & **i** & **sw** & **es** & **bg** & **de** & **Pr** & **d** & **rn** & **Ab** & **w** & **M** & **tr** & **ar** & **b** & **Avg** & **en** \\ \hline Standard & **65.8** & 55.6 & 68.0 & 65.5 & 62.2 & 63.5 & 63.1 & 56.9 & 53.2 & 36.8 & 39.7 & 38.9 & 41.2 & 35.3 & 53.3 & **86.1** \\ Forgetting & 62.8 & **59.5** & **74.0** & **71.7** & **68.5** & **71.2** & **70.8** & **65.3** & **65.5** & **65.8** & **52.9** & **52.7** & **95.5** & **59.7** & **62.7** & 85.1 \\ \hline Relative Gain & -4.69 & -4.70 & -4.85 & -49.5u & -40.15 & -42.19 & +42.29 & +41.22 & +45.69 & +49.48 & +42.5u & +43.29 & +35.5u & +44.49 & +40.15 & +42.25 & -1.29 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy comparison of forgetting and standard PLMs on XNLI. Forgetting PLMs bring a \(21.2\%\) averaged relative gain \(=\frac{\sum_{x\in\{\text{language}\}}\text{Relative Gain of }x}{\#\text{Languages}}\).
### Forgetting PLMs Work Better in Low-Data Regimes (RQ1)
A low-data scenario poses a serious obstacle for rewiring PLMs to new languages. Our experiments show that while a standard PLM (assembled with the task-adapted transformer body) can achieve an English NLI accuracy of \(86.1\), its performance drops significantly for new languages, with an average accuracy of only \(53.3\) on XNLI, as shown in Table 1. Compared to prior work which uses the full data from Wikipedia (Artetxe et al., 2020) or the full data from CC100 (Marchisio et al., 2022), the average accuracy on XNLI drops about \(18\%\) (from \(66.8\)/\(66.3\) to \(53.3\)) when the adaptation corpus is limited to \(5\) million tokens. This highlights the shortcoming of standard PLMs in low-data regimes. In contrast, forgetting PLMs still deliver a decent average accuracy on XNLI of \(62.7\), with an average relative gain of \(+21.2\%\) across languages over standard PLMs, as shown in Table 1.
Forgetting PLMs also outperform standard PLMs on MLQA and XQUAD, with average F1 relative gains of \(+33.8\%\) and \(+60.9\%\) across languages, as respectively demonstrated in Table 2 and Table 3. Across both NLI and QA tasks, forgetting PLMs consistently outperform standard PLMs by a large margin in low-data regimes. Why do forgetting PLMs handle the low-data regime better than standard PLMs? We hypothesize that this is because forgetting PLMs are more robust to different embedding initializations. They encode more universal knowledge within the transformer body, while the body of standard PLMs may encode more "shortcut" knowledge that only applies to certain embedding initializations. In a rich data regime, standard PLMs can always access sufficient data and gradually adjust the embeddings towards the "shortcut" knowledge route. However, this is not possible in a low-data regime.
### Forgetting PLMs Learn New Languages with Fewer Parameter Updates (RQ2)
We are also interested in how quickly forgetting PLMs and standard PLMs can learn new languages in the language adapt stage. Figure 4 summarizes the adaptation curves on XNLI, MLQA and XQUAD, with each point representing the averaged performance across all languages. To determine how quickly both methods can pick up new languages, we also examine the first \(5\)K adaptation steps, which only require \(4\%\) of the compute time compared to a complete run of \(125\)K updates. On XNLI, we observe that forgetting PLMs reach a better performance level within \(5\)K updates, achieving an average accuracy of around \(57.8\), while the standard PLMs still struggle around random guessing, achieving only \(37.2\). Similar results are found for MLQA and XQUAD. For example, as shown in the last plot in Figure 4, forgetting PLMs reach \(92\%\) of their final performance within \(5\)K updates, while standard PLMs only reached \(53\%\) of their final performance at that point.
Why do forgetting PLMs converge faster? We hypothesize that it is because the active forgetting mechanism, or rather the periodic embedding resetting, pushes the body to gradually locate itself on a particular manifold where its _cooperation_ with various new embeddings (to reach a lower loss) becomes easier. In other words, the active forgetting makes the body sensitive to changes in its input (the embedded tokens) so that the body can encourage larger updates in the embedding layer when meeting new languages during the language adapt stage. This allows active forgetting to
\begin{table}
\begin{tabular}{l|c c c c c c|c|c} \hline \hline
**Method** & **es** & **vi** & **de** & **zh** & **hi** & **ar** & **Avg** & **en** \\ \hline Standard & 49.4 & 38.3 & 45.3 & 34.1 & 17.7 & 20.8 & 34.3 & **78.9** \\ Forgetting & **55.3** & **45.0** & **53.4** & **43.0** & **28.8** & **34.7** & **43.4** & 78.3 \\ \hline Relative Gain & +12.0\% & +17.6\% & +17.8\% & +26.2\% & +62.5\% & +67.0\% & +33.8\% & -0.8\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: F1-score comparison of forgetting and standard PLMs on MLQA. Forgetting PLMs bring a \(33.8\%\) averaged relative gain \(=\frac{\sum_{x\in\{\text{languages}\}}\text{Relative Gain of }x}{\#\text{Languages}}\).
\begin{table}
\begin{tabular}{l|c c c c c c c c c|c} \hline \hline
**Method** & **vi** & **es** & **ru** & **de** & **el** & **zh** & **hi** & **ar** & **th** & **tr** & **Avg** \\ \hline Standard & 49.7 & 57.7 & 49.4 & 50.9 & 48.5 & 32.4 & 21.4 & 22.2 & 15.4 & 13.0 & 36.1 \\ Forgetting & **52.9** & **64.6** & **56.5** & **60.9** & **59.9** & **43.7** & **33.3** & **38.7** & **38.4** & **41.4** & **49.0** \\ \hline Relative Gain & +6.4\% & +12.0\% & +14.5\% & +19.7\% & +23.6\% & +34.6\% & +55.8\% & +74.2\% & +149.7\% & +218.8\% & +60.9\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: F1-score comparison of forgetting and standard PLMs on XQUAD. Forgetting PLMs bring a \(60.9\%\) averaged relative gain \(=\frac{\sum_{x\in\text{languages}}\text{relative gain}_{x}}{\#\text{languages}}\).
simulate multiple language changes during pre-training2 without the need to actually craft data in new languages.
Footnote 2: To be exact, simulating multiple vocabulary swappings during pre-training, each of which constitutes a drastic change in the input for the body
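For concreteness, the sketch below shows the kind of pretraining loop implied by this mechanism: a standard masked-language-modeling loop in which the token-embedding layer is re-drawn from its initializer every `reset_interval` updates while the transformer body keeps training. It is a minimal illustration that assumes a HuggingFace-style model exposing `get_input_embeddings()` and returning a `.loss`; the exact reset schedule and initializer follow the paper's setup rather than this sketch.

```python
import torch

def reinit_embeddings(embedding: torch.nn.Embedding) -> None:
    """Forget: discard the learned token embeddings and re-draw them from the
    same random initializer used at the start of pretraining (assumed Gaussian)."""
    torch.nn.init.normal_(embedding.weight, mean=0.0, std=0.02)

def pretrain_with_active_forgetting(model, data_loader, optimizer,
                                    num_steps: int, reset_interval: int) -> None:
    """Masked-language-model pretraining with periodic embedding resets.

    Assumes a HuggingFace-style `model` whose forward pass returns an object
    with a `.loss` attribute and which exposes `get_input_embeddings()`.
    """
    step = 0
    for batch in data_loader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        step += 1
        if step % reset_interval == 0:          # the active-forgetting event
            reinit_embeddings(model.get_input_embeddings())
        if step >= num_steps:
            break
```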
### Languages That Are Distant To English Benefit Most From Forgetting PLMs (RQ3)
So far we mainly report the averaged performance. In this section, we compare the language-specific performances of forgetting PLMs and standard PLMs on XNLI, MLQA, and XQUAD to better understand which languages benefit most from using forgetting PLMs. Figure 5 summarizes the relative performance change from using forgetting PLMs instead of standard PLMs on XNLI and MLQA, while the results on XQUAD can be found in Figure 7 in the appendix.
Across the spectrum of languages listed in Table 4, we observe that forgetting is particularly helpful for languages that are very different from the pretraining language (English) in terms of
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Name & Code & Family & Script & Morphology \\ \hline Arabic & ar & Semitic & Arabic (Abjad) & Introflexive \\ Bulgarian & bg & IE:Balto-Slavic & Cyrillic & Analytic \\ German & de & IE:Germanic & Latin & Fusional \\ Greek & el & IE:Hellenic & Greek & Fusional \\ English & en & IE:Germanic & Latin & Analytic \\ French & fr & IE:Romance & Latin & Fusional \\ Hindi & hi & IE:Indo-Iranian & Devanagari & Fusional \\ Russian & ru & IE:Balto-Slavic & Cyrillic & Fusional \\ Spanish & es & IE:Romance & Latin & Fusional \\ Swahili & sw & Niger-Congo:Bantu & Latin & Agglutinative \\ Thai & th & Tai-Kadai & Thai & Analytic \\ Turkish & tr & Turkic & Latin & Agglutinative \\ Urdu & ur & IE:Indo-Iranian & Perso-Arabic & Fusional \\ Vietnamese & vi & Austroasiatic & Latin & Analytic \\ Chinese & zh & Sino-Tibetan & Chinese & Analytic \\ \hline \hline \end{tabular}
\end{table}
Table 4: Languages by family, script, and morphology.
Figure 4: Adaptation curves on XNLI, MLQA, and XQUAD. Numbers aggregated across languages. The first row contains the full adaptation curves, which comprises \(125\)K adaptation steps. The second row contains the zoom-in versions of curves for the first \(5\)K adaptation steps. Forgetting PLMs converge faster than standard PLMs; for instance, on XQUAD (the last plot), forgetting PLMs reach \(92\%\) of their final performance within \(5\)K updates, while standard PLMs only reached \(53\%\) of their final performance at that point.
family, script and morphology. More specifically, forgetting brings large relative gains for languages such as _Arabic_, _Hindi_, _Thai_, _Turkish_, and _Urdu_, while for languages that are closer to English, like _German_, forgetting brings small relative gains. We also examine the adaptation curve for each individual language within the first \(5\)K steps. As shown in Figure 6 for selected languages (and in the appendix for all languages), forgetting PLMs reach a substantially better performance level within \(5\)K updates for all languages except _Urdu_, while the standard PLMs still struggle around random-guessing performance.
Figure 5: Relative gains of forgetting PLMs over standard PLMs across languages. Forgetting brings large relative gains for languages such as Arabic, Hindi, Thai, Turkish, and Urdu, while for languages that are closer to English, like German, forgetting brings small relative gains.
Figure 6: Adaptation curves on XNLI within \(5\)K updates for individual languages: Bulgarian, Greek, Spanish, French, Russian, Swahili, Vietnamese and Chinese. For all languages except Urdu, the forgetting PLMs converge faster than the standard PLMs during the language adaptation stage.
Related Work
### Forgetting and its Positive Roles
The common perception of forgetting is that it implies weak memory and a loss of acquired knowledge, thus it is often regarded as a sign of _un-intelligence_ or an undesirable property. In neural networks, _catastrophic forgetting_ (McCloskey and Cohen, 1989; Ratcliff, 1990; Kirkpatrick et al., 2017) is portrayed as a forgetting phenomenon where neural networks lose the ability to predict old patterns after new inputs alter their weights. Forgetting in this context has negative consequences, as the new knowledge overwrites the old. Plenty of prior research strives to overcome catastrophic forgetting and enable continual learning (Schmidhuber, 2013; Kirkpatrick et al., 2017; Lopez-Paz and Ranzato, 2017; Shin et al., 2017; Schwarz et al., 2018; Mallya and Lazebnik, 2018; Parisi et al., 2019; Rolnick et al., 2019; Beaulieu et al., 2020; Veniat et al., 2020; Gaya et al., 2023; Khetarpal et al., 2022).
Our work differs from the above ones in that our subject is _intentional forgetting_ rather than passive forgetting and its associated negative impact. To put it another way, we seek to understand how forgetting - if purposely incorporated as an active process during training - might _help_ new learning. Similar positive roles of forgetting have been discussed in the literature. Specifically, Pastotter et al. (2008) demonstrate that forgetting enhances the learning of new information by resetting the encoding process and holding attention at high levels; Levy et al. (2007) show that it helps second language acquisition by inhibiting the native language; Barrett and Zollman (2009) find that it promotes the emergence of an optimal language by preventing partial success from reinforcing sub-optimal practices. Norby (2015) further suggests that forgetting serves adaptive functions, helping people regulate emotions, acquire knowledge, and stay attuned to the context. More recently, Anderson and Hulbert (2021) review evidence on active forgetting via prefrontal control and show how it can adapt memory to suit either emotional or cognitive goals.
### Forgetting Via Partial Neural Weights Reset
In neural networks, forgetting can be instantiated in many forms. One simple way is to reset a subset of parameters before the next round of learning. Iterative repetitions of such resetting processes have been shown to benefit generalization with low compute (Frankle and Carbin, 2019) and low data (Alabdulmohsin et al., 2021; Taha et al., 2021) for computer vision tasks. More recently, Zhou et al. (2022) demonstrate that a similar forgetting strategy helps both image classification and language emergence. Closely linked to our method, Chen et al. (2022) periodically reset the node embedding layer in order to truncate the infinite message-passing among nodes and thereby enable reasoning over new graphs with new nodes. Our work uses a similar forgetting mechanism but over the token embeddings, enabling reasoning over new languages with new tokens. As far as we know, _we are the first to bring periodical forgetting into the pretraining process and demonstrate that such forgetting pretraining helps cross-lingual transfer_. A relevant thread in reinforcement learning (RL) research investigates similar forgetting approaches for mitigating the non-stationarity inherent in RL training. Igl et al. (2021) propose to periodically reset the current policy by distilling it into a reinitialized network throughout training. Intuitively, such iterative policy resetting releases the network capacity storing the suboptimal policies and thus opens up space for the yet-to-be-discovered optimal (final) policy. Nikishin et al. (2022) further simplify the forgetting mechanism to merely resetting an RL agent's last few layers without policy self-distillation. They find that such a simple forgetting mechanism prevents RL agents from overfitting to early experiences, overcoming the _primacy bias_. D'Oro et al. (2023) also show that fully or partially resetting the parameters of deep RL agents allows a larger number of model updates per environment interaction, thus improving the sample efficiency.
### Making Pretrained Language Models Multilingual
One straightforward way to make PLMs multilingual is to pretrain on multilingual data, such as XLM-R (Conneau et al., 2020). However, this has several issues: the need for a large multilingual corpus with appropriate mixing, potential interference among languages, and the difficulty of covering all possible languages. On the other hand, the line of research on cross-lingual transfer offers an alternative way to make PLMs multilingual by extending English-only PLMs to other languages. Artetxe et al. (2020) demonstrate the possibility of such an extension by simply relearning the embedding layer with unsupervised data from the new language. Marchisio et al. (2022) further increase the computational efficiency of such an extension by using a mini-model proxy during the language adaptation stage.
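The language-adapt stage used throughout this paper can be summarized in a few lines: freeze the pretrained transformer body, swap in a freshly initialized embedding table sized for the new language's vocabulary, and train only that table with the MLM objective. The sketch below is a minimal illustration assuming a HuggingFace-style `set_input_embeddings` API; details such as re-tying a weight-shared LM head are omitted.

```python
import torch

def prepare_for_language_adaptation(model, new_vocab_size: int, hidden_size: int):
    """Freeze the pretrained transformer body and relearn only a new embedding
    table for the target language's vocabulary (HuggingFace-style API assumed)."""
    for param in model.parameters():
        param.requires_grad = False                       # freeze the body

    new_embeddings = torch.nn.Embedding(new_vocab_size, hidden_size)
    torch.nn.init.normal_(new_embeddings.weight, mean=0.0, std=0.02)
    model.set_input_embeddings(new_embeddings)            # swap in the new tokens

    for param in new_embeddings.parameters():             # only embeddings train
        param.requires_grad = True
    return model
```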
Approaches based on adapters and sparse finetuning have also been proposed (Pfeiffer et al., 2020, 2022, 2021, Ansell et al., 2022). Adapters are bottleneck layers (usually placed after the feedforward layers) that add extra capacity when adapting to a different task or language. Our proposed forgetting mechanism can also be applied to adapter-based methods as we can allow forgetting to happen in the adapter layers. The current choice of forgetting embeddings keeps the architecture intact and incurs no additional hyperparameter tuning, allowing us to understand the fundamental capability of forgetting pretraining.
## 6 Conclusion & Future work
### Conclusions
While forgetting is usually perceived as negative, recent work points out that it can also be beneficial in certain cases, particularly for quickly learning new tasks, training in non-stationary environments (Igl et al., 2021, Nikishin et al., 2022, D'Oro et al., 2023) and improving sample efficiency (Taha et al., 2021, Zhou et al., 2022). Joining this line of work, our paper demonstrates that forgetting techniques can improve pretrained language models by imbuing them with more linguistic plasticity. Specifically, our proposed _active forgetting_ mechanism can create PLMs that are easier to rewire for new lingual spaces. Experiments with RoBERTa show that models pretrained via active forgetting can better learn from small amounts of data while also enjoying faster convergence during language adaptation, particularly for languages distant from English.
Going beyond language adaptation, we argue that rewirable PLMs are a promising direction for future research, as they allow easier adaptation to various tasks, domains, and languages, and can evolve faster as the real world changes. Unlike symbolic methods, such as knowledge graphs, which can easily rewire a fact by modifying the corresponding knowledge triplet, current static PLMs are harder to rewire since changing one fact via updating model weights may disrupt multiple other facts without substantial post-hoc intervention. Improving rewirability via forgetting pretraining can thus be seen as one way of imbuing PLMs with similar benefits as symbolic methods (making the resulting model more controllable, i.e., modifiable at small cost), complementing the line of post-hoc model editing research (Mitchell et al., 2021, 2022).
### Limitations
Our work considers the simplest form of forgetting: directly discarding the learned embedding weights and resetting them to some random initialization. Future work could consider more advanced forgetting techniques such as gradually injecting noise into the embedding weights. We focus on masked language modeling with language-specific tokenizers. It would be interesting to also investigate active forgetting for auto-regressive language modeling (e.g. the GPT family models) and various tokenizers.
On the theory front, potential connections can be made between forgetting and meta-learning (Schaul and Schmidhuber, 2010, Thrun and Pratt, 2012, Andrychowicz et al., 2016, Finn et al., 2017) since both attempt to learn solutions that can quickly adapt themselves to new inputs. Another possible theoretical explanation for why active forgetting works so well might be related to the flatness of the solution in the loss landscape (Alabdulmohsin et al., 2021). Flatter minima tend to enjoy better generalization (Liu et al., 2023). Thus, it might be worthwhile to study the flatness of the transformer body during the forgetting pretraining.
|
2306.16736 | GraMMaR: Ground-aware Motion Model for 3D Human Motion Reconstruction | Demystifying complex human-ground interactions is essential for accurate and
realistic 3D human motion reconstruction from RGB videos, as it ensures
consistency between the humans and the ground plane. Prior methods have modeled
human-ground interactions either implicitly or in a sparse manner, often
resulting in unrealistic and incorrect motions when faced with noise and
uncertainty. In contrast, our approach explicitly represents these interactions
in a dense and continuous manner. To this end, we propose a novel Ground-aware
Motion Model for 3D Human Motion Reconstruction, named GraMMaR, which jointly
learns the distribution of transitions in both pose and interaction between
every joint and ground plane at each time step of a motion sequence. It is
trained to explicitly promote consistency between the motion and distance
change towards the ground. After training, we establish a joint optimization
strategy that utilizes GraMMaR as a dual-prior, regularizing the optimization
towards the space of plausible ground-aware motions. This leads to realistic
and coherent motion reconstruction, irrespective of the assumed or learned
ground plane. Through extensive evaluation on the AMASS and AIST++ datasets,
our model demonstrates good generalization and discriminating abilities in
challenging cases including complex and ambiguous human-ground interactions.
The code will be available at https://github.com/xymsh/GraMMaR. | Sihan Ma, Qiong Cao, Hongwei Yi, Jing Zhang, Dacheng Tao | 2023-06-29T07:22:20Z | http://arxiv.org/abs/2306.16736v3 | # GraMMaR: Ground-aware Motion Model for 3D Human Motion Reconstruction
###### Abstract
Demystifying complex human-ground interactions is essential for accurate and realistic 3D human motion reconstruction from RGB videos, as it ensures consistency between the humans and the ground plane. Prior methods have modeled human-ground interactions either implicitly or in a sparse manner, often resulting in unrealistic and incorrect motions when faced with noise and uncertainty. In contrast, our approach explicitly represents these interactions in a dense and continuous manner. To this end, we propose a novel **Ground-aware Motion Model** for 3D Human Motion Reconstruction, named **GraMMaR**, which jointly learns the distribution of transitions in both pose and interaction between every joint and ground plane at each time step of a motion sequence. It is trained to explicitly promote consistency between the motion and distance change towards the ground. After training, we establish a joint optimization strategy that utilizes GraMMaR as a dual-prior, regularizing the optimization towards the space of plausible ground-aware motions. This leads to realistic and coherent motion reconstruction, irrespective of the assumed or learned ground plane. Through extensive evaluation on the AMASS and AIST++ datasets, our model demonstrates good generalization and discriminating abilities in challenging cases including complex and ambiguous human-ground interactions. The code will be available at [https://github.com/xymsh/GraMMaR](https://github.com/xymsh/GraMMaR).
+
Footnote †: Corresponding author.
## 1. Introduction
The human body frequently engages in movements that involve interactions with the ground plane. In real-life scenarios, when a body part is in close proximity to the ground, individuals may need to slow down, lean their torso, orient their head to look at the ground or position their hands and feet on the ground. The capability to accurately predict 3D human motion with physical plausibility from RGB videos, which encompasses realistic interactions with the ground, is crucial for numerous applications (Sandhi et al., 2017), such as scene understanding (Sandhi et al., 2017; Wang et al., 2017; Wang et al., 2018), 3D dance motion reconstruction and generation (Sandhi et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), and augmented and virtual reality games (Sandhi et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). While extensive research has focused on 3D motion estimation under camera space(Sandhi et al., 2017; Wang et al., 2018), considering alignment solely in the camera view might be insufficient and potentially deceptive. There are cases where poses appear reasonable in the camera view but exhibit physically implausible body support on the assumed ground plane when viewed from an alternate viewpoint or placed in a 3D scene. Moreover, even within the camera view, handling noisy observations can result in visually implausible recovered motions, such as body twists, penetration, and jittery movements. Fig. 1 demonstrates these issues. These challenges primarily arise because most state-of-the-art methods rarely consider the interaction between humans and the ground plane, thus unable to satisfy the physical constraints that govern the human body during interactions.
To address these issues, a natural solution is to model human-ground interaction explicitly to ensure consistency between the human body and the ground. Essentially, human-ground interaction involves the interdependent relationships between a 3D human and the ground plane. However, to date, few methods have explored human-ground interaction; those that do have primarily focused on the body-ground contact using binary contact labels (Wang et al., 2018; Wang et al., 2018) or ground reaction force (Wang et al., 2018). (Wang et al., 2018) models the interaction by directly predicting binary contact labels, indicating whether predefined joints are in contact with the ground. These classification results are then used as hard constraints that restrict the distance between joints and the ground during inference, thereby enabling the generation of physically plausible poses. However, the use of binary labels is inadequate as it only applies to joints in direct contact with the ground, leaving most other joints without physical restrictions. Moreover, the sparse and uncertain nature of contact occurrence across all joints significantly impacts the accuracy of motion reconstruction. The performance heavily relies on the quantity of frames and joints within a given motion sequence where contact is established, leading to instability. Alternatively, some work (Wang et al., 2018) has introduced ground reaction force as a means of representing human-ground interaction. A larger reaction force corresponds to a heavier penalty on the distance between the joints and the ground during optimization. Although intuitive, it is difficult to access and only applies to joints in contact with the ground.
In this work, we address these issues by building a robust human motion model that accurately captures the dynamics of 3D human motion through human-ground interactions. To achieve this goal, we first introduce a novel continuous distance-based per-joint interaction representation to encode fine-grained human-ground interactions at each time step. It overcomes the limitations of binary contact labels and ground reaction force by combining per-joint ground distance and its velocity along the gravity axis. Unlike previous methods, our new representation provides a continuous and differentiable measure with physical significance, allowing for a comprehensive depiction of motion patterns and ground-based body support for both contacting and non-contacting joints.
Building upon the novel representation, we devise an explicit ground-aware motion dynamics model that incorporates human-ground interactions and human pose. This is formulated as an autoregressive conditional variational auto-encoder (CVAE) (Wang et al., 2018) to capture the temporal variations in human pose and human-ground interactions. The model simultaneously learns the distribution of transitions for both pose and joint-to-ground distances across adjacent frames within a motion sequence, producing a wide range of plausible poses and human-ground interactions. By conditioning the decoder to predict future motion based on existing poses and human-ground interactions, the model enforces consistency between the body and the ground plane.
We train our model on AMASS (Wang et al., 2018) and develop a joint optimization strategy for 3D human motion reconstruction from noisy observations and RGB videos. The trained model serves as a dual-prior to regularize the optimization towards the space of plausible ground-aware motions, resulting in realistic and coherent motion reconstruction, regardless of the assumed or learned ground plane. The resultant reconstruction method is termed GraMMaR, which stands for **G**round-**a**ware **M**otion **M**odel for 3D Human Motion **R**econstruction. We evaluate GraMMaR quantitatively and qualitatively on both RGB videos and noisy settings and demonstrate its superiority over the baseline in complex and ambiguous contact conditions. GraMMaR proves effective irrespective of the ground plane being known or unknown.
## 2. Related Work
**Kinematic estimation.** Kinematic methods for 3D pose estimation in videos (Sandhi et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) can be categorized as end-to-end learning-based or optimization-based approaches. End-to-end methods, such as VNet (Wang et al., 2018), directly extract 2D and 3D joint positions using CNN-based regression, while VIBe (Wang et al., 2018) estimates SMPL body model (Wang et al., 2018) parameters using a temporal generation model trained with a discriminator. Other works, such as LEMO (Wang et al., 2018) and HuMoR (Wang et al., 2018), train priors for motion transition using large-scale data (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), which are used for fitting 3D poses from 2d poses extracted by off-the-shelf models (Wang et al., 2018; Wang et al., 2018) during optimization. However, these methods may produce physically implausible results, such as body twists and foot skating, especially for complex actions or when training data is limited.
**Physics-based estimation with simulators.** Several methods (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) have been proposed to enhance physical plausibility by incorporating physics laws. These methods use physics simulators such as MuJoCo (Wang et al., 2018) and Isaac Gym (Wang et al., 2018) as a black box to guide 3D pose prediction. Due to the non-differentiable nature of the physics simulator, reinforcement learning is employed to learn control of the simulator (Wang et al., 2018; Wang et al., 2018). For instance, SimPoE (Wang et al., 2018) uses a kinematic-aware policy to generate control signals for the physics simulator to recover realistic 3D poses. Similarly, PoseTriplet (Wang et al., 2018) incorporates the simulator into a semi-supervised framework to
reduce artifacts in pseudo labels. Although effective, they can be computationally intensive for training from scratch and prone to collapse, limiting their generalization ability on videos in the wild. To address this issue, differentiable simulators such as TDS (Krizhevsky et al., 2014) are introduced for articulated 3D motion reconstruction.
**Physics-based estimation without simulators.** Recent research (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2016; Krizhevsky et al., 2017; Krizhevsky et al., 2018) has focused on developing physical constraints for 3D motion optimization that do not require physics simulators (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2016; Krizhevsky et al., 2017). These methods learn to predict contact conditions for specific joints, imposing boundary constraints during optimization. GraviCap (Krizhevsky et al., 2015) incorporates the physical properties of moving objects in a scene to recover scale, bone length, and ground simultaneously. However, these constraints are only applied to contact joints and overlook the physical characteristics of the body's other joints. (Krizhevsky et al., 2016) infers reaction forces from contact joints and forwards them to the entire body via dynamic equations, but this approach results in approximation errors. In our work, we propose a continuous representation of human-ground interaction that enables us to investigate interaction conditions for all joints, including non-contact ones.
**Human-ground interaction representation in pose estimation.** In physics-based methods for pose estimation (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2016; Krizhevsky et al., 2017; Krizhevsky et al., 2018), to impose constraints on height, velocity, and ground reaction forces during optimization, human-ground interaction is typically defined in one of three ways: the foot-ground contact signal, a contact variable related to penetration distances and contact forces, or the mass center. However, these methods only consider binary contact and ignore non-contact joints. To address this limitation, we propose a continuous and expressive representation for human-ground interaction and establish a CVAE-based generative model for human-ground relations to achieve physically plausible motions and reasonable ground planes.
## 3. Method
We propose GraMMaR, a robust generative motion model that captures the dynamics of 3D human motion while being ground-aware, and demonstrate its effectiveness as a regularizer in optimization-based approaches for estimating accurate and plausible 3D human motion and ground plane.
**Preliminaries.** With the frame state \(I\), we represent the state of a person by an interaction state \(g\) defined in the subsequent section, and a motion state \(x\) following (Krizhevsky et al., 2016). The motion state \(x\) is composed of a root translation \(r\in\mathbb{R}^{3}\), a root orientation \(\Phi\in\mathbb{R}^{3}\), body pose joint angles \(\Theta\in\mathbb{R}^{3\times 21}\), joint positions \(J\in\mathbb{R}^{3\times 22}\) and their velocities. All the angles are in axis-angle format.
### Analysis of Human-ground Interaction
**Representation for human-ground interaction state.** In contrast to binary contact labels, our objective is to devise a more comprehensive representation that not only indicates whether a joint contacts the ground, but also characterizes the interaction state between joints and the ground in the present, past, and, most importantly, immediate future. This will enable capturing information regarding joints approaching the ground, moving away from the ground, and remaining stationary in the air.
To this end, we represent the human-ground relationship as \(g=[d,v]\), consisting of the human-to-ground distance \(d\in\mathbb{R}^{23}\) between all joints (including the root joint) and the ground, as well as its velocity \(v\in\mathbb{R}^{23}\) along the gravity axis. We employ SMPL body model (Krizhevsky et al., 2016) and utilize the first 23 joints for calculations.
To calculate the human-to-ground distance, we use either the assumed ground or the ground variable \(n\in\mathbb{R}^{4}\) to be optimized at each step, which will be discussed in Section 3.3. With the ground plane \(n\), we can get a random point \(Q\) on it. For the \(i\)-th joint \(J_{i}\), there is an angle \(\alpha_{i}\) between the ground plane normal \(\vec{n}_{d}\in\mathbb{R}^{3}\) and
Figure 3. Analysis of the interaction state \(g\) defined in Section 3.1. We see its components \(d\) and \(v\) present unique and dense patterns in separating different types of motion in (a)-(b) and different joints in the same sequence in (b)-(c).
Figure 2. _GraMMaR architecture._ In training, given the previous state \(I_{t-1}\) and current state \(I_{t}\), we obtain the motion state \(x_{t-1}\), \(x_{t}\), and interaction state \(g_{t-1},g_{t}\). Our model learns the transition of motion and interaction state changes separately by two priors and reconstructs \(\hat{x}_{t},\hat{g}_{t}\) by sampling from the two distributions and decoding them conditioned on both \(x_{t-1}\) and \(g_{t-1}\).
vector \(\overrightarrow{QJ_{i}}\). By calculating the projection of the vector \(\overrightarrow{QJ_{i}}\) onto the plane normal \(\vec{n}_{d}\), we can get the distance representation as follows:
\[d^{i}=|\overrightarrow{QJ_{i}}|\cdot\cos(\alpha_{i}),\ d=[d^{0},d^{1},\ldots,d^{22}]. \tag{1}\]
Moreover, we assume that the ground is flat, rigid, and has a floor normal vector oriented along the gravitational axis. In this work, we also make the assumption that the human body primarily interacts with the ground, a circumstance encountered in most in-the-wild cases, such as dance, yoga, and other activities.
**Analysis of the interaction state.** As shown in Fig. 3, our interaction state \(g\), including distance \(d\) and distance velocity \(v\), presents unique and dense patterns in separating different types of motion (Fig. 3(a)-(b)) and different joints in the same motion sequence (Fig. 3(b)-(c)).
**Comparison with binary contact label.** Compared to the binary contact label, the continuous interaction representation as shown in Fig. 3 provides more detailed information beyond mere contact. For example, suppose the distance between a joint and the ground is zero, and the velocity along the gravitational direction is significant. It indicates that the joint has recently made contact with the ground, and due to inertia, both the joint and its adjacent counterparts are likely to continue moving toward the ground for the next few frames. Under these conditions, it is highly improbable for the joints to exhibit any motion in the opposite direction.
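As a concrete illustration of the interaction state \(g=[d,v]\), the sketch below computes per-joint signed distances to a ground plane given in the form \(ax+by+cz+e=0\) (equivalent to the projection in Eq. (1)) and obtains the velocity along the normal by finite differences. The plane parameterization, the finite-difference velocity, and the frame rate are illustrative assumptions of this sketch, not the paper's exact implementation.

```python
import numpy as np

def interaction_state(joints: np.ndarray, plane: np.ndarray, fps: float = 30.0):
    """Continuous human-ground interaction g = [d, v] for a motion sequence.

    joints : (T, J, 3) joint positions over T frames (J = 23 here).
    plane  : (4,) coefficients [a, b, c, e] of the ground a*x + b*y + c*z + e = 0.
    """
    normal = plane[:3] / np.linalg.norm(plane[:3])        # unit normal n_d
    offset = plane[3] / np.linalg.norm(plane[:3])

    # Signed point-to-plane distance of every joint: the projection of the
    # vector from a plane point to the joint onto the unit normal (cf. Eq. (1)).
    d = joints @ normal + offset                          # (T, J)

    # Velocity of that distance along the normal (gravity) axis, by finite
    # differences; the first frame is repeated so v keeps the same length as d.
    v = np.diff(d, axis=0, prepend=d[:1]) * fps           # (T, J)
    return np.stack([d, v], axis=-1)                      # (T, J, 2)
```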
### Ground-aware Generative Motion Model
Building upon the proposed representation, we aim to develop an explicit ground-aware motion dynamics model that incorporates human-ground interactions with human pose to capture the temporal variations in human pose and human-ground interactions.
Specifically, we model the probability of a motion sequence \(x_{t}\) by considering the human-ground interaction \(g_{t}\) at each step, _i.e._,
\[p_{\theta}(x_{0},g_{0},x_{1},g_{1},\cdots,x_{T},g_{T})=p_{\theta}(x_{0},g_{0} )\prod_{t=1}^{T}p_{\theta}(x_{t},g_{t}|x_{t-1},g_{t-1}), \tag{2}\]
where \(x_{t}\) and \(g_{t}\) are the motion and interaction states at the time step, respectively. For each time step, the overall motion depends not only on the motion state \(x_{t-1}\) at the previous time step but also on the human-ground interaction state formulated as \(g_{t-1}\). Consequently, this allows \(p(x_{t},g_{t}|x_{t-1},g_{t-1})\) to capture the fine-grained physical plausibility of the transition.
As illustrated in Fig. 2, we propose GraMMaR that leverages a conditional variational autoencoder (CVAE) to model the transition probability. This model formulates the probability of transition in motion state and interaction state as follows:
\[p_{\theta}(x_{t},g_{t}|x_{t-1},g_{t-1})=p_{\theta}(z_{t}|x_{t-1},g_{t-1})\cdot p_{\theta}(x_{t},g_{t}|z_{t},x_{t-1},g_{t-1}), \tag{3}\]
where \(z_{t}\) is the latent variable for time step \(t\). For the purpose of computational efficiency, we formulate \(p_{\theta}(z_{t}|x_{t-1},g_{t-1})\) into two independent probabilities as:
\[p_{\theta}(z_{t}^{m}|x_{t-1}),p_{\theta}(z_{t}^{g}|g_{t-1}),\ \text{s.t.}\ z_{t}=z_{t}^{m}\oplus z_{t}^{g}, \tag{4}\]
where \(z_{t}^{m}\) and \(z_{t}^{g}\) denote the latent transitions for motion and human-ground interaction respectively, and \(\oplus\) denotes the concatenation operation in implementation. During training, these two probabilities are learned by two priors with the adjacent states as input, instead of the previous state. They are approximated as independent Gaussian distributions using implicit neural networks.
\[p_{\theta}(z_{t}^{m}|x_{t-1},x_{t}),\ p_{\theta}(z_{t}^{g}|g_{t-1},g_{t}) \tag{5}\]
To enable the differentiation and learning of unique characteristics for the two priors, we employ two conditional priors as guidance rather than relying on the standard Gaussian distribution, _i.e._,
\[p_{\theta}(z_{t}^{m}|x_{t-1})=\mathcal{N}(z_{t}^{m};\mu_{\theta}( x_{t-1}),\sigma_{\theta}(x_{t-1})),\] \[p_{\theta}(z_{t}^{g}|g_{t-1})=\mathcal{N}(z_{t}^{g};\mu_{\theta} (g_{t-1}),\sigma_{\theta}(g_{t-1})). \tag{6}\]
By simultaneously learning the distribution of transitions for both pose and joint-to-ground interactions across adjacent frames within a motion sequence, our model can produce a wide range of plausible poses while being ground-aware.
In the next step, we employ a shared decoder to estimate the future motion conditioned on both the motion state and the human-ground interaction from the previous step, thereby ensuring consistency between the body pose and the ground plane. Specifically, the shared decoder is designed to enable the combination of multiple inputs, including a random motion latent sample \(z_{t}^{m}\), a random interaction latent sample \(z_{t}^{g}\) (with the combined latent variables denoted as \(z_{t}\)), the motion state \(x_{t-1}\), and the interaction representation \(g_{t-1}\). Besides, to support auto-regressive roll-out in downstream applications, it outputs the motion state \(x_{t}\) and the interaction \(g_{t}\) simultaneously. Similar to the baseline, it also predicts a binary contact label \(c_{t}\) for the predefined nine contact joints.
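A minimal PyTorch-style sketch of this architecture is shown below: two conditional priors output Gaussian parameters for \(z_{t}^{m}\) and \(z_{t}^{g}\), and a shared decoder maps the concatenated latent together with \(x_{t-1}\) and \(g_{t-1}\) to \(\hat{x}_{t}\) and \(\hat{g}_{t}\). Layer widths, latent sizes, and the MLP form are placeholders, the contact-label head is omitted, and during training the latents would come from the encoders of Eq. (5) rather than the priors.

```python
import torch
import torch.nn as nn

class GroundAwareCVAE(nn.Module):
    """Dual-prior CVAE sketch (cf. Fig. 2). Widths and latent sizes are
    placeholders; the binary-contact head is omitted for brevity."""

    def __init__(self, motion_dim: int, interact_dim: int, latent_dim: int = 48):
        super().__init__()
        # Conditional priors p(z^m | x_{t-1}) and p(z^g | g_{t-1}); each head
        # outputs the mean and log-variance of a diagonal Gaussian.
        self.motion_prior = nn.Sequential(
            nn.Linear(motion_dim, 512), nn.ReLU(), nn.Linear(512, 2 * latent_dim))
        self.interact_prior = nn.Sequential(
            nn.Linear(interact_dim, 512), nn.ReLU(), nn.Linear(512, 2 * latent_dim))
        # Shared decoder conditioned on z_t = [z^m, z^g] and the previous states.
        self.decoder = nn.Sequential(
            nn.Linear(2 * latent_dim + motion_dim + interact_dim, 512), nn.ReLU(),
            nn.Linear(512, motion_dim + interact_dim))

    @staticmethod
    def sample(stats: torch.Tensor) -> torch.Tensor:
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x_prev: torch.Tensor, g_prev: torch.Tensor):
        # Generation mode: latents are drawn from the conditional priors; during
        # training they would come from the encoders of Eq. (5) instead.
        z_m = self.sample(self.motion_prior(x_prev))
        z_g = self.sample(self.interact_prior(g_prev))
        out = self.decoder(torch.cat([z_m, z_g, x_prev, g_prev], dim=-1))
        x_next, g_next = out.split([x_prev.shape[-1], g_prev.shape[-1]], dim=-1)
        return x_next, g_next
```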
**Training loss and implementation details.** As in Fig. 4, the training loss contains reconstruction loss \(\mathcal{L}_{recon}\) for motion state and interaction state, KL loss \(\mathcal{L}_{KL}\) between conditional prior and the corresponding encoder output, and consistency loss \(\mathcal{L}_{consist}\) between motion state and learned interaction state, _i.e._,
\[\mathcal{L}=\mathcal{L}_{recon}+\mathcal{L}_{KL}+\mathcal{L}_{consist}, \tag{7}\]
where the reconstruction loss \(\mathcal{L}_{recon}\) is defined as:
\[\mathcal{L}_{recon}=||x_{t}-\hat{x}_{t}||^{2}+||g_{t}-\hat{g}_{t}||^{2}, \tag{8}\]
given the training pair \((x_{t},g_{t},x_{t-1},g_{t-1})\). \(\hat{x}_{t}\) and \(\hat{g}_{t}\) are the outputs of the decoder for the motion state and interaction state, respectively. The KL loss \(\mathcal{L}_{KL}\) is calculated separately for motion and interaction states by computing the KL divergence \(D_{KL}(\cdot\,\|\,\cdot)\) between the output of the encoder and the corresponding conditional prior. The consistency loss \(\mathcal{L}_{consist}\) promotes consistency between the
Figure 4. Training of GraMMaR. For simplicity, “Prior” can be either interaction prior or motion prior. Similarly, \(s_{t}\) can indicate \(g_{t}\) and \(x_{t}\), depending on the prior type.
learned interaction state \(\hat{g}_{t}\) and the human-ground interaction information, which is extracted through the function \(f(\cdot)\) from the predicted joints \(\hat{x}_{t}\) and the ground truth ground plane \(n\), _i.e._,
\[\mathcal{L}_{consist}=||\hat{g}_{t}-f(\hat{x}_{t},n)||^{2}. \tag{9}\]
Lastly, for comparing the contact accuracy with the baseline, we also incorporate a contact classification head and compute the BCE loss between the predicted contact label \(\hat{c}_{t}\) and the ground truth.
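The sketch below assembles these terms in the obvious way. The closed-form diagonal-Gaussian KL, the absence of per-term weights, and the `interaction_fn` placeholder standing in for \(f(\cdot)\) in Eq. (9) are assumptions of the sketch rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1.0, dim=-1)

def training_loss(x_t, g_t, x_hat, g_hat, enc_m, prior_m, enc_g, prior_g,
                  joints_hat, ground_plane, interaction_fn):
    """Eq. (7) assembled from its three terms; `enc_*` / `prior_*` are
    (mu, logvar) pairs and `interaction_fn` plays the role of f(.,.) in Eq. (9)."""
    # Eq. (8): reconstruct both the motion state and the interaction state.
    recon = F.mse_loss(x_hat, x_t) + F.mse_loss(g_hat, g_t)

    # KL between each encoder output and its conditional prior (Eq. (6)).
    kl = gaussian_kl(*enc_m, *prior_m).mean() + gaussian_kl(*enc_g, *prior_g).mean()

    # Eq. (9): the predicted interaction state must agree with the interaction
    # recomputed from the predicted joints and the ground-truth ground plane.
    consist = F.mse_loss(g_hat, interaction_fn(joints_hat, ground_plane))

    return recon + kl + consist
```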
### Joint Optimization Strategy
After training, following (Wang et al., 2017), we devise a joint optimization strategy for 3D human motion reconstruction from noisy observations and RGB videos. We leverage GraMMaR to regularize optimization toward the space of plausible ground-aware motions, thereby maintaining consistency between the human body and the ground plane. We consider our GraMMaR for two tasks: (1) denoising under the fixed ground plane; and (2) motion reconstruction from RGB videos where the ground plane is unavailable and subject to optimization alongside the motion sequence.
**Optimization variables.** Given a sequence of motion observations \(y_{0:T}\) in 2D/3D joints format and an optional ground plane \(n\), we aim to obtain a sequence of SMPL parameters \((r_{0:T},\Phi_{0:T},\Theta_{0:T})\), body shape \(\beta\), and ground plane \(n\) (if not provided), which could not only match the observation but also maintain physical plausibility and consistency between human and ground. Our GraMMaR could be incorporated into the optimization by parameterizing the SMPL parameter sequence into an initial motion state \(x_{0}\), an initial interaction state \(g_{0}\), and a sequence of latent variables \(z_{1:T}\) composed of motion latent variables \(z_{1:T}^{m}\) and interaction latent variables \(z_{1:T}^{g}\). With optimized latent variables and initial states, we can roll-out the whole sequence of SMPL parameters through the decoder in an auto-regressive way.
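Conceptually, this roll-out is a simple loop over the trained decoder, which is what makes the latent variables and initial states optimizable end-to-end; a sketch is shown below, with the decoder input layout assumed to match the earlier architecture sketch.

```python
import torch

def rollout(decoder, x0, g0, z_m, z_g):
    """Unroll T frames from the optimization variables: initial states x_0, g_0
    and per-step latents z^m_{1:T}, z^g_{1:T} of shape (T, latent_dim).
    Everything is differentiable, so losses on the rolled-out sequence push
    gradients back into the latents and initial states being optimized."""
    x_prev, g_prev = x0, g0
    xs, gs = [], []
    for t in range(z_m.shape[0]):
        out = decoder(torch.cat([z_m[t], z_g[t], x_prev, g_prev], dim=-1))
        x_prev, g_prev = out.split([x_prev.shape[-1], g_prev.shape[-1]], dim=-1)
        xs.append(x_prev)
        gs.append(g_prev)
    return torch.stack(xs), torch.stack(gs)
```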
**Noisy observation setting.** In this setting, we consider a scenario where a ground plane and a set of joint positions, generated using existing motion reconstruction algorithms like SMPLify (Beng et al., 2017), are available in a noisy form. Our goal is to optimize the motion sequence to ensure both physical plausibility and accuracy in human-ground interactions when the ground plane is provided and fixed. We show our model performs better when used for fitting to noisy joints and known ground planes, especially in challenging cases.
To this end, the objective function is formulated as a combination of dual-prior loss, prior consistency loss, data loss, and regularization loss, with the last two loss terms following the design of (Wang et al., 2017). In this context, we primarily focus on the dual-prior loss:
\[\begin{split} L_{prior}&=\sum_{t=1}^{T}\log\mathcal{N}(z_{t}^{m};\mu_{\theta}(x_{t-1}),\sigma_{\theta}(x_{t-1}))\\ &+\sum_{t=1}^{T}\log\mathcal{N}(z_{t}^{g};\mu_{\theta}(g_{t-1}),\sigma_{\theta}(g_{t-1})),\end{split} \tag{10}\]
and the prior consistency loss:
\[L_{consist}=\sum_{t=1}^{T}||g_{t}-f(x_{t},n)||^{2}, \tag{11}\]
where \(L_{prior}\) adopts the learned conditional priors for calculation. \(n\) is the fixed ground plane normal.
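In code, these two terms amount to scoring the optimized latents under the learned conditional Gaussian priors and penalizing disagreement between the rolled-out interaction states and the interaction recomputed from the rolled-out joints. The sketch below uses a negative-log-likelihood convention and drops constant terms; the sign and normalization are assumptions of this sketch rather than the paper's exact formulation.

```python
import torch

def dual_prior_loss(motion_prior, interact_prior, z_m, z_g, xs, gs, x0, g0):
    """Score the optimized latents under the learned conditional priors
    (cf. Eq. (10)); written as a summed negative log-likelihood with constants
    dropped, which is a convention of this sketch."""
    x_prev = torch.cat([x0.unsqueeze(0), xs[:-1]], dim=0)   # x_{t-1} for t = 1..T
    g_prev = torch.cat([g0.unsqueeze(0), gs[:-1]], dim=0)   # g_{t-1} for t = 1..T

    def nll(prior, z, cond):
        mu, logvar = prior(cond).chunk(2, dim=-1)
        return 0.5 * (((z - mu) ** 2) / logvar.exp() + logvar).sum()

    return nll(motion_prior, z_m, x_prev) + nll(interact_prior, z_g, g_prev)

def prior_consistency_loss(gs, joints, plane, interaction_fn):
    """Eq. (11): keep rolled-out interaction states consistent with the
    interaction recomputed from the rolled-out joints and the ground plane."""
    return ((gs - interaction_fn(joints, plane)) ** 2).sum()
```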
**RGB video setting.** In this particular setting, we tackle the problem of motion reconstruction from RGB videos, where a set of 2D/3D keypoint positions \(y_{0:T}\), extracted from each individual frame in the camera view, is provided, but without any knowledge of the ground plane. Our objective is to seek both the physically plausible and precise motion state \(x_{0:T}\) and the ground plane \(n\) that can transform the motion state into world space.
In contrast to the noisy observation setting, the ground plane is unavailable here. As a result, we establish a ground plane variable \(n\), allowing it to be optimized alongside the motion state. In total, we optimize the motion and interaction latent variables \(z_{1:T}^{m}\), \(z_{1:T}^{g}\), initial motion and interaction states \(x_{0}\), \(g_{0}\), and the ground plane vector \(n\) at the same time. In each optimization iteration, the prior consistency loss, as shown in Eq. (11), is calculated based on the optimized ground plane vector \(n\) rather than the assumed one.
**Implementation details.** During optimization, the initialization phase covers the latent variables \((z_{1:T}^{m},z_{1:T}^{g})\) and the first-frame motion and interaction states \(x_{0}\), \(g_{0}\). We first initialize the SMPL parameters with a single-frame algorithm (Beng et al., 2017; Chen et al., 2017), and then obtain the initialization of the first-frame states \(x_{0}\), \(g_{0}\) and the latent variables \((z_{1:T}^{m},z_{1:T}^{g})\) through the trained priors \(p_{\theta}(z_{t}^{m}|x_{t-1},x_{t})\), \(p_{\theta}(z_{t}^{g}|g_{t-1},g_{t})\).
## 4. Experiment
### Datasets and Splits
**AMASS**(Zhou et al., 2017) is a large motion capture dataset containing multiple types of motions, mainly running, walking, and turning around. We follow (Wang et al., 2017) to process the sequences into 30 hr and extract the contact labels for evaluation. Our model and the baseline are both trained on the training set of AMASS and evaluated on the test set of all datasets without retraining or fine-tuning.
To assess the effectiveness of our proposed model in handling various types of human-ground relationships, we partition the test set of AMASS into distinct levels according to the minimum hip height within each sequence. Our hypothesis is that as the minimum hip height decreases, the interaction between the human and the ground becomes increasingly intricate.
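A possible implementation of this partitioning is sketched below, assuming per-frame 3D joints with the pelvis as joint 0 and the vertical direction as the z-axis; the joint index and axis convention are assumptions, while the three levels match those reported later in Table 2.

```python
import numpy as np

def min_hip_height(joints, hip_idx=0, up_axis=2):
    """joints: (T, J, 3) array; returns the minimum hip height over the sequence."""
    return joints[:, hip_idx, up_axis].min()

def assign_level(h, bins=((0.0, 0.3), (0.3, 0.6), (0.6, 1.0))):
    """Map a minimum hip height (in meters) to an interaction-difficulty level."""
    for lo, hi in bins:
        if lo <= h < hi:
            return f"{lo}-{hi}"
    return "other"

# toy usage: three random sequences
rng = np.random.default_rng(0)
for seq in [rng.uniform(0.1, 1.0, size=(90, 22, 3)) for _ in range(3)]:
    print(assign_level(min_hip_height(seq)))
```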
**AIST++** dataset (Wang et al., 2017) comprises a vast collection of dance motion data that includes RGB videos, multiple camera parameters, and 3D motion annotations for 1,408 sequences of 30 different subjects. For the purpose of evaluating our model's performance under different human-ground relations, we partition the test set according to the degree of difficulty involved in estimating the ground plane.
### Baselines and Metrics
**Baselines.** We conducted a comparative analysis between our method and the baseline HuMoR (Wang et al., 2017), a CVAE-based prior that does not take dense human-ground interaction into account. We ensure that the initialization and optimization settings are identical for both methods. In the noisy observation setting, VPoser-t serves as the initialization algorithm, while in the RGB video setting, we use PARE (Chen et al., 2017), a single-frame learning-based pose reconstruction technique, for initialization. VPoser-t uses VPoser (Wang et al., 2017) and 3D joints smoothness constraints during optimization.
**Metrics.** In our evaluation, we employ several common metrics to assess the performance of our method and the baseline. The 3D positional errors are measured by the mean per joint position error
(MPJPE), MPJPE after Procrustes alignment (MPJPE-PA), MPJPE over global positions (MPJPE-G), and the per-vertex error (PVE). In addition, we evaluate the binary classification accuracy of nine pre-defined joints (Wang et al., 2018) that are likely to be in direct contact with the ground. We also assess the smoothness of the generated motion by computing the average per-joint accelerations (Accel). Moreover, we report the performance of our method on different levels of human-ground interaction, which cannot be captured by the overall errors on the entire test set. We also report the cosine similarity scores (Cos) between the normal vectors of the predicted and ground-truth ground planes to evaluate performance in estimating the ground plane.
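For concreteness, minimal NumPy versions of the positional and plane metrics are sketched below. The per-frame Procrustes alignment follows the standard SVD-based similarity-transform procedure; joint ordering, units, and frame rate are assumptions for illustration rather than the exact evaluation code used in the experiments.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error; pred, gt: (T, J, 3) arrays."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def mpjpe_pa(pred, gt):
    """MPJPE after a per-frame Procrustes (similarity) alignment of pred onto gt."""
    errs = []
    for p, g in zip(pred, gt):
        pc, gc = p - p.mean(0), g - g.mean(0)
        U, S, Vt = np.linalg.svd(pc.T @ gc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
        R = U @ D @ Vt
        s = (S * np.diag(D)).sum() / (pc ** 2).sum()             # optimal scale
        errs.append(np.linalg.norm(s * pc @ R + g.mean(0) - g, axis=-1).mean())
    return float(np.mean(errs))

def accel_mag(joints, fps=30):
    """Average per-joint acceleration magnitude via second finite differences (fps assumed)."""
    acc = (joints[2:] - 2 * joints[1:-1] + joints[:-2]) * fps ** 2
    return float(np.linalg.norm(acc, axis=-1).mean())

def plane_cos(n_pred, n_gt):
    """Cosine similarity between two ground-plane normal vectors."""
    return float(n_pred @ n_gt / (np.linalg.norm(n_pred) * np.linalg.norm(n_gt)))

# toy usage
rng = np.random.default_rng(0)
gt = rng.normal(size=(90, 22, 3))
pred = gt + 0.01 * rng.normal(size=gt.shape)
print(mpjpe(pred, gt), mpjpe_pa(pred, gt), accel_mag(gt),
      plane_cos(np.array([0.0, 0.0, 1.0]), np.array([0.05, 0.0, 1.0])))
```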
### Optimization with noisy observations
First, we evaluate GraMMaR with the observation of noisy 3D joint positions and a fixed ground plane, and demonstrate that GraMMaR performs better than the baseline, especially in cases with complex human-ground relations. We use the 90-frame (3s) clips from the AMASS dataset. To simulate the presence of noise, we introduce Gaussian noise to the joint positions with a mean of zero and a standard deviation of 0.04m, following (Wang et al., 2018).
Table 1 presents the mean results attained over the entire test set of the AMASS dataset. We compare GraMMaR with baseline HuMoR, as well as the initialization method VPoser-t. Our results demonstrate that our GraMMaR approach produces more precise poses and yields better performance in terms of contact accuracy. These findings suggest that the use of interaction states facilitates the extraction of human-ground interaction and significantly enhances human-ground relations. Regarding smoothness, while HuMoR reports the lowest acceleration, our approach outperforms VPoser-t substantially and provides an inherently smooth outcome. In contrast to HuMoR, our method affords greater flexibility to accommodate noisy poses, particularly those characterized by complex human-ground relations.
Table 2 presents the outcomes for data splits categorized by varying levels of human-ground interaction. Compared with HuMoR,
\begin{table}
\begin{tabular}{l|c c c c c c} \hline Method & MPJPE-G (\(\downarrow\)) & MPJPE (\(\downarrow\)) & MPJPE-PA (\(\downarrow\)) & PVE (\(\downarrow\)) & contact acc (\(\uparrow\)) & accel mag (\(\downarrow\)) \\ \hline VPoser-t (Wang et al., 2018) & 32.8 & 34.8 & 27.9 & 43.2 & - & 61.6 \\ VPoser-t + HuMoR (Wang et al., 2018) & 22.7 & 23.9 & 19.0 & 30.3 & 89.3\% & **16.7** \\ VPoser-t + **GraMMaR** & **21.9** & **22.6** & **18.1** & **29.5** & **91.1\%** & 20.8 \\ \hline \end{tabular}
\end{table}
Table 1. Results on the AMASS dataset under the noisy observation setting.
Figure 5. Qualitative comparison on the AMASS test set under the noisy observation setting. Our method doesn’t show body twist even under complex human-ground interaction. For each case, the first row shows the front view in the world space, while the second row shows the side view. Please view the supplementary video for more details.
our approach demonstrates superior performance, particularly in the most challenging level "0-0.3". At this level, our method exhibits improvements in both positional error and contact accuracy, indicating that it produces a more physically realistic and accurate pose with a more reasonable contact condition. While VPoser-t displays a consistently robust performance across all levels of data, it is unable to predict the ground plane and exhibits inferior smoothness capabilities. Notably, our method outperforms VPoser-t at "0.3-0.6" and "0.6-1.0" levels.
Fig. 5 presents qualitative examples of our approach compared to HuMoR. In Figures 5(a) and 5(b), HuMoR exhibits body twists in jumping, while our method doesn't. As HuMoR lacks an understanding of human-ground interaction, it struggles to accurately discern the motion and distinguish between joint position changes caused by noise versus those caused by the actual action itself. In Figures 5(c) and 5(d), HuMoR shows inaccurate orientation and body twist in sitting because of complexity in motion and the human-ground interaction, while our method performs well in these cases.
### Optimization with RGB video
Next, we show that our GraMMaR can predict a more physically reasonable pose and ground plane simultaneously, and can accurately figure out the ambiguous pose under camera view. In this setting, we use 60-frame (2s) video clips from the AIST++ dataset.
To quantify the challenge of estimating the ground plane for video clips, we assume that a larger divergence in the predicted ground planes by different methods indicates a higher level of difficulty in pose ambiguity. This is due to the fact that the pose in camera view may not readily differentiate the ground plane. To assess this, we calculate the cosine similarity scores of the predicted ground plane from our approach and the baseline HuMoR separately, and then sort clips according to the absolute difference in similarity scores between the two methods. The absolute difference for the top 20% of clips is shown in Fig. 6(a), while the remaining clips indicate a negligible difference and are therefore not presented.
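This ranking procedure reduces to a few lines of NumPy, sketched below with illustrative variable names: for each clip, the cosine similarity of each method's predicted plane normal to the ground truth is computed, and clips are sorted by the absolute difference of the two scores.

```python
import numpy as np

def cos_sim(a, b):
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

def rank_by_difficulty(n_ours, n_humor, n_gt, top_frac=0.2):
    """Sort clips by |cos(ours, gt) - cos(HuMoR, gt)| and return the hardest fraction."""
    diff = np.abs(cos_sim(n_ours, n_gt) - cos_sim(n_humor, n_gt))
    order = np.argsort(-diff)                        # descending difficulty
    k = max(1, int(top_frac * len(diff)))
    return order[:k], diff[order[:k]]

# toy usage with 100 random clips (placeholder plane normals)
rng = np.random.default_rng(1)
unit = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
gt = unit(rng.normal(size=(100, 3)))
idx, d = rank_by_difficulty(unit(gt + 0.05 * rng.normal(size=gt.shape)),
                            unit(gt + 0.30 * rng.normal(size=gt.shape)), gt)
print(idx[:5], d[:5])
```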
Table 3 presents the results of our method, baseline HuMoR, and the initialization method PARE, for the entire test set and the top 1% of clips. GraMMaR exhibits superior performance in estimating the ground plane, particularly for the top 1% of difficult data regarding human-ground relations. This suggests GraMMaR can better distinguish between mistaken poses under camera view. In terms of smoothness, both HuMoR and GraMMaR show significant improvements compared to PARE, albeit at an acceptable cost of position accuracy. Although GraMMaR reports relatively inferior results regarding positional metrics for the entire test set, it can produce more reasonable poses under the world space by considering the ground plane, especially for ambiguous motions. As shown in Fig. 6(b), regarding the MPJPE in world space, our GraMMaR outperforms HuMoR on the top 20% difficult cases.
The qualitative examples also provide evidence to support this conclusion. Fig. 7 presents some examples from AIST++. We showcase both the prediction in camera view and in world view. Since PARE cannot predict the ground plane, we exclude its prediction under the world view. As demonstrated in Fig. 7, HuMoR generates accurate poses in most cases under camera view, but produces completely physically unreasonable poses with incorrect contact conditions in world space. This suggests that HuMoR is incapable of resolving ambiguous poses in world space and solely optimizes motion by observation. On the other hand, our method, aided by the interaction map, accurately resolves ambiguous poses and generates physically plausible poses with the correct conditions.
**Generalizing to videos in the wild.** Finally, we compare our method with the baseline HuMoR on videos sourced from the Internet and demonstrate that our method generalizes better to videos in the wild without the need for retraining. Fig. 8 showcases the challenging scenarios like yoga, and handstanding.
## 5. Limitation and Future Work
Although our model can yield superior performance in predicting physically plausible motion and reasonable ground planes in challenging cases, there are some limitations, such as inconsistency in hand motion. In some extreme cases, our method can make a reasonable inference on the ground plane but have a large error in positions due to the extreme angle and the high moving speed. Nonetheless, our approach outperforms the baseline method HuMoR in these challenging cases. In future work, it is promising to learn a stronger prior from large-scale training data (_e.g._, flexible contact joints, fine-grained hand motion) to further improve the performance. More discussion is in Section C in the Appendix.
\begin{table}
\begin{tabular}{l|l|c c c c} \hline Method & Metric & 0-0.3 & 0.3-0.6 & 0.6-1.0 & avg \\ \hline VPoser-t [35] & MPJPE-G (\(\downarrow\)) & **34.4** & 33.5 & 32.6 & 33.5 \\ & MPJPE (\(\downarrow\)) & **38.0** & 36.0 & 34.5 & 36.2 \\ & PVE (\(\downarrow\)) & **48.1** & 44.4 & 43.0 & 45.2 \\ \hline VPoser-t + HuMoR [38] & MPJPE-G (\(\downarrow\)) & 53.0 & 24.0 & 21.6 & 32.9 \\ & MPJPE (\(\downarrow\)) & 53.0 & 25.8 & 22.8 & 33.9 \\ & PVE (\(\downarrow\)) & 65.3 & 33.1 & 28.9 & 42.4 \\ & contact acc (\(\uparrow\)) & 75.9\% & 86.6\% & 90.0\% & 84.2\% \\ \hline VPoser-t + **GraMMaR** & MPJPE-G (\(\downarrow\)) & 48.2 & **22.6** & **21.1** & **30.6** \\ & MPJPE (\(\downarrow\)) & 49.1 & **23.9** & **21.7** & **31.6** \\
 & PVE (\(\downarrow\)) & 59.6 & **30.4** & **28.5** & **39.5** \\ & contact acc (\(\uparrow\)) & **78.6\%** & **87.6\%** & **91.8\%** & **86.0\%** \\ \hline \end{tabular}
\end{table}
Table 2. Results on AMASS dataset under the noisy observation setting at different levels of human-ground interaction.
Figure 6. (a) Difficulty levels of the top 20% of challenging data. (b) Our method outperforms the baseline HuMoR in the top 20% of challenging cases.
## 6. Conclusion
In this work, we propose a dense and continuous representation for human-ground interaction and a CVAE-based model named GraMMaR based on it to address the consistency issue between the human and the ground. We further establish a joint optimization-based approach that uses our proposed GraMMaR as a regularizer to estimate physically plausible and correct motion from noisy observations and RGB videos. The proposed method demonstrates promising results in generating realistic outcomes, particularly in challenging scenarios characterized by complex and ambiguous human-ground interaction.
Figure 8. Qualitative comparison on the videos from the Internet. The blue arrow and red circle have the same meaning as in the above figures.
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline Method & Cos (\(\uparrow\)) & Cos 1\% (\(\uparrow\)) & Accel (\(\downarrow\)) & Accel align (\(\downarrow\)) & MPJPE-G (\(\downarrow\)) & MPJPE (\(\downarrow\)) & MPJPE-PA (\(\downarrow\)) & MPJPE\({}^{*}\) 1\% (\(\downarrow\)) \\ \hline PARE [19] & - & - & 65.6 & 23.8 & **257.3** & **102.5** & **62.0** & - \\ PARE + HuMoR [38] & 0.99175 & 0.70452 & **4.0** & **3.3** & 606.2 & 114.3 & 80.7 & 383.0 \\ PARE + **GraMMaR** & **0.99965** & **0.99956** & 4.4 & 3.6 & 666.0 & 130.5 & 92.9 & **327.0** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Results on AIST++ dataset under the RGB video setting. “Cos” is the mean cosine similarity between the predicted ground plane and the ground truth. “Cos 1%” is the mean Cos score over the top 1% most difficult clips in estimating the ground plane. “MPJPE\({}^{*}\) 1%” denotes the MPJPE of the predictions in world space for the top 1% most difficult data in estimating the ground plane.
Figure 7. Qualitative comparison on the AIST++ test set under the RGB video setting. “front” and “side” denote the front and side view in world space. The direction of the body torso and contacts of HuMoR are highlighted. HuMoR tends to predict the body torso in a direction perpendicular to the ground while our method doesn’t. |
2301.09699 | Series and Product Representations of Gamma, Pseudogamma and Inverse
Gamma Functions | We derive and prove product and series representations of the gamma function
using Newton interpolation series. Using these identities, a new formula for
the coefficients in the Taylor series of the reciprocal gamma function is
found. Two new representations for the Euler-Mascheroni constant, containing
only rational terms, are also found. After that, we introduce a new pseudogamma
function which we call the $\Lambda$ function. This function interpolates the
factorial at the positive integers, the reciprocal factorial at the negative
integers and converges for the entire real axis. Finally, we conjecture a novel
series representation for the principal branch of the inverse gamma function
$\Gamma(y)=x.$ | David Peter Hadrian Ulgenes | 2023-01-23T20:01:17Z | http://arxiv.org/abs/2301.09699v3 | # Series and Product Representations of Gamma and Pseudogamma functions
###### Abstract
We derive and prove product and series representations of the gamma function using Newton interpolation series. We then show how these equations can be used to construct better and better approximations of the gamma function by writing it as a product over the prime numbers. The series definition is also used to find a new representation for the Euler-Mascheroni constant, containing only rational terms. After that, we introduce a new pseudogamma function which we call the \(\Lambda\) function. This function interpolates the factorial at the positive integers, the reciprocal factorial at the negative integers and converges for the entire real axis. Finally, we conjecture a novel series representation for the principal branch of the inverse gamma function \(\Gamma(y)=x\).
**Key Words:** Gamma function, Series representation of gamma function, Product representation of gamma function, Newton Interpolation, Pseudogamma function, Pseudo-gamma function, Euler-Mascheroni constant, Inverse gamma function, Series representation of inverse gamma function
## 1 Introduction
The gamma function is a widely used extension of the factorial function to the complex plane. It is arguably the most ubiquitous special function in mathematics (and certainly one of the most important), and it has been studied intensely since it was introduced by Euler and Bernoulli in the 18th century [1]. The gamma function is defined for \(x>0\) by Euler's second integral
\[\Gamma\left(x\right)=\int_{0}^{\infty}e^{-t}t^{x-1}dt, \tag{0.1}\]
from which the functional equation
\[\Gamma(x+1)=x\Gamma(x) \tag{0.2}\]
follows by integrating by parts. In Theorem 1 in this paper, we derive and prove an infinite product definition for the gamma function using Newton interpolation series. Written in a different form, this product was first discovered by Hermite [2].
Several product definitions of the gamma function are already known. For instance, we have (for complex \(z\) except for the non-positive integers):
\[\Gamma(z)=\frac{e^{-\gamma z}}{z}\prod_{n=1}^{\infty}\left(1+\frac{z}{n} \right)^{-1}e^{\frac{z}{n}}\]
where \(\gamma\) is the Euler-Mascheroni constant \(0.577215...\) and
\[\Gamma(z)=\frac{1}{z}\prod_{n=1}^{\infty}\frac{\left(1+\frac{1}{n}\right)^{z}}{1+ \frac{z}{n}}\]
which are due to Weierstrass and Euler respectively. However, numerical tests indicate that the product in theorem 1 converges more rapidly than the products due to Weierstrass and Euler.
Theorem 2 in this paper introduces a new Newton series representation of the gamma function (as opposed to a product representation) containing only rational terms. This comes as a contrast to the Laurent series of the gamma function [3],
\[\Gamma(x)=\frac{1}{x}-\gamma+\frac{1}{2}\left(\gamma^{2}+\frac{\pi^{2}}{6} \right)x-\frac{1}{6}\left(\gamma^{3}+\frac{\gamma\pi^{2}}{2}+2\zeta(3)\right) x^{2}+O(x^{3}), \tag{0.3}\]
which contains only irrational terms, or at least terms that are strongly suspected to be so. Furthermore, whereas the coefficients in equation (0.3) are found by computing a line integral, in theorem 2 they are expressed in closed form. The series in theorem 2 also converges over a greater region than equation (0.3).
Despite its ubiquity, many have pointed out issues with the gamma function, the chief among them being the poles at the negative integers. Because of this, there has been some development in the theory of _pseudogamma_ functions (i.e. factorial-interpolating functions not equal to \(\Gamma\)). For instance, we have the Hadamard function, given by [4]
\[H(x)=\frac{\psi\left(1-\frac{x}{2}\right)-\psi\left(\frac{1}{2}-\frac{x}{2} \right)}{2\Gamma(1-x)},\]
where \(\psi(x)\) is the digamma function \(\psi(x)=\frac{\Gamma^{\prime}(x)}{\Gamma(x)}\). \(H(x)\) is an entire function, and for a positive integer \(n\) we have \(H(n)=(n-1)!\).
In theorem 3 a new pseudogamma function, which we denote \(\Lambda(x)\), is introduced. Like the Hadamard function, it converges for the entire real axis, and it interpolates the factorial. However, one advantage of \(\Lambda(x)\) is that it assigns a meaningful value to the factorial at the negative integers: in particular, the function satisfies the reflection formula \(\Lambda(x)\Lambda(-x)=1\) and so for a positive integer \(n\) we have \(\Lambda(-n)=\frac{1}{n!}\).
Throughout this paper, a Newton interpolation series (i.e., a series of the form \(\sum_{j=0}^{k}a_{j}\prod_{i=0}^{j-1}(x-x_{i})\), where the \(x_{i}\) are distinct points) is used to obtain definitions of the gamma and related functions. At the end of this paper, we attempt to take this one step further; in particular, we explain how the theory of Newton interpolation can be applied to the principal branch of the inverse gamma function \(\Gamma(y)=x\), and, as such, we conjecture a new series definition of said function. Obtaining a series expansion of the inverse gamma function on its own is not that difficult; for instance, the Lagrange Inversion Theorem can be used to construct a power series. However, unlike such series, the proposed formula contains all the terms in closed form, without having to compute any derivatives or limits.
## 2 Main Results
**Theorem 1**.: _The gamma function has the following product representation which converges for positive real values of \(x\):_
\[\Gamma(x)=\frac{1}{x}\prod_{n=1}^{\infty}\prod_{k=1}^{n}k^{(-1)^{k+n}\binom{x }{n}\binom{n-1}{k-1}}=\frac{1}{x}\left(\frac{2}{1}\right)^{\binom{x}{2}} \left(\frac{3}{4}\right)^{\binom{x}{3}}\left(\frac{32}{27}\right)^{\binom{x}{ 4}}...\;. \tag{1.1}\]
Proof.: To begin, taking logarithms of (1.1) and using \(\Gamma(x+1)=x\Gamma(x)\) yields the following Newton series:
\[\ln\Gamma(x+1)=\sum_{n=1}^{\infty}\left(-1\right)^{n}\binom{x}{n}\sum_{k=1}^{n} \left(-1\right)^{k}\binom{n-1}{k-1}\ln\left(k\right). \tag{1.2}\]
But using the following integral representation of the natural logarithm [7]\(\ln(x)=\int_{0}^{\infty}\frac{e^{-t}-e^{-tx}}{t}dt\) equation (1.2) is equal to
\[\ln\Gamma(x+1)=\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{\left(-1\right)^{n}}{ t}\binom{x}{n}\sum_{k=1}^{n}\left(-1\right)^{k}\binom{n-1}{k-1}\left(e^{-t}-e^{-tk }\right)dt. \tag{1.3}\]
Using the binomial theorem and the fact that \(\sum_{k=1}^{n}\left(-1\right)^{k}\binom{n-1}{k-1}=0\) for integer \(n>1\), we are left with
\[\ln\Gamma\left(x+1\right)=-\sum_{n=2}^{\infty}\int_{0}^{\infty}\binom{x}{n}\frac{e^{-t}}{t}\left(e^{-t}-1\right)^{n-1}dt, \tag{1.4}\]
which is a definition of the log-gamma function first given by Charles Hermite in [2], meaning the proof is complete. According to Hermite, equation (1.1) converges for complex \(a+bi\) such that \(a>0\).
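As a quick numerical sanity check of equation (1.1), or equivalently of the Newton series (1.2) for \(\ln\Gamma(x+1)\), the short script below evaluates the truncated double sum and compares it against `math.lgamma`. The truncation depth of 40 terms is an arbitrary choice; convergence is only algebraic in the number of terms, and the alternating inner sums limit the precision attainable in floating point, so only agreement to a few decimal places should be expected here.

```python
import math

def binom(x, n):
    """Generalized binomial coefficient C(x, n) for real x and integer n >= 0."""
    out = 1.0
    for i in range(n):
        out *= (x - i) / (i + 1)
    return out

def log_gamma_newton(x, terms=40):
    """Truncated Newton series (1.2) for ln Gamma(x + 1)."""
    total = 0.0
    for n in range(1, terms + 1):
        inner = sum((-1) ** k * math.comb(n - 1, k - 1) * math.log(k)
                    for k in range(1, n + 1))
        total += (-1) ** n * binom(x, n) * inner
    return total

for x in (1.5, 2.5, 3.3):
    print(x, log_gamma_newton(x), math.lgamma(x + 1))
```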
**Remark 1**.: _It is possible to improve the region of convergence of equation (1.1) by repeated application of the identity \(\Gamma(x)=\frac{\Gamma(x+1)}{x}\). For instance, we have_
\[\Gamma(x)=\frac{1}{x\left(x+1\right)}\prod_{n=1}^{\infty}\prod_{k=1}^{n}k^{ \left(-1\right)^{k+n}\binom{x+1}{n}\binom{n-1}{k-1}} \tag{1.5}\]
_which converges at least for \(x>-1\), and_
\[\Gamma(x)=\frac{1}{x\left(x+1\right)\left(x+2\right)}\prod_{n=1}^{\infty}\prod _{k=1}^{n}k^{\left(-1\right)^{k+n}\binom{x+2}{n}\binom{n-1}{k-1}} \tag{1.6}\]
_which converges at least for \(x>-2\), etc._
**Question 1**.: _What happens if the steps taken in equations (1.5) and (1.6) are repeated N times such that \(N\rightarrow\infty\)? Can the \(\frac{1}{x\left(x+1\right)\ldots\left(x+N\right)}\) term be brought inside the product to obtain a globally convergent definition for \(\Gamma\)?_
**Remark 2**.: _The first few finite products in equation (1.1) can be written in terms of their prime factors. Collecting like terms, we can get better and better approximations to \(\Gamma\) as a product over the prime numbers. For instance, for the first \(N=4\) terms in equation (1.1) we have, for \(x>0\)_
\[\Gamma(x)\approx x^{-1}2^{\binom{x}{2}-2\binom{x}{3}+5\binom{x}{4}}3^{\binom{ x}{3}-3\binom{x}{4}}, \tag{1.7}\]
_and after \(N=5\) terms we are left with_
\[\Gamma(x)\approx x^{-1}2^{\binom{x}{2}-2\binom{x}{3}+5\binom{x}{4}-12\binom{x}{5}}3^{\binom{x}{3}-3\binom{x}{4}+6\binom{x}{5}}5^{\binom{x}{5}}. \tag{1.8}\]
_The coefficients 1, -2, 5, -12, 1, -3, 6, 1, etc. before \(\binom{x}{n}\) are given by the summation_
\[\sum_{k=1}^{n}\left(-1\right)^{n+k}\nu_{p}\left(k\right)\binom{n-1}{k-1}\]
_where \(\nu_{p}(k)\) is the exponent to which \(\,\mathrm{p}\) appears in the prime factorization of \(\,\mathrm{k}\) (i.e. the \(\,\mathrm{p}\)-adic order of \(\,\mathrm{k}\)). For instance, for the term \(2^{k\binom{x}{4}}\) the coefficient is_
\[k=0\binom{4-1}{0}+1\binom{4-1}{1}+0\binom{4-1}{2}+2\binom{4-1}{3}=5\]
_because \(\nu_{2}(k)=0,1,0,2,0,1...\) for \(k=1,2,3...\)._
_To obtain this formula we first note that an integer \(\,\mathrm{n}\) can be written in its prime factorization as \(n=\prod_{p\leq n}p^{\nu_{p}(n)}\) (for instance, \(12=2^{\nu_{2}(12)}3^{\nu_{3}(12)}=2^{2}3\)). Substituting this equation for \(\,\mathrm{k}\) in equation (1.1) and multiplying by \((-1)^{n+k}\binom{n-1}{k-1}\) (which is the exponent of \(\,\mathrm{k}\)) gives the desired equation._
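The coefficient formula above is easy to tabulate; the snippet below computes the \(p\)-adic valuations \(\nu_{p}(k)\) and reproduces the exponents quoted in equations (1.7) and (1.8), namely 1, -2, 5, -12 for \(p=2\) and 1, -3, 6 for \(p=3\).

```python
from math import comb

def nu(p, k):
    """p-adic valuation of k."""
    e = 0
    while k % p == 0:
        k //= p
        e += 1
    return e

def prime_exponent(p, n):
    """Coefficient of C(x, n) in the exponent of the prime p (Remark 2)."""
    return sum((-1) ** (n + k) * nu(p, k) * comb(n - 1, k - 1) for k in range(1, n + 1))

print([prime_exponent(2, n) for n in range(2, 6)])  # expected [1, -2, 5, -12]
print([prime_exponent(3, n) for n in range(3, 6)])  # expected [1, -3, 6]
```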
**Question 2**.: _Can this procedure also be repeated such that \(N\to\infty\)? And if so, does this generalize Legendre's formula for the factorial_
\[n!=\prod_{p}p^{\lfloor\frac{n}{p}\rfloor+\lfloor\frac{n}{p^{2}}\rfloor+ \lfloor\frac{n}{p^{3}}\rfloor+...}\]
_to \(x>0\) as opposed to just \(n\in\mathbb{N}\)?_
**Theorem 2**.: _For real values of \(x\geq 1\) the gamma function has the following Newton series representation containing only rational terms_
\[\Gamma(x)=x^{x-1}\sum_{n=1}^{\infty}\left(-1\right)^{n}\binom{x}{n}\sum_{k=1} ^{n}\left(-1\right)^{k}\frac{k!}{k^{k}}\binom{n}{k}. \tag{2.1}\]
_For instance,_
\[\Gamma\left(\frac{3}{2}\right)=\frac{\sqrt{\pi}}{2}=\sqrt{\frac{3}{2}}\left( \frac{3}{2}-\frac{9}{16}-\frac{31}{288}-\frac{517}{12288}+...\right).\]
Proof.: We first note that \(k^{-k}=\int_{0}^{\infty}\frac{e^{-kt}t^{k-1}}{(k-1)!}dt\) which becomes obvious after the substitution \(u=kt\).
Substituting this into equation (2.1) and using the binomial theorem gives
\[\Gamma(x)=x^{x-1}\sum_{n=1}^{\infty}\int_{0}^{\infty}\left(-1\right)^{1+n} \binom{x}{n}n\frac{\left(1-e^{-t}t\right)^{n}}{e^{t}-t}dt \tag{2.2}\]
At this point we would like to interchange the sum and the integral to conclude the proof. But first, consider the following:
\[x^{x-1}\sum_{n=1}^{\infty}\int_{0}^{\infty}\left|\binom{x}{n}n\frac{\left(1- e^{-t}t\right)^{n}}{e^{t}-t}\right|dt\leq x^{x-1}\sum_{n=1}^{\infty}\int_{0}^{ \infty}\left|\binom{x}{n}n\frac{1}{e^{t}-t}\right|dt\]
\[=Cx^{x-1}\sum_{n=1}^{\infty}\left|\binom{x}{n}n\right| \tag{2.3}\]
where \(C=1.35909...\) and \(n\geq 1\). Writing the ratio of successive summands in equation (2.3) as \(\left|\frac{a_{n+1}}{a_{n}}\right|\), we have, for large \(n\), \(\left|\frac{x-n}{n}\right|=1-\frac{x}{n}\). Hence, by Gauss' test [9], equation (2.1) converges uniformly at least for real values of \(x>1\). Moreover, since \(\Gamma(1)=\sum_{n=1}^{\infty}\left(-1\right)^{n}\binom{1}{n}\sum_{k=1}^{n}\left(-1\right)^{k}\frac{k!}{k^{k}}\binom{n}{k}=1\), the series in fact converges uniformly for \(x\geq 1\). As a result, we can interchange the summation and the integral in equation (2.2) to get
\[\Gamma(x)=x^{x-1}\int_{0}^{\infty}\sum_{n=1}^{\infty}\left(-1\right)^{1+n} \binom{x}{n}n\frac{\left(1-e^{-t}t\right)^{n}}{e^{t}-t}dt=x^{x}\int_{0}^{ \infty}e^{-tx}t^{x-1}dt\]
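A direct numerical check of the series (2.1) is given below: it evaluates the truncated double sum for a few arguments and compares the result with `math.gamma`. For \(x=3/2\) the first summands reproduce the fractions listed after the theorem; since the terms only decay algebraically (the more slowly, the closer \(x\) is to 1), agreement at this truncation depth is only to a few decimal places for small \(x\).

```python
import math

def binom(x, n):
    """Generalized binomial coefficient C(x, n) for real x and integer n >= 0."""
    out = 1.0
    for i in range(n):
        out *= (x - i) / (i + 1)
    return out

def gamma_series(x, terms=60):
    """Truncated series (2.1): Gamma(x) = x^(x-1) sum_n (-1)^n C(x,n) sum_k (-1)^k k!/k^k C(n,k)."""
    s = 0.0
    for n in range(1, terms + 1):
        inner = sum((-1) ** k * math.factorial(k) / k ** k * math.comb(n, k)
                    for k in range(1, n + 1))
        s += (-1) ** n * binom(x, n) * inner
    return x ** (x - 1) * s

for x in (1.5, 2.0, 3.2):
    print(x, gamma_series(x), math.gamma(x))
```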
**Corollary 1**.: _The Euler-Mascheroni constant \(\gamma\) has the following series representation containing only rational terms_
\[\gamma=-1+\sum_{n=2}^{\infty}\left(n-2\right)!\sum_{k=1}^{n}\frac{\left(-1 \right)^{1+k}}{\left(n-k\right)!\,k^{k}},\]
\[\gamma=-1+\frac{3}{4}+\frac{31}{108}+\frac{517}{3456}+\frac{322537}{3600000}+...\;.\]
Proof.: We begin by differentiating equation (2.1) with respect to \(x\) after dividing by \(x^{x-1}\). So we have
\[\frac{d}{dx}\frac{\Gamma\left(x\right)}{x^{x-1}}=-\frac{\Gamma\left(x\right)} {x^{x}}\left(x+x\ln\left(x\right)-x\psi\left(x\right)-1\right)=\frac{d}{dx} \sum_{n=1}^{\infty}\left(-1\right)^{n}\binom{x}{n}\sum_{k=1}^{n}\left(-1 \right)^{k}\frac{k!}{k^{k}}\binom{n}{k}, \tag{2.4}\]
where we have made use of the quotient rule for differentiation and the fact that \(\Gamma^{\prime}\left(x\right)=\Gamma\left(x\right)\psi\left(x\right)\). To differentiate \(\binom{x}{n}\) (for integer \(n\)) we can use the following formula
\[\frac{d}{dx}\binom{x}{n}=\binom{x}{n}\sum_{i=0}^{n-1}\frac{1}{x-i}\]
which holds by using logarithmic differentiation on \(\binom{x}{n}\). This can be rewritten as
\[\sum_{i=0}^{n-1}\frac{1}{n!}\left(\prod_{m=0}^{i-1}\left(x-m\right)\right)\prod _{m=i+1}^{n-1}\left(x-m\right)=\sum_{i=0}^{n-1}\frac{\left(-1\right)^{n+i+1}} {n!}\frac{\Gamma\left(x+1\right)}{\Gamma\left(x-i+1\right)}\frac{\Gamma\left( n-x\right)}{\Gamma\left(i-x+1\right)}. \tag{2.5}\]
Extracting the first term in equation (2.4) and rewriting it using equation (2.5) gives
\[\frac{d}{dx}\frac{\Gamma\left(x\right)}{x^{x-1}}=1+\sum_{n=2}^{\infty}\sum_{k =1}^{n}\frac{\left(-1\right)^{k}}{\left(n-k\right)!\,k^{k}}\sum_{i=0}^{n-1} \frac{\left(-1\right)^{i-1}\Gamma(x+1)}{\Gamma(x-i+1)}\frac{\Gamma\left(n-x \right)}{\Gamma\left(i-x+1\right)}.\]
Since \(\psi\left(1\right)=-\gamma\) ([6], pg. 258), we have at \(x=1\):
\[-\gamma=1+\sum_{n=2}^{\infty}\sum_{k=1}^{n}\frac{\left(-1\right)^{k}}{\left(n-k \right)!\,k^{k}}\sum_{i=0}^{n-1}\frac{\left(-1\right)^{i-1}}{\left(1-i\right)!} \frac{\left(n-2\right)!}{\left(i-1\right)!}=1+\sum_{n=2}^{\infty}\left(n-2 \right)!\sum_{k=1}^{n}\frac{\left(-1\right)^{k}}{\left(n-k\right)!\,k^{k}}\]
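Because every term in Corollary 1 is rational, the partial sums can be accumulated exactly with Python's `fractions` module; the small script below does so and prints floating-point values of a few partial sums against the reference \(\gamma\approx 0.5772156649\). Convergence is slow, so the chosen truncation orders are illustrative only.

```python
from fractions import Fraction
from math import factorial

def gamma_term(n):
    """n-th rational term (n >= 2) of the series in Corollary 1."""
    return factorial(n - 2) * sum(Fraction((-1) ** (1 + k), factorial(n - k) * k ** k)
                                  for k in range(1, n + 1))

partial = Fraction(-1)
for n in range(2, 41):
    partial += gamma_term(n)
    if n in (5, 10, 20, 40):
        print(f"{n} terms: {float(partial):.6f}   (gamma = 0.577216...)")
```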
**Theorem 3**.: _The function_
\[\Lambda(x)=\prod_{n=1}^{\infty}\prod_{k=1}^{n}k^{\frac{\left(-1\right)^{n+k}\left(2k-1\right)}{\left(n-k\right)!\left(k+n-1\right)!}\frac{\left(x+n-1\right)!}{\left(x-n\right)!\left(2n-1\right)}} \tag{3.1}\]
_interpolates the factorial at the positive integers, interpolates the reciprocal factorial at the negative integers and converges for the entire real axis 2._
Footnote 2: For the negative integers, the value of \(\Lambda\) is taken to be the limit.
Proof.: For a positive integer N, we need to show that
\[N!=\prod_{n=1}^{N}\prod_{k=1}^{n}k^{\frac{\left(-1\right)^{n+k}\left(2k-1 \right)}{\left(n-k\right)!\left(k+n-1\right)!}\frac{\left(N+n-1\right)!}{ \left(N-n\right)!\left(2n-1\right)}}\]
where the product is finite due to the \(\frac{1}{\left(N-n\right)!}\) term in the exponent. To show this is true, we will use induction. For the base case \(N=1\) we have
\[1=\prod_{n=1}^{1}\prod_{k=1}^{n}k^{\frac{\left(-1\right)^{n+k}\left(2k-1 \right)}{\left(n-k\right)!\left(k+n-1\right)!}\frac{n!}{\left(1-n\right)! \left(2n-1\right)}}\]
which is clearly true. For the (_N+1_)st case we have
\[\prod_{n=1}^{N+1}\prod_{k=1}^{n}k^{\frac{\left(-1\right)^{n+k}\left(2k-1\right)}{\left(n-k\right)!\left(k+n-1\right)!}\frac{\left(N+n\right)!}{\left(N+1-n\right)!\left(2n-1\right)}}=\left(N+1\right)\prod_{n=1}^{N}\prod_{k=1}^{n}k^{\frac{\left(-1\right)^{n+k}\left(2k-1\right)}{\left(n-k\right)!\left(k+n-1\right)!}\frac{\left(N+n-1\right)!}{\left(N-n\right)!\left(2n-1\right)}} \tag{3.2}\]
where the equality has been established using the factorial identity \(\left(N+1\right)!=\left(N+1\right)N!\). Dividing both sides of equation (3.2) by the first N terms of the product on the left side and taking natural logarithms gives
\[\begin{split}&\sum_{k=1}^{N+1}\frac{\left(-1\right)^{N+1+k}\left(2k -1\right)\left(2N\right)!\ln\left(k\right)}{\left(N+1-k\right)!\left(k+N \right)!}=\ln\left(N+1\right)\\ &-\sum_{n=1}^{N}\sum_{k=1}^{n}\frac{\left(-1\right)^{n+k}\left(2 k-1\right)}{\left(n-k\right)!\left(k+n-1\right)!}\frac{\left(N+n-1\right)!\ln \left(k\right)}{\left(N+1-n\right)!}.\end{split} \tag{3.3}\]
Taking \(\binom{n}{k}=0\) for \(k>n\) we can change the upper bound of the \(k\) summation on the right of equation (3.3) to \(N\) without changing the value of the sum. This gives the right side of the equation as
\[\ln\left(N+1\right)+\sum_{k=1}^{N}\left(-1\right)^{1+k}\left(2k-1\right)\ln \left(k\right)\sum_{n=1}^{N}\frac{\left(-1\right)^{n}}{\left(n-k\right)!\left( k+n-1\right)!}\frac{\left(N+n-1\right)!}{\left(N+1-n\right)!}.\]
Mathematica [11] can evaluate the inner sum:
\[\sum_{n=1}^{N}\frac{\left(-1\right)^{n}}{\left(n-k\right)!\left(k+n-1\right)!} \frac{\left(N+n-1\right)!}{\left(N+1-n\right)!} \tag{3.4}\]
\[=\frac{\sin\left(\pi\left(k+1\right)\right)\left(N+k\right)!\left(N+1-k\right)!+\pi\left(-1\right)^{N}\left(2N\right)!\left(N+k\right)\left(N+1-k\right)}{\pi\left(N+k\right)!\left(N+1-k\right)!\left(N+k\right)\left(N+1-k\right)}.\]
But for integer \(k\), \(\sin\left(\pi\left(k+1\right)\right)=0\). So disregarding this term we are left with the sum in equation (3.4) equalling \(\frac{\left(-1\right)^{N}\left(2N\right)!}{\left(N+k\right)!\left(N+1-k\right)!}\). Substituting this into equation (3.3) and simplifying we see that the inductive step indeed holds. And so we have shown that for positive integers N the \(\Lambda\) function equals the factorial. But since
\[\frac{\left(x+n-1\right)!}{\left(x-n\right)!}=\left(x-n+1\right)...\left(x+n- 1\right)=x\prod_{k=1}^{n-1}\left(x^{2}-k^{2}\right)\]
we see that equation (3.1) satisfies the reflection formula \(\Lambda(x)\Lambda(-x)=1\). Therefore at the negative integers, the \(\Lambda\) function equals the reciprocal factorial.
We will now demonstrate that equation (3.1) converges. Firstly, taking logarithms of equation (3.1) gives
\[\ln\Lambda(x)=\sum_{n=1}^{\infty}\frac{\left(x+n-1\right)!\left(-1\right)^{n} }{\left(x-n\right)!\left(2n-1\right)}\sum_{k=1}^{n}\frac{\left(-1\right)^{k} \left(2k-1\right)}{\left(n-k\right)!\left(k+n-1\right)!}\ln\left(k\right). \tag{3.5}\]
Since \(\frac{\Gamma\left(z+a\right)}{\Gamma\left(z+b\right)}\sim z^{a-b}\) (if \(z\rightarrow\infty\) in the sector \(|\mbox{ph}\,z|\leq\pi-\delta\) ) [10], we have, for the absolute value of the summands in (3.5):
\[\left|\frac{\left(x+n-1\right)!\left(-1\right)^{n}}{\left(x-n\right)!\left(2 n-1\right)}\sum_{k=1}^{n}\frac{\left(-1\right)^{k}\left(2k-1\right)\ln\left(k \right)}{\left(n-k\right)!\left(k+n-1\right)!}\right|\sim\left|\frac{x^{2n-1}} {\left(2n-1\right)}\sum_{k=1}^{n}\frac{\left(-1\right)^{k}\left(2k-1\right) \ln\left(k\right)}{\left(n-k\right)!\left(k+n-1\right)!}\right|\]
\[\leq\left|\frac{x^{2n-1}}{\left(2n-1\right)}\sum_{k=1}^{n}\frac{\left(2k \right)k}{\left(n-k\right)!\left(k-1\right)!}\right|=\left|\frac{x^{2n-1}}{ \left(2n-1\right)}\frac{2^{n-2}n\left(n+3\right)}{\left(n-1\right)!}\right|.\]
Figure 1: A complex plot of the \(\Lambda\) function made using Mathematica [11].
And so the \(\Lambda\) function converges if \(\sum_{n=1}^{\infty}\left|\frac{x^{2n-1}}{\left(2n-1\right)}\frac{2^{n-2}n(n+3)}{ \left(n-1\right)!}\right|\) does. But since
\[\lim_{n\rightarrow\infty}\frac{2\left(n+1\right)\left(n+4\right)\left(2n-1 \right)x^{2}}{n^{2}\left(n+3\right)\left(2n+1\right)}=0 \tag{3.6}\]
the series converges by the ratio test. Hence the \(\Lambda\) function converges for the entire real axis.
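The convergence proof above also suggests a direct way to evaluate \(\Lambda\) numerically: using the polynomial form \((x+n-1)!/(x-n)!=x\prod_{k=1}^{n-1}(x^{2}-k^{2})\) avoids gamma evaluations at non-positive integers, and the factorially decaying exponents make a modest truncation sufficient. The sketch below checks \(\Lambda(3)=3!\), \(\Lambda(-3)=1/3!\), and the value \(\Lambda(1/2)\approx 0.9290565913\) quoted in the conclusion; the truncation at 30 factors is an arbitrary but comfortable choice for these arguments.

```python
import math

def lam_log(x, terms=30):
    """Truncated logarithm of the pseudogamma function Lambda(x) from Eq. (3.1)."""
    total = 0.0
    for n in range(1, terms + 1):
        # (x+n-1)!/(x-n)! rewritten as the polynomial x * prod_{k=1}^{n-1} (x^2 - k^2)
        poly = x
        for k in range(1, n):
            poly *= x * x - k * k
        inner = sum((-1) ** (n + k) * (2 * k - 1) * math.log(k)
                    / (math.factorial(n - k) * math.factorial(k + n - 1))
                    for k in range(1, n + 1))
        total += poly / (2 * n - 1) * inner
    return total

for x in (3, -3, 0.5):
    print(x, math.exp(lam_log(x)))
# expected: 6.0 (= 3!), 0.1666... (= 1/3!), and ~0.9290565913 as quoted for Lambda(1/2)
```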
## 3 Appendix A
In this section, we will seek to motivate the series definition of the inverse gamma function given in conjecture 1. Our goal is to find a Newton interpolating series for this function, which we denote \(\tilde{\Gamma}(x)\). For this purpose, it would be convenient to choose a series of the form
\[a_{1}+a_{2}\left(x-1!\,\right)+a_{3}\left(x-1!\,\right)\left(x-2!\,\right)+a_ {4}\left(x-1!\,\right)\left(x-2!\,\right)\left(x-3!\,\right)+... \tag{3.7}\]
i.e. selecting the nodes to be the points \((1,\,2)\), \((2,\,3)\), \((6,\,4)\), \((24,\,5)\)3 etc. as the value of \(\tilde{\Gamma}(x)\) at these points is well understood. For comparison, it would be tricky to build a typical Newton series such that it interpolates \(\tilde{\Gamma}(x)\) at integer values of \(x\), as the value of \(\tilde{\Gamma}(x)\) at these points is not well understood. For instance, both \(\tilde{\Gamma}(3)=3.405869986...\) and \(\tilde{\Gamma}(4)=3.664032797...\) have no known closed forms.
Footnote 3: Recall that, for integers, \(\Gamma(n)=(n-1)!\).
With this in mind, we can begin computing the coefficients \(a_{n}\) in equation (3.7): we first set \(x=1!\,\), which gives \(a_{1}=\tilde{\Gamma}(1)=2\). Then we set \(x=2!\) to get \(a_{2}=1\), and set \(x=3!\) to get \(a_{3}=-\frac{3}{20}\), and set \(x=4!\) to get \(a_{4}=\frac{559}{91080}\) etc. However, the resulting series diverges: the factorials grow too fast to be suitable as nodes for Newton interpolation.
However, if we instead compute the Newton series for the function \(\ln\Gamma(y)=x\) using a series of the form
Figure 2: The graph shows the \(\Lambda\) function (black) with the dots representing the values of the factorial at the positive integers and the reciprocal factorial at the negative integers, as well as the log-\(\Lambda\) function (red). The image seems to suggest that the \(\Lambda\) function is log-convex for \(x>0\) and log-concave for \(x<0\).
\[a_{1}+a_{2}\left(x-\ln\left(1!\,\right)\right)+a_{3}\left(x-\ln\left(1!\,\right) \right)\left(x-\ln\left(2!\,\right)\right)+...\]
we obtain a far better-behaved series, which does not seem to diverge. We can then set \(x=\ln(x)\) to obtain a series for \(\tilde{\Gamma}(x)\).
The coefficients can again be obtained by repeating the same procedure as earlier. Doing the calculation yields \(a_{1}=2\), \(a_{2}=\frac{1}{\ln(2)}\), \(a_{3}=\frac{2}{\ln(6)\ln(3)}-\frac{1}{\ln(2)\ln(3)}\) etc.
The following formula
\[\sum_{k=0}^{n}\frac{k}{\left(\prod_{i=1}^{k}\ln\left(\frac{(k+1)!}{i!}\right) \right)\prod_{i=1}^{n-k}\ln\left(\frac{(k+1)!}{(k+1+i)!}\right)}\]
can compute these coefficients (but we will not prove this). With this in mind, we propose the following conjecture:
**Conjecture 1**.: _We conjecture that the principal branch of the inverse gamma function \(\Gamma(y)=x\) has the following Newton series representation_
\[\tilde{\Gamma}(x)=2+\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{k}{\left(\prod_{i =1}^{k}\ln\left(\frac{(k+1)!}{i!}\right)\right)\prod_{i=1}^{n-k}\ln\left( \frac{(k+1)!}{(k+1+i)!}\right)}\prod_{i=1}^{n}\ln\left(\frac{x}{i!}\right),\]
_on the interval \((\alpha,\infty)\), where \(\alpha=0.886...\) is the minimum value of \(\Gamma(x)\) on the positive real axis._
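Since the conjecture is stated without proof, a numerical experiment is the natural first check. The snippet below implements the coefficient formula and evaluates truncated partial sums of the proposed series. At the factorial nodes the series terminates and reproduces the integer values exactly (so applying \(\Gamma\) to the result returns the input), whereas at other points, e.g. \(x=3\) where the text quotes \(\tilde{\Gamma}(3)=3.405869986...\), only the behaviour of the first several partial sums can be observed; the truncation order of 8 is an arbitrary choice and nothing about convergence is implied beyond the conjecture itself.

```python
import math

def log_factorial(m):
    return math.lgamma(m + 1)

def coeff(n):
    """Coefficient of prod_{i=1}^{n} ln(x / i!) in Conjecture 1."""
    total = 0.0
    for k in range(1, n + 1):                 # the k = 0 summand vanishes
        d1 = 1.0
        for i in range(1, k + 1):
            d1 *= log_factorial(k + 1) - log_factorial(i)          # ln((k+1)!/i!)
        d2 = 1.0
        for i in range(1, n - k + 1):
            d2 *= log_factorial(k + 1) - log_factorial(k + 1 + i)  # ln((k+1)!/(k+1+i)!)
        total += k / (d1 * d2)
    return total

def inv_gamma_partial(x, terms=8):
    """Truncated series of Conjecture 1 for the principal inverse of the gamma function."""
    s = 2.0
    for n in range(1, terms + 1):
        prod = 1.0
        for i in range(1, n + 1):
            prod *= math.log(x) - log_factorial(i)                 # ln(x / i!)
        s += coeff(n) * prod
    return s

for x in (1, 2, 6, 24, 3):
    print(x, inv_gamma_partial(x), "->", math.gamma(inv_gamma_partial(x)))
```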
## 4 Conclusion
In this paper, several new definitions of important functions in mathematics have been introduced using Newton Interpolation. However, this paper raises more questions than it answers. For instance, does \(\Lambda\) satisfy a functional equation? What are the half-integer values of the function? And can the \(\Lambda\) function be expressed as a finite combination of known functions and/or special functions, in the
Figure 3: The graph shows the inverse gamma function \(\Gamma(y)=x\). The dots appear whenever the x coordinate is an integer factorial.
same way that the Hadamard function in [4] is written in terms of the digamma function? Maple 2021.2 is able to give \(\Lambda(1/2)=0.9290565913\) and \(\Lambda(i)=0.8827342488-0.4698725849i\). However these numbers do not appear to be expressible in closed form.
|
2302.10698 | Unpaired Translation from Semantic Label Maps to Images by Leveraging
Domain-Specific Simulations | Photorealistic image generation from simulated label maps is necessitated in
several contexts, such as for medical training in virtual reality. With
conventional deep learning methods, this task requires images that are paired
with semantic annotations, which typically are unavailable. We introduce a
contrastive learning framework for generating photorealistic images from
simulated label maps, by learning from unpaired sets of both. Due to
potentially large scene differences between real images and label maps,
existing unpaired image translation methods lead to artifacts of scene
modification in synthesized images. We utilize simulated images as surrogate
targets for a contrastive loss, while ensuring consistency by utilizing
features from a reverse translation network. Our method enables bidirectional
label-image translations, which is demonstrated in a variety of scenarios and
datasets, including laparoscopy, ultrasound, and driving scenes. By comparing
with state-of-the-art unpaired translation methods, our proposed method is
shown to generate realistic and scene-accurate translations. | Lin Zhang, Tiziano Portenier, Orcun Goksel | 2023-02-21T14:36:18Z | http://arxiv.org/abs/2302.10698v1 | # Unpaired Translation from Semantic Label Maps to Images
###### Abstract
Photorealistic image generation from simulated label maps are necessitated in several contexts, such as for medical training in virtual reality. With conventional deep learning methods, this task requires images that are paired with semantic annotations, which typically are unavailable. We introduce a contrastive learning framework for generating photorealistic images from simulated label maps, by learning from unpaired sets of both. Due to potentially large scene differences between real images and label maps, existing unpaired image translation methods lead to artifacts of scene modification in synthesized images. We utilize simulated images as surrogate targets for a contrastive loss, while ensuring consistency by utilizing features from a reverse translation network. Our method enables bidirectional label-image translations, which is demonstrated in a variety of scenarios and datasets, including laparoscopy, ultrasound, and driving scenes. By comparing with state-of-the-art unpaired translation methods, our proposed method is shown to generate realistic and scene-accurate translations.
Footnote †: Corresponding author: Orcun Goksel ([email protected])
_Keywords--_ Image translation, Simulated training, Medical training in VR, Contrastive learning, Adversarial learning, GAN
## 1 Introduction
Photorealistic image simulation has been an active research area for decades with a wide range of applications from movie- and game-industries [1, 2] to medical imaging for surgical training [3, 4, 5, 6]. Extensive research in modelling imaging physics [5, 6, 7] and material representations [8, 9] has substantially improved simulation realism in different applications, but there is still a very perceiveable visual difference between the state-of-the-art simulators and real world images. Recent progress in deep learning has paved the way for synthesizing photorealistic images by learning image features from large-scale real data. Among them, generative adversarial networks [10] have shown promising results in generating photorealistic images. Methods were shown to successfully generate images from noise inputs with controlled styles at different level of details learned from given domains [11, 12, 13] as well as to achieve semantic image synthesis [14, 15, 16, 17, 18] given paired label and target images. In the absence of paired training samples, methods based on cycle consistency loss [19, 20], shared latent space [21, 22], layer normalizations [23] and more recently contrastive loss [24, 25] have been investigated with varying levels of success.
As opposed to the widely used cyclic approaches, contrastive learning (CL) based unpaired image translation methods focus on translating in single direction by relaxing the strong bijective assumption, which has achieved impressive results in various unpaired translation settings [24]. Compared to image-level feature contrasting for unsupervised classification and segmentation, patch-level contrasting is employed in [24] which enforces structural similarity between images. Various approaches have been accordingly proposed for bridging the appearance gap between synthetic and real images [26, 27, 28, 29], also further improved by leveraging auxiliary simulation information such as simulated semantic maps [30] and geometry buffers (G-buffers) generated during 3D computer graphics rendering [31]. A parallel line of work investigate photographic transfer [32, 33, 34, 35], aiming at translating the appearance of reference images to simulated contents; however, such methods require lengthy and difficult-to-parametrize optimizations for each single target image. All above work aim to improve the realism of existing, sub-realistic (_e.g._, simulated) images and hence require the existence of preceding, complex simulation and rendering methods.
Figure 1: Overview of unpaired label-image translation by leveraging domain-specific simulations. (a) An illustration of the simulation/generation pipeline from 3D computer graphics (CG) model to label-maps \(L^{\text{content}}_{\text{style}}\) and images \(I^{\text{content}}_{\text{style}}\) with the subscript indicating the style domain and superscript indicating the content domain. \(R\) and \(S\) denote the real and simulated domains, respectively. Note that the goal is to generate images \(I^{\text{S}}_{\text{R}}\) with realistic appearance for simulated content, based on (_i.e._ consistent with) label-maps \(L^{\text{S}}_{\text{S}}\) of simulated scenes. To that end, one can collect and use many real-life images \(I^{\text{R}}_{\text{R}}\), but these will not one-to-one match existing simulated content therefore preventing classical supervised training. (b) A schematic summary of existing unpaired image translation approaches CycleGAN [19], CUT [24], and ConPres [30], as well as our proposed methods SimIT, SimIT-C, and SimIT-PC. We define the label-to-image mapping function as \(G:\mathbb{L}\to\mathbb{I}\) and the image-to-label mapping function as \(F:\mathbb{I}\to\mathbb{L}\), with the label domain \(\mathbb{L}\) and image domain \(\mathbb{I}\). Both \(G\) and \(F\) are parameterized with a deep neural network consisting of an encoder and a decoder, _i.e._\(G(\cdot)=G^{\text{D}}(G^{\text{E}}(\cdot))\) and \(F(\cdot)=F^{\text{D}}(F^{\text{E}}(\cdot))\). Contrastive loss is computed on features obtained from the mappings \(M^{\text{G}}(\cdot)=H(G^{\text{E}}(\cdot))\) or \(M^{\text{F}}(\cdot)=H(F^{\text{E}}(\cdot))\), where \(H\) maps the generator encoder latent to a feature space \(\mathbb{Z}\).
Photorealistic image generation directly from simulated scene layouts, so-called _label-maps_, would obviate any complex and computationally-intensive simulation/rendering process in real-time, by learning the internals of such rendering into a generative model during a non-time-critical offline learning stage. Such label-maps can typically be extracted easily from existing simulation pipelines, only given 3D object models and a vantage point (thus a scene), _i.e._ without a need to tune model-specific parameters nor to compute complex physical interactions. To illustrate this further, a generic simulation pipeline is given in Figure 1(a). Given the above motivation, we aim to generate images \(I\) with _realistic_ appearance but _simulation_-controlled content, _i.e._\(I_{\text{R}}^{\text{S}}\) as appearance and content of a representation are hereafter indicated in sub- and super-script, respectively. With this convention, the methods from the literature mentioned earlier mostly target image-to-image translation of \(I_{\text{S}}^{\text{S}}\to I_{\text{R}}^{\text{S}}\). In comparison, label-to-image translation as we intend is often more challenging due to the large domain shift between these two representations. Generating _simulated_ images from labels, _i.e._\(L_{\text{S}}^{\text{S}}\to I_{\text{S}}^{\text{S}}\) translation, was studied in [9] for accelerating simulated image generation in real-time, for which a conditional GAN with supervised per-pixel loss was shown to provide promising results. This, however, is relatively simpler compared to our goal of generating \(I_{\text{R}}^{\text{S}}\), since the former can be cast as a paired translation problem where the paired data (\(L_{\text{S}}^{\text{S}},I_{\text{S}}^{\text{S}}\)) is typically available from conventional simulation pipelines. In contrast, for our desired target of \(I_{\text{R}}^{\text{S}}\), there exists no such paired label data. A large domain gap together with the lack of paired data makes our intended label-to-realistic-image translation very challenging, and, to the best of our knowledge, without any working solution so far.
In this work we target the above problem of photorealistic image generation directly from label-maps. To facilitate the learning of appearance information from real images \(I_{\text{R}}^{\text{R}}\), we propose to utilize any available physics-based simulation to generate intermediate image representations \(I_{\text{S}}^{\text{S}}\). We utilize these as a stepping stone to help bridge the domain gap between the labels \(L_{\text{S}}^{\text{S}}\) and their real-image counterparts \(I_{\text{R}}^{\text{S}}\) as desired. To that end, we introduce a contrastive learning based image translation framework that leverages physics-based simulations/rendering in the training of unpaired label-to-image translation, but without needing such simulations during the real-time inference stage. Compared to the existing works [9, 26, 30, 31], our proposed method performs image generation and realism enhancement simultaneously in a single step. We demonstrate our method on enhancing medical image simulators for training, as well as car driving simulators for entertainment.
Our proposed solution builds on a bidirectional (cyclic) translation idea. As a by-product of this design, it can also perform the inverse operation of image-to-label translation, _i.e._ semantic image segmentation is meanwhile also learned in an unsupervised fashion without seeing any annotations of real images. We also evaluate such segmentation outcomes in this work, as this opens future possibilities of alleviating annotation efforts.
## 2 Results
### Compared methods
**Proposed method.** We call our proposed method for generating realistic images from simulated semantic label-maps, with the target style learned from real images while retaining overall content matching the simulated scene, as **Sim**ulation-based **I**mage **T**ranslation framework (SimIT). Realistic and scene-accurate translation given unpaired data is herein enabled by two major contributions:
1. To address missing label-image pair information, we leverage existing physics-based simulations by using the simulated images (that are inherently paired with corresponding label-maps) as surrogate targets for contrastive learning.
2. To enforce content/structure preservation, we devise a method that contrasts domain-specific image features extracted from a translation network that is trained using a cycle consistency loss. This further enables bidirectional translation, _i.e._ in both label-to-image and image-to-label directions.
**Compared methods.** We evaluate SimIT comparatively to the following three state-of-the-art unpaired image translation methods: **CycleGAN**[19] is a conventional approach with cyclic consistency loss by employing separate generators and discriminators in each direction. **CUT**[24] is a
unidirectional translation framework based on patch-based multi-scale contrastive loss computed on generator features. **ConPres**[30] is a multi-domain translation framework that leverages simulated label-image pairs to retain structural content. Together with cycle-consistency and contrastive losses, ConPres proposes a regularization loss for semantic consistency, which enforces a generator to create the same output for paired images and label-maps. Consequently, ConPres can be used for both image-to-image and label-to-image translation. The latter being the focus herein, we employ that use-case of ConPres in our comparisons. High-level conceptual schematics of above-mentioned three approaches are illustrated in Figure 1(b).
**Ablations.** To further evaluate our two major contributions listed above, we ablated them cumulatively from SimIT, resulting in the following reduced models for comparison: **SimIT-C** (SimIT without cycle loss) is a unidirectional version of SimIT, _i.e._ without learning an inverse translation from image to labels, where the contrastive loss is then computed using features from the label-to-image generator, c.f. SimIT-C in Figure 1(b). **SimIT-CS** (SimIT-C without leveraging simulations) does not utilize any simulation information, where the contrastive loss is then computed between semantic label-maps and translated images, similarly to CUT in Figure 1(b).
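To make the role of the contrastive term in these variants concrete, a minimal patch-wise InfoNCE sketch in PyTorch is given below, in the spirit of the PatchNCE loss of CUT [24]: features at the same spatial location of the query and target feature maps form positive pairs, while other sampled locations act as negatives. The feature maps, patch count, and temperature here are placeholders; the actual SimIT encoders, patch sampling, and loss weighting are those detailed in the Methods.

```python
import torch
import torch.nn.functional as F

def patch_nce(feat_q, feat_k, num_patches=256, tau=0.07):
    """Patch-wise InfoNCE: feat_q, feat_k are (B, C, H, W) feature maps.

    Positives are features at identical spatial locations; all other sampled
    locations in the same image serve as negatives.
    """
    B, C, H, W = feat_q.shape
    q = feat_q.flatten(2).permute(0, 2, 1)           # (B, HW, C)
    k = feat_k.flatten(2).permute(0, 2, 1)
    idx = torch.randperm(H * W)[:num_patches]        # shared sampling of patch locations
    q = F.normalize(q[:, idx], dim=-1)               # (B, P, C)
    k = F.normalize(k[:, idx], dim=-1)
    logits = torch.bmm(q, k.transpose(1, 2)) / tau   # (B, P, P) similarity matrix
    targets = torch.arange(num_patches, device=logits.device).expand(B, -1)
    return F.cross_entropy(logits.reshape(-1, num_patches), targets.reshape(-1))

# toy usage with random feature maps standing in for encoder outputs
fq = torch.randn(2, 64, 32, 32)                      # e.g. features of the translated image
fk = torch.randn(2, 64, 32, 32)                      # e.g. features of the simulated surrogate target
print(float(patch_nce(fq, fk)))
```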
### Evaluation
All results are evaluated on unseen test data. We employ the non-parametric two-sided Wilcoxon signed-rank test to assess differences between paired test results and report statistical significance with p-value. Methodological and implementation details are given later in the Methods. We compared the methods on three different applications (more details given in the Methods):
**Laparoscopy training.** As physics-based simulation, computer-graphics rendering techniques were employed [36, 37] to simulate synthetic laparoscopic images from a 3D abdominal model. During the rendering of each frame, a camera projection of anatomical labels provided the corresponding semantic label-maps. This simulated dataset with paired images and label-maps is herein called _LaparoSim_.
For the laparoscopy application, we employed two different datasets of real images, thus evaluating two different target styles, called herein: _Style-C_, represented by the public Cholec dataset containing 80 videos of cholecystectomy surgeries [38]; and _Style-H_, represented by a single, in-house laparoscopic surgery video clip with a length of 13 minutes. Sample images can be seen in Figure 2.
**Ultrasound training.** The simulated images were generated using a ray-tracing framework [6] from a second trimester fetal model, by emulating a convex ultrasound probe at multiple locations and orientations on the abdominal surface, with imaging settings following [30]. The semantic label-map is rendered as a cross-section through the anatomical surfaces at the ultrasound center imaging plane. We refer to this simulated dataset as _USSim_.
For the targeted real-image style, sample ultrasound images were collected using a GE Voluson E10 machine during standard fetal screening exams of 24 patients. We refer to this as the _GE-E10 style_.
**Gaming.** As the gaming simulation, we used the _GTA_ dataset [39] containing image-label pairs from a car-driving game. For the real image style, we used the _Cityscapes_ dataset [40] containing images of street scenes from German cities.
### Experiments
**Label-to-image translation.** For the laparoscopy training application, we present the results in Figure 2 for separately training two different styles. As seen qualitatively in Figure 2(b), CycleGAN and CUT hallucinate inexistent tissue regions, _e.g._, fat tissues. ConPres achieves structural preservation by leveraging information from LaparoSim label-image pairs, but fails completely in generating tissue textures, which leads to highly unrealistic image style. Going from label-to-image, our method SimIT is seen to outperform the state-of-the-art in terms of anatomical content preservation as well as in achieving a realistic image appearance. This observation is substantiated by the quantitative evaluation in Table 1(a), where image realism is empirically measured using Frechet and Kernel Inception Distances (FID and KID, respectively) between translated and real image sets, and the content
Figure 2: (a) Examples of real laparoscopic images with two different appearances: Style-C for the public Cholec80 dataset and Style-H for an in-house single-video dataset. (b) Qualitative comparison of images translated from input LaparoSim label-maps, using the proposed SimIT and alternative methods. For reference purposes, conventionally simulated/rendered LaparoSim images are shown on the right. (c) Quantitative evaluation of structural preservation via the Structural Similarity Index Measure (SSIM). Using a paired test, distributions of pair-wise differences over the test set are shown by comparing SimIT to each alternative method and ablated variant, _i.e._ the larger the positive difference is, the more superior SimIT is with respect to another method. Significance is indicated with respect to SimIT (represented with the dotted lines) or between different models (\(|\)—\(|\)), with P-values of \(\leq 0.0001\) (*******). (d) Qualitative comparison of our proposed method SimIT to its ablated variants, with translated images zoomed in on the white field-of-view shown in the simulated image as reference.
preservation is measured via the structural similarity index measure (SSIM) between translated and corresponding simulated images. Note that SimIT also achieves the lowest SSIM standard deviation, indicating its consistent content preservation over the test samples. A test-image wise paired comparison of all methods with respect to SimIT is presented in Figure 2(c), which shows ConPres as the closest contender in terms of content preservation (SSIM) but with largely unrealistic image translation, as also demonstrated qualitatively (Figure 2(b)) and tabulated empirically (Table 1(a)).
Compared to the proposed method SimIT, its ablated variants, SimIT-C and SimIT-CS perform substantially poorer as seen quantitatively in Table 1(a) and Figure 2(c), and qualitatively in Figure 2(d). This demonstrates the importance of our proposed method components. SimIT-CS lacks our proposed component for utilizing simulations with a contrastive loss in learning the label-to-image translation, and as such it can be seen as a variant of CUT implemented in our framework. With no explicit label-to-image pairs provided, SimIT-CS then learns to simply emulate all structures seen in the real examples, hence erroneously changing the image content as seen in the presented examples. Using simulated images as surrogate targets for contrastive loss (SimIT-C in Figure 2(d)) largely prevents such superfluous content generation. Still SimIT-C only uses the features from a label domain for contrasting, and such features cannot be well aligned with image features. With the proposed method SimIT, the addition of a custom cycle loss allows for training a bidirectional translation,
\begin{table}
\begin{tabular}{l||c c c|c c c} \hline \hline
**(a) Laparoscopy** & \multicolumn{3}{c|}{Style-C} & \multicolumn{3}{c}{Style-H} \\ \cline{2-7}
 & Content & \multicolumn{2}{c|}{Realism} & Content & \multicolumn{2}{c}{Realism} \\
**Method** & **SSIM**[\%] \(\uparrow\) & **FID**\(\downarrow\) & **KID**\(\downarrow\) & **SSIM**[\%] \(\uparrow\) & **FID**\(\downarrow\) & **KID**\(\downarrow\) \\ \hline
Simulation & — & 257.39 & 17.76 & — & 201.32 & 12.42 \\
CycleGAN [19] & 39.21(6.80) & 254.61 & 14.46 & 50.50(10.62) & 212.42 & 13.73 \\
CUT [24] & 49.79(13.75) & 234.65 & 12.85 & 58.74(6.77) & 222.81 & 13.42 \\
ConPres [30] & 71.12(3.96) & 380.70 & 36.72 & 75.76(5.56) & 379.82 & 36.80 \\
SimIT-CS & 41.77(7.98) & **202.24** & **10.40** & 56.15(5.23) & **147.06** & **7.03** \\
SimIT-C & 58.05(7.34) & 210.94 & 12.65 & 72.87(2.12) & 175.38 & 11.61 \\
SimIT & **75.56**(2.42) & 214.22 & 11.97 & **83.69**(**1.63**) & 161.29 & 7.13 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Quantitative metrics reported as _mean(std)_. Arrows indicate direction of superiority; \(\uparrow\) meaning the higher the better, and \(\downarrow\) the lower. KID is reported in \(10^{-2}\) unit. Best results are marked bold.
where the features from an encoder operating on images can then instead be used for contrasting. With such domain-consistent features, content preservation is further enhanced, as seen both quantitatively given the error of SimIT-C in Figure 2(c), and qualitatively by visually comparing these variants in Figure 2(d).
Evaluation on the ultrasound training and gaming applications further confirms the superior performance of our proposed method on the label-to-image translation task (Figures 3 and 4). Translated ultrasound image examples in Figure 3(b) demonstrate that the alternative methods are not always correct with the echogenicity (brightness) profile of different anatomical regions, _e.g._, outside the uterus is sometimes black and sometimes white, and the same for the amniotic fluid. ConPres preserves the echogenicity better than CycleGAN and CUT by leveraging simulated label-image pairs, however, it is biased towards interpreting input labels as pixel intensities. In comparison, SimIT can retain the correct echogenicity of each region, which can be seen by comparing to the reference simulated images, while translating into a realistic appearance in the style of GE-E10 images. Furthermore, our method successfully preserves fine anatomical structures, _e.g._, the ribs in the top row example, whereas the other compared methods fail with such detail. We herein assess content preservation for ultrasound images based on the alignment of bone surfaces, delineated by a phase-symmetry based bone-surface estimation method [41] as exemplified in Figure 3(c). Alignment is then quantified by the intersection over union (IoU) score of bone surface pixels extracted from translated and simulated (reference) images (cf. Figure 3(e) and Table 1(b)). When comparing image realism via FID/KID scores, SimIT does not yield the best values, which we hypothesize is caused by the large scene difference between real and simulated training images, as illustrated in Figure 3(d). Since the alternative methods do not enforce strict restrictions on content preservation, they hallucinate content in an unconstrained manner, which helps lower their FID/KID scores; nevertheless, such arbitrary content does not match the input label-maps and is hence not fit for our purposes, as also quantified by the structural preservation scores, _i.e._ IoU for the ultrasound training experiment.
For the gaming experiment, similarly to the ultrasound experiment above, CUT and CycleGAN largely hallucinate content, ignoring the input GTA label-maps and hence do not satisfy the desired content preservation criterion. For example, as seen in Figure 4(b), the sky is erroneously filled with trees, since the real Cityscapes images contain more trees and less sky compared to the simulated GTA dataset [31]. Then, the discriminator can easily differentiate between real and fake images by looking at the image top region, which in turn encourages the generator to hallucinate more trees in the sky. In comparison to CycleGAN and CUT, the domain-consistent deep features used in our proposed SimIT explicitly encourage content preservation. To evaluate structural consistency, we apply the pretrained semantic segmentation network DRN [42] on the translated images, following [24], and report the resulting segmentation metrics: mean average precision (mAP), pixel accuracy (pixAcc), and class accuracy (classAcc); see the Methods for details. SimIT achieves the best scores among all the methods for these content preservation metrics. Evaluating image realism using FID/KID scores (Table 1(c)), SimIT outperforms the state-of-the-art, while faring suboptimally compared to its ablated variants, which in turn fail at successfully retaining content. This clearly indicates the conflict between content preservation and image realism, especially in the presence of substantial layout differences between simulated and real image sets.
**Image-to-label translation.** By introducing the cyclic loss to train our framework with domain-consistent features, at the end of the training we also obtain a generator that can translate real images to label-maps, _i.e._ a semantic segmenter for real images. Note that such a segmenter is trained truly unsupervised, _i.e._ without requiring _any_ annotation of any real image. To evaluate the segmentation outcome of image-to-label translations of SimIT, we compare resulting label-maps to semantic segmentations for the datasets where such annotations are available, _i.e._ the CholecSeg8k dataset as the Style-C target for our laparoscopy application and the Cityscapes dataset for our gaming application.
For the laparoscopy comparison, we report _upper-bound_ segmentation results from a ResNet50 network trained on annotated images from the CholecSeg8k dataset introduced by [43], which is a subset of the Cholec data [38]. In Figure 5(a) a sample input image is shown together with its supervised ResNet50 segmentation as upper-bound; the semantic map predicted by SimIT used as a segmenter; and the ground-truth annotation for this test image. Average segmentation scores are reported in
Figure 3: Ultrasound training experiment results: (a) Examples of real ultrasound images with the style of GE-E10 ultrasound machine. (b) Visual examples of images translated using SimIT compared to alternative methods. (c) Qualitative results of SimIT compared to its ablated variants, with bone surfaces visualized in purple and green respectively for those from translated and simulated (reference) images. (d) Probability maps of containing content at each location, averaged over the training images. A big scene distribution difference can be observed between simulated and target images. (e) Quantitative evaluation of structural preservation. To this end, a paired test is employed to compare the IoU scores between the bone maps extracted from the simulated and translated images. The difference is computed by subtracting the score of other models from SimIT, _i.e_. the larger the positive difference is, the more superior SimIT is with respect to another method. Significance is indicated with respect to our proposed model SimIT (marked with dotted lines) or between different models (|—|), with P-values of \(\leq 0.0001\) (****).
Figure 5(c). For the gaming application, we compare SimIT with the segmentation network DRN [42], a standard technique for the Cityscapes dataset. DRN was trained on the labelled training set, acting as a supervised upper-bound herein. A qualitative sample comparison is shown in Figure 5(b) with quantitative results tabulated in Figure 5(d). SimIT presents a fair performance despite not having seen any labeled real images and while not specifically targeting such segmentation problem.
For the ultrasound application, SimIT was trained with gray-scale label-maps, as this performed well for the main focus of label-to-image translation, and one-hot label encoding (used for the other two applications) was less stable to parametrize for the ultrasound training. Without one-hot labels, our trained network fails to estimate meaningful label-maps from real ultrasound images. This is mainly due to having nearly 80 different tissue classes, some of which share similar ultrasound texture and appearance, which makes the segmentation problem very challenging and, without one-hot labels, also ill-posed as a grayscale regression problem.
## 3 Discussion
In this work we present a contrastive learning based unpaired image-label translation framework, by leveraging domain-specific simulations to generate photorealistic images from synthetic semantic label-maps. We demonstrate the superior content-preservation performance of our proposed method across several datasets. Our bidirectional framework as a by-product affords an image segmenter, which is demonstrated herein to provide approximate segmentations of real images. Note that such segmentation by our method requires no annotations of real images in training, and it utilizes merely existing computational simulation outputs. As demonstrated in Figures 2 to 4(b), the unsupervised losses between label and image representations, _i.e._ the cycle consistency loss [19] and the contrastive loss [24], may lead to scene modification and _semantic flipping_, _i.e._ the content not being preserved semantically consistently.
To mitigate these, we leverage simulated images as intermediate image representation to bridge the
Figure 4: Gaming experiment results: (a) Examples of images from Cityscapes dataset. (b) Visual examples of images translated using SimIT compared to alternative methods. (c) Qualitative results of SimIT compared to its ablated variants.
gap between the source and target domains. Among the compared methods, ConPres is the closest to our work, as it also leverages simulated pairs to enforce content preservation. To encourage the generator encoder to extract content-related features, ConPres uses a unified generator for three translation tasks: label-to-image, image-to-image, and image-to-label. This, however, complicates the generator's task, leading to sub-optimal results in the label-to-image direction. In comparison, we suggest utilizing task-specific generators, relaxing the constraints of each generator. We accordingly leverage simulated images as surrogate targets only for loss computation, not as an additional generation task. During our preliminary experiments, we found that using pixel-level supervised losses, _e.g._, L1/L2 loss, to assess content similarity between simulated and translated images is problematic due to the intrinsic appearance shift, lighting variations, and texture differences between the two domains. We also experimented with employing the discriminator as a feature extractor for computing feature-level losses, _e.g._, the perceptual loss [44], but the results were again not satisfactory. In comparison to the above, the currently utilized patch-based contrastive loss is less affected by any appearance shift, and can therefore successfully discern and focus on content dissimilarity. Together with our utilization of adaptive discriminator augmentation and a limited discriminator receptive field, contrasting image features from the proposed image-to-label network yields substantial content preservation without degrading image realism.
The above-mentioned challenge of measuring image content similarity during training also reflects on the evaluation of inference results for structural preservation. For the gaming application experiment, we have employed a pretrained segmentation network to assess the content difference between simulated and translated images. This approach is only feasible when a large number of annotated images from the target domain are available to train a segmentation network. When such a segmentation method is not available, choosing surrogate metrics for quantifying structural similarity is a non-trivial task. Compared to metrics based on pixel-wise difference, SSIM is relatively less sensitive
Figure 5: (a) A visual example of semantic label-maps predicted by SimIT compared with the supervised baseline ResNet50, with the label legend shown on the bottom. (b) Visual examples of semantic maps predicted by SimIT compared with DRN. (c) Quantitative segmentation results on the CholecSeg8k dataset. We report average F1 scores for six label classes over 213 test images. (d) Quantitative segmentation results on the Cityscapes dataset over 500 test images.
to appearance shifts. Thus, we used SSIM to capture structural differences between laparoscopic images. However, SSIM is less suitable for ultrasound images due to highly directional artifacts and the inherent speckle noise. In ultrasound, bone surfaces are major anatomical landmarks which appear consistently as hyperechoic bands due to their relatively higher acoustic impedance [45]. We thus use bone surfaces extracted from simulated and translated ultrasound images using a phase-symmetry based method (described further in the Methods) for assessing structural preservation in ultrasound.
As noted from the qualitative results shown in Figures 2 to 4, SimIT is not capable of recovering image features that are not encoded in label-maps, such as lighting and depth information in the laparoscopy and gaming applications, and the acoustic shadows in the ultrasound application. Auxiliary scene information, _e.g._, geometry and material information from the simulation, could potentially be integrated into SimIT as additional inputs, which is a promising future direction to further improve content preservation.
Our proposed method can also have other uses. For instance, in fields where annotated data is scarce and/or cannot be distributed due to privacy concerns, such as for medical image segmentation, there have been generative methods [46, 47] that produce image and label-map pairs based on random vectors, _e.g._, to train supervised methods with large datasets. In such a problem setting, our method can generate label-conditional images to establish a control on the generated dataset, which can help in, _e.g._, creating balanced classes, removing biases in the original dataset, and emphasizing certain pathology.
## 4 Methods
Herein we use the notation \(X^{z}_{y}\) to represent the domain of any sample, where \(X\) is the representation from \(\{L\text{:label-map},I\text{:image}\}\), and \(y\) and \(z\) are, respectively, the style (appearance) and content of such representation from \(\{S\text{:simulated},R\text{:real}\}\). We aim to learn a generator \(G:\mathbb{L}^{\mathrm{S}}_{\mathrm{S}}\mapsto\mathbb{I}^{\mathrm{S}}_{ \mathrm{R}}\) which maps a simulated label-map \(L^{\mathrm{S}}_{\mathrm{S}}\in\mathbb{L}^{\mathrm{S}}_{\mathrm{S}}\) to a real-appearing image \(I^{\mathrm{S}}_{\mathrm{R}}\in\mathbb{I}^{\mathrm{S}}_{\mathrm{R}}\) while preserving the simulation-consistent semantic content, _i.e._\((\cdot)^{\mathrm{S}}\).
Generator \(G\) is divided into an encoder \(G^{\mathrm{E}}\) for extracting content-related features and a decoder \(G^{\mathrm{D}}\) for generating target appearance. It is possible to collect many real image examples \(\{I^{\mathrm{R}}_{\mathrm{R}}\in\mathbb{I}^{\mathrm{R}}_{\mathrm{R}}\}\) and also to simulate label-image pairs \((L^{\mathrm{S}}_{\mathrm{S}},I^{\mathrm{S}}_{\mathrm{S}})\), but paired data of the intended source-target translation, _i.e._\((L^{\mathrm{S}}_{\mathrm{S}},I^{\mathrm{S}}_{\mathrm{R}})\), is nonexistent and very challenging to procure. The unpaired data described above does not allow for direct supervision in learning \(G\). Existing unpaired methods often change both content and style together, and the ones that aim at content preservation only target image-to-image translation; we show herein that such methods do not simply extend to the label-to-image translation targeted in this work. An overview of the methods can be followed in Figure 6.
**Generative adversarial training.** For learning a generator \(G\) and its discriminator \(D_{I}\) differentiating images \(I\) as real or fake, a non-saturating GAN loss with the R1 regularization [48] is used, _i.e._:
\[\mathcal{L}^{\mathrm{G}}_{\mathrm{GAN}}(\{L^{\mathrm{S}}_{\mathrm{S}}\},\{I^{\mathrm{R}}_{\mathrm{R}}\})=\mathbb{E}_{I}[\log D_{I}(I^{\mathrm{R}}_{\mathrm{R}})]+\mathbb{E}_{L}[\log(1-D_{I}(G(L^{\mathrm{S}}_{\mathrm{S}})))]+\frac{\gamma_{I}}{2}\mathbb{E}_{I}[\|\nabla D_{I}(I^{\mathrm{R}}_{\mathrm{R}})\|^{2}] \tag{1}\]
with the regularization parameter \(\gamma_{I}\).
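For illustration, a minimal PyTorch sketch of this adversarial objective is given below, written in the commonly used logit (softplus) form of the non-saturating loss with the R1 penalty applied to real images; the module names, shapes, and loss bookkeeping are our own assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def r1_penalty(d_real_logits, real_images):
    # Gradient of the discriminator output w.r.t. the real images (R1 term).
    grad, = torch.autograd.grad(d_real_logits.sum(), real_images, create_graph=True)
    return grad.flatten(1).pow(2).sum(1).mean()

def discriminator_loss(D_I, real_images, fake_images, gamma_I=0.01):
    real_images = real_images.detach().requires_grad_(True)
    d_real = D_I(real_images)                    # raw logits on real images
    d_fake = D_I(fake_images.detach())           # raw logits on translated images
    adv = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
    return adv + 0.5 * gamma_I * r1_penalty(d_real, real_images)

def generator_adversarial_loss(D_I, fake_images):
    # Non-saturating generator term: push D_I to classify G(L) as real.
    return F.softplus(-D_I(fake_images)).mean()
```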
**Label-to-image translation guided by simulation.** Herein we propose to leverage information from simulations to achieve semantic preservation while learning \(G\). To that end, we utilize simulated (synthetic) images \(\{I^{\mathrm{S}}_{\mathrm{S}}\in\mathbb{I}^{\mathrm{S}}_{\mathrm{S}}\}\) during training, which can have paired input label-maps \(\{L^{\mathrm{S}}_{\mathrm{S}}\in\mathbb{L}^{\mathrm{S}}_{\mathrm{S}}\}\) generated from the existing simulation framework (Figure 1(a)) and available for training. We encourage scene-consistent translations using a contrastive loss [24] on image patches, where corresponding patches from the source and translated images (positive pairs) are brought closer in a learned feature space. This space is defined as a projection of the manifold learned by the generator encoder, as illustrated in Figure 6(b). Meanwhile, non-corresponding (arbitrary) patches are treated as negative samples and hence pushed farther apart in that feature space. Compared to pixel-wise supervised losses, the contrastive loss is known to be less affected by image appearance. It was utilized in [24] for unpaired _image-to-image_ translation, _i.e_. when both source and target are of the same representation
being in image domain \(\mathbb{I}\). However, for _label-to-image_ translation, the source and target representations differ, _i.e._ while each pixel in \(L\) denotes a label, each pixel in \(I\) denotes a color. Thus, directly contrasting label and image features cannot successfully guide the network for the intended task, as also seen with the suboptimal performance of our ablation variant SimIT-CS. To alleviate this problem, herein we leverage available simulated images \(I_{\mathrm{S}}^{\mathrm{S}}\in\mathbb{I}_{\mathrm{S}}^{\mathrm{S}}\) as "surrogate" source images. This implicitly enables the use of existing simulated images \(I_{\mathrm{S}}^{\mathrm{S}}\). Note that these images, which require complex rendering operations, are used only for loss computation during the training of our method, so they are not needed during inference. This is in contrast to the earlier works [30, 31], where the rendered images are used as an input to the translation network and thus complex rendering operations are still required during real-time inference.
**Bidirectional label-image translation framework.** To extract domain-consistent feature representations, we propose to employ an additional generator \(F:\mathbb{I}_{\mathrm{R}}^{\mathrm{R}}\mapsto\mathbb{L}_{\mathrm{S}}^{\mathrm{R}}\) with \(F(\cdot)=F^{\mathrm{D}}(F^{\mathrm{E}}(\cdot))\) consisting of an encoder \(F^{\mathrm{E}}\) and a decoder \(F^{\mathrm{D}}\), acting in the opposite direction for translating \(I\to L\), _i.e._ mapping an image back to a label-map. Unlike [24], which contrasts features from \(G^{\mathrm{E}}\) operating on the source domain \(\mathbb{L}\) with _labels_, we propose to contrast the features of the segmenter encoder \(F^{\mathrm{E}}\), which is trained to extract features for inferring semantic content from images and is thus more suited for comparing _image_ similarity.
**Patch-based contrastive loss.** For an input image \(x\) and its translated image \(\hat{x}=G^{\mathrm{D}}(G^{\mathrm{E}}(x))\), we contrast feature maps \(z_{j}^{\mathrm{F}}=H_{j}^{\mathrm{G}}(F^{\mathrm{E}}(x))\) and \(\hat{z}_{j}^{\mathrm{F}}=H_{j}^{\mathrm{G}}(F^{\mathrm{E}}(\hat{x}))\), with a light-weight projection head \(H_{j}^{\mathrm{G}}\) mapping the features of the \(j\)-th hidden layer in \(F^{\mathrm{E}}\) for training \(G\). Given \(\hat{z}_{j}^{\mathrm{F},s}\) as a _query_ feature at a given spatial location \(s\) within the feature map \(\hat{z}_{j}^{\mathrm{F}}\) of the translated image, the corresponding input feature \(z_{j}^{\mathrm{F},s+}\) at the same location (denoted by \(+\)) is considered as a positive sample, while input features \(z_{j}^{\mathrm{F},s-}\) at arbitrary other locations act as negative samples. The noise contrastive estimation (NCE) loss [24] for the \(j\)-th layer feature can then be computed as the sum of the contrastive loss over the
Figure 6: (a) Schematic overview of our proposed method SimIT with (b) an illustration of contrastive loss. (c) Schematic overview of the ablated version SimIT-C, and (d) SimIT-CS. (e) Schematics of the generator architecture. The number below each convolutional block indicates the channel number. For \(G\) the input channel number is the number of classes (#class) and the output channel number is the number of image channels (3 for RGB images), for \(F\) vice versa.
\(S_{j}\) randomly sampled spatial locations, as follows:
\[\mathcal{L}_{\text{NCE},j}(z_{j}^{\text{F}},\hat{z}_{j}^{\text{F}})=\sum_{s=1}^{S _{j}}\mathcal{L}_{\text{CE}}(\hat{z}_{j}^{\text{F},s},z_{j}^{\text{F},s+},z_{j}^ {\text{F},s-}) \tag{2}\]
with
\[\mathcal{L}_{\text{CE}}(z,z^{+},z^{-})=\,-\log\left[\frac{\exp(d(z,z^{+})/ \tau)}{\exp(d(z,z^{+})/\tau)+\sum_{z^{-}}\exp(d(z,z^{-})/\tau)}\right], \tag{3}\]
where \(z\) is the feature vector of query; \(z^{+}\) and \(z^{-}\) are the feature vectors of positive and negative samples, respectively; \(d(z_{1},z_{2})\) is a distance metric between two latent vectors (herein the cosine distance); and \(\tau\) is a temperature parameter controlling the smoothing of joint likelihoods.
We herein propose to leverage the simulated image domain as surrogate source domain by computing contrastive loss between the translated image \(\tilde{I}_{R}^{\text{S}}=G^{\text{D}}(G^{\text{E}}(L_{\text{S}}^{\text{S}}))\) and simulated image \(I_{S}^{\text{S}}\) paired to \(L_{S}^{\text{S}}\), _i.e_ :
\[\mathcal{L}_{\text{CL}}^{\text{F}}\big{(}\{L_{\text{S}}^{\text{S}}\}\big{)}= \mathbb{E}_{L}\sum_{j=1}^{J}\mathcal{L}_{\text{NCE},j}\Big{(}H_{j}^{\text{G}} \big{(}F^{\text{E}}(I_{S}^{\text{S}})\big{)},H_{j}^{\text{G}}\big{(}F^{\text{ E}}(\tilde{I}_{\text{R}}^{\text{S}})\big{)}\Big{)} \tag{4}\]
computed over \(J\) feature layers to contrast information at different resolutions.
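As a rough sketch of Eqs. (2)–(4), the snippet below contrasts projected feature maps of a simulated (surrogate source) image and the corresponding translated image at one encoder layer; the tensor shapes, sampling strategy, and function names are illustrative assumptions, and the full loss sums this term over the \(J\) contrasted layers.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_tgt, num_patches=256, tau=0.07):
    """feat_src, feat_tgt: (B, C, H, W) projected features H_j(F_E(.)) of the
    simulated and translated images from the same encoder layer j."""
    B, C, H, W = feat_src.shape
    src = feat_src.flatten(2).permute(0, 2, 1)          # (B, HW, C)
    tgt = feat_tgt.flatten(2).permute(0, 2, 1)
    idx = torch.randperm(H * W, device=feat_src.device)[:num_patches]
    src = F.normalize(src[:, idx], dim=-1)              # S_j random spatial locations
    tgt = F.normalize(tgt[:, idx], dim=-1)
    logits = torch.bmm(tgt, src.transpose(1, 2)) / tau  # cosine similarities, (B, S, S)
    labels = torch.arange(idx.numel(), device=logits.device).expand(B, -1)
    # Diagonal entries are positives (same location); off-diagonals act as negatives.
    return F.cross_entropy(logits.flatten(0, 1), labels.flatten())
```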
For a more expressive and distinctive label space, we herein encode label-maps as one-hot representations, which prevents categorical labels, intended for class separation, from being misinterpreted as pixel intensities for regression.
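A small illustrative snippet of this one-hot encoding (class count and image size are assumed values):

```python
import torch
import torch.nn.functional as F

label_map = torch.randint(0, 5, (1, 256, 432))        # (B, H, W) integer class ids
one_hot = F.one_hot(label_map, num_classes=5)          # (B, H, W, #class)
one_hot = one_hot.permute(0, 3, 1, 2).float()          # (B, #class, H, W), the input to G
```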
**Learning image-to-label translation using cycle consistency loss.** In this work, we treat and train the image-to-label translator \(F\) to perform the pixel-wise semantic labeling task, known as image segmentation. Based on our cyclic framework, we use the existing label-to-image mapping network \(G\) to assess such segmentation accuracy, hence obviating the need for pixel-wise annotations of real images, which are difficult to procure. With that, we compute a cycle reconstruction loss between a real image \(I_{\text{R}}^{\text{R}}\) and the image reconstructed from the predicted label-map \(\tilde{I}_{\text{R}}^{\text{R}}=G(F(I_{\text{R}}^{\text{R}}))\). Image similarity is measured using the NCE loss, as it is less sensitive to appearance shifts, as follows
\[\mathcal{L}_{\text{CYC}}\big{(}\{I_{\text{R}}^{\text{R}}\}\big{)}=\mathbb{E}_{I}\sum_{j=1}^{J}\mathcal{L}_{\text{NCE},j}\Big{(}H_{j}^{\text{F}}\big{(}F^{\text{E}}(I_{\text{R}}^{\text{R}})\big{)},H_{j}^{\text{F}}\big{(}F^{\text{E}}(\tilde{I}_{\text{R}}^{\text{R}})\big{)}\Big{)} \tag{5}\]
with the projection head \(H_{j}^{F}\) for the \(j\)-th layer feature of encoder \(F\). For \(F\) and its discriminator \(D_{L}\) for the label representation direction, we employ a GAN training objective \(\mathcal{L}_{\text{GAN}}^{\text{F}}(\{I_{\text{R}}^{\text{R}}\},\{L_{\text{S}}^ {\text{S}}\})\) similar to the original direction \(G\), but with a different regularization parameter \(\gamma_{L}\).
**Training objective.** A schematic illustration of the proposed method SimIT summarizing the above components is shown in Figure 6(a). Network training is performed by alternately optimizing the following two losses:
\[\mathcal{L}_{\text{G}}(\{L_{\text{S}}^{\text{S}}\},\{I_{\text{R}}^ {\text{R}}\}) =\mathcal{L}_{\text{GAN}}^{\text{G}}(L_{\text{S}}^{\text{S}},I_{ \text{R}}^{\text{R}})+\lambda_{G}\cdot\mathcal{L}_{\text{CL}}^{\text{F}}(L_{ \text{S}}^{\text{S}}) \tag{6}\] \[\mathcal{L}_{\text{F}}(\{L_{\text{S}}^{\text{S}}\},\{I_{\text{R}}^ {\text{R}}\}) =\mathcal{L}_{\text{GAN}}^{\text{F}}(L_{\text{S}}^{\text{S}},I_{ \text{R}}^{\text{R}})+\lambda_{F}\cdot\mathcal{L}_{\text{CYC}}(I_{\text{R}}^{ \text{R}}) \tag{7}\]
with the loss weighting parameters \(\lambda_{G}\) and \(\lambda_{F}\).
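A schematic sketch of one alternating update of Eqs. (6) and (7) is given below; the loss terms are passed in as callables (e.g., the sketches above), and all names are placeholders rather than the actual training code.

```python
def training_step(G, F_net, L_sim, I_sim, I_real, opt_G, opt_F,
                  gan_loss_G, contrastive_loss, gan_loss_F, cycle_loss,
                  lambda_G=5.0, lambda_F=1.0):
    # Update G: adversarial term plus the simulation-guided contrastive term (Eq. 6).
    fake_image = G(L_sim)
    loss_G = gan_loss_G(I_real, fake_image) + lambda_G * contrastive_loss(I_sim, fake_image)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # Update F: adversarial term plus the NCE-based cycle reconstruction term (Eq. 7).
    pred_label = F_net(I_real)
    rec_image = G(pred_label)
    loss_F = gan_loss_F(L_sim, pred_label) + lambda_F * cycle_loss(I_real, rec_image)
    opt_F.zero_grad(); loss_F.backward(); opt_F.step()
    return loss_G.item(), loss_F.item()
```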
We compare our full model against the ablated variant SimIT-C, obtained by excluding the inverse mapping and its objective \(\mathcal{L}_{\text{F}}\) with the cyclic loss, as seen in Figure 6(c). Further ablating the paired simulation images yields the variant SimIT-CS, which instead uses the labels for contrasting (Figure 6(d)). For both ablated variants, encoder \(G^{\text{E}}\) is used for computing the contrastive loss \(\mathcal{L}_{\text{CL}}^{\text{G}}\).
**Network architecture.** We build our method on the StyleGAN2 framework [12] for adversarial training. We accordingly use a ResNet-based generator architecture [24] with four down- and up-sampling layers and 6 residual blocks (Figure 6(e)). We use skip connections between the down
and upsampling layers to avoid information loss. For the image synthesis decoder \(G^{\text{D}}\) we use weight demodulation [12]
\[w^{\prime\prime}_{ijk}=\frac{w^{\prime}_{ijk}}{\sqrt{\sum_{i,k}{w^{\prime}_{ijk}}^{2}+\epsilon}}\quad\text{with}\quad w^{\prime}_{ijk}=s_{i}\cdot w_{ijk} \tag{8}\]
where \(w_{ijk}\) is the convolution weight from the \(i\)-th input feature map to the \(j\)-th output feature map; \(k\) denotes the spatial footprint of the convolution; and the multiplier \(s_{i}\) is set to 1. To provide stochastic means for texture synthesis, especially important to generate the noisy speckle patterns of ultrasound images, we perturb each feature layer with an additive Gaussian (noise) image scaled by learned weights following [11]. The output layer for \(G\) and \(F\) is linear and sigmoid, respectively. We use ReLU activation for all intermediate layers. For both \(D_{I}\) and \(D_{L}\), we adopt the feedforward discriminator architecture in [12]. In training we use randomly cropped image patches, which enables the discriminator to ignore global scene differences between simulation and real domains.
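A minimal sketch of the weight demodulation in Eq. (8) for a standard convolution weight; the (out, in, k, k) tensor layout and the epsilon value are our assumptions.

```python
import torch

def demodulate(weight, s=None, eps=1e-8):
    """weight: (out_ch, in_ch, k, k) convolution kernel; s: optional per-input-channel scale s_i."""
    if s is not None:                                    # w'_ijk = s_i * w_ijk (here s_i = 1)
        weight = weight * s.view(1, -1, 1, 1)
    # Normalize each output filter j by the L2 norm over input channels i and footprint k.
    denom = torch.sqrt((weight ** 2).sum(dim=(1, 2, 3), keepdim=True) + eps)
    return weight / denom

w = torch.randn(64, 32, 3, 3)
w_demod = demodulate(w)
```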
**Experimental data utilization.** _LaparoSim_ consists of 1850 synthetic laparoscopic image-label pairs simulated from a 3D abdominal model. We randomly split this data into train-validation-test sets with an 80-10-10% ratio. _Style-C_ consists of 2100 images from the Cholec dataset. We excluded all frames with surgical tools, since surgical tools are not handled in our simulation. For Style-C testing, we used the 213 frames that have ground-truth labels provided in [43], and the remaining frames were used as Style-C training data. _Style-H_ consists of 2262 frames in total, which was randomly split into an 80-20% ratio for training and testing, respectively. Some Style-H images had major blurring artifacts due to camera motion, so we manually removed any blurry frames, since we treat the frames separately without any temporal information and such temporal effects are also not represented in the label-maps. All the images were resized to \(256\times 432\).
_USSim_ consists of 6669 simulated image-label pairs, which we resized to \(256\times 354\) and randomly split into training-validation-test sets with 80-10-10% ratio. _GE-E10 style_ consists of 2328 ultrasound images from 24 patients. We randomly selected images from 20 patients for training and 4 for testing, resulting in 1902 training images and 426 test images.
_GTA_ dataset [39] contains 24 966 image-label pairs. We followed its original train-validation-test split. _Cityscapes_ dataset [40] contains 3475 image-label pairs of street scenes from German cities. We used its original training set for our network training, and its validation set with ground-truth labels for testing our semantic segmentation output. As in [24], we resized all the images to \(256\times 256\).
**Implementation.** We implemented our method in PyTorch [49]. We used Adam [50] optimizer with parameters \(\beta=(0,0.99)\) and a learning rate of \(10^{-3}\) for \(G\) and \(10^{-4}\) for \(F\). We applied adaptive discriminator augmentation using its default hyperparameters [51]. The generator is trained on image patches of size \(256\times 256\) while the discriminator receptive field is \(64\times 64\). Our network training involves alternating updates of \(G\) and \(F\). We trained our models for 400 epochs. To compute the contrastive loss, we extract features from the four (stride-2) convolution layers in the encoder at 256 locations randomly selected for each mini-batch. We use a two-layer MLP with 256 units at each layer and ReLU activation for \(H^{\text{G}}\), and the identity mapping for \(H^{\text{F}}\). NCE temperature parameter \(\tau\) is set to 0.07 following [24]. Generator loss weighting \(\lambda_{G}\) is set to 5 for all the experiments. \(\lambda_{F}\) is set to 1 for the laparoscopy and ultrasound, and 0.5 for the gaming experiment. R1 regularization parameters \(\gamma_{I}\) and \(\gamma_{L}\) are set to 0.01 and 1.0, respectively, for all the experiments. For all compared methods we used their public implementations provided by the corresponding authors with their default hyperparameters.
**Evaluation metrics.** We use the following for quantitative evaluation:
\(\bullet\)**Image realism.**_Frechet inception distance_ (FID) [52] is common for assessing the quality of images generated by GANs, by comparing the feature distribution between two sets of images, herein real and translated, using feature vectors of an ImageNet-pretrained Inception network. _Kernel inception distance_ (KID) [53] is an alternative metric to evaluate GAN performance. KID is computed as the squared maximum mean-discrepancy between the features of Inception network. KID is then not biased by the number of samples used, unlike FID.
\(\bullet\)**Content preservation.** For laparoscopy images, content preservation is assessed using structural similarity between simulated and translated images, quantified via _Structural similarity index_ (SSIM)
computed as SSIM(\(x,y\))=\(\frac{(2\mu_{x}\mu_{y}+c_{1})(2\sigma_{xy}+c_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+c_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+c_{2})}\) with regularization constants \(c_{1}\) and \(c_{2}\), local intensity means \(\mu_{x}\) and \(\mu_{y}\), local standard deviations \(\sigma_{x}\) and \(\sigma_{y}\), and \(\sigma_{xy}\) being the covariance between \(x\) and \(y\). To compute this metric, we used the Python package _scikit-image_ with its default parameters. For ultrasound images, due to potential artifacts, typical speckle noise, and a lack of sharp edges, we instead used the similarity of bone surfaces for assessing structure preservation. To that end, we extracted bone surfaces from each image using [41]. This method is based on local phase symmetry in B-mode images, and operates by aggregating images filtered by log-Gabor kernels with different orientations \(r\) and scales \(m\) defined as
\[G_{r,m}(\omega,\phi)=\exp\left(-\frac{\log(\omega/\omega_{0})^{2}}{2\log( \kappa_{m}/\omega_{0})^{2}}-\frac{(\phi-\phi_{r})^{2}}{2\sigma_{\phi}^{2}} \right), \tag{9}\]
where parameters \(\phi_{r}\), \(\omega_{0}\), \(\kappa_{m}\), and \(\sigma_{\phi}\) define the filter orientation, center frequency, scaling factor, and angular bandwidth of the employed filters, respectively. Following [45], we set \(\kappa_{m}/\omega_{0}=0.25\) and \(\phi_{r}=[\frac{1}{6}\pi,\frac{3}{6}\pi,\frac{5}{6}\pi]\). To assess preservation, we report _intersection over union_ (IoU) of pixels belonging to bone surfaces extracted from corresponding simulated and translated images. We exclude from the computations the top 25 pixels of the images, corresponding to skin reflections. A minimal sketch of this log-Gabor filter and the IoU computation is given after these metric descriptions.
\(\bullet\)**Segmentation.** For the laparoscopic CholecSeg8k dataset, we trained a semantic segmentation network with a ResNet50 architecture initialized with ImageNet-pretrained weights, using the PyTorch segmentation library [54], following the training settings from a public implementation on the Kaggle repository of this dataset. We randomly picked video24 for validation, video{09,17,26,28,43} for testing, and the rest for training. We report the F1 score for six classes that are also in our simulation. For the Cityscapes dataset in the gaming application, we trained a segmentation network suggested for this dataset in [24], with the DRN-D22 architecture [42] at \(256\times 128\) resolution with the default parameters from its public implementation. Following [24], we report Cityscapes semantic segmentation results using _mean average precision_ (mAP) over the classes; _pixel-wise accuracy_ (pixAcc) as the percentage of correctly classified pixels; and _average class accuracy_ (classAcc) over the given classes.
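As referenced above, a minimal NumPy sketch of the log-Gabor kernel of Eq. (9) (constructed in the frequency domain) and of the bone-surface IoU is given below; the angular bandwidth value and grid conventions are assumptions, and the full phase-symmetry aggregation over orientations and scales follows [41].

```python
import numpy as np

def log_gabor(shape, omega0, kappa_ratio=0.25, phi_r=np.pi / 2, sigma_phi=0.55):
    """Frequency-domain log-Gabor kernel of Eq. (9); angle wrap-around is ignored for brevity."""
    h, w = shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    omega = np.sqrt(fx ** 2 + fy ** 2)
    omega[omega == 0] = 1e-12                            # avoid log(0) at the DC component
    phi = np.arctan2(fy, fx)
    radial = np.exp(-np.log(omega / omega0) ** 2 / (2 * np.log(kappa_ratio) ** 2))
    angular = np.exp(-(phi - phi_r) ** 2 / (2 * sigma_phi ** 2))
    return radial * angular

def bone_iou(mask_sim, mask_trans, skip_top=25):
    """IoU of binary bone-surface masks, ignoring the top rows (skin reflections)."""
    a, b = mask_sim[skip_top:] > 0, mask_trans[skip_top:] > 0
    inter, union = np.logical_and(a, b).sum(), np.logical_or(a, b).sum()
    return inter / union if union > 0 else 1.0
```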
## Appendix
Additional sample images of image-to-label and label-to-image translations are shown in Figures 7 and 8, respectively.
|
2307.02442 | Robotic Sonographer: Autonomous Robotic Ultrasound using Domain
Expertise in Bayesian Optimization | Ultrasound is a vital imaging modality utilized for a variety of diagnostic
and interventional procedures. However, an expert sonographer is required to
make accurate maneuvers of the probe over the human body while making sense of
the ultrasound images for diagnostic purposes. This procedure requires a
substantial amount of training and up to a few years of experience. In this
paper, we propose an autonomous robotic ultrasound system that uses Bayesian
Optimization (BO) in combination with the domain expertise to predict and
effectively scan the regions where diagnostic quality ultrasound images can be
acquired. The quality map, which is a distribution of image quality in a
scanning region, is estimated using Gaussian process in BO. This relies on a
prior quality map modeled using expert's demonstration of the high-quality
probing maneuvers. The ultrasound image quality feedback is provided to BO,
which is estimated using a deep convolution neural network model. This model
was previously trained on database of images labelled for diagnostic quality by
expert radiologists. Experiments on three different urinary bladder phantoms
validated that the proposed autonomous ultrasound system can acquire ultrasound
images for diagnostic purposes with a probing position and force accuracy of
98.7% and 97.8%, respectively. | Deepak Raina, SH Chandrashekhara, Richard Voyles, Juan Wachs, Subir Kumar Saha | 2023-07-05T17:12:48Z | http://arxiv.org/abs/2307.02442v1 | # Robotic Sonographer: Autonomous Robotic Ultrasound using Domain Expertise in Bayesian Optimization
###### Abstract
Ultrasound is a vital imaging modality utilized for a variety of diagnostic and interventional procedures. However, an expert sonographer is required to make accurate maneuvers of the probe over the human body while making sense of the ultrasound images for diagnostic purposes. This procedure requires a substantial amount of training and up to a few years of experience. In this paper, we propose an autonomous robotic ultrasound system that uses Bayesian Optimization (BO) in combination with the domain expertise to predict and effectively scan the regions where diagnostic quality ultrasound images can be acquired. The quality map, which is a distribution of image quality in a scanning region, is estimated using Gaussian process in BO. This relies on a prior quality map modeled using expert's demonstration of the high-quality probing maneuvers. The ultrasound image quality feedback is provided to BO, which is estimated using a deep convolution neural network model. This model was previously trained on database of images labelled for diagnostic quality by expert radiologists. Experiments on three different urinary bladder phantoms validated that the proposed autonomous ultrasound system can acquire ultrasound images for diagnostic purposes with a probing position and force accuracy of \(98.7\%\) and \(97.8\%\), respectively.
## I Introduction
Ultrasound is the most frequently used imaging modality for diagnostic and surgical interventions due to its low cost, non-ionizing nature, portability and real-time feedback. Ultrasound offers several advantages over other imaging modalities, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT); however, diagnosis by ultrasound is highly operator-dependent [1]. This is because of the skills required for manual control of the probe and quality assessment of acquired images. Sonographers employ both directed and random exploration strategies to search for diagnostic-quality images. The ultrasound probe is moved within the region of interest, initially through hand maneuvers and later through fine adjustments of the probe's translational and rotational motion. These maneuvers also include the safe and precise adjustment of the pressure through the probe while simultaneously analyzing the quality of acquired images. Such an intricate procedure requires a great deal of skill, focus, experience and manual effort from sonographers. In rural settings, the availability of skilled sonographers is limited [2], and alternative solutions are required.
In order to reduce the burden on experts, a Robotic Ultrasound System (RUS) is introduced. RUS consists of a dexterous robotic arm and an ultrasound machine with its probe attached to the end effector of the robot, as shown in Fig. 1. RUS can help ensure the accuracy, safety and consistency of the ultrasound procedures. Recently, in order to address the aforementioned needs, several telerobotic or human-assisted ultrasound systems have been proposed [3, 4, 5, 6, 7]. Compared to these systems, a fully automated ultrasound system offers various potential benefits, including shorter procedure time, a shorter learning curve, minimal communication delays and a reduced cognitive load [8]. However, there are key challenges for effective autonomous RUS. One of the most important challenges has to do with the hand motions required for ultrasound image acquisition. Such images exhibit considerable inter- and intra-subject variability and the image quality is highly dependent on the precise position, orientation and pressure of the ultrasound probe. With incorrect probe maneuvers, the resulting image presents noise, artifacts, blurred boundaries and poor visibility, thereby making it unacceptable for diagnosis. Sonographers rely on visual and haptic feedback, anatomical information, and diagnostic expertise from prior medical education to rapidly acquire high-quality images. Therefore, the RUS must locate the regions with acceptable diagnostic image quality for inter- and intra-patient procedures in the fewest exploration steps.
In this paper, we present an autonomous robotic ultrasound system that uses the domain-expertise in Bayesian Optimization (BO)-based search to scan the anatomical regions for acquiring diagnostic quality ultrasound images, thereby eliminating the need to thoroughly scan the entire region. The _key contributions_ of our work are as follows:
1. We proposed a prior in BO, gleaned from the expert's demonstration of high image quality probing poses,
Fig. 1: Robotic ultrasound system with probe attached to its end-effector [3], conducting a urinary bladder ultrasound.
termed as _expert's prior_. BO then estimates the region's unknown image quality as a semi-parametric Gaussian process model with expert's prior.
2. A novel _image quality metric_ is proposed, trained using a dataset of ultrasound images labelled for diagnostic quality by expert radiologists, which provides image feedback of the region to the BO.
3. We experimentally validated the proposed system using three urinary bladder phantoms requiring different probing maneuvers to acquire high image quality. The results show that our system consistently and autonomously acquires high-quality ultrasound images in all phantoms.
We believe that the use of BO combined with domain expertise to perform autonomous ultrasound scanning will lead to less reliance on expert availability and a wider application in remote and underserved populations.
### _Related Work_
**Autonomous Robotic Ultrasound Systems:** In recent years, a range of autonomous robotic ultrasound systems has been proposed to minimize human intervention. Earlier works used image features for ultrasound image-based visual servoing [9, 10, 11]. Later, various systems used pixel-based confidence map methods [12] and segmentation of structures for optimizing the probe poses and forces [13, 14, 15, 16]. However, these image feature- and pixel-based approaches are modality specific, computationally expensive and do not consider the significance of diagnostic aspects. Hennersperger _et al._[17] developed an autonomous system using a pre-operative MRI scan; however, MRI is quite expensive to acquire. Ma _et al._[18] proposed autonomous lung scanning by localizing the target region using RGB-D sensor data. However, the system used only force feedback and did not rely on ultrasound image feedback, thereby limiting its diagnostic accuracy.
Recently, Li _et al._[19, 20] proposed a deep Reinforcement Learning (RL) framework to control the probe for spinal ultrasound, incorporating image quality optimization into the reward formulation. However, the success of these systems is limited to phantoms and patients whose data was included during training. Moreover, deploying RL in medical systems is quite challenging, as it requires vast amount of physical interaction with the human body and poses safety and ethical concerns. In contrast to these systems, the proposed autonomous ultrasound system narrows down the area to be scanned using BO, eliminating the need to thoroughly scan the entire region. We further propose using domain expertise gleaned from the experts in the form of BO prior and image quality metrics, in order to acquire diagnostic-quality ultrasound images.
**Bayesian Optimization for Medical Robots:** Due to the fast optimization capability, BO has been adopted for safety-critical robotic medical procedures, such as autonomous robotic palpation [21], semi-autonomous surgical robot [22], controller tuning of hip exoskeletons [23] and autonomous robotic ultrasound [24, 25]. Our work is a non-trivial extension to the work by Goel _et al._[25]. They proposed using BO for autonomous ultrasound utilizing segmentation of the vessel in the ultrasound image as feedback to the BO for scanning the region with high vessel density. They used hybrid position-force control to move the robot in \((x,y)\) plane while maintaining constant force along the \(z-\)direction to the point of contact. In contrast, our work suggests two technical improvements to enhance the practicality of this approach. First, we recommend using a deep learning model that generates quality scores for ultrasound images as feedback to the BO instead of relying on a segmented mask of the tissue or structure. The latter approach can be very time-consuming and labor-intensive for experts as they would need to annotate anatomical structures' boundaries, taking into account the ultrasound image noise and variability due to machine settings, probe pressure, and patient anatomy. Second, we expand the capabilities of the BO by enabling it to search for the optimal scanning region along the \((x,y,z)\)-axis. Notably, the \(z\)-axis is under variable force control to account for varying physiological conditions [26].
**Domain Expertise in BO:** BO can utilize the expert's knowledge in the form of priors (beliefs) that the expert (practitioner) has on the potential location of the optimum. Such techniques have been mostly used for hyper-parameter tuning of image and text datasets [27], open-source machine learning datasets [28] and robot simulation experiments [29]. A few recent works have utilized expert's knowledge in the form of prior for medical robots [30, 31]. Ayvali _et al._[30] propose robotic palpation to detect tissue abnormalities using BO. They modified the acquisition function of BO, whose value peaks at the user-provided locations. Zhu _et al._[31] proposed an autonomous robotic auscultation system for locating the optimal sound quality location using BO. They used visual registration of the patient to locate the anatomical landmarks for obtaining a prior observation model. Inspired by these works, we propose BO for autonomous ultrasound leveraging a prior quality map gleaned from expert's demonstrations.
## II Methodology
The pipeline of the autonomous robotic ultrasound system is shown in Fig. 2. In the _offline phase_, the expert demonstrates the potential probing poses for acquiring diagnostic-quality images. These demonstrations are used to build a _prior quality map_, which encodes a prior anatomical approximation of the expected image quality. We also built a dataset of urinary bladder ultrasound images of humans and phantoms with labelled image qualities and trained a deep learning model for the image quality assessment metric. In the _online phase_, we use BO to select the probe poses that find the optimal ultrasound image quality, utilizing both the prior map and the quality metric gleaned from the domain expertise.
### _Bayesian optimization formulation_
We use BO to search adaptively for probing poses that yield a high-quality ultrasound image within a specified
anatomical region. Let \(A\) be the region of interest on the human body enclosing the anatomical structure, then the objective of BO is to solve:
\[\max_{\mathbf{p}\in A}q(\mathbf{\mathcal{I}}(\mathbf{p})) \tag{1}\]
where \(q(\mathbf{\mathcal{I}}(\mathbf{p}))\) denotes the quality score of ultrasound image \(\mathbf{\mathcal{I}}\) at probe pose \(\mathbf{p}\). BO computes a probabilistic estimate of the unknown quality map \(q(\mathbf{\mathcal{I}}(\mathbf{p}))\) across the human body using the domain expertise in the form of a _prior_ and an _image quality metric_. An _acquisition function_ is optimized to yield the new probing pose. Once the new observation is acquired, the estimate is re-fitted to the data and the process is repeated until a termination criterion is reached: either the maximum number of iterations \(N_{max}\) or the estimated quality-score threshold required for adequate diagnosis. The overall algorithm is outlined in Algorithm 1.
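To make this loop concrete, a high-level Python sketch is given below (Algorithm 1 itself is not reproduced here); the callables, the quality threshold, and the stopping logic are placeholder assumptions.

```python
import numpy as np

def bo_scan(acquire_image, quality_model, gp, propose_next_pose,
            q_threshold=0.9, n_max=50):
    """Query probe poses until the quality threshold or the iteration budget N_max is reached."""
    poses, qualities = [], []
    for _ in range(n_max):
        p = propose_next_pose(gp, poses, qualities)   # maximize the acquisition function
        image = acquire_image(p)                      # move the robot, grab an ultrasound frame
        q = quality_model(image)                      # CNN-based quality score q(I(p))
        poses.append(p); qualities.append(q)
        gp.fit(np.asarray(poses), np.asarray(qualities))
        if q >= q_threshold:                          # adequate diagnostic quality reached
            break
    best = int(np.argmax(qualities))
    return poses[best], qualities[best]
```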
#### Iii-B1 Expert's prior
A common estimator used in BO is the Gaussian Process (GP) model, which defines an unknown function \(f\) by assigning to each probe pose \(\mathbf{p}\) a random variable \(f(\mathbf{p})\), such that any finite collection of them is jointly Gaussian. A GP for the unknown function \(f\) is defined by the mean function \(\mathbf{\mu}(\cdot)\) and covariance or kernel function \(\mathbf{\kappa}(\cdot,\cdot)\). Given the function value estimates \(\bar{\mathbf{f}}=[f(\mathbf{p}_{1}),\cdots,f(\mathbf{p}_{n})]\) at probe poses \(\bar{\mathbf{p}}=[\mathbf{p}_{1},\cdots,\mathbf{p}_{n}]\), GP regression can predict the function \(f\) at a new probe pose \(\mathbf{p}^{*}\) as a Gaussian distribution given by:
\[\mathcal{P}(f(\mathbf{p}^{*})|\mathbf{p}^{*},\bar{\mathbf{p}},\bar{\mathbf{f}})=\mathcal{N}( \mathbf{k}\mathbf{K}^{-1}\bar{\mathbf{f}},\mathbf{\kappa}(\mathbf{p}^{*},\mathbf{p}^{*})-\mathbf{k}\mathbf{K}^ {-1}\mathbf{k}^{T}) \tag{2}\]
where,
\[\mathbf{k}=\begin{bmatrix}\mathbf{\kappa}(\mathbf{p}_{*},\mathbf{p}_{1})&\cdots&\mathbf{\kappa}( \mathbf{p}_{*},\mathbf{p}_{n})\end{bmatrix}\]
\[\mathbf{K}=\begin{bmatrix}\mathbf{\kappa}(\mathbf{p}_{1},\mathbf{p}_{1})&\cdots&\mathbf{\kappa}( \mathbf{p}_{1},\mathbf{p}_{n})\\ \vdots&\ddots&\vdots\\ \mathbf{\kappa}(\mathbf{p}_{n},\mathbf{p}_{1})&\cdots&\mathbf{\kappa}(\mathbf{p}_{n},\mathbf{p}_{n}) \end{bmatrix}\]
We opted to use a combination of two kernel functions, namely the radial basis function and white noise function, as their combination improved estimations for structures present in ultrasound images [25]. The formulation of the kernel is:
\[\kappa(\mathbf{p}_{i},\mathbf{p}_{j})=\sigma_{r}\exp\left(\frac{-||\mathbf{p}_{i}-\mathbf{p}_ {j}||^{2}}{2l^{2}}\right)+\sigma_{w}\mathbf{I} \tag{3}\]
where \(\sigma_{r}\) is the overall variance, \(l\) is the length-scale, \(\sigma_{w}\) is the variance of noise and \(\mathbf{I}\) is the identity matrix. We further denote the set of image qualities as \(\bar{\mathbf{q}}=[q_{1},\cdots,q_{n}]\).
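A compact NumPy sketch of Eqs. (2) and (3) is given below; the hyperparameter values are illustrative assumptions and, for brevity, the kernel matrix is inverted directly rather than via a Cholesky factorization.

```python
import numpy as np

def kernel(P1, P2, sigma_r=1.0, length=0.05, sigma_w=1e-3, add_noise=False):
    """Eq. (3): RBF kernel between probe-pose sets, plus a white-noise term on the diagonal."""
    d2 = ((P1[:, None, :] - P2[None, :, :]) ** 2).sum(-1)
    K = sigma_r * np.exp(-d2 / (2 * length ** 2))
    if add_noise:
        K = K + sigma_w * np.eye(P1.shape[0])
    return K

def gp_predict(P_train, f_train, p_star):
    """Eq. (2): posterior mean and variance of the residual f at a new probe pose p*."""
    K = kernel(P_train, P_train, add_noise=True)
    k = kernel(p_star[None, :], P_train)                  # (1, n)
    K_inv = np.linalg.inv(K)
    mean = float(k @ K_inv @ f_train)
    var = float(kernel(p_star[None, :], p_star[None, :]) - k @ K_inv @ k.T)
    return mean, var
```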
In the GP, we propose using prior knowledge gleaned from the expert's demonstrations to reduce exploration and to capture how probe poses affect the ultrasound image quality across different human anatomies. Inspired by the work in [31], we formulate the GP as a semi-parametric GP model, with its prior \(\mathcal{E}(\mathbf{\theta})\) modeled as a Gaussian process with latent parameters \(\mathbf{\theta}\), representing the mean \(\mathbf{\mu}_{\mathbf{\theta}}\) and covariance function \(\mathbf{\kappa}\). The parameters \(\mathbf{\theta}\) are initially inferred from observed probe poses and ultrasound image qualities, which the expert provides by maneuvering the probe to the potential poses of optimal image quality across different subjects. During online BO, \(\mathbf{\theta}\) is inferred using the history of points in \((\bar{\mathbf{p}},\bar{\mathbf{q}})\) and the prior \(\mathcal{E}(\mathbf{\theta})\) with Maximum A Posteriori (MAP) estimation, using an L-BFGS solver as:
\[\mathbf{\theta}^{*}=\arg\max_{\mathbf{\theta}}\mathcal{L}(\mathbf{\theta}|\bar{\mathbf{p}}, \bar{\mathbf{q}})\mathcal{E}(\mathbf{\theta}) \tag{4}\]
where \(\mathcal{L}(\mathbf{\theta}|\bar{\mathbf{p}},\bar{\mathbf{q}})=\prod\mathbb{P}(q_{i}|\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{p}_{i}),\mathbf{K})\) is the likelihood function and \(\mathbb{P}(.)\) denotes the probability density function of the Gaussian distribution \(\mathcal{N}(\mathbf{q}_{i}|\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{p}_{i}),\mathbf{K})\). Since the GP models the residual function \(f(\mathbf{p})\) with respect to the prior, we subtract the prior from the image quality as \(f(\mathbf{p}_{i})=q_{i}-\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{p}_{i})\), before re-estimating the GP.
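A sketch of the MAP estimate in Eq. (4), assuming for illustration that the prior \(\mathcal{E}(\mathbf{\theta})\) reduces to a Gaussian over the latent parameters and that \(\mathbf{\mu}_{\mathbf{\theta}}\) is supplied as a callable; SciPy's L-BFGS-B routine serves as the solver.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_log_posterior(theta, P, q, K, mu_fn, theta_prior_mean, theta_prior_cov):
    """-log[ L(theta | P, q) * E(theta) ] for the MAP estimate in Eq. (4)."""
    resid = q - mu_fn(theta, P)                           # q_i - mu_theta(p_i)
    log_lik = multivariate_normal.logpdf(resid, mean=np.zeros(len(q)), cov=K)
    log_prior = multivariate_normal.logpdf(theta, mean=theta_prior_mean, cov=theta_prior_cov)
    return -(log_lik + log_prior)

def fit_map(theta0, P, q, K, mu_fn, prior_mean, prior_cov):
    res = minimize(neg_log_posterior, theta0,
                   args=(P, q, K, mu_fn, prior_mean, prior_cov), method="L-BFGS-B")
    return res.x
```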
#### Iii-B2 Acquisition Function
In each iteration of BO, the next probe pose to observe the image quality is determined using
Fig. 2: Overview of the pipeline for autonomous robotic ultrasound using online Bayesian optimization (BO), and offline domain expertise to obtain a prior quality map and to learn image quality assessment metric for providing feedback to BO.
an acquisition function. We use Expected Improvement (\(EI\)), the most commonly used acquisition function. If the posterior mean and variance of the GP are given by \(\boldsymbol{\mu_{\tilde{f}}}(\boldsymbol{p})\) and \(\boldsymbol{\sigma_{\tilde{f}}^{2}}(\boldsymbol{p})\), then \(EI\) can be formulated as:
\[EI(\boldsymbol{p})=\begin{cases}\big(\boldsymbol{\mu_{\tilde{f}}}(\boldsymbol{p})-f^{+}(\boldsymbol{p})-\xi\big)\boldsymbol{\Phi}(\boldsymbol{Z})+\boldsymbol{\sigma_{\tilde{f}}}(\boldsymbol{p})\boldsymbol{\phi}(\boldsymbol{Z})&\text{if}\;\boldsymbol{\sigma_{\tilde{f}}^{2}}(\boldsymbol{p})>0\\ 0&\text{if}\;\boldsymbol{\sigma_{\tilde{f}}^{2}}(\boldsymbol{p})=0\end{cases} \tag{5}\]
where \(f^{+}(\boldsymbol{p})\) is the best image quality observed so far, \(\xi\) is an exploration parameter, \(\boldsymbol{Z}=\big(\boldsymbol{\mu_{\tilde{f}}}(\boldsymbol{p})-f^{+}(\boldsymbol{p})-\xi\big)/\boldsymbol{\sigma_{\tilde{f}}}(\boldsymbol{p})\), and \(\boldsymbol{\Phi}\) and \(\boldsymbol{\phi}\) denote the cumulative distribution and probability density functions of the standard normal distribution, respectively.
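A small NumPy sketch of this closed form is given below; `mu` and `sigma` are the GP posterior mean and standard deviation at candidate poses, and the default exploration parameter matches the \(\xi=0.1\) used later in the experiments.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.1):
    """Closed-form EI at candidate probe poses given the GP posterior mean/std."""
    mu, sigma = np.asarray(mu, dtype=float), np.asarray(sigma, dtype=float)
    ei = np.zeros_like(mu)
    mask = sigma > 0
    z = (mu[mask] - f_best - xi) / sigma[mask]            # standardized improvement Z
    ei[mask] = (mu[mask] - f_best - xi) * norm.cdf(z) + sigma[mask] * norm.pdf(z)
    return ei
```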
consisting of a \(7\)-DoF Sawyer collaborative robotic arm (Rethink Robotics, Germany) with a Micro Convex MC10-5R10S-3 transducer attached to its end-effector. The US image is captured by the Telemed Ultrasound machine (Telemed Medical Systems, Italy) and is transferred to a laptop. The ultrasound was performed on a urinary bladder phantom (YourDesignMedical, USA). We customized this phantom with \(0.39\)-inch-thick (subject to manual cutting error) rectangular layers of ballistic gel in order to approximately represent the patient's body with physiological differences. Thus, we present our results using three phantoms, termed as \(P0\), \(P1\) and \(P2\), having \(0\), \(1\) and \(2\) layers, as shown in Fig. 3.
The BO and image quality model have been implemented in Python \(3.8\) and PyTorch \(1.11\). ROS has been used to integrate and establish communication among all components of the setup. For BO Algorithm 1, we used \(\xi=0.1\), \(N_{max}=50\), \(A\in((0,0.15)m,(0,0.15)m,(8-20)N)\) for \((x,y,f_{z})\). The prior \(\mathcal{E}(\boldsymbol{\theta})\) has been modeled using GP by fitting it to \(10\) potential probing poses and corresponding image qualities.
### _Performance of quality assessment model_
We trained the ultrasound image quality assessment model, explained in Section II-B, using Categorical Cross Entropy (CCE) as the loss function. We split the dataset into training and testing sets with a \(90:10\) ratio. We also used a transfer learning approach [37], initializing the proposed model with ImageNet pre-trained weights. Stochastic gradient descent was used as the optimizer with a learning rate of \(0.005\), momentum of \(0.9\) and weight decay of \(0.0005\). The size of the input image to the model is \(224\times 224\), the batch size is \(16\) and the network is trained for \(100\) epochs. The results in Table I show that the proposed model (ResNet50+MS+BP) achieved a \(3.01\%\) increase in accuracy on the test set when compared to the ResNet50+BP model proposed in [34].
### _Comparing different BO strategies_
In order to analyze the effectiveness of the proposed methodology, we compared BO with a zero prior to BO with the proposed expert's prior. We illustrate these search strategies using two types of image feedback: the mean of the segmented bladder mask in the ultrasound image (\(q_{S}\)), as used in [25], and the proposed ultrasound image quality metric learned from the expert's ratings (\(q_{E}\)). For segmentation, we used a U-net-based segmentation model proposed in [38]. Further, each of the feedback strategies has been compared with different search spaces, first considering probe motion along the \(x\)- and \(y\)-axes of the phantom, and second along the \(x\)-, \(y\)- and \(z\)-axes, where the \(z\)-axis is under force control \((f_{z})\). The estimated quality maps obtained using these strategies for \(P0\) are shown in Fig. 4, where red indicates high-quality regions and blue indicates low-quality regions. The black dots over the map represent the queried probe positions over the phantom during the optimization. The first column in Fig. 4 shows the quality map obtained using the uniform movement of the probe over the phantom, which has been considered as the approximate ground truth quality map. For both quality types, the ground truth has been obtained using the approximate desired force (\(f_{d}\)) of \(14N\), \(16N\) and \(18N\) for \(P0\), \(P1\) and \(P2\), respectively, which gives the best image quality in these phantoms. We present results for \(3\) cases to illustrate the effect of searching with appropriate force in these phantoms: (i) \(f_{z}<f_{d}\): \(f_{z}\) is constant and equal to \(f_{d}-4\), (ii) \(f_{z}=f_{d}\) and (iii) when \(f_{z}\) is variable.
We compared the quality maps of these strategies through a quantitative analysis using three metrics: (i) the sum of quality differences of the top \(n\) points, (ii) the top quality, and (iii) the Zero Normalized Cross Correlation (ZNCC), as shown in Table II. The numbers in the table represent the average value of the metrics over the \(3\) tests on each phantom. These metrics have been computed with respect to the approximate ground truth for the phantom. The sum of differences of the top \(n\) points compares the quality of the images acquired at the \(n\) highest quality values, the top quality compares the highest image quality score, and ZNCC evaluates the overall similarity of the quality map acquired during the search. Quality differences close to \(0\), and top quality and ZNCC values close to \(1\), indicate a better estimation of the quality map. The quality maps in Fig. 4 with less scattered probe points (less exploration) and more points in the high-quality region (red) represent a better search strategy.
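A possible implementation of these three metrics is sketched below. Since the exact definition of the top-\(n\) difference is not spelled out above, the sketch takes one plausible reading: comparing the \(n\) best qualities found by a search against the \(n\) best qualities of the approximate ground-truth map.

```python
# Metric sketches (assumption: one plausible reading of the top-n difference).
import numpy as np

def top_n_quality_difference(gt_map, searched_qualities, n):
    gt_top = np.sort(np.asarray(gt_map).ravel())[-n:]
    found_top = np.sort(np.asarray(searched_qualities))[-n:]
    return float(np.sum(np.abs(gt_top - found_top)))

def top_quality(searched_qualities):
    return float(np.max(searched_qualities))

def zncc(map_a, map_b):
    # zero-normalized cross correlation between two quality maps of equal shape
    a = (map_a - map_a.mean()) / map_a.std()
    b = (map_b - map_b.mean()) / map_b.std()
    return float(np.mean(a * b))
```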
From the results in Fig. 4 and Table II, it has been found that BO using the segmented image as the quality score in \((x,y)\) space with \(f_{z}\leq f_{d}\) is too exploratory (low ZNCC), with a lot of points spread over the low-quality region of the phantom. However, the quality maps obtained using the expert's image quality metric show fewer explorations, with most of the probe positions in the high-quality region of the phantom. Due to noise and shadows in the ultrasound image, the segmentation results
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Image quality** & \multicolumn{3}{c}{**ResNet50+BP [34]**} & \multicolumn{3}{c}{**ResNet50+MS+BP (Proposed)**} \\ \cline{2-7}
**score** & **Precision** & **Recall** & **Accuracy** & **Precision** & **Recall** & **Accuracy** \\ \hline
**1** & \(92.00\) & \(97.87\) & \(94.84\) & \(93.88\) & \(97.87\) & \(95.88\) \\
**2** & \(83.33\) & \(68.96\) & \(75.47\) & \(83.87\) & \(89.65\) & \(86.67\) \\
**3** & \(65.22\) & \(75.00\) & \(69.77\) & \(88.24\) & \(75.00\) & \(81.08\) \\
**4** & \(93.33\) & \(85.71\) & \(89.36\) & \(91.11\) & \(83.67\) & \(87.23\) \\
**5** & \(90.48\) & \(97.44\) & \(93.83\) & \(90.48\) & \(97.44\) & \(93.83\) \\ \hline
**Average** & \(\boldsymbol{87.67}\) & \(\boldsymbol{87.48}\) & \(\boldsymbol{87.34}\) & \(\boldsymbol{90.23}\) & \(\boldsymbol{90.17}\) & \(\boldsymbol{90.05}\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison of ultrasound image quality assessment model predictions on testing set, where accuracy values close to \(100\) indicate similarity to the expert’s quality score.
Fig. 3: Experimental setup of robotic ultrasound system with three phantoms of the urinary bladder.
are prone to errors, resulting in a large number of probe evaluations in low-quality regions, whereas the expert's image quality score, being based on a holistic assessment of the image, focuses on the anatomical structures rather than being distracted by noise. The search strategies using \(f_{z}<f_{d}\) could not find the high-quality region and instead converged to a local maximum rather than the global maximum. However, with \(f_{z}=f_{d}\), the high-quality regions have been acquired. When the quality region is searched using \(f_{z}\) as a variable in BO with zero prior, the quality maps and top quality scores show that the high-quality regions can be located with a varying force too, which is essential for in-human ultrasound procedures. However, the search is quite exploratory, reporting low ZNCC values of \(0.733\) and \(0.821\) for qualities \(q_{S}\) and \(q_{E}\), respectively. When the expert's prior is used, all BO strategies improve significantly, including the search space with three variables (\(x,y,f_{z}\)). The exploration steps of BO usually increase as the search space dimension increases. However, BO with the expert's prior reported a top quality of \(0.910\) with a ZNCC score of \(0.889\), which are \(9.6\%\) and \(7.6\%\) higher than for BO with zero prior.
### _Validating the convergence of probe positions and forces_
Since our study involves phantom experiments, the approximate probe positions and forces that yield the best-quality images are known. The search strategy should converge to these approximate probe poses and forces to acquire high-quality images. The proposed strategy has reached the desired probe position with an average mean value accuracy of \(98.73\%\) across all phantoms. To emphasize the convergence of force, we compared the probe forces explored by different BO search strategies, as shown in Fig. 5. The proposed formulation of BO using the expert's prior and image quality metric has resulted in the mean value accuracy of \(99.28\%\), \(98.25\%\), and \(96.11\%\) for \(P0\), \(P1\), and \(P2\), respectively. Comparatively, the other BO search strategies using zero-prior and segmentation-based quality maps (\(q_{S}\)) have shown significant errors in mean values and greater standard deviation due to the noise in image feedback and the inability to adapt to the profile of the scanning region.
## IV Conclusion
We proposed an autonomous Robotic Ultrasound System (RUS) to perform the ultrasound as per clinical protocols. We used Bayesian Optimization (BO) to search for high-quality regions, leveraging domain expertise in the form of a prior quality map and an ultrasound image quality metric. The prior map has been gleaned from the expert's demonstration of potential high-quality probing maneuvers. A novel image quality metric has been learned from an expert-labelled dataset of ultrasound images. Three phantom experiments validated that incorporating domain expertise into BO effectively improves the system performance, resulting in the acquisition of diagnostic-quality ultrasound images while adapting to the desired probing maneuvers. Since the phantom results are promising, we would like to validate the system's capability in an _in-vivo_ study using our RUS in India [3], which is our future work. We would also expand the search space in BO from \([x,y,f_{z}]\) to include \([roll,pitch,yaw]\) in order to orient the probe for scanning patients with complex physiological conditions.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline \multicolumn{2}{c}{**Image quality**} & \multicolumn{6}{c}{**BO with zero prior**} & \multicolumn{6}{c}{**BO with expert’s prior**} \\ \cline{3-13} \multicolumn{2}{c}{**estimation**} & \multicolumn{2}{c}{**Variables**} & \multicolumn{2}{c}{**Sum of quality difference of \(n\) points**} & \multicolumn{2}{c}{**Top**} & \multicolumn{2}{c}{**ZNCC**} & \multicolumn{2}{c}{**Sum of quality difference of \(n\) points**} & \multicolumn{2}{c}{**Top**} & \multicolumn{2}{c}{**ZNCC**} \\ \multicolumn{2}{c}{**method**} & & \(n=1\) & \(n=5\) & \(n=10\) & \(n=20\) & **Quality** & \(n=50\) & \(n=1\) & \(n=5\) & \(n=10\) & \(n=20\) & **Quality** & \(n=50\) \\ \hline Segmentation & \(x,y,f_{z}<f_{d}\) & \(0.382\) & \(0.944\) & \(1.430\) & \(1.931\) & \(0.684\) & \(0.689\) & \(0.291\) & \(0.692\) & \(0.941\) & \(1.205\) & \(0.782\) & \(0.782\) \\ (\(q_{S}\)) & \(x,y,f_{z}=f_{d}\) & \(0.132\) & \(0.609\) & \(0.963\) & \(1.651\) & \(0.911\) & \(0.811\) & \(0.103\) & \(0.531\) & \(0.785\) & \(1.308\) & \(0.911\) & \(0.920\) \\ ([38] & \(x,y,f_{z}\) & \(0.396\) & \(1.016\) & \(1.919\) & \(3.336\) & \(\mathbf{0.711}\) & \(\mathbf{0.733}\) & \(0.404\) & \(0.991\) & \(1.379\) & \(2.158\) & \(\mathbf{0.799}\) & \(\mathbf{0.801}\) \\ \hline Expert’s image & \(x,y,f_{z}<f_{d}\) & \(0.280\) & \(0.370\) & \(0.600\) & \(1.030\) & \(0.690\) & \(0.717\) & \(0.130\) & \(0.290\) & \(0.570\) & \(1.010\) & \(0.750\) & \(0.817\) \\ quality metric & \(x,y,f_{z}=f_{d}\) & \(0.120\) & \(0.240\) & \(0.390\) & \(0.710\) & \(0.950\) & \(0.876\) & \(0.050\) & \(0.090\) & \(0.180\) & \(0.970\) & \(0.980\) & \(0.959\) \\ (\(q_{E}\)) & \(x,y,f_{z}\) & \(0.130\) & \(0.270\) & \(0.820\) & \(1.320\) & \(\mathbf{0.823}\) & \(\mathbf{0.821}\) & \(0.040\) & \(0.280\) & \(0.760\) & \(1.600\) & \(\mathbf{0.910}\) & \(\mathbf{0.889}\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Quantitative comparison of different BO strategies for three different urinary bladder phantoms \(P0,P1\) and \(P2\)
Fig. 4: The estimated ultrasound image quality map of urinary bladder phantom \(P0\) using different BO strategies. Black dots are the positions where the probe evaluated the quality. The corresponding ultrasound images are available in the attached media.
Fig. 5: Force \(f_{z}\) profiles with different BO search strategies |
2310.10489 | Efficient Representation of Lattice Path Matroids | Efficient deterministic algorithms to construct representations of lattice
path matroids over finite fields are presented. They are built on known
constructions of hierarchical secret sharing schemes, a recent characterization
of hierarchical matroid ports, and the existence of isolating weight functions
for lattice path matroids whose values are polynomial on the size of the ground
set. | Carles Padró | 2023-10-16T15:13:11Z | http://arxiv.org/abs/2310.10489v3 | # Efficient Representation of Lattice Path Matroids
###### Abstract
An efficient deterministic algorithm to construct representations of lattice path matroids over finite fields is presented. Its running time is polynomial in the size of the ground set.
## 1 Introduction
This paper deals with efficient deterministic constructions of representations of matroids over finite fields. Specifically, given a family of representable matroids, we search for _deterministic_ algorithms that provide, for each matroid in the family, a representation over some finite field \(\mathbf{F}_{q}\). Both the running time and the size \(\log q\) of the elements in the finite field must be polynomial in the number of elements in the ground set.
The existence of such algorithms is well known for uniform and graphic matroids, and it has been proved for matroids with two clonal classes [1]. Operations on matroids like duality and direct sums provide constructions for other families as, for example, co-graphic and partition matroids. Even though efficient _randomized_ algorithms are known, the problem has not been solved for transversal matroids. Some results on that question are found in [17, 20].
In this paper, we present an efficient deterministic algorithm for lattice path matroids, a family of transversal matroids introduced in [7]. Even though that result has not been explicitly stated before, it directly follows from previous works in secret sharing. Namely, the constructions of ideal linear hierarchical secret sharing schemes in [8, 12, 26] and a recent characterization of the matroids determined by those schemes, which were proved to coincide with lattice path matroids [21]. In addition to pointing out and explaining that connection, the main contribution in this paper is a simpler and more general description of the constructions in [8, 12]. Specifically, by adapting the application of isolating weight functions in [17], a general method to find representations of transversal matroids is presented. It is efficient if the values of the isolating weight functions are polynomial in the size of the ground set. The existence of such functions is proved for lattice path matroids. The application to other families of transversal matroids remains an open question.
## 2 Preliminaries
### Polymatroids, Matroids, and Matroid Ports
The reader who is unfamiliar with matroid theory is referred to [22]. Most of the time we use the terminology and notation from that textbook. More information about polymatroids and matroid ports, and their application to secret sharing, is found in [18].
A _set function_\(f\colon 2^{E}\to{\bf R}\) on a finite set \(E\) is _monotone_ if \(f(X)\leq f(Y)\) whenever \(X\subseteq Y\subseteq E\), and it is _submodular_ if \(f(X)+f(Y)-f(X\cup Y)-f(X\cap Y)\geq 0\) for all \(X,Y\subseteq E\). A _polymatroid_ is a pair \((E,f)\) formed by a _ground set_\(E\) and a _rank function_\(f\). The former is a finite set and the latter is a monotone, submodular set function on \(E\) with \(f(\emptyset)=0\). _Integer polymatroids_ are those with integer-valued rank functions. From now on, only integer polymatroids are considered.
An integer polymatroid \(M=(E,r)\) with \(r(\{x\})\leq 1\) for each \(x\in E\) is a _matroid_. The _independent sets_ of the matroid \(M\) are the subsets of the ground set with \(r(X)=|X|\). A _basis_ is a maximal independent set and a _circuit_ is a minimal dependent set.
Given a matrix \(A\) over a field \(K\) with columns indexed by a set \(E\), the sets \(X\subseteq E\) corresponding to linearly independent sets of columns of \(A\) form the collection of independent sets of a matroid \(M\) with ground set \(E\). In that situation, the matroid \(M\) is _representable_ over \(K\), or \(K\)-representable, and the matrix \(A\) is a _representation_ of \(M\) over \(K\).
For an element \(p_{o}\) in the ground set \(E\), the _port of the matroid \(M\) at \(p_{o}\)_ is formed by the sets \(X\subseteq E\smallsetminus\{p_{o}\}\) such that \(r(X\cup\{p_{o}\})=r(X)\). Observe that the minimal sets in the matroid port are the ones such that \(X\cup\{p_{o}\}\) is a circuit. As a consequence of [22, Theorem 4.3.3], a _connected_ matroid is determined by any of its ports.
While representations of matroids are collections of vectors (the columns of a matrix) some polymatroids can be represented by collections of vector subspaces. A polymatroid \((E,f)\) is _\(K\)-representable_ if there exists a collection \((V_{x})_{x\in E}\) of subspaces of a \(K\)-vector subspace \(V\) such that \(f(X)=\dim\sum_{x\in X}V_{x}\) for every \(X\subseteq E\).
### Transversal Matroids and Boolean Polymatroids
We discuss next some basic facts about transversal matroids, Boolean polymatroids, and lattice path matroids. The reader is referred to [4, 5, 6, 7, 19, 22] for additional information on those topics.
For an integer polymatroid \((S,f)\), consider the family formed by the subsets \(X\subseteq S\) such that \(|Y|\leq f(Y)\) for every \(Y\subseteq X\). By [22, Corollary 11.1.2], that is the family of independent sets of a matroid, which is called the _matroid induced by the polymatroid_\((S,f)\).
Let \(G\) be a bipartite graph with vertices in the parts \(J\) and \(S\). For a set \(X\) of vertices, \(N(X)\) denotes the set of neighbors of the vertices in \(X\). If \(B\subseteq S\), we notate \(G_{B}\) for the subgraph of \(G\) induced by \(J\cup B\). The _biadjacency matrix_ of the bipartite graph \(G\) is a \((0,1)\)-matrix whose rows and columns are indexed by the sets \(J\) and \(S\), respectively, and the entries equal to \(1\) mark the edges of \(G\).
The graph \(G\) determines two sequences of sets. Namely, \((C_{x}\,:\,x\in S)\) with \(C_{x}=N(\{x\})\subseteq J\) and \((A_{j}\,:\,j\in J)\) with \(A_{j}=N(\{j\})\subseteq S\). Observe that the sets in those sequences may not be distinct.
The _partial transversals_ of the sequence \((A_{j}\,:\,j\in J)\) are the independent sets of a _transversal matroid_\(M\) with ground set \(S\). Observe that \(X\subseteq S\) is an independent set of \(M\) if and only if there is a matching in \(G\) covering all vertices in \(X\). The sequence \((A_{j}\,:\,j\in J)\) of subsets of \(S\) or, equivalently, the graph \(G\) provide a _presentation_ of the transversal matroid \(M\). A transversal matroid may admit different presentations, but there exist presentations such that the size of \(J\) equals the rank of the matroid [4, Theorem 2.6]. From now on, we always assume that this is the case, that is, we assume that there is a matching in \(G\) with \(|J|\) edges. Observe that \(B\subseteq S\) is a basis of \(M\) if and only if the subgraph \(G_{B}\) has a perfect matching.
The sequence \((C_{x}\,:\,x\in S)\) of subsets of \(J\) determines a _Boolean polymatroid_ with ground set \(S\). Namely, the polymatroid \((S,f)\) with \(f(X)=|N(X)|=\left|\bigcup_{x\in X}C_{x}\right|\) for every \(X\subseteq S\). By Hall's marriage theorem, \(X\subseteq S\) is an independent set of the transversal matroid \(M\) determined
by \(G\) if and only if \(|Y|\leq f(Y)\) for every \(Y\subseteq X\). Therefore, a matroid is transversal if and only if it is induced by a Boolean polymatroid.
_Lattice path matroids_, which were introduced in [7], are a special class of transversal matroids. As a consequence of [4, Lemma 4.7] the following definition is equivalent to the one in [7]. For positive integers \(m,n\), with \(m\leq n\), we notate \([m,n]=\{m,m+1,\ldots,n\}\) and \([n]=[1,n]\).
**Proposition 2.1**.: _Let \(G\) be a bipartite graph with parts \(J=[r]\) and \(S=[n]\). Then the following conditions are equivalent._
1. _There are sequences_ \((a_{1},\ldots,a_{r})\) _and_ \((b_{1},\ldots,b_{r})\) _in_ \(S\) _with_ \(1=a_{1}\leq a_{2}\leq\cdots\leq a_{r}\) _and_ \(b_{1}\leq b_{2}\leq\cdots\leq b_{r}=n\) _such that_ \(A_{j}=[a_{j},b_{j}]\) _for every_ \(j\in J\)_._
2. _There are sequences_ \((c_{1},\ldots,c_{n})\) _and_ \((d_{1},\ldots,d_{n})\) _in_ \(J\) _with_ \(1=c_{1}\leq c_{2}\leq\cdots\leq c_{n}\) _and_ \(d_{1}\leq d_{2}\leq\cdots\leq d_{n}=r\) _such that_ \(C_{x}=[c_{x},d_{x}]\) _for every_ \(x\in S\)_._
**Definition 2.2**.: A _lattice path matroid_ is a transversal matroid that admits a presentation in the conditions of Proposition 2.1. It is a _nested matroid_ (also called _generalized Catalan matroid_) in the particular case that it admits such a presentation with \(b_{1}=n\) or, equivalently, \(c_{n}=1\).
### Vector Secret Sharing Schemes
The reader is referred to [2] for a comprehensive survey on secret sharing. In a _secret sharing scheme_, a secret value is distributed into _shares_ among some _players_ in such a way that only some _qualified_ sets of players are able to recover the secret from their shares. The qualified sets form the _access structure_, which is a _monotone_ family of sets of players. That is, every set containing a qualified set is qualified. A secret sharing scheme is _perfect_ if the shares from an unqualified set do not provide any information on the secret value, and it is _ideal_ if, in addition, each share has the same size as the secret value, which is the optimal case. Brickell and Davenport [9] proved that the access structure of every ideal secret sharing scheme is a matroid port.
_Vector secret sharing schemes_ are ideal schemes determined by linear codes. A _linear code_ of _length_\(n\) over a finite field \(K\) is a vector subspace \(C\subseteq K^{n}\). The rows of a _generator matrix_ form a basis of \(C\). Such a linear code \(C\) determines a _vector secret sharing scheme_ as follows. Given a _secret value_\(s\in K\), choose uniformly at random a code word \(c=(c_{1},c_{2},\ldots,c_{n})\in C\) with \(c_{1}=s\), and distribute the _shares_\(c_{2},\ldots,c_{n}\) among the \(n-1\)_players_ in the scheme. A set is qualified if and only if the first column of the generator matrix is a linear combination of the columns corresponding to the players in \(X\). Let \(M\) be the \(K\)-representable matroid associated to the linear code \(C\), that is, the matroid represented by the generator matrix. The access structure of the secret sharing scheme is the port of \(M\) at the element in the ground set corresponding to the first column. Therefore, the access structures of vector secret sharing schemes are the ports of representable matroids. Each representation of a matroid over a finite field provides vector secret sharing schemes for its ports and, conversely, a representation of a matroid over a finite field is obtained from a vector secret sharing scheme for any of its ports.
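As a toy illustration of this construction (the field size, generator matrix and secret below are arbitrary choices, and the generator matrix is taken with first column \((1,0)^{T}\) so that fixing the first message symbol fixes the secret coordinate of the codeword):

```python
# Toy vector secret sharing scheme from a linear code over F_p (illustrative values).
from itertools import product
import numpy as np

p = 11
G = np.array([[1, 1, 1, 1, 1],
              [0, 1, 2, 3, 4]])               # generator matrix of C over F_p

def deal_shares(secret, rng):
    message = np.concatenate(([secret], rng.integers(0, p, G.shape[0] - 1)))
    return (message @ G) % p                  # random codeword; entry 0 is the secret

def reconstruct(players, shares):
    # X is qualified iff column 0 of G is an F_p-linear combination of the
    # columns indexed by X; brute-force the combination (fine for this toy size).
    cols = G[:, players]
    for coeffs in product(range(p), repeat=len(players)):
        if np.array_equal((cols @ np.array(coeffs)) % p, G[:, 0] % p):
            return int(np.dot(coeffs, shares[players]) % p)
    return None                               # unqualified set of players

rng = np.random.default_rng(1)
codeword = deal_shares(secret=7, rng=rng)     # share c_i (i >= 2) goes to player i
print(reconstruct(players=[1, 2], shares=codeword))   # players holding c_2, c_3 -> 7
```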
### Matroids with Large Clonal Classes
Two elements in the ground set of a matroid are _clones_ if the map that interchanges them and leaves all other elements fixed is an automorphism of the matroid. The equivalence classes of that equivalence relation are the _clonal classes_ of the matroid. For example, uniform matroids are those having only one clonal class.
In a secret sharing scheme, players \(x\) and \(y\) are _clones_ if, for every set \(A\) of players with \(x,y\notin A\), the set \(A\cup\{x\}\) is qualified if and only if so is \(A\cup\{y\}\). That is, they play the same role in the scheme. If the access structure is a matroid port, two players are clones if and only if they are clones in the matroid.
Secret sharing schemes whose set of players is partitioned into large clonal classes have been studied by several authors [3, 8, 10, 11, 12, 14, 15, 23, 25, 26, 27]. The main examples are _compartmented_ and _hierarchical_ secret sharing schemes.
**Definition 2.3**.: A matroid \(M\) is \(\Pi\)_-uniform_ for some partition \(\Pi=(S_{i}\,:\,i\in P)\) of the ground set \(S\) if all elements in the same part are clones. That is, each \(S_{i}\) is a subset of a clonal class. If \(|P|=m\), we say that \(M\) is \(m\)_-uniform_.
Let \(M=(S,r)\) be a \(\Pi\)-uniform matroid with \(\Pi=(S_{i}\,:\,i\in P)\). Associated to \(M\), consider the integer polymatroid \((P,g)\) with
\[g(I)=r\left(\bigcup_{i\in I}S_{i}\right)\]
for every \(I\subseteq P\). The matroid \(M\) is determined by the integer polymatroid \((P,g)\). Indeed, consider the map \(\pi\colon S\to P\) with \(\pi(x)=i\) if \(x\in S_{i}\) and the polymatroid \((S,f)\) with \(f(X)=g(\pi(X))\) for each \(X\subseteq S\). Then \(M\) is the matroid induced by the polymatroid \((S,f)\). Alternative proofs are given in [14, Section 4] and [13, Section 2]. The following result was proved in [14, Theorem 6.1].
**Proposition 2.4**.: _Consider a \(\Pi\)-uniform matroid \(M=(S,r)\) with \(\Pi=(S_{i}\,:\,i\in P)\) and its associated polymatroid \((P,g)\). There exists an integer \(q(M)\) such that \(M\) is \(K\)-representable if the field \(K\) has at least \(q(M)\) elements and \((P,g)\) is \(K\)-representable._
Nevertheless, no efficient methods are known to find representations of matroids with large clonal classes from representations of the associated polymatroids, which leads to the open problem posed in [14, Open Problem 6.9] and [16, Section VII]. Preliminary versions of that problem, and some solutions, are found in [8; 26; 27].
### Hierarchical Secret Sharing and Lattice Path Matroids
In an access structure, a player \(x\) is _hierarchically inferior_ to a player \(y\) if, for every set \(A\) of players with \(x,y\notin A\), the set \(A\cup\{y\}\) is qualified if so is the set \(A\cup\{x\}\). In that situation, we write \(x\preceq y\). Observe that \(x,y\) are clones if and only if \(x\preceq y\) and \(y\preceq x\). An access structure is _hierarchical_ if that preorder in the set of players is total. _Hierarchical secret sharing schemes_ are those having a hierarchical access structure.
Efficient deterministic constructions of vector secret sharing schemes were presented in [8, 26] for the so-called _hierarchical threshold access structures_. The construction in [8] was generalized in [12] to all hierarchical matroid ports, which had been previously characterized in [15] in terms of multi-uniform matroids induced by Boolean polymatroids. An alternative characterization has been recently found [21], which is summarized in the following. Hierarchical matroid ports coincide with the ports of lattice path matroids at one of the extreme elements in the ground set. In particular, hierarchical threshold access structures are ports of nested matroids. Moreover, the hierarchical order is compatible with the natural order in the ground set. Specifically, if \(M\) is a lattice path matroid with ground set \(S=[n]\), then in the port of \(M\) at the element \(1\in S\), a player \(x\) is hierarchically inferior to a player \(y\) if \(1<y\leq x\leq n\). The hierarchical order is reversed in the port of \(M\) at the element \(n\in S\).
Proposition 2.1 clarifies the connection between those two characterizations of hierarchical matroid ports. While the characterization in [15] uses the Boolean polymatroid determined by the sets \(C_{x}\), the one in [21] focuses on the lattice path matroid determined by the sets \(A_{j}\).
Therefore, the constructions from [8, 26] and the ones from [12] provide efficient deterministic algorithms to find representations over finite fields for nested matroids and, respectively, lattice path matroids. The method in [8, 12] provides representations over algebraic field extensions of large degree, while the one in [26], which applies only to nested matroids, is based on Birkhoff interpolation and yields representations over large prime fields. Extending the second construction to all lattice path matroids is an open problem. In the next section, we present an alternative description of the first one.
## 3 Representations of Transversal Matroids
### Isolating Weight Functions
Representations for a transversal matroid \(M\) are obtained by modifying the biadjacency matrix of a presentation \(G\). Indeed, take an arbitrary field \(K\), replace each nonzero entry with a variable \(\alpha_{j,x}\) and assume that the entries of the matrix are polynomials over \(K\) in the variables \(\alpha_{j,x}\). Clearly, the determinant of the square submatrix formed by the columns corresponding to a set \(B\subseteq S\) with \(|B|=r\) is a nonzero polynomial if \(B\) is a basis of \(M\) and it is zero otherwise. At this point, representations for \(M\) are obtained by assigning values to the variables \(\alpha_{j,x}\). One possibility is considering that \(\alpha_{j,x}\) are algebraically independent elements over \(K\) in some extension field. In addition, for every sufficiently large field \(K\), it is possible to substitute the variables \(\alpha_{j,x}\) by elements in \(K\) in such a way that the value of every polynomial corresponding to a basis of \(M\) is nonzero. Nevertheless, it is not clear how to efficiently choose those elements. We describe next a method to assign values to the variables \(\alpha_{j,x}\) from a weight function on the edges of the graph.
**Definition 3.1**.: A weight function with non-negative integer values on the edges of \(G\) is _isolating_ if, for every basis \(B\) of the transversal matroid \(M\), among the perfect matchings of \(G_{B}\) there is only one with minimum weight.
Every bipartite graph admits an isolating weight function. Indeed, enumerate the edges \(\{e_{0},e_{1},\ldots,e_{m-1}\}\) and take \(w(e_{k})=2^{k}\). Nevertheless, the method that is described in the following provides efficient representations of transversal matroids only if the values of the isolating weight functions are polynomial in the size of the ground set.
Let \(w\) be an isolating weight function for \(G\). Take a variable \(\alpha\) and put \(\alpha_{j,x}=\alpha^{w(j,x)}\) for every edge \((j,x)\) of \(G\). Take a prime \(p\) and assume that the entries of the matrix are polynomials in the variable \(\alpha\) over the finite field \(\mathbf{F}_{p}\). Then the determinant of the submatrix corresponding to a basis \(B\) is a nonzero polynomial in the variable \(\alpha\). Indeed, the coefficient of the minimum degree term is equal to \(1\) because it corresponds to the unique perfect matching in \(G_{B}\) with minimum weight. For each basis, the degree of that polynomial is at most the maximum weight of a perfect matching. Take an integer \(s\) larger than that quantity and \(q=p^{s}\). Consider the finite field \(\mathbf{F}_{q}\), an algebraic extension of \(\mathbf{F}_{p}\), and substitute \(\alpha\) by an element in \(\mathbf{F}_{q}\) whose minimal polynomial over \(\mathbf{F}_{p}\) is of degree \(s\). Finding such an element is equivalent to finding an irreducible polynomial over \(\mathbf{F}_{p}\) of degree \(s\), which can be done in time polynomial in \(p\) and \(s\) by the algorithm proposed in [24]. The resulting matrix is a representation of the matroid \(M\) over \(\mathbf{F}_{q}\). That method is efficient for every family of transversal matroids admitting isolating weight functions whose values are polynomial in the size of the ground set.
The most computationally expensive step is the algorithm to find an irreducible polynomial over \(\mathbf{F}_{p}\) of degree \(s\), which runs in time \(O(p^{1/2}s^{4})\) ignoring the powers of \(\log s\) and \(\log p\)[24]. Since \(p\) can be the same for all matroids in the family, the computation time depends almost exclusively on the value of \(s\), and hence on the maximum weight of the perfect matchings in the subgraphs \(G_{B}\). In addition, the value of \(s\) determines the size of the representations, and hence the efficiency of their applications as, for example, secret sharing schemes.
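The substitution step can be illustrated symbolically on a small example (the presentation below is an arbitrary choice, and the final evaluation of \(\alpha\) at an element of degree \(s\) over \(\mathbf{F}_{p}\) is omitted):

```python
# Symbolic sketch: replace each edge entry of the biadjacency matrix by
# alpha**w(edge), with the generic isolating weights w(e_k) = 2**k, and inspect
# the determinants of the r x r column submatrices.
from itertools import combinations
import sympy as sp

alpha = sp.symbols('alpha')

biadjacency = [[1, 1, 1, 0],      # rows J = {1,..,r}, columns S = {1,..,n}
               [0, 1, 1, 1]]
r, n = len(biadjacency), len(biadjacency[0])

edges = [(j, x) for j in range(r) for x in range(n) if biadjacency[j][x]]
w = {e: 2 ** k for k, e in enumerate(edges)}           # generic isolating weights

A = sp.Matrix(r, n, lambda j, x: alpha ** w[(j, x)] if biadjacency[j][x] else 0)

for B in combinations(range(n), r):
    det = sp.expand(A.extract(list(range(r)), list(B)).det())
    print(B, det)   # nonzero polynomial in alpha exactly when B is a basis
```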
### Efficient Representations of Lattice Path Matroids
In this section, we consider only transversal matroids with a representation \(G\) with \(J=[r]\) and \(S=[n]\). For a set \(B\subseteq S\), we notate \(B=(x_{1},\ldots,x_{r})\) to indicate that its elements are arranged in increasing order.
We present in Proposition 3.3 a sufficient condition for the existence of isolating weight functions with polynomial weights. It provides efficient representations for lattice path matroids. The following technical result is a consequence of the _rearrangement inequality_.
**Lemma 3.2**.: _Let \((p_{1},\ldots,p_{r})\) and \((q_{1},\ldots,q_{r})\) be sequences of real numbers such that the first one is non-decreasing and the second one is non-increasing. Then_
\[p_{1}q_{1}+\cdots+p_{r}q_{r}\leq p_{1}q_{\sigma 1}+\cdots+p_{r}q_{\sigma r}\leq p _{1}q_{r}+\cdots+p_{r}q_{1}\]
_for every permutation \(\sigma\). Moreover, each of those bounds is attained only by one permutation if each sequence has distinct terms._
**Proposition 3.3**.: _Let \(M\) be a transversal matroid such that, for each basis \(B=(x_{1},\ldots,x_{r})\), all pairs \((j,x_{j})\) with \(j\in J\) are edges of \(G\). Then \(G\) admits an isolating weight function with maximum weight at most \((r-1)(n-1)\). In addition, for each basis \(B\), the maximum weight of the perfect matchings in \(G_{B}\) is less than \(r(r-1)(n-1)/2\)._
Proof.: For \(j\in[r]\) and \(x\in[n]\), take \(p_{j}=j-1\) and \(q_{x}=n-x\). For every edge \((j,x)\), take the weight \(w(j,x)=p_{j}q_{x}\). This is an isolating weight function because, by Lemma 3.2, the perfect matching formed by the edges \((j,x_{j})\) is the only one in \(G_{B}\) with minimum weight. Finally, by Lemma 3.2 again, the weight of a perfect matching in \(G_{B}\) is at most
\[(r-1)(n-1)+(r-2)(n-2)+\cdots+1\cdot(n-r+1)\]
and hence less than \(r(r-1)(n-1)/2\).
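The isolating property of these weights can be checked by brute force on a toy lattice path presentation (the intervals below are an arbitrary choice): for every basis \(B\), the minimum-weight perfect matching of \(G_{B}\) is unique.

```python
# Brute-force check that w(j, x) = (j-1)(n-x) is isolating for a small example.
from itertools import combinations, permutations

intervals = [(1, 3), (2, 4), (3, 5)]         # A_j = [a_j, b_j] as in Proposition 2.1
r, n = len(intervals), 5

def edge(j, x):                              # j is 0-based, x is 1-based
    return intervals[j][0] <= x <= intervals[j][1]

def w(j, x):
    return j * (n - x)                       # (j - 1)(n - x) with j 0-based

for B in combinations(range(1, n + 1), r):
    weights = [sum(w(j, B[s[j]]) for j in range(r))
               for s in permutations(range(r))
               if all(edge(j, B[s[j]]) for j in range(r))]
    if weights:                              # B is a basis of the matroid
        assert weights.count(min(weights)) == 1, (B, sorted(weights))
print("minimum-weight perfect matching is unique for every basis")
```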
By the following two propositions, lattice path matroids are the only transversal matroids satisfying the sufficient condition in Proposition 3.3.
**Proposition 3.4**.: _Let \(M\) be a lattice path matroid and let \(G\) be a presentation of \(M\) in the conditions of Proposition 2.1. If \(B=(x_{1},\ldots,x_{r})\) is a basis of \(M\), then \((j,x_{j})\) is an edge of \(G\) for every \(j\in J\)._
Proof.: Suppose that there is a basis \(B\) without that property. Let \(P\) be a perfect matching in \(G_{B}\) with the maximum number of edges of the form \((j,x_{j})\) and take the minimum \(k\in J\) such that \((k,x_{k})\) is not in \(P\). Since \(P\) is a perfect matching, \(k\leq r-1\) and there exist \(\ell_{1},\ell_{2}\in[k+1,r]\) such that \((\ell_{1},x_{k})\) and \((k,x_{\ell_{2}})\) are edges in \(P\). Then
\[a_{k}\leq a_{\ell_{1}}\leq x_{k}<x_{\ell_{2}}\leq b_{k}\leq b_{\ell_{1}}\]
which implies that \((k,x_{k})\) and \((\ell_{1},x_{\ell_{2}})\) are edges of \(G\). Then
\[P^{\prime}=(P\smallsetminus\{(k,x_{\ell_{2}}),(\ell_{1},x_{k})\})\cup\{(k,x_{k }),(\ell_{1},x_{\ell_{2}})\}\]
is a perfect matching in \(G_{B}\) having more edges of the form \((j,x_{j})\) than \(P\)
**Proposition 3.5**.: _Let \(M\) be a transversal matroid without loops that admits a presentation \(G\) such that, for every basis \(B=(x_{1},\ldots,x_{r})\) and for every \(j\in J\), the pair \((j,x_{j})\) is an edge. Then \(M\) is a lattice path matroid._
Proof.: Let \(B_{1}=(a_{1},\ldots,a_{r})\) and \(B_{2}=(b_{1},\ldots,b_{r})\) be the first and last bases of \(M\) in the lexicographic order. We are going to prove that the sequence of sets \(([a_{j},b_{j}]\,:\,j\in J)\) is a presentation of \(M\), and hence it is a lattice path matroid. For two distinct bases \(B,B^{\prime}\), we notate \(B\ll B^{\prime}\) if \(B\) precedes \(B^{\prime}\) in the lexicographic order. We prove first that \(x_{j}\in[a_{j},b_{j}]\) for each \(j\in[r]\) if \((x_{1},\ldots,x_{r})\) is a basis. Suppose that there is a basis with \(x_{j}<a_{j}\) for some \(j\in[r]\). Take \(B\) to be the first such basis in the lexicographic order and the minimum \(j\in[r]\) with \(x_{j}<a_{j}\). Since \(B_{1}\ll B\), the minimum \(k\in[r]\) with \(x_{k}>a_{k}\) satisfies \(k<j\). Then \(B^{\prime}=(B\smallsetminus\{x_{k}\})\cup\{a_{k}\}\) is another basis in the same situation with \(B^{\prime}\ll B\), a contradiction. Symmetrically, \(x_{j}\leq b_{j}\) for each \(j\in[r]\). We prove next that \((j,x)\) is an edge if \(x\in[a_{j},b_{j}]\). Since \(x\) is not a loop, there is an edge \((k,x)\). If \(k>j\) and \(x\neq b_{j}\), consider the basis \(B=(a_{1},\ldots,a_{j-1},b_{j},\ldots,b_{r})\). Then \((B\smallsetminus\{b_{k}\})\cup\{x\}\) is a basis and \(x\) is its \(j\)-th element, which implies that \((j,x)\) is an edge. Symmetrically, the same happens if \(k<j\) and \(x\neq a_{j}\).
The following result summarizes the discussion in this section and the previous one.
**Theorem 3.6**.: _There exists a deterministic algorithm that, given a presentation of a lattice path matroid \(M\) in the conditions of Proposition 2.1, provides a representation of \(M\) over a finite field with \(q=p^{s}\) elements, where \(p\) is an arbitrarily chosen prime number and \(s=r(r-1)(n-1)/2\). The running time of the algorithm is polynomial in \(p\) and the size \(n\) of the ground set._
### Lattice Path Matroids with Large Clonal Classes
We present next an improvement to the algorithm in Theorem 3.6 that applies to lattice path matroids with large clonal classes. It is very similar to the constructions of hierarchical vector secret sharing schemes in [8, 12].
Take \(J=[r]\), \(S=[n]\), and integers \(t_{i}\) with \(1=t_{1}<t_{2}<\cdots<t_{m}<t_{m+1}=n+1\). Consider the partition \(\Pi=(S_{i}\,:\,i\in[m])\) of \(S\) with \(S_{i}=[t_{i},t_{i+1}-1]\). For every \(x\in S\), put \(\pi(x)=i\) if \(x\in S_{i}\). Consider a bipartite graph \(G\) in the conditions of Proposition 2.1 such that, for each \(i\in[m]\), all vertices in \(S_{i}\) have the same neighbors. Then \(G\) is a presentation of a \(\Pi\)-uniform lattice path matroid \(M\). Observe that the port of \(M\) at the element \(1\in S\) is a hierarchical access structure in which all players in the same part are hierarchically equivalent.
As we did before, we replace the nonzero entries of the biadjacency matrix of \(G\) with polynomials in the variable \(\alpha\) over some finite field. Take a prime number \(p\) such that \(p>|S_{i}|=t_{i+1}-t_{i}\) for every \(i\in[m]\). For each \(i\in[m]\), take \(t_{i+1}-t_{i}\) distinct nonzero elements \((\beta_{x}\,:\,x\in S_{i})\) in the finite field \(\mathbf{F}_{p}\). For \(j\in J\) and \(x\in S\), take \(p_{j}=j-1\) and \(q_{x}=m-\pi(x)\), and consider on the edges of \(G\) the weight function \(w(j,x)=p_{j}q_{x}\). Finally, consider the matrix \(H\) that is obtained by replacing the entry in the biadjacency matrix of \(G\) corresponding to the edge \((j,x)\) with \(\beta_{x}^{j-1}\alpha^{w(j,x)}\).
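The matrix \(H\) of this construction can be written down explicitly for a toy example (the parts, intervals, prime and \(\beta\)'s below are arbitrary choices consistent with the conditions above):

```python
# Symbolic sketch of H for a small Pi-uniform lattice path matroid: the entry for
# an edge (j, x) is beta_x**(j-1) * alpha**((j-1)*(m - pi(x))).
import sympy as sp
from itertools import combinations

alpha = sp.symbols('alpha')
p = 7                                       # prime larger than every |S_i|
parts = [[1, 2], [3, 4, 5]]                 # S_1, S_2 (m = 2 parts, n = 5)
pi = {x: i + 1 for i, Si in enumerate(parts) for x in Si}
beta = {1: 1, 2: 2, 3: 1, 4: 2, 5: 3}       # distinct nonzero mod p inside each part
intervals = [(1, 2), (1, 5), (3, 5)]        # A_j, identical for clones; r = 3
r, n, m = 3, 5, 2

def entry(j, x):                            # j is 0-based (it plays the role of j-1)
    if not intervals[j][0] <= x <= intervals[j][1]:
        return 0
    return beta[x] ** j * alpha ** (j * (m - pi[x]))

H = sp.Matrix(r, n, lambda j, k: entry(j, k + 1))
for B in combinations(range(n), r):
    det = sp.expand(H.extract(list(range(r)), list(B)).det())
    # for a basis B the lowest-degree coefficient is a product of Vandermonde
    # determinants, and hence nonzero mod p
    print(tuple(b + 1 for b in B), det)
```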
We prove next that, for every basis \(B\) of \(M\), the determinant of the submatrix \(H_{B}\) formed by the corresponding columns is a nonzero polynomial. Even though the chosen weight function is not isolating, we can check that the coefficient of the minimum degree term is nonzero. Indeed, let \(B=(x_{1},\ldots,x_{r})\) be a basis of \(M\). By Lemma 3.2, the perfect matching \(((j,x_{j})\,:\,j\in J)\) has minimum weight, but there are other perfect matchings in \(G_{B}\) with the same weight, namely the ones of the form \(((j,x_{\sigma j})\,:\,j\in J)\), where \(\sigma\) is any permutation such that \(\pi(x_{\sigma j})=\pi(x_{j})\) for every \(j\in J\). The entries corresponding to the edges of \(G_{B}\) involved in those perfect matchings lie on square submatrices on the diagonal of \(H_{B}\), one for each \(i\in[m]\) with \(B\cap S_{i}\neq\emptyset\). The
determinant of each of those submatrices is of the form \(\alpha^{\ell_{i}}\Delta_{i}\), where \(\Delta_{i}\) is the determinant of a Vandermonde-like matrix, and hence nonzero. Therefore, the coefficient of the minimum degree term of \(\det H_{B}\) is equal to \(\prod_{i}\Delta_{i}\neq 0\). Observe that the weight of a perfect matching in any subgraph \(G_{B}\) is less than \(r(r-1)(m-1)/2\). At this point, the following result has been proved.
**Proposition 3.7**.: _There exists a deterministic algorithm that, given an \(m\)-uniform lattice path matroid \(M\) in the conditions above, provides a representation of \(M\) over a finite field with \(q=p^{s}\) elements, where \(p\) is a prime larger than the number of elements in each part and \(s=r(r-1)(m-1)/2\). The running time of the algorithm is polynomial in \(p\) and the size \(n\) of the ground set._
This algorithm improves on the one in Theorem 3.6 if the number of parts \(m\) is small in relation to the size of the ground set. Even though the prime \(p\) is larger, the degree \(s\) of the extension can be much smaller and, as we discussed before, this is the main parameter to be taken into account.
If we need representations over fields with small characteristic, we can replace in Proposition 3.7 the prime number \(p\) with a prime power, but in this case the search of an irreducible polynomial is slightly more expensive (see [24]).
Every bi-uniform matroid (that is, \(m=2\)) is a lattice path matroid, and hence the algorithm in Proposition 3.7 provides representations with \(s=r(r-1)/2\). Nevertheless, the algorithm proposed in [1] is in general more efficient because the degree of the extension is \(s=d(d-1)/2\), where \(d=r(S_{1})+r(S_{2})-r\).
## Acknowledgment
Thanks to Anna de Mier for an enlightening discussion about transversal matroids. The author's work was supported by the Spanish Government under projects PID2019-109379RB-I00 and PID2021-124928NB-I00.
|
2304.04563 | Phase diagram of the ionic Hubbard model with density-dependent hopping | We obtain the quantum phase diagram of the ionic Hubbard model including
electron-hole symmetric density-dependent hopping. The boundaries of the phases
are determined by crossing of excited levels with particular discrete
symmetries, which coincide with jumps of charge and spin Berry phases with a
topological meaning. Reducing the magnitude of the hopping terms that do not
change the total number of singly occupied sites with respect to the other one,
the region of the phase diagram occupied by the fully gapped spontaneously
dimerized insulator (which separates the band insulating and Mott insulating
phases) is enlarged, particularly for small values of the alternating on-site
energy. This result might be relevant for experiments in cold atoms in which
topological charge pumping is observed when alternation in the hopping is
included. | P. Roura Bas, A. A. Aligia | 2023-04-10T13:02:48Z | http://arxiv.org/abs/2304.04563v1 | # Phase diagram of the ionic Hubbard model with density-dependent hopping
###### Abstract
We obtain the quantum phase diagram of the ionic Hubbard model including electron-hole symmetric density-dependent hopping. The boundaries of the phases are determined by crossing of excited levels with particular discrete symmetries, which coincide with jumps of charge and spin Berry phases with a topological meaning. Reducing the magnitude of the hopping terms that do not change the total number of singly occupied sites with respect to the other one, the region of the phase diagram occupied by the fully gapped spontaneously dimerized insulator (which separates the band insulating and Mott insulating phases) is enlarged, particularly for small values of the alternating on-site energy. This result might be relevant for experiments in cold atoms in which topological charge pumping is observed when alternation in the hopping is included.
## I Introduction
Ultracold quantum gases provide a versatile platform as universal quantum simulators of many-body problems [1]. Cold atoms as well as other platforms have been used to study quantized topological charge pumping in driven systems [2]. A time dependent adiabatic evolution in a closed cycle in a certain space of parameters constitute a Thouless pump, in which a quantized amount of charge or spin is transported, which is topologically protected [3; 4]. Simulating the non-interacting Rice-Mele model (RMM) [5] with ultracold atoms, quantized charge pumping has been achieved for bosons [6] and fermions [7]. More recently, charge pumping in the fermionic interacting RMM (IRMM) has been studied experimentally [8] and theoretically [9; 10], including spin pumping [10].
The IRMM is a one-dimensional Hubbard model which includes alternating on-site energies \(\pm\Delta\) and hopping \(t\pm\delta\) [Eq. (1) with \(t_{\alpha\beta}=t\)]. Ideally, in a Thouless pump, a critical point of degeneracy is surrounded in the adiabatic time cycle without closing a gap. For the IRMM, and fixed on-site interaction \(U\), the cycles which lead to non-trivial charge pumping enclose critical points lying at \(\delta=0\) and \(\Delta=\pm\Delta_{c}\). For \(\delta=0\), the IRMM is equivalent to the ionic Hubbard model (IHM) [11; 12; 13; 14; 15; 16] and it is known that at \(\Delta=\pm\Delta_{c}\) there is a charge transition in which the topologically protected charge Berry phase jumps between the values \(0\) and \(\pi\)[12], implying a transport of one charge when a time cycle is performed in the plane \((\delta,\Delta)\) enclosing the point \((0,\pm\Delta_{c})\)[10; 17].
Similarly at \(\Delta=\pm\Delta_{s}\) there is a spin transition in the IHM (\(\delta=0\)) with a jump in the spin Berry phase and a closing of the spin gap for \(|\Delta|\leq\Delta_{s}<\Delta_{c}\). The IHM has three phases. The system is a Mott insulator (MI) for \(0\leq\Delta<\Delta_{s}\), a band insulator (BI) for \(\Delta>\Delta_{c}\) and a spontaneously dimerized insulator (SDI) in a narrow region between the other two. The phase diagram (which is symmetric by a change of sign in \(\Delta\)) has been constructed in Ref. [12] using the method of crossing of excited energy levels (MCEL) based on conformal field theory [21; 22; 23; 24; 25]. The spin gap opens as \(|\delta|^{2/3}\) leading to a dimerized phase for finite \(\delta\)[10].
The fact that the spin gap vanishes in the MI phase (that corresponds to \(\delta=0\), \(-\Delta_{s}<\Delta<\Delta_{s}\)) brings problems for the charge pumping. A cycle in the plane \(\delta,\Delta\) that encloses a critical point \(\Delta=\pm\Delta_{c}\) and passes far from it, necessarily traverses the MI phase, because \(\Delta_{c}\) and \(\Delta_{s}\) are very near each other. Traversing with a finite velocity a gapless point produces spin excitation at finite energy, which in turn lead to charge excitations because of the mixing of both sectors at finite energy [11; 14]. This leads to oscillations in the charge pumping and loss of quantization with the number of cycles as determined theoretically [10] and experimentally [26]. While addition of a staggered magnetic field or Ising spin interactions lead to opening of the spin gap and robust charge pumping [10], these terms are not experimentally feasible at present.
Another possibility to enlarge the spin gap and the region of stability of the SDI phase is to add a density-dependent hopping (DDH). Such a term in an electron-hole symmetric form has been realized in cold atoms using Floquet engineering [27; 28; 29; 30; 31; 32]. The Hubbard model with nearest-neighbor hopping dependent on the occupancy of the sites involved (also called correlated hopping) has been derived and studied as an effective model for the superconducting cuprates [33; 34; 35], which leads to enhancement of superconductivity for certain parameters [36; 37; 38; 39]. In one dimension, it has been found that when the hopping term that changes the number of singly occupied sites [\(t_{AB}\) in Eq. (1)] is larger than the other two, a dimerized phase with a spin gap is favored [18; 19; 20; 21], which is the desired effect.
The goal of this work is to study to what extent the region of the phase diagram of the IHM occupied by the fully gapped SDI phase can be enlarged including DDH. We use the method of crossing of excited energy levels (MCEL) based on conformal field theory [21; 22; 23; 24; 25], already used in Ref. [12] for the standard IHM. For this model including DDH, the method also coincides with that of
jumps of charge and spin Berry phases used in Ref. [20].
The paper is organized as follows. In Section II we explain the model and methods. The resulting phase diagram is contained in Section III. Section IV contains a summary and discussion.
## II Model and methods
The IRMM including DDH has the form
\[H = \sum_{j\sigma}\left[-1+\delta\,(-1)^{j}\right]\left(c_{j\sigma}^{\dagger}c_{j+1\sigma}+\text{H.c.}\right) \tag{1}\]
\[\times[t_{AA}(1-n_{j\bar{\sigma}})(1-n_{j+1\bar{\sigma}})+t_{BB}\,n_{j\bar{\sigma}}n_{j+1\bar{\sigma}}\]
\[+t_{AB}(n_{j\bar{\sigma}}+n_{j+1\bar{\sigma}}-2n_{j\bar{\sigma}}n_{j+1\bar{\sigma}})]\]
\[+\Delta\sum_{j\sigma}(-1)^{j}n_{j\sigma}+U\sum_{j}n_{j\uparrow}n_{j\downarrow}.\]
The first term is the DDH, which is alternating for \(\delta\neq 0\). The amplitude \(t_{AA}\) corresponds to the situation in which only the particle that hops occupies the two nearest-neighbor sites involved in the hopping. For \(t_{AB}\) and \(t_{BB}\) the total occupancy is 2 and 3 respectively. In the following we assume the electron-hole symmetric case \(t_{BB}=t_{AA}\), which is the one implemented experimentally with cold atoms [27; 28; 29; 30; 31; 32]. \(\Delta\) is the alternating on-site energy and \(U\) is the on-site Coulomb repulsion, both characteristic of the IHM [11; 12; 13; 14; 15; 16].
Our conclusions, and our discussions below on the effect of the alternation of the hopping \(\delta\) are the same if \(\delta\) affects only the hopping part proportional to \(t_{AB}\), and not the other two.
In experiment usually the pump cycles are done in a two-dimensional space (\(\delta,p\)) in which both \(\delta\) and another parameter \(p\) (like \(\Delta\) or \(U\)) depend on time and return to the original value after the cycle. In the adiabatic limit, the charge (spin) pumped in the cycle is determined by the evolution of the charge (spin) Berry phase \(\gamma_{c}\) (\(\gamma_{s}\)) in the cycle (see for example Ref. [10]). Non trivial quantized charge (spin) pumping takes place when a critical point at which \(\gamma_{c}\) (\(\gamma_{s}\)) jumps, is surrounded in the cycle. The critical points lie on the line \(\delta=0\), because for \(\delta=0\), the system has inversion symmetry at each site and as a consequence, the Berry phases can only be either 0 or \(\pi\) (mod \(2\pi\)). In other words, \(\gamma_{c}/\pi\) and \(\gamma_{s}/\pi\) become topological numbers protected by inversion symmetry [17]. In addition, the MI phase in which the spin gap vanishes is also restricted to \(\delta=0\). Then to identify a possible cycle that encloses the charge critical point with a ground state separated from the rest of the spectrum in the whole cycle, one can keep \(\delta=0\), where all ground-state degeneracies lie. This is what we do in the rest of the work. The model becomes the IHM with electron-hole symmetric DDH.
To calculate the phase diagram of the model we use the MCEL [21; 22; 23; 24; 25]. The idea of the method is that the dominant correlations at large distances correspond to the smallest excitation energies. The crossing of excited levels in different symmetry sectors therefore correspond to phase transitions. The method has been used before for similar models [20; 21; 12]. For our model, this method and the jumps in the values of the Berry phases give the same information [12], but the MCEL is easier to implement.
The crossings for both the charge and spin transitions are determined using open-shell boundary conditions (periodic if the number of sites \(L\) is a multiple of 4, antiperiodic for \(L\) even but not a multiple of 4). The charge transition is determined by a crossing in the ground state of the two singlets of lowest energy with opposite parity under inversion. In the BI phase the ground state is even under inversion, while it is odd in the other two phases. The spin transition between the SDI and MI phases is determined by the crossing of the excited even singlet with lowest energy and the lowest excited odd triplet, the latter having lower energy in the MI phase. In the actual calculation we have not evaluated the total spin \(S\) of the states, but used the parity under time reversal (the singlet is even and the triplet with total spin projection \(S_{z}=0\) is odd). All these states have wave vector 0 for \(\Delta\neq 0\).
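Schematically, such a crossing can be located numerically as the zero of the level splitting between the two symmetry sectors. In the sketch below, `lowest_energy` is a hypothetical stand-in for an exact-diagonalization (Lanczos) routine for Eq. (1) restricted to the corresponding inversion-parity sector; it is not part of any standard library.

```python
# Locating a level crossing U_c (assumption: lowest_energy(U, parity) is a
# user-supplied exact-diagonalization helper, introduced here for illustration).
from scipy.optimize import brentq

def splitting(U, lowest_energy):
    # negative when the even singlet is the ground state (BI side),
    # positive when the odd state is lower (SDI/MI side)
    return lowest_energy(U, parity=+1) - lowest_energy(U, parity=-1)

def find_crossing(lowest_energy, U_min, U_max):
    # U_min and U_max must bracket the crossing so that the splitting changes sign
    return brentq(lambda U: splitting(U, lowest_energy), U_min, U_max)
```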
To determine the phase diagram we have set \(t_{AB}=1\) as the unit of energy. Then, for a given value of \(t_{AA}\) and \(\Delta\), we have calculated the values of \(U\) that correspond to the charge (\(U_{c}\)) and spin (\(U_{s}\)) transitions using the MCEL for all even numbers of sites \(L\) in the range \(6\leq L\leq 14\). The results were extrapolated to \(L\rightarrow\infty\) using a quadratic polynomial in \(1/L\). Examples of the extrapolation are shown in Fig. 1. The curves fit the data well and the finite-size effects are in general small, except for the charge transition at small values of \(\Delta\). In any case, a deviation of the value of \(U_{c}\) for \(\Delta=0.2\) by up to 20% is very unlikely from the trend of the curve and would not modify our conclusions.
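The extrapolation itself amounts to a simple polynomial fit; a minimal sketch (with placeholder numbers, not the data of the actual calculation) is:

```python
# Quadratic fit in 1/L; the constant term is the L -> infinity value.
import numpy as np

L = np.array([6, 8, 10, 12, 14])
Uc_L = np.array([1.35, 1.18, 1.09, 1.03, 0.99])   # placeholder values of U_c(L)

coeffs = np.polyfit(1.0 / L, Uc_L, deg=2)          # highest power first
Uc_infinity = coeffs[-1]                           # value at 1/L -> 0
print(f"extrapolated U_c = {Uc_infinity:.3f}")
```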
## III Results
In Fig. 2, we compare the phase diagram of the standard IHM with that in which the hopping terms that do not alter the total number of singly occupied sites, \(t_{AA}=t_{BB}\), are reduced. For fixed \(\Delta\) the system is a BI for low \(U\) and a MI for large \(U\). Both phases are separated by a narrow region of the SDI phase. Increasing \(U\), the charge transition at \(U=U_{c}\) (with a jump in \(\gamma_{c}\) from 0 to \(\pi\)[12]) corresponds to the change from the BI to the SDI, and at the spin transition for \(U=U_{s}\) (with a jump in \(\gamma_{s}\) from 0 to \(\pi\)[12]) the SDI changes to the MI.
For \(\Delta\gtrsim 3t_{AB}\) the width of the SDI phase is of the order of a fraction of \(t_{AB}\). Naturally, keeping the three hopping terms equal, \(t_{\alpha\beta}=t\), and reducing \(t\), the SDI phase shrinks and both \(U_{c},U_{s}\to 2\Delta\) for \(t\to 0\). It is therefore noticeable that, reducing only \(t_{AA}=t_{BB}\), the extension of the SDI phase is _increased_, by about 20% for \(\Delta>3t_{AB}\).
As is apparent in Fig. 3, this effect is more dramatic for \(\Delta<0.5t_{AB}\). In fact, contrary to the case of equal \(t_{\alpha\beta}=t\), there is a finite spin gap for small \(U\) even at \(\Delta=0\) when \(t_{AB}>t_{AA}=t_{BB}\). This result has been found before [18; 19; 20; 21] and can be understood from analytical calculations using bosonization [18; 19], which coincide very well with numerical calculations [20] for small values of \(U\).
The particular features of the phase diagram for small \(\Delta\) render it possible to perform time evolutions around a critical point for the charge transition that transport a quantized unit of charge per cycle, with open charge and spin gaps in the whole cycle. For example, for \(\Delta=0.3\), \(t_{AB}=1\), and \(t_{AA}=t_{BB}=0.5\), we find \(U_{c}=0.91\) and \(U_{s}=2.14\). Similarly, for \(\Delta=0.4\) we find \(U_{c}=1.22\) and \(U_{s}=2.38\). Performing a time-dependent cycle in either the \((\delta,\Delta)\) or the \((\delta,U)\) plane, with center at the charge critical point (with \(\delta=0\)) and amplitude in \(\Delta\) of about \(\pm 0.25\) or in \(U\) near \(\pm 0.5\), the cycle never reaches the MI phase and therefore the spin gap is always open. One point that should be taken into account is that the spin transition is of the Kosterlitz-Thouless type, and therefore the spin gap is exponentially small in the SDI phase near the transition boundary [20]. Therefore it might be convenient to move the time cycle away from the MI-SDI boundary, keeping the critical point inside it.
In the previous figures we have taken \(t_{AA}=t_{BB}=t_{AB}/2\). In Fig. 4 we show how the values of \(U\) at both transitions change with \(t_{AA}=t_{BB}\) for a small value of \(\Delta\).
Figure 1: (Color online) Critical values of \(U\) for the charge and spin transitions for \(t_{AB}=1\), and other parameters indicated inside each figure.
Figure 3: Same as Fig. 2 in a smaller region of \(\Delta\).
Figure 2: (Color online) Phase diagram of the IHM with DDH in the \(\Delta,U\) plane for \(t_{AB}=1\), and two values of \(t_{AA}=t_{BB}\). The region between the full and dashed lines corresponds to the SDI.
We can see that the change is more rapid for \(t_{AA}\) near \(t_{AB}\) and the increase in \(U_{s}\) is already large for \(t_{AA}/t_{AB}=3/4\). Note also that when \(t_{AA}/t_{AB}\) exceeds 1 by a significant amount, \(U_{c}\) becomes larger than \(U_{s}\), giving rise to a new phase in between. The properties of this phase are beyond the scope of the present work. For \(\Delta=0\), this phase corresponds to a Tomonaga-Luttinger liquid with triplet superconducting and bond spin-density wave correlations dominating at large distances [18; 19; 20], but \(\Delta\) is a relevant perturbation that modifies the physics.
## IV Summary and discussion
We have calculated the quantum phase diagram of the IHM including electron-hole symmetric DDH, which corresponds to Eq. (1) with \(\delta=0\) and \(t_{AA}=t_{BB}<t_{AB}\), using the MCEL in rings of up to 14 sites. We find that a reduction of \(t_{AA}=t_{BB}\) with respect to \(t_{AB}\) increases the region of the phase diagram occupied by the fully gapped SDI phase, particularly for \(|\Delta|<t_{AB}\) and \(U<2t_{AB}\).
This result is of possible relevance to experiments with cold atoms for which quantized pumping is observed, but crossing the spin gapless MI phase leads to oscillation and the breakdown of topological pumping after the first cycle. Floquet engineering renders it possible to achieve the region \(t_{AA}=t_{BB}<t_{AB}\) and enlarge the region of the gapped SDI phase. To confirm the possibilities of this proposal, it would be useful to calculate the spin gap and the internal gap between even and odd singlets in the SDI phase. This would require a study of longer chains using density-matrix renormalization group. It would also be useful to simulate the time dependence in pumping cycles similar to the ones suggested here, using infinite time-evolving block decimation.
###### Acknowledgements.
We thank Konrad Viebahn, Eric Bertok and Fabian Heidrich-Meisner for useful discussions. AAA acknowledges financial support provided by PICT 2017-2726 and PICT 2018-01546 of the ANPCyT, Argentina.
|
2306.12578 | Tree-level UV completions for $N_R$SMEFT $d=6$ and $d=7$ operators | We study ultra-violet completions for operators in standard model effective
field theory extended with right-handed neutrinos ($N_R$SMEFT). Using a
diagrammatic method, we generate systematically lists of possible tree-level
completions involving scalars, fermions or vectors for all operators at $d=6$
and $d=7$, which contain at least one right-handed neutrino. We compare our
lists of possible UV models to the ones found for pure SMEFT. We also discuss
how the observation of LNV processes via $N_R$SMEFT operators at the LHC can be
related to Majorana neutrino masses of the standard model neutrinos. | Rebeca Beltrán, Ricardo Cepedello, Martin Hirsch | 2023-06-21T21:27:22Z | http://arxiv.org/abs/2306.12578v1 | # Tree-level UV completions for \(N_{R}\)SMEFT \(d=6\) and \(d=7\) operators
###### Abstract
We study ultra-violet completions for operators in standard model effective field theory extended with right-handed neutrinos (\(N_{R}\)SMEFT). Using a diagrammatic method, we generate systematically lists of possible tree-level completions involving scalars, fermions or vectors for all operators at \(d=6\) and \(d=7\), which contain at least one right-handed neutrino. We compare our lists of possible UV models to the ones found for pure SMEFT. We also discuss how the observation of LNV processes via \(N_{R}\)SMEFT operators at the LHC can be related to Majorana neutrino masses of the standard model neutrinos.
SMEFT, UV completions, right-handed neutrinos
## 1 Introduction
Experimental searches for heavy neutral leptons (HNLs) have gained a lot of momentum in the past few years, for recent reviews see for example [1; 2]. Minimal models of HNLs assume only that some nearly singlet fermion exists with a small coupling to gauge bosons. This setup is motivated experimentally as it gives a simple two parameter extension (one mixing angle and one mass) of the standard model (SM), describing the experimental sensitivity, which can then be easily compared among different searches.
At the same time, there has also been a revival of interest in HNLs from theory. Here, the main motivation is usually the connection HNLs might have with the non-zero neutrino masses observed in oscillation experiments. In the minimal type-I seesaw [3; 4; 5; 6; 7], Majorana right-handed neutrinos (\(N_{R}\)'s) have only one coupling to SM particles: The Yukawa coupling to leptons and the Higgs field. After electro-weak symmetry breaking, this setup leads to Majorana neutrino masses for the active neutrinos and HNL states with a coupling suppressed by a small mixing angle, typically \(|V|^{2}\propto m_{\nu}/M_{M}\), where \(M_{M}\) is the mass of the right-handed neutrino.
Compared to the expected experimental sensitivities this mixing is quite small, but prospects are much better in many non-minimal models: the inverse [8] and linear seesaw [9; 10], as well as models with a new \(Z^{\prime}\) [11; 12] or leptoquarks [13], to mention a few examples. However, given the large number of possible BSM extensions involving \(N_{R}\)'s and the absence (so far) of any BSM physics at the LHC, a more practical ansatz to parametrise the phenomenology
of \(N_{R}\)'s is to use effective field theory, in particular \(N_{R}\)SMEFT, i.e. standard model effective theory extended with right-handed neutrinos.
The operator basis for \(N_{R}\)SMEFT is now known up to \(d=9\) [14]. Lower dimensional operators have been studied earlier in a number of papers, for example \(d=5\) [15; 16; 17], \(d=6\) [15; 18; 19] and \(d=7\) [20; 21]. The phenomenology of \(N_{R}\)SMEFT is also a very active area of research. Ref. [22] studied constraints on the four-fermion operators involving \(N_{R}\), assuming a stable \(N_{R}\). For promptly decaying \(N_{R}\)'s see [23]; other decay modes have been studied in [24; 25]. \(N_{R}\)'s as long-lived particles at the LHC in \(N_{R}\)SMEFT have been discussed in [26; 27; 28; 29]. The very recent paper [30] gives a reinterpretation of many previous HNL searches in terms of \(N_{R}\)SMEFT operators. Further collider studies of \(N_{R}\)SMEFT include Refs. [31; 32; 33; 34; 35].
While different UV models for \(N_{R}\)SMEFT operators at \(d=5\) and \(d=6\) have been mentioned in the literature, what is still lacking is a systematic decomposition of all \(d=6\) and \(d=7\) operators, i.e. an attempt to give the complete list of one and two particle extensions of the SM, that can generate the complete set of operators in the ultra-violet. To provide such a systematic particle "dictionary" constitutes the basic motivation for our current work. In this context we need to mention [36], which has some overlap with our paper. In [36] a list of leptoquark states for \(N_{R}\)SMEFT operators at \(d=6\) has been derived and we agree with these results. Also important for us is [37]. Here, the authors have presented the complete dictionary of one field extensions of the SM particle content for pure SMEFT at \(d=6\). We will compare our results to [37] and comment on the differences between \(N_{R}\)SMEFT and SMEFT dictionaries in section 2.2.
The list of \(d=7\) operators and their tree-level completions is presented in section 2.3. At \(d=7\) level, all operators with \(N_{R}\)'s violate lepton number by two units.1 Lepton number violation (LNV) in two units should always generate Majorana neutrino masses for the active neutrinos of the SM. The most famous example of this connection is the so-called "black-box theorem" [38], where it was shown that the observation of neutrinoless double beta decay (\(0\nu\beta\beta\) decay) guarantees that Majorana neutrino masses are generated at some level in perturbation theory. Similarly, if LNV is observed in a process involving \(N_{R}\)'s at the LHC, the active neutrinos must have Majorana masses. We will discuss this in detail in section 3.
Footnote 1: The \(N_{R}\)SMEFT dictionary is not only a direct consequence of the \(N_{R}\)SMEFT dictionary.
The rest of this paper is organised as follows. In the next section, we first discuss the basics of the diagrammatic method. We then give the list of \(d=6\) operators and their decomposition in section 2.2 and for \(d=7\) operators in 2.3. Baryon number violating operators are discussed separately in 2.4. Section 3 is then devoted to a discussion of LNV in \(N_{R}\)SMEFT, before we close with a short summary. In the appendix we provide Lagrangians for all the UV models we discussed in the main text.
## 2 Decompositions
In this section, we introduce the diagrammatic method used to systematically decompose \(N_{R}\)SMEFT operators at tree-level. The same method has been used for studying the Weinberg operator at different loop orders in [39; 40; 41; 42] and for 1-loop openings of SMEFT four-fermion operators in [43; 44]. We will therefore be brief in this description.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
Name & \(\mathcal{S}\) & \(\mathcal{S}_{1}\) & \(\varphi\) & \(\Xi\) & \(\Xi_{1}\) \\
Irrep & \((1,1,0)\) & \((1,1,1)\) & \(\left(1,2,\frac{1}{2}\right)\) & \((1,3,0)\) & \((1,3,1)\) \\
\(d=6\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) & \(\circ\) \\ \hline \hline
Name & \(\omega_{1}\) & \(\omega_{2}\) & \(\Pi_{1}\) & \(\Pi_{7}\) & \(\zeta\) \\
Irrep & \(\left(3,1,-\frac{1}{3}\right)\) & \(\left(3,1,\frac{2}{3}\right)\) & \(\left(3,2,\frac{1}{6}\right)\) & \(\left(3,2,\frac{7}{6}\right)\) & \(\left(3,3,-\frac{1}{3}\right)\) \\
\(d=6\) & \(\circ\) & \(\circ\) & \(\circ\) & & \\ \hline \hline
\end{tabular}
\end{table}
Table 1: New scalars contributing to \(d=6\) and \(d=7\) \(N_{R}\)SMEFT operators at tree-level. Only fields marked with a circle contribute to \(d=6\), the remaining ones appear in models for \(d=7\) operators. Field names follow the conventions of [37].
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
Name & \(\mathcal{N}\) & \(E\) & \(\Delta_{1}\) & \(\Delta_{3}\) & \(\Sigma\) & \(\Sigma_{1}\) & \\
Irrep & \((1,1,0)\) & \((1,1,-1)\) & \(\left(1,2,-\frac{1}{2}\right)\) & \(\left(1,2,-\frac{3}{2}\right)\) & \((1,3,0)\) & \((1,3,-1)\) & \\
\(d=6\) & \(\circ\) & & \(\circ\) & & \(\circ\) & \(\circ\) & \\ \hline \hline
Name & \(U\) & \(D\) & \(Q_{1}\) & \(Q_{5}\) & \(Q_{7}\) & \(T_{1}\) & \(T_{2}\) \\
Irrep & \(\left(3,1,\frac{2}{3}\right)\) & \(\left(3,1,-\frac{1}{3}\right)\) & \(\left(3,2,\frac{1}{6}\right)\) & \(\left(3,2,-\frac{5}{6}\right)\) & \(\left(3,2,\frac{7}{6}\right)\) & \(\left(3,3,-\frac{1}{3}\right)\) & \(\left(3,3,\frac{2}{3}\right)\) \\
\(d=6\) & & & & & & & \\ \hline \hline
\end{tabular}
\end{table}
Table 2: New vector-like fermions contributing to \(d=6\) and \(d=7\) \(N_{R}\)SMEFT operators at tree-level. Circles mark the fields that contribute already at \(d=6\); the remaining ones appear only in models for \(d=7\) operators.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
Name & \(\mathcal{B}\) & \(\mathcal{B}_{1}\) & \(\mathcal{W}\) & \(\mathcal{W}_{1}\) & \(\mathcal{L}_{1}\) & \(\mathcal{L}_{3}\) \\
Irrep & \((1,1,0)\) & \((1,1,1)\) & \((1,3,0)\) & \((1,3,1)\) & \(\left(1,2,\frac{1}{2}\right)\) & \(\left(1,2,-\frac{3}{2}\right)\) \\
\(d=6\) & \(\circ\) & \(\circ\) & & & \(\circ\) & \\ \hline \hline
Name & \(\mathcal{U}_{1}\) & \(\mathcal{U}_{2}\) & \(\mathcal{Q}_{1}\) & \(\mathcal{Q}_{5}\) & \(\mathcal{X}\) & \\
Irrep & \(\left(3,1,-\frac{1}{3}\right)\) & \(\left(3,1,\frac{2}{3}\right)\) & \(\left(3,2,\frac{1}{6}\right)\) & \(\left(3,2,-\frac{5}{6}\right)\) & \(\left(3,3,\frac{2}{3}\right)\) & \\
\(d=6\) & \(\circ\) & \(\circ\) & \(\circ\) & & & \\ \hline \hline
\end{tabular}
\end{table}
Table 3: New vectors contributing to \(d=6\) and \(d=7\) \(N_{R}\)SMEFT operators at tree-level. Circles mark the fields that contribute already at \(d=6\).
Subsequently, we apply the diagrammatic method to \(N_{R}\)SMEFT operators which include at least one \(N_{R}\), at both \(d=6\) (section 2.2) and \(d=7\) (section 2.3). Operators at \(d=6\) and \(d=7\) that violate baryon number are discussed separately in section 2.4.
### Diagrammatic method basics
The procedure of _opening up_ or _exploding_2 EFT operators can be summarized in three main steps. First, given an effective operator with \(n\) light fields, one constructs all possible _topologies_ that can generate the operator at some loop order. A topology is made up of \(n\) external legs connected through internal lines using only renormalisable interactions, i.e. 3- and 4-point interaction vertices. The number of internal lines depends on the topology and on loop order. In our case, we consider only tree-level openings and all internal lines are identified with BSM _heavy_ fields.
Footnote 2: This term was introduced in [45] and refers to the process of expanding an operator into a series of UV renormalisable models generating the initial operator at tree-level.
In the second step, one assigns each light field in the operator to an external leg of the topology, and Lorentz invariance fixes the Lorentz nature of the internal lines to either scalar (\(S\)), fermion (\(F\)) or vector (\(V\)). All possible permutations of the external light fields give rise to different combinations for the internal fields. At this stage, the output consists of a set of _diagrams_ where every line has definite Lorentz nature.
Finally, by imposing gauge invariance, in our case the SM symmetry \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\), in all the interaction vertices, one determines the heavy fields' quantum numbers, which are uniquely fixed for tree-level openings. The diagrams are then promoted to _model diagrams_ and the lists of particles that can be accommodated as internal lines constitute our _models_.
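To make the last step concrete, the following minimal sketch shows how gauge invariance fixes the quantum numbers of a heavy field attached to a renormalisable 3-point vertex with two light fields: the hypercharge is fixed uniquely, while the possible \(SU(2)\) and \(SU(3)\) representations follow from the tensor product of the light-field representations. The snippet is purely illustrative (written in Python, it is not the Mathematica implementation mentioned below) and covers only the group-theory bookkeeping, not the construction of topologies.

```python
from fractions import Fraction

# Light SM fields as (SU(3) rep, SU(2) dimension, hypercharge Y); conventions as in tables 1-3.
FIELDS = {
    "L":  ("1", 2, Fraction(-1, 2)),  "eR": ("1", 1, Fraction(-1)),    "NR": ("1", 1, Fraction(0)),
    "Q":  ("3", 2, Fraction(1, 6)),   "uR": ("3", 1, Fraction(2, 3)),  "dR": ("3", 1, Fraction(-1, 3)),
    "H":  ("1", 2, Fraction(1, 2)),
}

def conj(name):
    """Quantum numbers of the conjugate field: 3 <-> 3bar, Y -> -Y (SU(2) dims are self-conjugate)."""
    c, i, y = FIELDS[name]
    return ({"1": "1", "3": "3b", "3b": "3"}[c], i, -y)

def su2_prod(d1, d2):
    """SU(2) irrep dimensions appearing in the product d1 x d2."""
    return list(range(abs(d1 - d2) + 1, d1 + d2, 2))

def su3_prod(c1, c2):
    """Minimal SU(3) product lookup, sufficient for the light-field pairs used here."""
    table = {("1", "1"): ["1"], ("1", "3"): ["3"], ("1", "3b"): ["3b"],
             ("3", "3"): ["3b", "6"], ("3", "3b"): ["1", "8"], ("3b", "3b"): ["3", "6b"]}
    return table[tuple(sorted((c1, c2)))]

def heavy_candidates(f1, f2):
    """Possible (SU(3), SU(2), Y) of a heavy field Phi making the 3-point vertex f1 f2 Phi gauge invariant."""
    (c1, i1, y1), (c2, i2, y2) = f1, f2
    return [(c, i, -(y1 + y2)) for c in su3_prod(c1, c2) for i in su2_prod(i1, i2)]

# The bilinear (L-bar N_R): the attached heavy scalar must be (1, 2, -1/2), i.e. a second
# Higgs-doublet-like field (phi up to conjugation), as in the openings of O_LNH^3.
print(heavy_candidates(conj("L"), FIELDS["NR"]))   # [('1', 2, Fraction(-1, 2))]

# The bilinear (Q-bar d_R): gauge invariance at this vertex alone allows (1, 2, 1/2) and (8, 2, 1/2);
# only the colour singlet survives once the second vertex, e.g. (L-bar N_R), is imposed as well.
print(heavy_candidates(conj("Q"), FIELDS["dR"]))   # [('1', 2, Fraction(1, 2)), ('8', 2, Fraction(1, 2))]
```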
In summary, for each effective operator of dimension \(d\), the output of the diagrammatic process is a set of models consisting of heavy BSM particles, each of which gives a tree-level opening of the operator. We collect in tables 1 - 3 all the BSM fields that appear in the model diagrams found for \(N_{R}\)SMEFT \(d=6\) and \(d=7\) operators, classified according to their Lorentz nature. Fields contributing to \(d=6\) operators are marked, the remaining fields appear only in models for \(d=7\) operators.
Some additional comments are in order. We have implemented the described method in a Mathematica code. The input for this code is the list of fields and the number of derivatives contained in the considered operator. Neither the Lorentz structure nor the field contractions have to be specified. As a consequence, in cases where the basis allows more than one operator with the same field content (and number of derivatives), the method yields a list of model diagrams, each of which will contribute to at least one operator in this set, but it does not provide the information to which specific operator (or operators) the model will be matched.
We choose to do so, because the model lists found in this way are complete, once one scans over all possible operators at a given level of \(d\). After this has been done, many models are found more than once, reflecting the fact that in most cases BSM extensions will generate not only one specific operator, but contribute to several operators, when the
correct matching for the model is calculated. Matching of tree-level models (as we consider here) could of course easily be calculated "by hand", but recently the code Matchete[46] has been published, with which the matching can be done also automatically.
Our code does not fix the operator list to any particular on-shell basis. In consequence, we do not consider diagrams with light bridges. Once the operator list, to which any given model is matched, is reduced to the on-shell basis, operators containing diagrams with light bridges will be automatically included. It is important to note, however, that diagrams with light bridges do not present new models, i.e. it is guaranteed that with the diagrammatic method, as discussed here, no models are lost.
For tables 1 - 3, we follow the naming conventions of [37]. Most fields in our list of possible BSM particles have already appeared in that reference in a different context. Note that (i) \(\Xi_{1}\) in neutrino physics is usually denoted with the symbol \(\Delta\) and (ii) we use \(\mathcal{N}\) for the heavy fermion \(F(1,1,0)\) to distinguish it from the light \(N_{R}\). However, there are two special cases, the vectors \(\mathcal{L}_{1}\) and \(\mathcal{U}_{1}\). The latter does not appear in pure SMEFT at \(d=6\)[37], but this field is listed in [36].
On the other hand, \(\mathcal{L}_{1}\) is more subtle. One can choose to treat vector fields as either gauge vectors or not. For general vector fields, one can write down a general list of interactions obeying simply Lorentz and gauge symmetries of the model under consideration. However, for heavy gauge vectors a certain set of these interactions is not allowed, see the discussion in [47]. In pure SMEFT at \(d=6\), \(\mathcal{L}_{1}\) can contribute to the matching only if the interaction term \(\mathcal{L}_{1,\mu}^{\dagger}D^{\mu}H\) is included [37]. Such a term is not allowed for a gauge vector [47]. However, in \(N_{R}\)SMEFT a \(\mathcal{L}_{1}\) will contribute to the matching, even if it is forced to be a gauge vector. We note that while we do not give the list of possible gauge groups for the vectors in table 3, \(\mathcal{L}_{1}\) appears, for example, in 331 models [48; 49; 50; 51].
Finally, all heavy fermions appearing in the models are considered to be vector-like under the SM gauge group, even though just one of the chiralities might be needed to open up an operator. This assumption is motivated by experimental data. The observation of a SM-like Higgs boson at ATLAS [52] and CMS [53] rules out the existence of a fourth chiral generation (in the minimal SM).
### Dictionary for \(d=6\)
In table 4 we show the list of operators at \(d=6\) that can be generated at tree-level. The operators are classified into three classes: \(\psi^{2}H^{3}\), \(\psi^{2}H^{2}D\), and \(\psi^{4}\).3 At \(d=6\) there is a fourth operator class, \(\psi^{2}HX\), whose operators can only be realised at loop level and hence are not listed here. For a complete basis of on-shell operators at \(d=6\), see [21].
Footnote 3: The full list of four-fermion operators includes two additional operators that violate baryon number (\(B\)); these are discussed separately in section 2.4.
Figure 1 shows all possible diagrams that generate the three operator classes mentioned above. The first class, \(\psi^{2}H^{3}\), can be obtained through two different topologies, one of which contains two heavy fields (depending on their Lorentz nature we find three diagrams corresponding to the possibilities: \(SS\), \(FF\), \(SF\)), leading in most cases to two-particle extensions of the SM. The other topology includes a 4-point interaction vertex and
it contains just one heavy scalar field. The four-fermion class, \(\psi^{4}\), can only be generated at tree-level with a scalar or a vector heavy propagator, depending on the chirality of the external fields. Finally, there are two openings for the operator class \(\psi^{2}H^{2}D\) with just one internal field, one of which contains the derivative in the \(VSS\) interaction vertex, and in the other, the derivative comes from the fermion propagator. Thus, all operators in table 4 can be generated at tree-level with one-particle SM extensions, except for the operator \(\mathcal{O}_{LNH^{3}}\) belonging to \(\psi^{2}H^{3}\), for which some openings require two-particle models.
Here, our approach differs from the philosophy followed in [37], since we insist on using only renormalisable vertices. The authors of [37], on the other hand, consider only one particle extensions of the SM. However, already at \(d=6\) in pure SMEFT there are some operators which require two BSM fields. The list of decompositions given in [37] for pure SMEFT is nevertheless complete, since [37] add also non-renormalisable operator (NRO) interactions at \(d=5\) to their Lagrangian.4
Footnote 4: Our Mathematica code can handle also non-renormalisable interactions. We have checked for some concrete cases that we can reproduce the one-heavy-particle at a time results of [37], once we include \(d=5\) terms.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{\(\psi^{2}H^{3}\) (+h.c.)} & \multicolumn{2}{|c|}{(\(\overline{R}R\))(\(\overline{R}R\))} & \multicolumn{2}{|c|}{(\(\overline{L}L\))(\(\overline{R}R\))} \\ \hline
\(\mathcal{O}_{LNH^{3}}\) & \((\overline{L}N_{R})\tilde{H}(H^{\dagger}H)\) & \(\mathcal{O}_{NN}\) & \((\overline{N_{R}}\gamma^{\mu}N_{R})(\overline{N_{R}}\gamma_{\mu}N_{R})\) & \(\mathcal{O}_{LN}\) & \((\overline{L}\gamma^{\mu}L)(\overline{N_{R}}\gamma_{\mu}N_{R})\) \\
\multicolumn{2}{|c|}{\(\psi^{2}H^{2}D\) (+h.c.)} & \(\mathcal{O}_{eN}\) & \((\overline{e}_{R}\gamma^{\mu}e_{R})(\overline{N_{R}}\gamma_{\mu}N_{R})\) & \(\mathcal{O}_{QN}\) & \((\overline{Q}\gamma^{\mu}Q)(\overline{N_{R}}\gamma_{\mu}N_{R})\) \\ \hline
\(\mathcal{O}_{NH^{2}D}\) & \((\overline{N_{R}}\gamma^{\mu}N_{R})(H^{\dagger}i\overleftrightarrow{D_{\mu}}H)\) & \(\mathcal{O}_{uN}\) & \((\overline{u}_{R}\gamma^{\mu}u_{R})(\overline{N_{R}}\gamma_{\mu}N_{R})\) & \((\overline{L}R)(\overline{L}R)\) (+h.c.) \\
\(\mathcal{O}_{NeH^{2}D}\) & \((\overline{N_{R}}\gamma^{\mu}e_{R})(\tilde{H}^{\dagger}iD_{\mu}H)\) & \(\mathcal{O}_{dN}\) & \((\overline{d}_{R}\gamma^{\mu}d_{R})(\overline{N_{R}}\gamma_{\mu}N_{R})\) & \(\mathcal{O}_{LNLe}\) & \((\overline{L}N_{R})\epsilon(\overline{L}e_{R})\) \\ \hline
\((\overline{L}R)(\overline{R}L)\) (+h.c.) & \(\mathcal{O}_{duNe}\) & \((\overline{d}_{R}\gamma^{\mu}u_{R})(\overline{N_{R}}\gamma_{\mu}e_{R})\) & \(\mathcal{O}_{LNQd}\) & \((\overline{L}N_{R})\epsilon(\overline{Q}d_{R})\) \\ \hline
\(\mathcal{O}_{QuNL}\) & \((\overline{Q}u_{R})(\overline{N_{R}}L)\) & \(\mathcal{O}_{NNNN}\) & \((\overline{N_{R}^{c}}N_{R})(\overline{N_{R}^{c}}N_{R})\) & \(\mathcal{O}_{LdQN}\) & \((\overline{L}d_{R})\epsilon(\overline{Q}N_{R})\) \\ \hline \end{tabular}
\end{table}
Table 4: \(N_{R}\)SMEFT operators at \(d=6\) which can be generated at tree-level. Baryon number violating operators are discussed separately in section 2.4.
Figure 1: Operator classes at \(d=6\) and their respective tree openings at the diagram level. Solid lines correspond to fermions, dashed lines to scalars and wavy lines to vectors. Red lines are for heavy fields.
We present in table 5 the list of one-particle models found at \(d=6\) and the corresponding operators they open at tree-level, while the two-particle models for \(\mathcal{O}_{LNH^{3}}\) are shown separately in table 6, where we categorize the models based on the Lorentz nature of the involved fields. We note that the new field \(\mathcal{U}_{1}\) contributes to only one \(d=6\) operator in \(N_{R}\)SMEFT, \(\mathcal{O}_{dN}\). This is due to the existence of a single interaction vertex involving light fields and the heavy vector, given by \(\mathcal{L}\propto\left(\overline{N_{R}}\gamma_{\mu}d_{R}\right)\mathcal{U}_{1 }^{\mu\dagger}\).
The Lagrangian terms of heavy fields including a \(N_{R}\) are given in appendix A. Furthermore, we also collect there all gauge-invariant renormalisable terms that can be written down for \(\mathcal{U}_{1}\) and the BSM fields appearing in tables 1 - 3, as well as for the vector \(\mathcal{L}_{1}\). Recall that the field \(\mathcal{L}_{1}\) was also included in [37], but as a non-gauge vector. For \(N_{R}\)SMEFT, we find additional renormalisable interactions of this vector. We also present them in appendix A. The term containing solely light fields and \(\mathcal{L}_{1}\), which contributes to the matching of the
\begin{table}
\begin{tabular}{|c|l l|} \hline \(\psi^{2}H^{3}\) & Two-particle models & \\ \hline \multirow{3}{*}{\(\mathcal{O}_{LNH^{3}}\)} & \(SS:\) & \((\mathcal{S},\varphi)\), \((\Xi_{1},\varphi)\), \((\Xi,\varphi)\) \\ & \(FF:\) & \((\Delta_{1},\mathcal{N})\), \((\Delta_{1},\Sigma_{1})\), \((\Delta_{1},\Sigma)\) \\ & \(FS:\) & \((\mathcal{N},\mathcal{S})\), \((\Delta_{1},\mathcal{S})\), \((\Delta_{1},\Xi_{1})\), \((\Sigma_{1},\Xi_{1})\), \((\Delta_{1},\Xi)\), \((\Sigma,\Xi)\) \\ \hline \end{tabular}
\end{table}
Table 6: Two-particle decompositions for the \(d=6\) operator \(\mathcal{O}_{LNH^{3}}\). There are three types of models according to the Lorentz nature of the heavy fields.
\begin{table}
\begin{tabular}{|c l|} \hline Models & Operators \\ \hline \(\mathcal{S}\) & \(\mathcal{O}_{NN}\), \(\mathcal{O}_{NNNN}\) \\ \(\mathcal{S}_{1}\) & \(\mathcal{O}_{LNLe}\), \(\mathcal{O}_{eN}\) \\ \(\varphi\) & \(\mathcal{O}_{QuNL}\), \(\mathcal{O}_{LNLe}\), \(\mathcal{O}_{LNQd}\), \(\mathcal{O}_{LN}\), \(\mathcal{O}_{LNH^{3}}\) \\ \(\omega_{1}\) & \(\mathcal{O}_{LNQd}\), \(\mathcal{O}_{dN}\), \(\mathcal{O}_{duNe}\) \\ \(\omega_{2}\) & \(\mathcal{O}_{uN}\) \\ \(\Pi_{1}\) & \(\mathcal{O}_{LNQd}\), \(\mathcal{O}_{QN}\) \\ \hline \(\Delta_{1}\) & \(\mathcal{O}_{NH^{2}D}\), \(\mathcal{O}_{NeH^{2}D}\) \\ \hline \(\mathcal{B}\) & \(\mathcal{O}_{NH^{2}D}\), \(\mathcal{O}_{NN}\), \(\mathcal{O}_{eN}\), \(\mathcal{O}_{uN}\), \(\mathcal{O}_{dN}\), \(\mathcal{O}_{LN}\), \(\mathcal{O}_{QN}\) \\ \(\mathcal{B}_{1}\) & \(\mathcal{O}_{NeH^{2}D}\), \(\mathcal{O}_{eN}\), \(\mathcal{O}_{duNe}\) \\ \(\mathcal{L}_{1}\) & \(\mathcal{O}_{LN}\) \\ \(\mathcal{U}_{1}\) & \(\mathcal{O}_{dN}\) \\ \(\mathcal{U}_{2}\) & \(\mathcal{O}_{QuNL}\), \(\mathcal{O}_{uN}\), \(\mathcal{O}_{duNe}\) \\ \(\mathcal{Q}_{1}\) & \(\mathcal{O}_{QuNL}\), \(\mathcal{O}_{QN}\) \\ \hline \end{tabular}
\end{table}
Table 5: One-particle decompositions for \(d=6\)\(N_{R}\)SMEFT operators. We differentiate between scalar, fermion and vector models. The first column gives the particle, the second the operators that are generated at tree-level.
operator \(\mathcal{O}_{LN}\), is given by \(\mathcal{L}\propto\left(\overline{N_{R}^{c}}\gamma_{\mu}L\right)\mathcal{L}_{1}^{\mu}\).
Given the complete UV Lagrangian, one could perform the tree-level matching onto the set of \(N_{R}\)SMEFT operators. While we do not perform this matching explicitly, the process can be automated with computer tools such as Matchete [46]. We added a notebook with some example models as an auxiliary file to this paper.
### Dictionary for \(d=7\)
We next discuss the results for \(d=7\) \(N_{R}\)SMEFT operators. There are eight different operator classes, but only five of them can be generated at tree-level. Operators with two fermions containing derivatives or field strength tensors can arise at tree-level only if they have at least two scalars [47]. Operators with four fermions and a derivative cannot be generated at tree-level either, since the potential diagrams with four external legs (see diagrams for class \(\psi^{4}\) in figure 1) do not contain any derivative in the interaction vertices. Hence the classes \(\psi^{2}H^{3}D\), \(\psi^{2}H^{2}D^{2}\) and \(\psi^{2}H^{2}X\) are tree-level generated, whereas \(\psi^{2}HDX\), \(\psi^{4}D\) and \(\psi^{2}X^{2}\) can only be opened via loops. The remaining classes generated at tree-level contain only fermions and scalars: \(\psi^{4}H\) and \(\psi^{2}H^{4}\). All the operators which can be decomposed at tree-level are shown in table 7.5 The complete list of on-shell operators at \(d=7\) can be found, for example, in [21].
Footnote 5: Again, baryon number violating operators, in class \(\psi^{4}H\), are treated separately in section 2.4.
The diagrams generating the listed operators are represented in figure 2, covering all possible configurations for the considered operators in \(N_{R}\)SMEFT. As before, we adopt the convention of using solid, dashed, and wavy lines to denote fermion, scalar, and vector fields, respectively, and black for light fields, while red lines denote heavy fields.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \multicolumn{2}{c|}{\(\psi^{2}H^{3}D\)} & \multicolumn{2}{c|}{\(\psi^{4}H\)} & \multicolumn{1}{c|}{\(\psi^{4}H\)} \\ \hline \multirow{3}{*}{\(\mathcal{O}_{NLH^{3}D}\)} & \(\epsilon_{ij}(\overline{N_{R}^{c}}\gamma_{\mu}L^{i})(iD^{\mu}H^{j})(H^{\dagger}H)\) & \multirow{3}{*}{\(\mathcal{O}_{LNLH}\)} & \multirow{3}{*}{\(\epsilon_{ij}(\overline{L}\gamma_{\mu}L)(\overline{N_{R}^{c}}\gamma^{\mu}L^{i})H^{j}\)} & \multirow{3}{*}{\(\mathcal{O}_{LNeH}\)} & \multirow{3}{*}{\((\overline{L}N_{R})(\overline{N_{R}^{c}}e_{R})H\)} \\ & \(\epsilon_{ij}(\overline{N_{R}^{c}}\gamma_{\mu}L^{i})H^{j}(H^{\dagger}\widehat{ L^{\mu}H})\) & & & \\ \cline{1-1} \cline{3-3} & \(\psi^{2}H^{2}D^{2}\) & & & \(\epsilon_{ij}(\overline{Q}\gamma_{\mu}Q)(\overline{N_{R}^{c}}\gamma^{\mu}L^{i})H ^{j}\) & \(\mathcal{O}_{eLNH}\) & \(H^{\dagger}(\overline{e}\overline{h}L)(\overline{N_{R}^{c}}N_{R})\) \\ \cline{1-1} \cline{3-3} & \(\psi^{2}H^{2}D^{2}\) & & & \(\epsilon_{ij}(\overline{Q}\gamma_{\mu}Q^{i})(\overline{N_{R}^{c}}\gamma^{\mu}L^{ j})H\) & \(\mathcal{O}_{QNdH}\) & \((\overline{Q}N_{R})(\overline{N_{R}^{c}}d_{R})H\) \\ \hline \multirow{3}{*}{\(\mathcal{O}_{NeH^{2}D^{2}}\)} & \(\epsilon_{ij}(\overline{N_{R}^{c}}\gamma_{D^{c}}^{\mu}e_{R})(H^{\dagger}D^{\mu}H ^{j})\) & \multirow{3}{*}{\(\mathcal{O}_{CNLH}\)} & \multirow{3}{*}{\(\epsilon_{ij}(\overline{e}\overline{R}\gamma_{\mu}e_{R})(\overline{N_{R}^{c}} \gamma^{\mu}L^{i})H^{j}\)} & \multirow{3}{*}{\(\mathcal{O}_{dQNH}\)} & \multirow{3}{*}{\(H^{\dagger}(\overline{d}\overline{d}Q)(\overline{N_{R}^{c}}N_{R})\)} \\ & \((\overline{N_{R}^{c}}N_{R})(D_{\mu}H^{\dagger})D^{\mu}H\) & & & \(\epsilon_{ij}(\overline{u}\overline{R}\gamma_{\mu}u_{R})(\overline{N_{R}^{c}} \gamma^{\mu}L^{i})H^{j}\) & \(\mathcal{O}_{uQNH}\) & \(\tilde{H}^{\dagger}(\overline{u}\overline{s}Q)(\overline{N_{R}^{c}}N_{R})\) \\ \hline \multirow{3}{*}{\(\mathcal{O}_{NH^{2}W}\)} & \(\psi^{2}H^{2}X\) & \multirow{3}{*}{\(\mathcal{O}_{duNLH}\)} & \multirow{3}{*}{\(\epsilon_{ij}(\overline{d}\overline{n}\gamma_{\mu}u_{R})(\overline{N_{R}^{c}} \gamma^{\mu}L^{i})\tilde{H}^{j}\)} & \multirow{3}{*}{\(\mathcal{O}_{LNNH}\)} & \multirow{3}{*}{\((\overline{L}N_{R})(\overline{N_{R}^{c}}N_{R})\tilde{H}\)} \\ & \((\overline{N_{R}^{c}}N_{R})(D_{\mu}H^{\dagger})D^{\mu}H\) & & & \\ \cline{1-1} \cline{3-3} & \(\psi^{2}H^{2}X\) & \multirow{3}{*}{\(\mathcal{O}_{duNLH}\)} & \multirow{3}{*}{\(\epsilon_{ij}(\overline{d}\overline{n}\gamma_{\mu}u_{R})(\overline{N_{R}^{c}} \gamma^{\mu}L^{i})\tilde{H}^{j}\)} & \multirow{3}{*}{\(\mathcal{O}_{LNNH}\)} & \multirow{3}{*}{\((\overline{L}N_{R})(\overline{N_{R}^{c}}N_{R})\tilde{H}\)} \\ \cline{1-1} \cline{3-3} & \(\mathcal{O}_{NeH^{2}W}\) & \((\epsilon\tau^{I})_{ij}(\overline{N_{R}^{c}}\sigma^{\mu\nu}e_{R})(H^{\dagger}H ^{j})W_{\mu\nu}^{I}\) & & \(\epsilon_{ij}(\overline{d}\overline{s}Q^{i})(\overline{N_{R}^{c}}e_{R})H^{j}\) & \(\mathcal{O}_{NLNH}\) & \(\tilde{H}^{\dagger}(\overline{N_{R}}L)(\overline{N_{R}^{c}}N_{R})\) \\ \cline{1-1} \cline{3-3} & \(\mathcal{O}_{NH^{2}B}\) & \((\overline{N_{R}^{c}}\sigma^{\mu\nu}N_{R})(H^{\dagger}H)B_{\mu\nu}\) & & \(\mathcal{O}_{QuNeH}\) & & \(\psi^{2}H^{4}\) \\ \cline{1-1} \cline{3-3} & \(\mathcal{O}_{NH^{2}W}\) & \((\overline{N_{R}^{c}}\sigma^{\mu\nu}N_{R})(H^{\dagger}\tau^{I}H)W_{\mu\nu}^{I}\) & & \((\overline{Q}\sigma_{\mu\nu}u_{R})(\overline{N_{R}^{c}}\sigma^{\mu\nu}e_{R})H\) & \(\mathcal{O}_{NH^{4}}\) & \((\overline{N_{R}^{c}}N_{R})(H^{\dagger}H)^{2}\) \\ \hline \end{tabular}
\end{table}
Table 7: \(N_{R}\)SMEFT operators at \(d=7\) that admit tree-level decompositions. All operators are non-hermitian, (+h.c.) is implicitly assumed.
In most diagrams, there are now two or even three heavy propagators. However, the operator class \(\psi^{2}H^{2}D^{2}\) can be generated with just one heavy propagator. Consequently, at \(d=7\), we encounter models with up to three new particles. Regarding the Lorentz nature of the heavy fields appearing in the different models, we find that for the operators belonging to the \(\psi^{2}H^{2}X\) class only heavy fermions are allowed, and for the ones in \(\psi^{2}H^{2}D^{2}\) and \(\psi^{2}H^{4}\) only scalars and fermions can be drawn as internal lines. In the remaining classes, \(\psi^{2}H^{3}D\) and \(\psi^{4}H\), the models consist of a pair of fields that are different combinations of scalars, fermions and vectors.
As the number of internal lines increases in the opening of diagrams of higher dimensional operators, so does the possible number of models. In total we count 224 UV-decompositions for the \(d=7\) operators in table 7, but the same particle content can give rise to more than one \(d=7\) operator in many cases. The total number of different particle combinations for the listed operators is then found to be 112. We present the results for each operator individually in tables 8 - 9, classifying the models by the number of BSM particles they contain (one, two or three). Additionally, we group the models based on the Lorentz nature of the heavy propagators depicted in the diagrams.
Among these models, there are only two one-particle models: the scalar \(\mathcal{S}\) and the fermion \(\Delta_{1}\). The number of two-particle models increases to 102, consisting of 42 \(FS\)
Figure 2: Diagrams generating different \(d=7\) operator classes at tree-level. External light fields appear as black lines and heavy propagators are marked in red.
models and 45 \(FV\) models. The remaining models include three \(FF\), nine \(SS\), and three \(SV\) configurations. As we indicated in figure 2, we did not find any \(VV\) model. Finally, there are eight three-particle models, which generate the operator \(\mathcal{O}_{NH^{4}}\). It is worth noting that subsets of particles within the three-particle models can also generate other operators.
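The bookkeeping behind these counts is straightforward; the following sketch illustrates it on a small sample, using the model lists of \(\mathcal{O}_{dQNH}\) and \(\mathcal{O}_{uQNH}\) from table 9. Heavy fermions and heavy vectors that share a symbol, such as \(Q_{1}\) and \(\mathcal{Q}_{1}\), are distinguished by a suffix; the snippet is illustrative only and not part of the scan itself.

```python
# Count decompositions per operator versus distinct heavy-particle combinations.
# The two lists below are copied from table 9 (O_dQNH and O_uQNH).
models = {
    "O_dQNH": [("S", "phi"), ("Delta1", "phi"), ("Q1_F", "S"), ("D", "S"),
               ("Delta1", "U1_V"), ("Delta1", "Q1_V"), ("D", "U1_V"), ("Q1_F", "Q1_V")],
    "O_uQNH": [("S", "phi"), ("Delta1", "phi"), ("U", "S"), ("Q1_F", "S"),
               ("Q1_F", "Q1_V"), ("U", "U2_V"), ("Delta1", "Q1_V"), ("Delta1", "U2_V")],
}

total = sum(len(v) for v in models.values())                            # decompositions, counted per operator
distinct = {frozenset(combo) for v in models.values() for combo in v}   # unique particle content

print(f"{total} decompositions, {len(distinct)} distinct particle combinations")
# -> 16 decompositions, 11 distinct particle combinations for this two-operator sample;
#    the full d=7 scan gives 224 and 112, respectively.
```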
\begin{table}
\begin{tabular}{|c|l|l|l|} \hline \(\psi^{2}H^{2}D^{2}\) & Models & \(\psi^{2}H^{2}X\) & Models \\ \hline \(\mathcal{O}_{NeH^{2}D^{2}}\) & \(F\ :\ \ \Delta_{1}\) & \(\mathcal{O}_{NeH^{2}W}\) & \(F\ :\ \ \Delta_{1}\) \\ \hline \(\mathcal{O}_{NH^{2}D^{2}}\) & \(S\ :\ \ \mathcal{S}\) & \(\mathcal{O}_{NH^{2}B}\) & \(F\ :\ \ \Delta_{1}\) \\ \cline{2-4} \(\mathcal{O}_{NH^{2}D^{2}}\) & \(F\ :\ \ \Delta_{1}\) & \(\mathcal{O}_{NH^{2}W}\) & \(F\ :\ \ \Delta_{1}\) \\ \hline \(\psi^{4}H\) & Models & & \\ \hline \multirow{4}{*}{\(\mathcal{O}_{LNLH}\)} & \(SS\ :\ \ (\mathcal{S}_{1},\varphi)\ (\varphi,\Xi_{1})\) \\ & \(FS\ :\ \ (E,\mathcal{S}_{1})\ (\Sigma_{1},\Xi_{1})\ (\Delta_{1},\mathcal{S}_{1})\ (\Delta_{1},\Xi_{1})\ (\mathcal{N},\varphi)\ (\Sigma,\varphi)\) \\ & \(FV\ :\ \ (\mathcal{N},\mathcal{B})\ (\Sigma,\mathcal{W})\ (\mathcal{N},\mathcal{L}_{1})\ (\Sigma,\mathcal{L}_{1})\ (\Delta_{1},\mathcal{B})\ (\Delta_{1},\mathcal{W})\ (E,\mathcal{L}_{1})\ (\Sigma_{1},\mathcal{L}_{1})\) \\ \hline \multirow{4}{*}{\(\mathcal{O}_{QNLH}\)} & \(SS\ :\ \ (\omega_{1},\Pi_{1})\ (\Pi_{1},\zeta)\) \\ & \(FS\ :\ \ (D,\omega_{1})\ (T_{1},\zeta)\ (\Delta_{1},\omega_{1})\ (\Delta_{1},\zeta)\ (\mathcal{N},\Pi_{1})\ (\Sigma,\Pi_{1})\ (U,\Pi_{1})\ (T_{2},\Pi_{1})\) \\ & \(FV\ :\ \ (U,\mathcal{U}_{2})\ (T_{2},\chi)\ (U,\mathcal{L}_{1})\ (T_{2},\mathcal{L}_{1})\ (\mathcal{N},\mathcal{B})\ (\Sigma,\mathcal{W})\ (\mathcal{N},\mathcal{Q}_{1})\ (\Sigma,\mathcal{Q}_{1})\) \\ & \((\Delta_{1},\mathcal{U}_{2})\ (\Delta_{1},\chi)\ (D,\mathcal{L}_{1})\ (T_{1},\mathcal{L}_{1})\ (\Delta_{1},\mathcal{B})\ (\Delta_{1},\mathcal{W})\ (D,\mathcal{Q}_{1})\ (T_{1}, \mathcal{Q}_{1})\) \\ \hline \multirow{4}{*}{\(\mathcal{O}_{eNLH}\)} & \(SS\ :\ \ (\mathcal{S}_{1},\varphi)\) \\ & \(FS\ :\ \ (\mathcal{N},\mathcal{S}_{1})\ (\Delta_{1},\varphi)\ (\Delta_{3},\mathcal{S}_{1})\) \\ & \(FV\ :\ \ (\mathcal{N},\mathcal{B})\ (\mathcal{N},\mathcal{B}_{1})\ (\Delta_{3},\mathcal{S}_{ 3})\ (\Delta_{3},\mathcal{L}_{1})\ (\Delta_{1},\mathcal{B})\ (\Delta_{1},\mathcal{B})\ (\Delta_{1},\mathcal{L}_{3})\ (\Delta_{1},\mathcal{L}_{1})\) \\ \hline \multirow{4}{*}{\(\mathcal{O}_{dNLH}\)} & \(SS\ :\ \ (\omega_{1},\Pi_{1})\) \\ & \(FS\ :\ \ (Q_{1},\Pi_{1})\ (\Delta_{1},\Pi_{1})\ (Q_{5},\omega_{1})\ (\mathcal{N},\omega_{1})\) \\ & \(FV\ :\ \ (\mathcal{N},\mathcal{B})\ (\mathcal{N},\mathcal{U}_{1})\ (Q_{5},\mathcal{Q}_{5})\ (Q_{5},\mathcal{L}_{1})\ (\Delta_{1},\mathcal{B})\ (Q_{1},\mathcal{U}_{1})\ (\Delta_{1},\mathcal{Q}_{5})\ (Q_{1}, \mathcal{L}_{1})\) \\ \hline \multirow{4}{*}{\(\mathcal{O}_{uNLH}\)} & \(SS\ :\ \ (\omega_{2},\Pi_{7})\) \\ & \(FS\ :\ \ (Q_{7},\Pi_{7})\ (\Delta_{1},\Pi_{7})\ (\mathcal{N},\omega_{2})\ (Q_{1},\omega_{2})\) \\ & \(FV\ :\ \ (Q_{1},\mathcal{Q}_{1})\ (Q_{1},\mathcal{L}_{1})\ (\mathcal{N},\mathcal{B})\ (\mathcal{N},\mathcal{U}_{2})\ (\Delta_{1},\mathcal{Q}_{1})\ (Q_{7},\mathcal{L}_{1})\ (\Delta_{1},\mathcal{B})\ (Q_{7}, \mathcal{U}_{2})\) \\ \hline \multirow{4}{*}{\(\mathcal{O}_{duNLH}\)} & \(SS\ :\ \ (\omega_{2},\Pi_{1})\) \\ & \(FS\ :\ \ (Q_{1},\Pi_{1})\ (\Delta_{1},\Pi_{1})\ (Q_{1},\omega_{2})\ (E,\omega_{2})\) \\ & \(FV\ :\ \ (E,\mathcal{B}_{1})\ (E,\mathcal{U}_{1})\ (Q_{1},\mathcal{Q}_{1})\ (Q_{1},\mathcal{L}_{1})\ (\Delta_{1},\mathcal{B}_{1})\ (Q_{1},\mathcal{U}_{1})\ (\Delta_{1},\mathcal{Q}_{1})\) \\ \hline \multirow{4}{*}{\(\mathcal{O}_{dQNeH}\)} & \(SS\ :\ \ (\mathcal{S}_{1},\varphi)\) \\ & \(FS\ :\ \ (\Delta_{1},\varphi)\ (Q_{5},\mathcal{S}_{1})\ (U,\mathcal{S}_{1})\) \\ & \(FV\ :\ \ (U,\mathcal{U}_{2})\ (U,\mathcal{U}_{1})\ 
(Q_{5},\mathcal{Q}_{5})\ (Q_{5},\mathcal{Q}_{1})\ (\Delta_{1},\mathcal{U}_{2})\ (\Delta_{1},\mathcal{U}_{1})\ (\Delta_{1},\mathcal{Q}_{5})\ (\Delta_{1},\mathcal{Q}_{1})\) \\ \hline \multirow{4}{*}{\(\mathcal{O}_{QuNeH}\)} & \(SS\ :\ \ (\mathcal{S}_{1},\varphi)\ (\omega_{1},\Pi_{1})\ (\omega_{2},\Pi_{7})\) \\ & \(FS\ :\ \ (Q_{7},\Pi_{7})\ (\Delta_{1},\Pi_{7})\ (D,\omega_{1})\ (\Delta_{1},\omega_{1})\ (D,\mathcal{S}_{1})\ (Q_{7},\mathcal{S}_{1})\ (\Delta_{1},\varphi)\ (\Delta_{1},\Pi_{1})\) \\ & \((Q_{7},\Pi_{1})\ (\Delta_{1},\omega_{2})\ (D,\omega_{2})\) \\ \hline \end{tabular}
\end{table}
Table 8: \(N_{R}\)SMEFT \(d=7\) operators in classes \(\psi^{2}H^{2}D^{2}\), \(\psi^{2}H^{2}X\) and \(\psi^{4}H\) and the corresponding models generating them at tree-level.
\begin{tabular}{|c|c c|} \hline \(\psi^{2}H^{3}D\) & Models \\ \hline & \(SV\ :\ \ ({\cal S},{\cal L}_{1})\ (\Xi,{\cal L}_{1})\ (\Xi_{1},{\cal L}_{1})\) \\ & \(FS\ :\ \ ({\cal N},{\cal S})\ (\Sigma,\Xi)\ (\Sigma_{1},\Xi_{1})\ (\Delta_{1},{\cal S})\ (\Delta_{1},\Xi)\ (\Delta_{1},\Xi_{1})\) \\ \({\cal O}_{NLH^{3}D}\) & \(FV\ :\ \ ({\cal N},{\cal B})\ (\Sigma,{\cal W})\ (\Sigma_{1},{\cal W}_{1})\ (\Delta_{1},{\cal B})\ (\Delta_{1},{\cal W})\ (\Delta_{1},{\cal B}_{1})\ (\Delta_{1},{\cal W}_{1})\) \\ & \(FF\ :\ \ ({\cal N},\Delta_{1})\ (\Delta_{1},\Sigma)\ (\Delta_{1},\Sigma_{1})\) \\ \hline \hline \(\psi^{2}H^{4}\) & Models \\ \hline & \(SS\ :\ \ {\cal S}\) \\ & \(SS\ :\ \ ({\cal S},\varphi)\ ({\cal S},\Xi_{1})\ ({\cal S},\Xi)\) \\ & \(FS\ :\ \ ({\cal N},{\cal S})\ (\Sigma,\Xi)\ (\Sigma_{1},\Xi_{1})\ (\Delta_{1},{\cal S})\ (\Delta_{1},\Xi)\ (\Delta_{1},\Xi_{1})\ (\Delta_{1},\varphi)\) \\ \({\cal O}_{NH^{4}}\) & \(FF\ :\ \ ({\cal N},\Delta_{1})\ (\Delta_{1},\Sigma)\ (\Delta_{1},\Sigma_{1})\) \\ & \(SSS\ :\ \ ({\cal S},\varphi,\Xi_{1})\ ({\cal S},\varphi,\Xi)\) \\ & \(FFS\ :\ \ (\Delta_{1},\Sigma_{1},\Xi_{1})\ ({\cal N},\Delta_{1},{\cal S})\ (\Delta_{1},\Sigma,\Xi)\) \\ & \(FSS\ :\ \ (\Delta_{1},{\cal S},\varphi)\ (\Delta_{1},\varphi,\Xi)\ (\Delta_{1},\varphi,\Xi_{1})\) \\ \hline \hline \(\psi^{4}H\) & Models \\ \hline & \(SS\ :\ \ ({\cal S},\varphi)\ ({\cal S}_{1},\varphi)\) \\ & \(FS\ :\ \ (E,{\cal S})\ (E,{\cal S}_{1})\ (\Delta_{1},\varphi)\ (\Delta_{1},{\cal S})\ (\Delta_{1},{\cal S}_{1})\) \\ \hline & \(SS\ :\ \ ({\cal S},\varphi)\) \\ \({\cal O}_{eLNH}\) & \(FS\ :\ \ (\Delta_{1},{\cal S})\ (\Delta_{1},\varphi)\ (E,{\cal S}_{1})\) \\ & \(FV\ :\ \ (E,{\cal B}_{1})\ (\Delta_{1},{\cal L}_{1})\ (\Delta_{1},{\cal B}_{1})\) \\ \hline & \(SS\ :\ \ ({\cal S},\varphi)\ (\omega_{1},\Pi_{1})\) \\ & \(FS\ :\ \ (\Delta_{1},\varphi)\ (D,\omega_{1})(\Delta_{1},\omega_{1})\ (Q_{1},\Pi_{1})\ (\Delta_{1},\Pi_{1})\ (Q_{1},{\cal S})\ (D,{ \cal S})\) \\ \hline & \(SS\ :\ \ ({\cal S},\varphi)\) \\ \({\cal O}_{dQNH}\) & \(FS\ :\ \ (\Delta_{1},\varphi)\ (Q_{1},{\cal S})\ (D,{\cal S})\) \\ & \(FV\ :\ \ (\Delta_{1},{\cal U}_{1})\ (\Delta_{1},{\cal Q}_{1})\ (D,{\cal U}_{1})\ (Q_{1},{\cal Q}_{1})\) \\ \hline & \(SS\ :\ \ ({\cal S},\varphi)\ (\omega_{2},\Pi_{1})\) \\ & \(FS\ :\ \ (Q_{1},\Pi_{1})\), \((\Delta_{1},\Pi_{1})\), \((U,\omega_{2})\), \((\Delta_{1},\omega_{2})\), \((U,{\cal S})\), \((Q_{1},{\cal S})\) \\ \hline & \(SS\ :\ \ ({\cal S},\varphi)\) \\ \({\cal O}_{uQNH}\) & \(FS\ :\ \ (\Delta_{1},\varphi)\ (U,{\cal S})\ (Q_{1},{\cal S})\) \\ & \(FV\ :\ \ (Q_{1},{\cal Q}_{1})\ (U,{\cal U}_{2})\ (\Delta_{1},{\cal Q}_{1})\ (\Delta_{1},{\cal U}_{2})\) \\ \hline & \(SS\ :\ \ ({\cal S},\varphi)\) \\ & \(FS\ :\ \ ({\cal N},{\cal S})\ (\Delta_{1},{\cal S})\ (\Delta_{1},\varphi)\) \\ \hline & \(SS\ :\ \ ({\cal S},\varphi)\) \\ \({\cal O}_{NLNH}\) & \(FS\ :\ \ ({\cal N},{\cal S})\ (\Delta_{1},{\cal S})\ (\Delta_{1},\varphi)\) \\ & \(FV\ :\ \ ({\cal N},{\cal B})\ (\Delta_{1},{\cal B})\ (\Delta_{1},{\cal L}_{1})\) \\ \hline \end{tabular}
**Table 9**: Continuation of table 8. \(N_{R}\)SMEFT \(d=7\) operators in \(\psi^{2}H^{3}D\), \(\psi^{2}H^{4}\), \(\psi^{4}H\) and their tree-level decompositions.
### Baryon number violating operators
In this subsection we comment on baryon number violation (BNV) in \(N_{R}\)SMEFT. We give the decompositions for the BNV operators here only for completeness, since proton decay constraints (see below) usually render these operators uninteresting for collider phenomenology. The BNV operators at \(d=6\) and \(d=7\) that can be opened at tree-level are presented in table 10.6 Next to each operator, we provide the list of decompositions, classified according to the Lorentz nature of the fields. At \(d=6\), there are two four-fermion operators that can be opened at tree-level. At \(d=7\) there are five BNV operators, two of them belong to the \(\psi^{4}D\) class and can only be generated at loop level, hence we don't list them. The three remaining operators are in the \(\psi^{4}H\) category and can be opened through three different diagrams, as was shown in figure 2.
Footnote 6: When defining the operators, \(SU(3)\) contractions have been omitted. In all cases there should be the total antisymmetric tensor in colour indices, \(\varepsilon_{\alpha\beta\sigma}\), contracted with the three quark fields’ colour indices.
For the \(d=6\) operators we find three one-particle decompositions: the two scalars \(\omega_{1}\), \(\omega_{2}\), and the vector \(\mathcal{Q}_{1}\). Regarding the models for \(d=7\) operators, we found 22 two-particle models (3 \(SS\), 10 \(FS\) and 9 \(FV\) models). Out of these 22, 17 models appear also in the opening of other \(d=7\)\(N_{R}\)SMEFT operators, while 3 \(FS\) and 2 \(FV\) models are new. These are \((D,\Pi_{1})\), \((T_{1},\Pi_{1})\), \((Q_{5},\Pi_{1})\) and \((Q_{5},\mathcal{U}_{2})\), \((D,\mathcal{U}_{2})\), respectively.
Note that the non-observation of proton decay sets stringent limits on parameters of the models that allow for baryon number violating processes. Let us discuss this first for
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multicolumn{2}{|c|}{\(\psi^{4}\)\((d=6)\)} & \multicolumn{2}{c|}{Models} \\ \hline \(\mathcal{O}_{QQdN}\) & \(\varepsilon_{ij}\left(\overline{Q_{i}^{e}}Q_{j}\right)\left(\overline{d_{R}^{c} }N_{R}\right)\) & \(S\ :\ \ \omega_{1}\) & \\ & \(V\ :\ \ \mathcal{Q}_{1}\) & \\ \hline \(\mathcal{O}_{uddN}\) & \(\left(\overline{u_{R}^{e}}d_{R}\right)\left(\overline{d_{R}^{e}}N_{R}\right)\) & \(S\ :\ \ \omega_{1}\), \(\omega_{2}\) & \\ \hline \hline \multicolumn{2}{|c|}{\(\psi^{4}H\)\((d=7)\)} & \multicolumn{2}{c|}{Models} \\ \hline \(\mathcal{O}_{QNddH}\) & \(\varepsilon_{ij}\left(\overline{Q_{i}}N_{R}\right)\left(\overline{d_{R}}d_{R}^{ c}\right)\tilde{H}_{j}\) & \(SS\ :\ \ \left(\omega_{2},\Pi_{1}\right)\) \\ & \(FS\ :\ \ \left(U,\omega_{2}\right)\)\(\left(\Delta_{1},\omega_{2}\right)\)\(\left(Q_{1},\Pi_{1}\right)\) \\ & \(FV\ :\ \ \left(Q_{1},\mathcal{Q}_{1}\right)\)\(\left(Q_{1},\mathcal{U}_{1}\right)\)\(\left(\Delta_{1},\mathcal{Q}_{1}\right)\)\(\left(U,\mathcal{U}_{1}\right)\) \\ \hline \(\mathcal{O}_{QNQH}\) & \(\varepsilon_{ij}\left(\overline{Q_{i}}N_{R}\right)\left(\overline{Q_{j}}Q^{c} \right)H\) & \(SS\ :\ \ \left(\omega_{1},\Pi_{1}\right)\)\(\left(\Pi_{1},\zeta\right)\) \\ & \(FS\ :\ \ \left(D,\omega_{1}\right)\)\(\left(\Delta_{1},\omega_{1}\right)\)\(\left(T_{1},\zeta\right)\)\(\left(\Delta_{1},\zeta\right)\) \\ & \(\ \
a specific example. For simplicity, we consider the scalar decomposition \(\omega_{1}\) of the \(d=6\) operator \(\mathcal{O}_{uddN}\). The relevant part of the UV Lagrangian for our discussion reads
\[\mathcal{L}\propto y_{ue}^{\omega_{1}}\left(\overline{u_{R}^{c}}e_{R}\right)\omega_{1} ^{\dagger}+y_{Nd}^{\omega_{1}}\left(\overline{d_{R}^{c}}N_{R}\right)\omega_{1} ^{\dagger}+y_{QL}^{\omega_{1}}\left(\overline{Q^{c}}L\right)\omega_{1}^{\dagger}\] \[+y_{ud}^{\omega_{1}}\left(\overline{u_{R}^{c}}d_{R}\right)\omega_ {1}+y_{Q}^{\omega_{1}}\left(\overline{Q^{c}}Q\right)\omega_{1}+\text{h.c.}+\dots \tag{1}\]
The contribution to the operator \(\mathcal{O}_{uddN}\) is given by the matching relation
\[c_{udN}=\frac{y_{Nd}^{\omega_{1}}y_{ud}^{\omega_{1}}}{\Lambda^{2}}\,, \tag{2}\]
where we assumed \(m_{\omega_{1}}=\Lambda\). Note that again we have suppressed flavour indices; in full generality the Wilson coefficients are matrices in flavour space. This operator triggers the decay process \(p\to\pi^{+}N\).7 Experimental limits on the proton lifetime impose stringent constraints on the combination \(y_{Nd}^{\omega_{1}}y_{ud}^{\omega_{1}}/\Lambda^{2}\). For the current experimental bound on this proton decay mode [54] the upper limit is roughly
Footnote 7: While \(\omega_{1}\) can trigger decays such as \(p\to\pi^{+}N\) and \(p\to K^{+}N\), \(\omega_{2}\) can only cause the decay \(p\to K^{+}N\), because the coupling \(y_{dd}^{\omega_{2}}\) is anti-symmetric in flavour space.
\[y_{Nd}^{\omega_{1}}\,y_{ud}^{\omega_{1}}\lesssim 10^{-26}\left(\frac{\Lambda}{ \text{TeV}}\right)^{2}\,. \tag{3}\]
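The order of magnitude in eq. (3) can be cross-checked with a naive dimensional estimate. Assuming, purely for illustration, a decay width of the form \(\Gamma(p\to\pi^{+}N)\sim(y_{Nd}^{\omega_{1}}y_{ud}^{\omega_{1}})^{2}\,m_{p}^{5}/(16\pi\Lambda^{4})\) (hadronic matrix elements and phase space are ignored) and a lifetime limit of order \(10^{32}\) yr for this mode, one finds:

```python
import math

# Naive dimensional estimate (assumption): Gamma ~ (y1*y2)^2 * m_p^5 / (16*pi*Lambda^4).
# All numbers below are for illustration only.
hbar_GeV_s = 6.582e-25            # GeV * s
year_s = 3.154e7                  # s
tau_limit = 1e32 * year_s         # assumed lifetime limit for p -> pi+ + invisible, in seconds
Gamma_max = hbar_GeV_s / tau_limit    # maximal allowed width in GeV

m_p = 0.938                       # GeV
Lam = 1000.0                      # GeV (Lambda = 1 TeV, as in eq. (3))

# Solve (y1*y2)^2 * m_p^5 / (16*pi*Lam^4) < Gamma_max for the coupling product:
coupling_product_max = math.sqrt(Gamma_max * 16 * math.pi * Lam**4 / m_p**5)
print(f"y_Nd * y_ud  <  {coupling_product_max:.1e}   (for Lambda = 1 TeV)")
# -> roughly 1e-25, the same ballpark as the ~1e-26 quoted in eq. (3)
```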
Clearly, this constraint renders \(\mathcal{O}_{uddN}\) unobservable for any foreseeable collider experiment. This does not, however, imply that the particle \(\omega_{1}\) can not appear in other operators. Consider the following. In this simple model, BNV arises from the simultaneous presence of the two couplings, \(y_{Nd}^{\omega_{1}}\) and \(y_{ud}^{\omega_{1}}\), since there is no baryon number (B) that can be assigned to \(\omega_{1}\) in a way that preserves B in both interaction terms. For example, if we assign \(B(\omega_{1})=1/3\), so that B is respected in the terms of the first row of eq. (1), the yukawa interactions of the second row violate B. However, if we impose B conservation, either \(y_{Nd}^{\omega_{1}}\) or \(y_{ud}^{\omega_{1}}\) could be non-zero, depending on which \(B(\omega_{1})\) is assigned. In this case, \(\omega_{1}\) can appear in the decomposition of other \(N_{R}\)SMEFT operators, relevant for collider experiments.
The previous discussion can also be extended to decompositions for \(d=7\) operators, although additional subtleties arise as a larger number of couplings enter in the matching relations. In summary, in two particle models for \(d=7\) operators there are three couplings entering the matching conditions and usually it is sufficient to forbid one of the three in order to ensure baryon number conservation.
Finally, we briefly comment on lepton number violation in BNV operators. The above discussion has not assumed any value for the lepton number of \(N_{R}\), see also the discussion in the next section. If we use the standard assignment of \(L(N_{R})=1\), one can easily see that \(d=6\) BNV operators violate \(B+L\), while conserving \(B-L\). Operators at \(d=7\), on the other hand, violate \(B-L\). This follows exactly the same pattern as found in SMEFT.
## 3 Lepton number violation (LNV)
In this section we will discuss lepton number violation in \(N_{R}\)SMEFT and how it is connected to _observable LNV processes_ with charged leptons at the LHC and to the Majorana masses
of the SM neutrinos. Before we turn to \(N_{R}\)SMEFT operators, however, it is instructive to discuss the "black box theorem" of neutrinoless double beta decay (\(0\nu\beta\beta\) decay) [38]. We will recapitulate the basics in section 3.1.
In subsection 3.2, using very similar arguments, we will discuss how the observation of LNV @ LHC in processes involving a \(N_{R}\) is related, at the operator level, to Majorana neutrino masses. For \(N_{R}\)SMEFT at \(d=7\) we have given the possible tree-level decompositions in section 2.3. In subsection 3.3 we examine Majorana neutrino masses in the renormalisable UV models found in section 2.3.
### The black box theorem
It has been known for a long time that the observation of \(0\nu\beta\beta\) decay guarantees that at least one neutrino has a Majorana mass term [38]. This is usually called the "black box theorem", since the mechanism (or "model") underlying the \(0\nu\beta\beta\) decay amplitude need not be known for this conclusion to hold, see figure 3, left: At the quark level \(0\nu\beta\beta\) decay is a \(d=9\) operator of the form \((\bar{d}\bar{d}uuee)\). As the figure shows, this operator can always be "dressed" with \(W\)-bosons to draw a diagram that generates a radiative contribution to the Majorana neutrino mass of a SM neutrino.
\(0\nu\beta\beta\) decay is a low-energy process, thus the black box diagram is usually drawn in the mass eigenstate basis, as done in figure 3, left. It is instructive, however, to discuss the black box theorem in terms of SMEFT operators. The neutrino Majorana mass matrix is generated from the famous Weinberg operator, \(\mathcal{O}_{W}=LLHH\), while contributions to the \(0\nu\beta\beta\) decay amplitude start at the \(d=9\) level. \(\Delta L=2\) operators (disregarding operators with derivatives or field strength tensors) have been listed up to \(d=11\) in [55]. At \(d=9\) there are six operators relevant for \(0\nu\beta\beta\) decay.8 The simplest six-fermion SMEFT operator for \(0\nu\beta\beta\) decay is \(\mathcal{O}^{9}_{ude}\propto u_{R}^{2}\overline{d_{R}}^{2}e_{R}^{2}\). The operator and its black box connection are shown in figure 3, right. The SM Yukawa couplings \(\bar{L}e_{R}H\), \(\bar{Q}d_{R}H\) and \(\overline{u_{R}}QH\) are used to close the
Figure 3: Black box theorem of \(0\nu\beta\beta\) decay graphically [38]: Whatever is the underlying mechanism generating a non-zero neutrinoless double beta decay amplitude, will also generate a Majorana mass term for at least one of the SM neutrinos. Cutting the diagrams at the thinner lines leaves an operator contributing to \(0\nu\beta\beta\) decay. To the left, diagram in the mass eigenstate basis. To the right, example diagram for the operator \(\mathcal{O}^{9}_{ude}=u_{R}^{2}\overline{d_{R}}^{2}e_{R}^{2}\) in the gauge basis.
loops. This example results in a 4-loop diagram, the same as the black box diagram in the mass basis. The other five \(d=9\) operators will result either in 2-loop or in 3-loop black box diagrams.
Note that black box diagrams are in general divergent. Thus, one expects that neutrino masses are generated at a lower loop level than indicated by the respective black box diagram. However, at the level of NROs it is not possible to decide at which level neutrino masses are indeed generated; for this, one needs to know the underlying UV model. We will come back to this question in subsection 3.3.
A 4-loop diagram will give only a tiny contribution to the neutrino mass [56] and, thus, the guaranteed but minimal contribution from the \(0\nu\beta\beta\) decay black box diagram to the neutrino masses is numerically much smaller than what is required to explain neutrino oscillation data. However, given the finite number of operators at \(d=9\), one can open up the \(0\nu\beta\beta\) decay operator(s) in all possible ways, say, at tree-level [57]. The list of possible UV "models" found in this exercise can then be examined one-by-one, and neutrino mass models from the tree- to the 4-loop level emerge [58]. Whether any particular one of these models can or cannot be the _main_ contribution to the neutrino masses - as required for an explanation of oscillation data - is then mainly a question of the loop level at which the neutrino masses are generated in the respective model, but in all cases a non-zero Majorana mass for the SM neutrinos is guaranteed.
### A black box for \(N_{R}\)SMEFT and Majorana neutrino masses
Before turning to \(N_{R}\)SMEFT operators, let us briefly discuss the renormalisable Lagrangian for a simple model that adds a \(N_{R}\) to the SM. In the minimal type-I seesaw, there is a Majorana mass term and the Lagrangian involving \(N_{R}\) contains also a Yukawa coupling:
\[\mathcal{L}^{\rm Type-I}=-y_{\nu}\overline{N_{R}}LH-\frac{1}{2}M_{M}\overline{ N_{R}^{c}}N_{R}+\text{h.c.} \tag{10}\]
Note that the Majorana nature of the mass term is specific to the type-I seesaw. The mass term for \(N_{R}\) could also be of Dirac type; the best-known example is the inverse seesaw [8]. Adding a second Weyl fermion to the model, call it \(N_{L}\), one can write a mass term \(M_{R}\overline{N_{L}}N_{R}\) instead of \(M_{M}\) in eq. (10). It is obvious that, assigning lepton number \(L(N_{R})=1\) (and \(L(N_{L})=1\)), the Yukawa term and \(M_{R}\) conserve lepton number, while \(M_{M}\) violates \(L\) by \(\Delta(L)=2\). However, in the absence of \(y_{\nu}\) one could assign \(L(N_{R})=0\) and this would eliminate LNV from \(M_{M}\). This may seem trivial, but we would like to stress that LNV is always related to a mismatch in the lepton number assignment of two or more terms in the Lagrangian in any model; \(N_{R}\)SMEFT is no exception to this statement.
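For orientation, the scales implied by eq. (10) can be illustrated with the standard tree-level seesaw relations \(m_{\nu}\simeq y_{\nu}^{2}v^{2}/(2M_{M})\) and \(|V|^{2}\simeq m_{\nu}/M_{M}\) (the latter already quoted in the introduction); the numbers below are illustrative only:

```python
import math

# Illustrative seesaw estimate based on eq. (10): m_nu ~ y_nu^2 v^2 / (2 M_M), |V|^2 ~ m_nu / M_M.
v = 246.0                 # GeV, Higgs vev
m_nu = 0.05e-9            # GeV, i.e. 0.05 eV, roughly the atmospheric mass scale
M_M = 100.0               # GeV, an HNL mass within LHC reach (illustrative choice)

y_nu = math.sqrt(2 * m_nu * M_M) / v       # Yukawa coupling needed to reproduce m_nu
mixing_sq = m_nu / M_M                     # naive active-sterile mixing |V|^2

print(f"y_nu ~ {y_nu:.1e},  |V|^2 ~ {mixing_sq:.1e}")
# -> y_nu ~ 4e-7, |V|^2 ~ 5e-13: far below typical HNL search sensitivities in the minimal
#    setup, which is why non-minimal models and operator-dominated production are interesting.
```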
After electro-weak symmetry breaking, the simple model defined by eq. (10) introduces a non-zero coupling of the (mostly) singlet state \(N\)9 to the gauge bosons, \(gV_{lN}\), where \(V_{lN}\) is the mixing angle between active and sterile states. At the LHC \(W\)-bosons are produced, which can then decay to \(N+l\) via this non-zero coupling.10 \(N\) will decay itself
via an off-shell \(W\) to another lepton plus jets. For a Majorana neutrino, the probability to decay to either lepton or anti-lepton is the same (at tree-level), thus the final state for the whole process will contain two leptons and two jets and the ratio \(R=\#events\)(same-sign di-lepton plus jets)/\(\#events\)(opposite-sign di-lepton plus jets) is equal to one. Same-sign di-lepton events are obviously \(\Delta(L)=2\) processes, formally the same as a Majorana neutrino mass. The diagram for this LHC process is shown in figure 4 on the left. From this diagram, one can cut off the quarks and draw the 2-loop Majorana neutrino mass diagram shown on the right of figure 4.
Clearly, if the diagram on the left is present in the model, the diagram on the right must also exist. Thus, the observation of this LNV process at the LHC guarantees that neutrinos are Majorana states _for this particular model_. Several comments are in order.
First of all, the connection between the LHC process and Majorana neutrino masses in this particular example can be trivially understood. The model as discussed is, after all, a type-I seesaw and type-I seesaw of course generates Majorana neutrino masses. However, Majorana neutrino masses in the seesaw are generated at tree-level, thus the 2-loop diagram in figure 4 just represents a "minimal link" between the LHC process and the existence of a Majorana neutrino mass. Numerically it is only a very minor correction to the total neutrino mass in this model. This is very similar to the black box theorem for \(0\nu\beta\beta\) decay discussed above.
But, different from the black box theorem for \(0\nu\beta\beta\) decay, the discussion as presented so far is _model dependent_, since we have assumed that the Lagrangian terms of eq. (10) exist. To make statements as model-independent as possible, we should not a priori assume that \(N\) has a coupling to SM \(W\)-bosons,11 nor that the mass of the \(N\) is of Majorana type, but instead consider EFT operators.
Footnote 11: We will come back to this point near the end of this subsection.
In the mass eigenstate basis, after electro-weak symmetry breaking, the simplest purely fermionic operator one can write down for the production of a fermion singlet is either \(\bar{d}u\overline{N}e\) or \(\bar{d}u\overline{N^{c}}e\). Assigning lepton number to \(N\) as \(L(N)=1\) (\(L(N)=-1\)), the former (latter) operator is lepton number conserving, while the latter (former) represents a LNV object. If both types of operators are present, LNV processes will be generated, independently of
Figure 4: Lepton number violation at the LHC in a minimally extended variant of the SM (left) and a 2-loop neutrino mass diagram that will necessarily be generated at the same time.
the nature of the mass term of \(N\). However, \(N\) needs to be a massive particle, its mass term could be either Dirac, i.e. lepton number conserving (LNC), or Majorana (LNV), as discussed above. Thus, overall, one can find \(\Delta(L)=2\) processes either with two LNC operators and a Majorana propagator or with one LNV and one LNC operator, along with a LNC propagator (no mass flip). For the discussion to be complete, we need to cover both options.
Figure 5 shows, as an example, a 4-loop Majorana neutrino mass diagram, combining two LNC four-fermion operators with a LNV Majorana mass insertion in the \(N\) propagator. The LHC LNV process \(pp\to l^{+}l^{+}jj\) is contained inside the diagram and is obtained by just cutting the diagram at the thin lines. Thus, the two observables are always either both present in the theory or none of them is, exactly as in the original black box theorem for \(0\nu\beta\beta\) decay.
Note that simple power counting shows that this 4-loop diagram is divergent. Clearly, a lower order diagram contributing to the neutrino mass should exist, in order to provide a counter term for this infinity, and - apart from some very fine-tuned special cases - that lower order diagram will give a larger contribution to the neutrino mass than the 4-loop diagram. However, given only the effective operator and not the full UV complete model, it is not possible to decide at which loop level Majorana masses will appear. We will come back to this point in section 3.3.
Figure 5 shows the connection between different LNV observables in the mass eigenstate basis. The underlying physics, however, is again more clearly visible in the weak eigenstate basis. Figure 6 shows an example based on the simplest \(d=6\)\(N_{R}\)SMEFT operator, \(\mathcal{O}_{duNe}\). Here, the diagram gives actually a 4-loop realisation of the Weinberg operator,
Figure 5: Lepton number violation at the LHC and a 4-loop neutrino mass diagram. Different from figure 4, this diagram does not assume that \(N\) has couplings to the \(W\)-boson. Instead, only electric charge conservation and the existence of some operator \(\bar{d}u\overline{N}e\) is assumed. Again, the diagram is drawn in the mass eigenstate basis and the origin of the LNV is assigned to the Majorana mass \(M\).
\({\cal O}_{\rm Wbg}\propto LLHH\) (which will generate the neutrino mass matrix after symmetry breaking). Similar diagrams can be drawn for all other single-\(N_{R}\) four-fermion operators in \(N_{R}\)SMEFT.
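For reference, the standard translation of the Weinberg operator into the active neutrino mass matrix after electro-weak symmetry breaking reads, up to an \(\mathcal{O}(1)\) normalisation that depends on conventions,

\[
\mathcal{O}_{\rm Wbg}\;=\;\frac{c_{\alpha\beta}}{\Lambda}\,(L_{\alpha}H)(L_{\beta}H)\;+\;{\rm h.c.}
\;\;\xrightarrow{\;\langle H\rangle=v/\sqrt{2}\;}\;\;
(m_{\nu})_{\alpha\beta}\;\sim\;c_{\alpha\beta}\,\frac{v^{2}}{2\Lambda}\,,\qquad v\simeq 246\ {\rm GeV}\,,
\]

so that the loop- and coupling-suppressed coefficients generated by diagrams such as the one in figure 6 translate directly into (small) entries of \((m_{\nu})_{\alpha\beta}\).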
In the above discussion it was assumed that the source of the LNV is the Majorana propagator of the \(N_{R}\). However, the same black box connection between LNV @ LHC and Majorana neutrino mass can be established also in the case that the necessary LNV is due to a LNV operator. Let's discuss this in the weak eigenstate basis directly. We choose as the LNV \(d=7\) operator the example of \({\cal O}_{dLNH}\). (For all other LNV operators the discussion is very similar.) For the LNC \(d=6\) operator we choose \({\cal O}_{dQNL}\). Figure 7 shows the resulting 2-loop diagram for \({\cal O}_{\rm Wbg}\). As before, cutting open the loops defines a LNV process for the LHC: One could, for example, produce the \(N_{R}\) via \({\cal O}_{dQNL}\), while the final state is produced from the decay of \(N_{R}\) via \({\cal O}_{dLNH}\). This decay could either contain two jets plus a Higgs (or \(Z^{0}\)) boson and missing momentum, or two jets plus a \(W\) and a charged lepton.12 The additional bosons (relative to the "standard" \(lljj\) signal) could actually be used to distinguish the two possibilities (Majorana propagator versus \(d=7\) operator) _experimentally_ - at least in principle.
Footnote 12: Only final states _without_ missing energy can be used to determine lepton number experimentally, of course.
One can easily check that the chiralities in the diagram in figure 7 are such that the momentum \(\not{q}\) is picked from the neutrino propagator (no mass flip). Again, the resulting integral is divergent, indicating that a lower order contribution to the neutrino mass should exist in any UV completion generating this diagram at low energies. However, whether the
Figure 6: Lepton number violation at the LHC and a 4-loop realisation of the Weinberg operator. Different from figure 5, this diagram is drawn in the electro-weak basis, assuming the \(d=6\) operator \({\cal O}_{duNe}\) is non-zero.
neutrino mass is tree-level or 1-loop cannot be decided at the level of effective operators only. While we have concentrated here on a specific combination of operators, the same conclusions can easily be reached for all other possible combinations: Observation of LNV @ LHC guarantees the existence of Majorana neutrino masses for the SM neutrinos.
Let us return to the comment, stated above, that for generality of our argument we should not assume that \(N\) has a non-zero coupling to gauge bosons. This statement is motivated by the fact that experimentally it might not be possible to show that the vertex \(e\)-\(N\)-\(W\) exists. However, consider the following: \(H\) and \(L\) can always be coupled to a \(SU(2)\) singlet, thus, if a \(N_{R}\) is present in the theory one can always write down a Yukawa term \(y_{\nu}\overline{N_{R}}LH\) - which is equivalent to a non-zero \(V_{lN}\) in the broken phase. One might attempt to forbid this coupling via some extra symmetry beyond those of the SM. An example could be a \(Z_{2}\) symmetry, under which the \(N_{R}\) is odd, such as in the famous "scotogenic" neutrino mass model [59]. However, such an extra symmetry is incompatible with the existence of any of the single-\(N_{R}\) operators in tables 4 and 7. This is easy to see: Consider, for example \({\cal O}_{QuNL}\). We can take this operator and replace \(\overline{Q}u_{R}\) by \(H\) (up-type Yukawa couplings do exist after all), thus writing down a term proportional to \(\overline{N_{R}}LH\). Since similar replacements can be done for any of the single-\(N_{R}\) operators, the observation of any of these will guarantee that some (although maybe very small) coupling to gauge bosons should be present in the model as well. We stress, however, that as with the black box, this argument is purely qualitative. It does not allow one to fix the numerical value of \(y_{\nu}\). In particular, both production and decay of \(N_{R}\) at the LHC could easily be dominated by NRO operators.
We close this subsection with a brief comment about flavour. While in double beta decay the charged leptons are always electrons, the LHC can produce, in principle, any lepton flavour. Just as \(0\nu\beta\beta\) decay guarantees that the \((m)_{ee}\) entry of the neutrino mass matrix is non-zero, the observation of different flavour combinations \((\alpha,\beta)\) in LNV processes at the LHC would then be related to the Majorana neutrino mass matrix entry \((m)_{\alpha\beta}\) in
Figure 7: Lepton number violation at the LHC and a 2-loop realisation of the Weinberg operator. Different from figure 6, in this diagram the origin of LNV is the \(d=7\) operator \({\cal O}_{dLNH}\).
the gauge basis.
### Neutrino masses in models derived from LNV \(d=7\) operators
In this subsection, we will briefly discuss neutrino mass generation at the renormalisable level. The aim of this discussion is not to provide a detailed fit of neutrino masses and mixing angles to experimental data,13 but rather to demonstrate that all models generating LNV operators with \(N_{R}\) also generate active neutrino masses.14 Given this connection, one might be tempted to think that \(d=7\) operators are not observable in accelerator experiments, due to the smallness of the observed neutrino masses. However, as we will discuss now, such a conclusion holds only for a very specific subset of UV decompositions and not in general.
Footnote 13: Once a particular model is specified, neutrino fits can be easily done using, for example, the formulas in [60].
Footnote 14: To simplify the discussion below, we assume the \(d=7\) operators violate \(L\), see previous section.
First of all, with \(N_{R}\) being a complete SM singlet, all models giving rise to single-\(N_{R}\)\(d=6\) operators necessarily allow one to also write down a neutrino Yukawa coupling, as discussed above. If, in addition, lepton number is violated, a Majorana mass for \(N_{R}\) is also allowed. In fact, the Majorana mass term is mandatory in this case: One can easily show that in all \(N_{R}\) models with LNV, \(M_{M}\) is generated radiatively via diagrams with divergent integrals. Thus, for consistency, all these models also require the presence of a tree-level \(M_{M}\) as a counter term. A seesaw type-I contribution to the neutrino masses is therefore unavoidable in all models with a \(N_{R}\) and LNV. For the discussion in this subsection, however, this contribution to the neutrino masses is irrelevant, since it does not place any restrictions on the Wilson coefficients of the \(d=7\) operators.
As we pointed out in section 2.3, we found 112 different models for \(d=7\) operators. We will not discuss all these models in detail, but instead focus on just four decompositions for the example operator \(\mathcal{O}_{LNLH}\), see figure 8. These four examples are sufficient to cover essentially all relevant aspects of neutrino mass generation in the 112 models: The remaining models could be discussed in much the same way, with adequate replacements for the corresponding model parameters.
In figure 8 we show in the top row two example decompositions for \(\mathcal{O}_{LNLH}\) containing \(\mathcal{N}\) and \(\Xi_{1}\). These will give a tree-level seesaw contribution of type-I and type-II to the neutrino masses. (Note that one can replace \(\mathcal{N}\) by \(\Sigma\), and obtain a decomposition with a type-III seesaw too.) We note in passing that out of the 112 models (14, 8, 8) contain \((\mathcal{N},\Sigma,\Xi_{1})\), respectively. The decompositions in the bottom row, on the other hand, will generate neutrino masses radiatively.
Let us discuss the tree-level cases first. Consider the example decomposition shown in figure 8, top left. The Lagrangian contains the terms:
\[\mathcal{L}\propto y_{NL}^{\varphi}\left(\overline{N_{R}}L\right)\varphi+y_{\mathcal{NL}}^{\varphi}\left(\overline{\mathcal{N}}L\right)\varphi+y_{\mathcal{N}L}\left(\overline{\mathcal{N}}L\right)H+\frac{1}{2}M_{\mathcal{N}}\overline{\mathcal{N}^{c}}\mathcal{N}+\text{h.c.} \tag{3.2}\]
where \(\mathcal{N}\) and \(\varphi\) are "heavy" copies of \(N_{R}\) and \(H\).15 The Wilson coefficient generated from
this diagram is matched via
\[c_{LNLH}=-\frac{1}{4}\frac{y_{NL}^{\varphi}y_{\mathcal{NL}}^{\varphi}y_{\mathcal{ NL}}}{M_{\mathcal{N}}m_{\varphi}^{2}}. \tag{3.3}\]
On the other hand, \(\mathcal{N}\) must be a Majorana field, otherwise the diagram cannot be closed. Thus, \(\mathcal{N}\) gives a contribution to the neutrino mass _a la_ seesaw type-I:
\[m_{\nu}\propto y_{\mathcal{NL}}^{2}\frac{v^{2}}{M_{\mathcal{N}}}. \tag{3.4}\]
We can use this equation to put an upper limit on the coefficient \(c_{LNLH}\):
\[c_{LNLH}\lesssim 10^{-6}\frac{y_{NL}^{\varphi}y_{\mathcal{NL}}^{\varphi}}{ \Lambda^{3}}\left(\frac{\Lambda}{v}\right)^{1/2}\left(\frac{m_{\nu}}{0.1\,{ \rm eV}}\right), \tag{3.5}\]
where we have assumed \(M_{\mathcal{N}}\simeq m_{\varphi}\simeq\Lambda\). This estimate shows that production of a \(N_{R}\) via this operator in a lepton collider is completely negligible for this model. We can also estimate the partial decay width of a \(N_{R}\) via \(\mathcal{O}_{LNLH}\) given this constraint. We find the decay length is roughly
\[(c\tau)\sim\left(\frac{\Lambda}{{\rm TeV}}\right)^{5}\left(\frac{m_{N_{R}}}{ 10\,{\rm GeV}}\right)^{-5}\left(\frac{0.1\,{\rm eV}}{m_{\nu}}\right)^{2}10^{8 }\,{\rm m}, \tag{3.6}\]
for \(y_{NL}^{\varphi}=y_{\mathcal{NL}}^{\varphi}=1\). A decay length this large would render the \(N_{R}\) essentially stable for collider experiments, unless it is much heavier than indicated in eq. (3.6). Note that similar
Figure 8: Four example decompositions for \(\mathcal{O}_{LNLH}\). The two examples shown in the top row will give tree-level contributions to the active neutrino masses via type-I (left) or type-II seesaw (right). The decompositions in the bottom row generate radiative neutrino masses at 1-loop (left) and 2-loops (right). The quantum numbers for the new fields are given in tables 1 and 2.
arguments can be presented for all decompositions of \(d=7\) operators containing either \(\mathcal{N}\) or \(\Sigma\).
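To get a feeling for the numbers implied by eqs. (3.5) and (3.6), the short script below evaluates both estimates for a few benchmark points. It is a purely illustrative back-of-the-envelope evaluation (the choices \(y_{NL}^{\varphi}=y_{\mathcal{NL}}^{\varphi}=1\), as well as the benchmark values of \(\Lambda\), \(m_{N_{R}}\) and \(m_{\nu}\), are assumptions), not part of the matching procedure itself:

```python
import numpy as np

# Illustrative evaluation of the scalings in eqs. (3.5) and (3.6),
# assuming y^phi_NL = y^phi_calNL = 1 and M_calN ~ m_phi ~ Lambda.
v = 246.0  # electro-weak vev in GeV

def c_LNLH_bound(lam_gev, m_nu_ev):
    """Upper bound on c_LNLH in GeV^-3, cf. eq. (3.5)."""
    return 1e-6 / lam_gev**3 * np.sqrt(lam_gev / v) * (m_nu_ev / 0.1)

def decay_length_m(lam_gev, m_n_gev, m_nu_ev):
    """Decay length c*tau in metres, cf. eq. (3.6)."""
    return (lam_gev / 1e3)**5 * (m_n_gev / 10.0)**(-5) * (0.1 / m_nu_ev)**2 * 1e8

for lam in (1e3, 5e3):            # cut-off scale Lambda in GeV
    for m_n in (10.0, 100.0):     # N_R mass in GeV
        print(f"Lambda = {lam/1e3:.0f} TeV, m_N = {m_n:.0f} GeV: "
              f"c_LNLH < {c_LNLH_bound(lam, 0.1):.1e} GeV^-3, "
              f"c*tau ~ {decay_length_m(lam, m_n, 0.1):.1e} m")
```

For \(\Lambda=1\) TeV and \(m_{N_{R}}=10\) GeV this reproduces the \(\sim 10^{8}\) m quoted in eq. (3.6).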
For decompositions containing \(\Xi_{1}\), the situation is slightly more complicated. If we allow for LNV, the Lagrangian for a model with a \(\Xi_{1}\) field contains the terms:
\[\mathcal{L}\propto y_{L}^{\Xi_{1}}\left(\overline{L^{c}}L\right)\Xi_{1}+\kappa_ {\Xi_{1}}HH\Xi_{1}^{\dagger}+\kappa_{\Xi_{1}\varphi}\Xi_{1}^{\dagger}H\varphi+\ldots \tag{3.7}\]
The simultaneous presence of both terms will lead to a seesaw type-II contribution to the active neutrino mass matrix. We can use this to rewrite the Wilson coefficient for the decomposition shown in figure 8, top right as:
\[|c_{LNLH}| \lesssim y_{NL}^{\varphi}\frac{m_{\nu}}{v_{\Xi_{1}}}\frac{1}{\Lambda^{3}} \tag{3.8}\] \[\lesssim 10^{-10}y_{NL}^{\varphi}\left(\frac{m_{\nu}}{0.1\,\text{eV}} \right)\left(\frac{\text{GeV}}{v_{\Xi_{1}}}\right)\frac{1}{\Lambda^{3}}. \tag{3.9}\]
where we have assumed \(m_{\Xi_{1}}=m_{\varphi}=\kappa_{\Xi_{1}\varphi}=\Lambda\). The SM \(\rho\)-parameter puts an upper limit on the induced vacuum expectation value of the triplet, \(v_{\Xi_{1}}\), of roughly \(v_{\Xi_{1}}\lesssim 2\) GeV [54], which motivates the stringent constraint eq. (3.9). Neutrino oscillation data, however, allow \(v_{\Xi_{1}}\) as small as \(v_{\Xi_{1}}\sim 0.1\) eV. Obviously, no numerically relevant constraint on \(|c_{LNLH}|\) can be derived in this case.
Let us turn now to decomposition #3 in figure 8, \((\mathcal{S}_{1},\varphi)\). The same particle content appears also in decompositions of the operators \(\mathcal{O}_{eNLH}\), \(\mathcal{O}_{LNeH}\), \(\mathcal{O}_{dQNeH}\) and \(\mathcal{O}_{QuNeH}\). In neutrino physics this combination of BSM particles is known as the Zee model [61]. Neutrino masses are generated at 1-loop level, see figure 9 to the left. The Lagrangian of the Zee model contains the terms:
\[\mathcal{L}\propto y_{L}^{\mathcal{S}_{1}}\left(\overline{L^{c}}L\right) \mathcal{S}_{1}+y_{Ne}^{\mathcal{S}_{1}}\left(\overline{N_{R}^{c}}e_{R} \right)\mathcal{S}_{1}+y_{eL}^{\varphi}\left(\overline{e_{R}}L\right)\varphi^ {\dagger}+\kappa_{\mathcal{S}_{1}\varphi}\mathcal{S}_{1}^{\dagger}H\varphi+ \text{h.c.}+\ldots \tag{3.10}\]
Disregarding flavour indices for simplicity, the neutrino mass in the Zee model can be estimated as:
\[m_{\nu}^{\text{Zee}}\simeq-\frac{1}{16\pi^{2}}y_{L}^{\mathcal{S}_{1}}m_{\tau} y_{eL}^{\varphi}\frac{\sqrt{2}v\kappa_{\mathcal{S}_{1}\varphi}}{m_{h_{2}^{+}} ^{2}-m_{h_{1}^{+}}^{2}}\log\left(\frac{m_{h_{2}^{+}}^{2}}{m_{h_{1}^{+}}^{2}} \right)\,. \tag{3.11}\]
Here, \(h_{i}\) are the two mass eigenstates formed by \(\mathcal{S}_{1}\) and the charged component in \(\varphi\). \(m_{\tau}\) is the mass of the \(\tau\) lepton and we have neglected terms proportional to \(m_{\mu,e}\), which are not
Figure 9: One-loop (left) and two-loop (right) neutrino mass diagrams, based on the decomposition of \(\mathcal{O}_{LNLH}\) containing the BSM particles \((\mathcal{S}_{1},\varphi)\) or \((E,\mathcal{S}_{1})\).
relevant for this discussion. Thus, neutrino masses will put a stringent constraint on the product \(y_{L}^{\mathcal{S}_{1}}y_{eL}^{\varphi}(\kappa_{\mathcal{S}_{1}\varphi}/\Lambda)\), where \(\Lambda\simeq m_{\mathcal{S}_{1}}\simeq m_{\varphi}\). Logically, this combination could be small because one, two or all three parameters are suppressed. The matching of the operators, \(\mathcal{O}_{LNLH}\), \(\mathcal{O}_{eNLH}\), \(\mathcal{O}_{LNeH}\), on the other hand, will be proportional to:
\[c_{LNLH} \propto y_{L}^{\mathcal{S}_{1}}y_{NL}^{\varphi}\frac{\kappa_{ \mathcal{S}_{1}\varphi}}{\Lambda}\] \[c_{eNLH} \propto y_{Ne}^{\mathcal{S}_{1}}y_{eL}^{\varphi}\frac{\kappa_{ \mathcal{S}_{1}\varphi}}{\Lambda}\] \[c_{LNeH} \propto y_{Ne}^{\mathcal{S}_{1}}y_{NL}^{\varphi}\frac{\kappa_{ \mathcal{S}_{1}\varphi}}{\Lambda}\,. \tag{69}\]
If all three parameters entering the neutrino mass are small, \(y_{L}^{\mathcal{S}_{1}}\sim y_{eL}^{\varphi}\sim(\kappa_{\mathcal{S}_{1}\varphi}/\Lambda)\sim\epsilon\), all of the coefficients in eq. (69) will be suppressed. However, in the more optimistic case, where either \(y_{L}^{\mathcal{S}_{1}}\) or \(y_{eL}^{\varphi}\) is suppressed by \(\epsilon^{3}\), while the other two parameters are of \(\mathcal{O}(1)\), either \(c_{LNLH}\) or \(c_{eNLH}\) and \(c_{LNeH}\) can be large. Nevertheless, given the constraint from neutrino masses, it is impossible that all three operators are observable at the same time.
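As a rough illustration of how strongly eq. (3.11) constrains the coupling product in this class of models, one can evaluate the Zee-type mass estimate numerically. The sketch below drops all flavour structure; the charged-scalar masses and the benchmark couplings are arbitrary assumptions chosen only to illustrate the required suppression:

```python
import numpy as np

# Rough evaluation of the Zee-model estimate, eq. (3.11), without flavour structure.
# All masses in GeV; the benchmark values are assumptions.
v, m_tau = 246.0, 1.777
m_h1, m_h2 = 500.0, 800.0  # charged-scalar mass eigenstates

def m_nu_zee_ev(y_l, y_el, kappa_gev):
    """Neutrino mass scale in eV for given couplings and trilinear kappa."""
    m_gev = (1.0 / (16 * np.pi**2)) * y_l * m_tau * y_el \
            * np.sqrt(2) * v * kappa_gev / (m_h2**2 - m_h1**2) \
            * np.log(m_h2**2 / m_h1**2)
    return abs(m_gev) * 1e9  # GeV -> eV

print(m_nu_zee_ev(1.0, 1.0, 1000.0))   # O(1) couplings: many orders of magnitude too heavy
print(m_nu_zee_ev(1e-3, 1e-3, 10.0))   # strongly suppressed product: roughly 0.1 eV
```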
Consider now decomposition #4 in figure 8, \((E,\mathcal{S}_{1})\). This model allows to write the following terms in the Lagrangian:
\[\mathcal{L}\propto y_{L}^{\mathcal{S}_{1}}\left(\overline{L^{c}}L\right) \mathcal{S}_{1}+y_{Ne}^{\mathcal{S}_{1}}\left(\overline{N_{R}^{c}}e_{R} \right)\mathcal{S}_{1}+y_{LE}\left(\overline{L}E\right)H+y_{NE_{L}}^{\mathcal{ S}_{1}}\left(\overline{N_{R}}E\right)\mathcal{S}_{1}+m_{E}\overline{E}E+\ldots \tag{70}\]
Note that the vertex proportional to \(y_{Ne}^{\mathcal{S}_{1}}\) does not appear in figure 8, but it is contained in a decomposition for \(\mathcal{O}_{LNeH}\) with the same particle content. Also, \(y_{Ne}^{\mathcal{S}_{1}}\) is necessary for the 2-loop diagram in figure 9. In this diagram, the momentum term survives from the \(N_{R}\) propagator \(P_{R}(\not{q}+M_{N_{R}})P_{L}\). LNV is due to the simultaneous presence of \(y_{Ne}^{\mathcal{S}_{1}}\) and \(y_{NE_{L}}^{\mathcal{S}_{1}}\). Similar to the discussion for the Zee model, either \(y_{Ne}^{\mathcal{S}_{1}}\) or \(y_{NE_{L}}^{\mathcal{S}_{1}}\) (or both) must be small, to fulfill the neutrino mass constraint. Thus, either \(\mathcal{O}_{LNLH}\) or \(\mathcal{O}_{LNeH}\) can have unsuppressed Wilson coefficients, but not both operators at the same time.
We close this section by summarizing: (1) Models for LNV in \(d=7\)\(N_{R}\)SMEFT operators will always also lead to active neutrino masses, either at tree level or at the 1- or 2-loop level. (2) For decompositions leading to seesaw type-I or type-III contributions to the neutrino mass, the Wilson coefficients will be severely suppressed, rendering the corresponding operators phenomenologically irrelevant for accelerator experiments. (3) For decompositions leading to radiative neutrino masses, the Wilson coefficients for some of the corresponding operators can be large, but the same UV-decomposition typically contributes to more than one operator and neutrino mass constraints exclude the possibility that all corresponding operators are observable at the same time.
## 4 Conclusions
Right-handed neutrinos with electro-weak scale masses have recently attracted a lot of attention in the literature. From the theoretical point of view, \(N_{R}\)'s represent the simplest extension of the standard model that can explain the active neutrino masses as observed in oscillation experiments (via some variant of the seesaw mechanism). From the experimental
side, in the past few years a number of new experiments have been proposed to search for long-lived particles with unprecedented sensitivities. \(N_{R}\)'s with masses around (1-100) GeV are prime candidates for long-lived particles, due to the smallness of the active neutrino masses.
If new physics exists, but at a mass scale outside the reach of the LHC, effective field theory is the correct tool to study BSM. The relevant EFT involving right-handed neutrinos is \(N_{R}\)SMEFT and a number of recent papers have studied the phenomenology of \(N_{R}\)SMEFT.
In this work we discussed a systematic tree-level decomposition of \(N_{R}\)SMEFT operators at \(d=6\) and \(d=7\), using a diagrammatic method. The resulting lists of BSM particles provide a complete dictionary of models, which can be used for studying \(N_{R}\) phenomenology. We have also briefly compared our lists of particles to the Granada dictionary for tree-level UV-completions for SMEFT at \(d=6\)[37]. Our lists of BSM particles are given in tables 1 - 3. In the appendix we give the Lagrangian terms for the resulting models for all terms involving \(N_{R}\). These Lagrangians were calculated with Sym2Int[62, 63]. In the auxiliary file added to this paper, we give all remaining terms involving the BSM fields necessary for the calculation of the matching of the UV models onto \(N_{R}\)SMEFT. The matching can be done automatically with the help of, for example, Matchete[46] and we added an example notebook for a number of models as an auxiliary file to this paper.
We also discussed lepton number violation, which unavoidably appears if \(d=6\) and \(d=7\) operators are present in the theory at the same time. LNV is always linked to Majorana neutrino masses and LNV in \(N_{R}\)SMEFT is no exception, as we discussed in detail. We also discussed possible constraints on \(d=7\) operators from the observed neutrino masses. While some of the possible UV-models for \(d=7\) operators must have tiny Wilson coefficients due to the neutrino mass constraint, there exist many models for which \(d=7\)\(N_{R}\)SMEFT operators could be observable in future LLP experiments.
## Appendix A Lagrangian
The Lagrangian describing renormalisable interactions among the SM fields, the singlet \(N_{R}\), and the new BSM fields introduced in tables 1 - 3, can be expressed as
\[\mathcal{L}_{\text{UV}}=\mathcal{L}_{light}+\mathcal{L}_{mixed}+\mathcal{L}_{ heavy}\,.\]
The first term includes interactions involving only the SM fields and the \(N_{R}\). The second term describes the interactions between these light fields and the heavy BSM fields, while the last term contains the interactions of the heavy fields among themselves.
In this appendix we write down all renormalisable terms in which the light singlet \(N_{R}\) is involved. Additionally, we provide the Lagrangian terms that include the new vector field \(\mathcal{U}_{1}\) and new interactions for the vector \(\mathcal{L}_{1}\) not considered in Ref. [37]. Finally, we present the interaction terms belonging to \(\mathcal{L}_{heavy}\) that contribute to the model diagrams leading to \(N_{R}\)SMEFT operators at \(d=7\).
The Lagrangian \({\cal L}_{light}\) comprises the renormalisable SM Lagrangian, the mass term for \(N_{R}\),16 and the allowed Yukawa interaction for the singlet. It is expressed as17
Footnote 16: Note that we have only written a Majorana mass term for \(N_{R}\). Further discussion regarding this aspect is provided in section 3.
Footnote 17: Explicit \(SU(2)\) and \(SU(3)\) index contractions have been omitted in the Lagrangians of this appendix.
\[{\cal L}_{light}={\cal L}_{\rm SM}-\frac{1}{2}M_{\rm M}\overline{N_{R}^{c}}N_{ R}-y_{\nu}\left(\overline{N_{R}}L\right)H+{\rm h.c.} \tag{114}\]
The interactions involving \(N_{R}\) in \({\cal L}_{mixed}\) can be classified into two categories: fermion-fermion-scalar (_FFS_) and fermion-fermion-vector (_FFV_) interactions. Each category includes two types of terms: A) those with one light field (\(N_{R}\)) and two heavy fields, and B) those with two light fields (either one or two \(N_{R}\)) and one heavy field. We gather all these interactions in \({\cal L}_{mixed}^{N_{R}}\), which is given by
\[{\cal L}_{mixed}^{N_{R}}={\cal L}_{{}_{FFS,(A)}}^{N_{R}}+{\cal L}_{{}_{FFS,(B )}}^{N_{R}}+{\cal L}_{{}_{FFV,(A)}}^{N_{R}}+{\cal L}_{{}_{FFV,(B)}}^{N_{R}}\,, \tag{115}\]
where
\[{\cal L}_{{}_{FFS,(A)}^{N_{R}}}^{N_{R}} =y_{NN_{R}}^{\cal S}\left(\overline{N_{R}^{c}}{\cal N}_{R} \right){\cal S}+y_{NN_{L}}^{\cal S}\left(\overline{N_{R}}{\cal N}_{L}\right){ \cal S}\] \[+y_{NE_{R}}^{\cal S}\left(\overline{N_{R}^{c}}E_{R}\right){\cal S }_{1}+y_{NE_{L}}^{\cal S}\left(\overline{N_{R}}E_{L}\right){\cal S}_{1}\] \[+y_{N\Delta_{1R}}^{\varphi}\left(\overline{N_{R}^{c}}\Delta_{1R} \right)\varphi+y_{N\Delta_{1L}}^{\varphi}\left(\overline{N_{R}}\Delta_{1L} \right)\varphi\] \[+y_{N\Sigma_{R}}^{\Xi}\left(\overline{N_{R}^{c}}\Sigma_{R} \right)\Xi+y_{N\Sigma_{L}}^{\Xi}\left(\overline{N_{R}}\Sigma_{L}\right)\Xi\] \[+y_{N\Sigma_{1R}}^{\Xi}\left(\overline{N_{R}^{c}}\Sigma_{1R} \right)\Xi_{1}+y_{N\Sigma_{1L}}^{\Xi}\left(\overline{N_{R}}\Sigma_{1L}\right) \Xi_{1}\] \[+y_{ND_{R}}^{\omega_{1}}\left(\overline{N_{R}^{c}}D_{R}\right) \omega_{1}^{\dagger}+y_{ND_{L}}^{\omega_{1}}\left(\overline{N_{R}}D_{L}\right) \omega_{1}^{\dagger}\] \[+y_{N\Omega_{R}}^{\mu_{1}}\left(\overline{N_{R}^{c}}Q_{1R}\right) \Pi_{1}^{\dagger}+y_{NQ_{1L}}^{\mu_{1}}\left(\overline{N_{R}}Q_{1L}\right) \Pi_{1}^{\dagger}\] \[+y_{NQ_{7R}}^{\Pi_{7}}\left(\overline{N_{R}^{c}}Q_{7R}\right)\Pi _{7}^{\dagger}+y_{NQ_{7L}}^{\Pi_{7}}\left(\overline{N_{R}}Q_{7L}\right)\Pi_{7} ^{\dagger}\] \[+y_{NT_{1R}}^{\xi}\left(\overline{N_{R}^{c}}T_{1R}\right)\zeta^{ \dagger}+y_{NT_{1L}}^{\varphi}\left(\overline{N_{R}}T_{1L}\right)\zeta^{ \dagger}+{\rm h.c.}\,, \tag{116}\]
\[{\cal L}_{{}_{FFS,(B)}^{N_{R}}}^{N_{R}} =y_{NN}^{\cal S}\left(\overline{N_{R}^{c}}N_{R}\right){\cal S}+y_ {Ne}^{\cal S}\left(\overline{N_{R}^{c}}e_{R}\right){\cal S}_{1}+y_{NL}^{ \varphi}\left(\overline{N_{R}}L\right)\varphi\] \[+y_{Nd}^{\omega_{1}}\left(\overline{N_{R}^{c}}d_{R}\right)\omega_ {1}^{\dagger}+y_{Nu}^{\omega_{2}}\left(\overline{N_{R}^{c}}u_{R}\right) \omega_{2}^{\dagger}+y_{QN}^{\Pi_{1}}\left(\overline{Q}N_{R}\right)\Pi_{1}\] \[+y_{N\Delta_{1R}}\left(\overline{N_{R}^{c}}\Delta_{1R}\right)H+y _{N\Delta_{1L}}\left(\overline{N_{R}}\Delta_{1L}\right)H+{\rm h.c.}\,, \tag{117}\]
\[{\cal L}_{{}_{FFV,(A)}^{N_{R}}}^{N_{R}} =g_{NN_{R}}^{\cal B}\left(\overline{N_{R}}\gamma_{\mu}{\cal N}_{R} \right){\cal B}^{\mu}+g_{N{\cal N}_{L}}^{\cal B}\left(\overline{N_{R}^{c}} \gamma_{\mu}{\cal N}_{L}\right){\cal B}^{\mu}\] \[+g_{NEx_{R}}^{\cal B}\left(\overline{N_{R}}\gamma_{\mu}E_{R} \right){\cal B}_{1}^{\mu}+g_{NEx_{L}}^{\cal B}\left(\overline{N_{R}^{c}}\gamma _{\mu}E_{L}\right){\cal B}_{1}^{\mu}\] \[+g_{N\Delta_{1R}}^{\cal L}\left(\overline{N_{R}}\gamma_{\mu} \Delta_{1R}\right){\cal L}_{1}^{\mu}+g_{N\Delta_{1L}}^{\cal L}\left( \overline{N_{R}^{c}}\gamma_{\mu}\Delta_{1L}\right){\cal L}_{1}^{\mu}\] \[+g_{N\Delta_{3R}}^{\cal L}\left(\overline{N_{R}}\gamma_{\mu} \Delta_{3R}\right){\cal L}_{3}^{\mu\dagger}+g_{N\Delta_{3L}}^{\cal L}\left( \overline{N_{R}^{c}}\gamma_{\mu}\Delta_{3L}\right){\cal L}_{3}^{\mu\dagger}\] \[+g_{N\Sigma_{R}}^{\cal W}\left(\overline{N_{R}}\gamma_{\mu}\Sigma_{ R}\right){\cal W}^{\mu}+g_{N\Sigma_{L}}^{\cal W}\left(\overline{N_{R}^{c}} \gamma_{\mu}\Sigma_{L}\right){\cal W}^{\mu}\] \[+g_{N\Sigma_{1R}}^{\cal W}\left(\overline{N_{R}}\gamma_{\mu} \Sigma_{1R}\right){\cal W}_{1}^{\mu\dagger}+g_{N\Sigma_{1L}}^{\cal W}\left( \overline{N_{R}^{c}}\gamma_{\mu}\Sigma_{1L}\right){\cal W}_{1}^{\mu}\] \[+g_{ND_{R}}^{\cal L}\left(\overline{N_{R}}\gamma_{\mu}D_{R} \right){\cal U}_{1}^{\mu\dagger}+g_{ND_{L}}^{\cal L}\left(\overline{N_{R}^{c}} \gamma_{\mu}D_{L}\right){\cal U}_{1}^{\mu\dagger}\]
\[+g^{\mathcal{L}_{1}}_{dQ_{5}}\left(\overline{d_{R}}\gamma_{\mu}Q_{5R} \right)\mathcal{L}^{\mu\dagger}_{1}+g^{\mathcal{L}_{1}}_{uQ_{1}}\left(\overline {d_{R}}\gamma_{\mu}Q_{1R}\right)\mathcal{L}^{\mu}_{1}+\text{h.c.} \tag{100}\]
Again, we have included in the first line the three interactions with \(N_{R}\), which we already presented in \(\mathcal{L}^{N_{R}}_{FFV}\). The remaining terms involve at least one SM field.
Finally, we write down the 3-point interaction terms among heavy BSM fields that are
needed in different opening diagrams of the operator \(\mathcal{O}_{NH^{4}}\). These are
\[\mathcal{L}_{heavy}^{\text{diag}} =\left\{y_{\Delta_{1}}^{\mathcal{S}}\left(\overline{\Delta_{1L}} \Delta_{1R}\right)\mathcal{S}+y_{\Delta_{1}}^{\overline{\Xi}_{1}}\left( \overline{\Delta_{1L}}\Delta_{1R}\right)\Xi\right.\] \[\left.+\,y_{\Delta_{1L}}^{\overline{\Xi}_{1}}\left(\overline{ \Delta_{1L}^{c}}\Delta_{1L}\right)\Xi_{1}+y_{\Delta_{1R}}^{\overline{\Xi}_{1} }\left(\overline{\Delta_{1R}^{c}}\Delta_{1R}\right)\Xi_{1}\right\}+\text{ h.c.}\] \[\left.+\,\mu_{\mathcal{S}\Xi_{1}}\left(\mathcal{S}\overline{\Xi} _{1}^{\dagger}\Xi_{1}\right)+\mu_{\mathcal{S}\Xi}\left(\mathcal{S}\Xi\Xi \right)+\mu_{\mathcal{S}}\left(\mathcal{S}\mathcal{S}\mathcal{S}\right)\,. \tag{100}\]
The complete set of interactions in \(\mathcal{L}_{\text{UV}}\) can be found in the ancillary file added to this paper.
## Acknowledgements
We would like to thank Jose Santiago for help with the tool MatchMakerEFT [64] and Javier Fuentes-Martin for help with Matchete[46]. M.H. and R.B. acknowledge support by grants PID2020-113775GB-I00 (AEI/10.13039/501100011033) and CIPROM/2021/054 (Generalitat Valenciana). R.B. also acknowledges financial support from the Generalitat Valenciana (grant ACIF/2021/052). R.C. is supported by the Alexander von Humboldt Foundation Fellowship.
|
2303.11877 | Dynamic and polarimetric VLBI imaging with a multiscalar approach | Recently multiscale imaging approaches such as DoG-HiT were developed to
solve the VLBI imaging problem and showed a promising performance: they are
fast, accurate, unbiased and automatic. We extend the multiscalar imaging
approach to polarimetric imaging, reconstructions of dynamically evolving
sources and finally to dynamic polarimetric reconstructions. These extensions
(mr-support imaging) utilize a multiscalar approach. The time-averaged Stokes I
image is decomposed by a wavelet transform into single subbands. We use the set
of statistically significant wavelet coefficients, the multiresolution support,
computed by DoG-HiT as a prior in a constrained minimization manner: we fit the
single-frame (polarimetric) observables by only varying the coefficients in the
multiresolution support. The EHT is a VLBI array imaging supermassive black
holes. We demonstrate on synthetic data that mr-support imaging offers ample
regularization and is able to recover simple geometric dynamics at the horizon
scale in a typical EHT setup. The approach is relatively lightweight, fast and
largely automatic and data driven. The ngEHT is a planned extension of the EHT
designed to recover movies at the event horizon scales of a supermassive black
hole. We benchmark the performance of mr-support imaging for the denser ngEHT
configuration demonstrating the major improvements the additional ngEHT
antennas will bring to dynamic, polarimetric reconstructions. Current and
upcoming instruments offer the observational possibility to do polarimetric
imaging of dynamically evolving structural patterns with highest spatial and
temporal resolution. State-of-the-art dynamic reconstruction methods can
capture this motion with a range of temporal regularizers and priors. With this
work, we add an additional, simpler regularizer to the list: constraining the
reconstruction to the multiresolution support. | Hendrik Müller, Andrei Lobanov | 2023-03-21T14:21:43Z | http://arxiv.org/abs/2303.11877v1 | # Dynamic and polarimetric VLBI imaging with a multiscalar approach
###### Abstract
Context:Due to the limited number of antennas and the limited observation time, an array of antennas in Very Long Baseline Interferometry (VLBI) often samples the Fourier domain only very sparsely. Powerful deconvolution algorithms are needed to compute a final image. Recently multiscale imaging approaches such as DoG-HiT were developed to solve the VLBI imaging problem and showed a promising performance: they are fast, accurate, unbiased and automatic.
Aims:We extend the multiscalar imaging approach to polarimetric imaging, reconstructions of dynamically evolving sources and finally to dynamic polarimetric reconstructions.
Methods:These extensions (mr-support imaging) utilize a multiscalar approach. The time-averaged Stokes I image is decomposed by a wavelet transform into single subbands. We use the set of statistically significant wavelet coefficients, the multiresolution support, computed by DoG-HiT as a prior in a constrained minimization manner: we fit the single-frame (polarimetric) observables by only varying the coefficients in the multiresolution support.
Results:The Event Horizon Telescope (EHT) is a VLBI array imaging supermassive black holes. We demonstrate on synthetic data that mr-support imaging offers ample regularization and is able to recover simple geometric dynamics at the horizon scale in a typical EHT setup. The approach is relatively lightweight, fast and largely automatic and data driven. The ngEHT is a planned extension of the EHT designed to recover movies at the event horizon scales of a supermassive black hole. We benchmark the performance of mr-support imaging for the denser ngEHT configuration demonstrating the major improvements the additional ngEHT antennas will bring to dynamic, polarimetric reconstructions.
Conclusions:Current and upcoming instruments offer the observational possibility to do polarimetric imaging of dynamically evolving structural patterns with highest spatial and temporal resolution. State-of-the-art dynamic reconstruction methods can capture this motion with a range of temporal regularizers and priors. With this work, we add an additional, simpler regularizer to the list: constraining the reconstruction to the multiresolution support.
## 1 Introduction
In Very Long Baseline Interferometry (VLBI) the signals recorded at single antennas are correlated to achieve a spatial resolution that would not be achievable with single-dish instruments. The correlation product of every antenna pair at a fixed time is the Fourier coefficient (visibility) of the true sky brightness distribution with a Fourier frequency determined by the projected spatial vector joining two antennas (baseline). As the Earth rotates during the observing run, baselines rotate on elliptical tracks in the Fourier domain, hence filling up the Fourier plane (uv-plane) continuously. However, due to the limited number of antennas and the limited observing time, the coverage of Fourier coefficients (uv-coverage) is sparse. VLBI imaging is the problem of recovering the true sky brightness distribution from these sparsely covered Fourier coefficients.
It is a long-standing frontline goal in astronomy to recover images of the shadow of a supermassive black hole. The Event Horizon Telescope (EHT) is a globally spanning VLBI array that observes at 230 GHz (with a recent upgrade to 345 GHz). With the combination of global baselines and short baselines, the EHT achieves the angular resolution that is needed to capture the first image of the black hole shadow in M87 (Event Horizon Telescope Collaboration et al. 2019a) and in the Milky Way (Event Horizon Telescope Collaboration et al. 2022a). The next-generation Event Horizon Telescope (ngEHT) is a planned extension of the EHT (Doeleman et al. 2019; Johnson & the ngEHT Project 2023). It may produce movies of the accretion onto the central black hole Sgr A* at the scales of the event horizon (Roelofs et al. 2023; Emami et al. 2023). The dynamic time-scales for these observations are very short. Observations of Sgr A* in the sub-mm (Bower et al. 2015; Wielgus et al. 2022) and near-infrared regime (GRAVITY Collaboration et al. 2018a,b) confirm that Sgr A* is time-varying on timescales as short as 30 minutes. The predicted ISCO period varies between 4 minutes and roughly 30 minutes depending on the spin of the black hole. Palumbo et al. (2019) concluded that a well-sampled baseline coverage on timescales of \(\sim 30\) minutes is needed to recover the source dynamics.
CLEAN (Hogbom 1974) and its many variants (Clark 1980; Schwab 1984; Wakker & Schwarz 1988; Bhatnagar & Cornwell 2004; Cornwell 2008; Rau & Cornwell 2011; Muller & Lobanov 2023) served the community well for decades, but have recently been challenged by forward imaging approaches in the spirit
of Regularized Maximum Likelihood (RML) methods (Narayan and Nityananda, 1986; Wiaux et al., 2009; Garsden et al., 2015; Ikeda et al., 2016; Chael et al., 2016, 2018; Akiyama et al., 2017, 2019; Event Horizon Telescope Collaboration et al., 2019; Muller and Lobanov, 2022) and Bayesian approaches (Arras et al., 2019, 2021; Broderick et al., 2020a,b). Recently we developed new multiresolution tools for performing VLBI imaging (Muller and Lobanov, 2022, 2023). For these multiscalar approaches we designed special wavelet-based basis functions (difference of Gaussian and difference of spherical Bessel functions) and fitted the basis functions to the uv-coverage. In this way we define smooth basis functions that are well suited to describe (compress) the recovered image features by encoding information about the uv-coverage itself. Some wavelets are most sensitive to gaps in the uv-coverage while others are most sensitive to covered Fourier coefficients. While the signal from the latter should be recovered, the signal from the former is suppressed (effectively avoiding overfitting).
As a byproduct of these multiscalar imaging algorithms, we compute the so-called multiresolution support (Muller and Lobanov, 2022): a set of wavelet parameters that are deemed statistically significant to represent the recovered image features. The multiresolution support encodes various information about the recovered image. Firstly, it implements a 'support constraint' (where is the emission located in the image?). Secondly, it encodes a 'spatial constraint' (which spatial scales are needed to represent the image features at these locations?). The second prior, in particular, is determined by the spatial scales that are present in the data, i.e. that are covered by baselines in the observation. We demonstrated in Muller and Lobanov (2022) that the multiresolution support is a powerful prior, very well suited to refine the imaging procedure. In Muller and Lobanov (2022) we proposed to add amplitudes and phases to the data terms and remove any regularizer term, but solve the resulting optimization problem by only updating the coefficients in the multiresolution support. The fit to the observed visibilities improved, but without the addition of spurious artifacts that are typical for overfitting.
Besides Stokes I imaging, full polarimetric imaging is of interest for the VLBI community both theoretically (Blandford and Znajek, 1977; Hardee et al., 2007; Kramer and MacDonald, 2021) and observationally (among many others, e.g. Gomez et al., 2011; Hovatta et al., 2012; Zamaninasab et al., 2014; Gomez et al., 2016; Potzl et al., 2021; Ricci et al., 2022), in particular at the event horizon scales (Event Horizon Telescope Collaboration et al., 2021, 2020). In polarimetric imaging the recorded data are separated into several polarized subbands and recombined into the four Stokes parameters. Essentially we have four Stokes parameters (I, Q, U, V) and corresponding polarized visibilities. Hence, the problem that we aim to solve for the other three Stokes parameters is the same as for Stokes I: recovering a signal from a sparse measurement of the Fourier coefficients. However, there are some slight differences: while the Stokes I image is necessarily non-negative (and this is used during imaging as a prior), this does not have to be true for Stokes Q, U, and V. Moreover, \(I^{2}\geq Q^{2}+U^{2}+V^{2}\) applies.
The multiresolution support is a well suited prior to be applied to polarimetric imaging once the Stokes I image is already available. The 'support constraint' of the multiresolution support encodes the information that linear and circular polarized emission theoretically can only appear at locations where total intensity (Stokes I) is bigger than zero. This might not reflect the observation situation in every case: sometimes the Stokes I signature cannot be retrieved with the spatial sensitivity of the interferometer while the more localized (e.g. due to Faraday rotation) polarized structural pattern is visible. However, in most VLBI studies this pathological situation does not appear and the 'support constraint' is a good approximation. Moreover, the 'spatial constraint' reflects the fact that the polarimetric visibilities have the same uv-coverage as total intensity visibilities, i.e. the same spatial scales (the ones covered by the uv-coverage) are present in the polarized images.
Another domain of current research is the study of dynamic sources, such as Sgr A*, i.e. the static imaging of a dynamically evolving source as in Event Horizon Telescope Collaboration et al. (2022) and the dynamic movie reconstruction (Roelofs et al., 2023). In this work we focus on the latter problem. Data sets of dynamic sources pose additional challenges. Due to the short variability time scale, the effective uv-coverage in every frame is not sufficient for efficient snapshot imaging. Modern approaches utilize a temporal correlation instead, in a Bayesian framework (Bouman et al., 2018; Broderick et al., 2022; Roelofs et al., 2023) or as temporal regularizer in the RML framework (Bouman et al., 2018; Johnson et al., 2017; Chael et al., 2022; Roelofs et al., 2023). Moreover, the variability of the source could be misattributed to the calibration of the gains (Event Horizon Telescope Collaboration et al., 2022).
Again the multiresolution support (computed for the time-averaged image) encodes prior information that is highly desirable for dynamic imaging. The 'support constraint' encodes the information that every location of an emission spike appearing during the observation is present also in the mean image. The uv-coverage of the full observation run is the sum of the uv-coverages of the single frames. Hence, the 'spatial constraint' also provides some powerful image prior for dynamic imaging: the multiresolution support only allows spatial scales that are present in the mean image (in the full observation run), i.e. the fit in the gaps of the uv-coverage remains under control. On the other hand, the 'spatial constraint' allows for the addition of spatial scales to single frames that might not be represented in the uv-coverage of this single frame, but in earlier or later snapshots. However, we would like to mention that there may be a bias towards larger scales since the mean image suppresses small-scale structures present in only part of the individual frames.
Based on the success of the approach presented in Muller and Lobanov (2022) of only changing the coefficients in the multiresolution support to introduce effective regularization, we propose the same approach for static polarimetric imaging and dynamic imaging. As outlined above, the multiresolution support is well suited to be used as a regularizer in these problems as it exactly encodes the prior information that is needed. As we solve two quite different extensions to the standard VLBI imaging with the same approach, it is natural to use the same approach also for the combined problem: a dynamic, polarimetric reconstruction.
## 2 Theory
### Vlbi
As described by the van Cittert-Zernike theorem, the visibilities \(\mathcal{V}\) are related to the true sky-brightness distribution \(I(x,y)\) by a two-dimensional Fourier transform under reasonable assumptions (Thompson et al., 2017):
\[\mathcal{V}_{I}(u,v)=\int\int e^{-2\pi i(ux+vy)}I(x,y)\,dx\,dy=:\mathcal{F}I(u,v). \tag{1}\]
From a full coverage of the Fourier coefficients (visibilities) the true sky brightness distribution could be computed by an inverse
Fourier transform. However, in VLBI the uv-coverage is very sparse with significant gaps. This makes the problem of recovering the image an ill-posed inverse problem. The polarized quantities are measured at every antenna with orthogonal polarimetric filters (linear or circular). The cross-correlations of these signals give rise to the four Stokes parameters and their respective polarimetric visibilities:
\[\mathcal{V}_{I}=\mathcal{F}I, \tag{2}\] \[\mathcal{V}_{Q}=\mathcal{F}Q,\] (3) \[\mathcal{V}_{U}=\mathcal{F}U,\] (4) \[\mathcal{V}_{V}=\mathcal{F}V, \tag{5}\]
where \(I\) is the total brightness, \(Q\) and \(U\) the linear polarizations and \(V\) the circular polarization. By construction it is:
\[I^{2}\geq Q^{2}+U^{2}+V^{2}. \tag{7}\]
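To make the measurement equation concrete, the following minimal sketch evaluates eq. (1) as a direct Fourier sum for a pixelated toy image at a handful of baselines. The toy ring image, the field of view and the random (u,v) points are arbitrary assumptions chosen only for illustration; this is not the implementation used in DoG-HiT:

```python
import numpy as np

# Pixel grid for a 200 micro-arcsecond field of view (in radians).
npix = 64
fov = 200e-6 / 3600.0 * np.pi / 180.0
grid = (np.arange(npix) - npix / 2) * fov / npix
x, y = np.meshgrid(grid, grid)

# Toy sky brightness: a thin ring (a crude stand-in for a black-hole shadow).
r = np.sqrt(x**2 + y**2)
image = np.exp(-((r - 0.3 * fov / 2) / (0.05 * fov / 2))**2)

def forward(image, u, v):
    """Direct Fourier transform of the image at baselines (u, v) in wavelengths, cf. eq. (1)."""
    phase = -2j * np.pi * (np.multiply.outer(u, x) + np.multiply.outer(v, y))
    return np.sum(image * np.exp(phase), axis=(-2, -1))

# Sparse toy uv-coverage with baselines up to ~8 Giga-lambda.
rng = np.random.default_rng(0)
u, v = rng.uniform(-8e9, 8e9, 50), rng.uniform(-8e9, 8e9, 50)
vis = forward(image, u, v)
print(vis[:3])
```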
### Imaging
Imaging with the CLEAN algorithm and its variants (Hogbom 1974; Schwab 1984; Wakker & Schwarz 1988) has been the standard in VLBI imaging for the last decades. In CLEAN the imaging problem is equivalently reformulated as a deconvolution problem:
\[I^{D}=B^{D}*I, \tag{8}\]
where \(I^{D}\) is called the dirty map (inverse Fourier transform of all measured, and possibly reweighted, Fourier coefficients) and \(B^{D}\) (the dirty map of a synthetic delta source) is called the dirty beam. The astronomer using CLEAN determines some search windows for components; CLEAN looks for the maximum peak of the residual in this window (minor loop) and subtracts the shifted and rescaled dirty beam from the residual (major loop). This procedure is iterated until the residual is noise-like. In this way, CLEAN models the image as a set of delta functions. Finally, these components are restored with a restoring beam (clean beam) that fits the central peak of the dirty beam. CLEAN is an inverse modeling approach to the imaging problem.
Recently forward modeling approaches gained interest in the community in the framework of RML (Chael et al. 2018; Akiyama et al. 2017a; Muller & Lobanov 2022) and Bayesian methods (Arras et al. 2019; Broderick et al. 2020b, a). These methods seem to outperform classical CLEAN in terms of speed, spatial resolution, sensitivity and precision, in particular when the uv-coverage is sparse (e.g. Event Horizon Telescope Collaboration et al. 2019b; Arras et al. 2021; Muller & Lobanov 2022; Roelofs et al. 2023). On the other hand, these forward modeling methods require the fine-tuning of some hyper-parameters and regularization parameters, despite the recent effort to reduce this dependence (Muller & Lobanov 2022). For the remainder of this manuscript we focus on RML methods and ignore Bayesian approaches for now.
In RML, a sum of data fidelity terms and penalty terms is minimized:
\[\hat{I}\in argmin_{I}\sum_{i}\alpha_{i}S_{i}(I)+\sum_{j}\beta_{j}R_{j}(I), \tag{9}\]
where the data fidelity terms \(S_{i}\) measure the fidelity of the recovered solution \(I\) to the observed data (i.e. polarized visibilities) and the penalty/regularization terms \(R_{j}\) measure the plausibility of the guess image \(I\). The regularization parameters \(\alpha_{i}\) and \(\beta_{j}\) are manually set weights that balance data fidelity and regularization terms. Typical choices for the data terms are chi-squareds to the observed (polarimetric) visibilities, and related calibration independent quantities such as closure phases and closure amplitudes. For the regularization terms a wide range of regularizers has been applied in the past, e.g. sparsity promoting regularization (\(l_{1}\), \(l_{2}\)), smoothness constraints (total variation, total squared variation), hard constraints (total flux, non-negativity), entropy maximization (MEM) or multiscale decompositions (hard thresholding on scales). The regularization terms introduce regularization to the ill-posed imaging problem. By balancing the data terms and the regularization terms, we select a guess solution that fits the data (small data terms) and is robust against noise and artifacts (small penalty terms). We have demonstrated in previous works (Muller & Lobanov 2022) that a support constraint has the same regularization effect. By constraining the space of free parameters to the multiresolution support we were able to refine the fit to the observed data in later imaging rounds.
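A minimal sketch of how an objective of the form of eq. (9) can be evaluated is given below, using a \(\chi^{2}\) data term and a total-variation penalty as one example combination. The forward operator, the noise levels and the weights \(\alpha\), \(\beta\) are placeholders (not the settings of any published pipeline); the forward operator could, for instance, be the direct Fourier sum sketched in the previous subsection:

```python
import numpy as np

def chi2(vis_model, vis_obs, sigma):
    """Data fidelity term S_i: chi^2 between model and observed visibilities."""
    return np.mean(np.abs(vis_model - vis_obs)**2 / sigma**2)

def total_variation(image):
    """One common penalty term R_j: isotropic total variation of the image."""
    gx = np.diff(image, axis=0, append=image[-1:, :])
    gy = np.diff(image, axis=1, append=image[:, -1:])
    return np.sum(np.sqrt(gx**2 + gy**2 + 1e-12))

def rml_objective(image, forward, vis_obs, sigma, alpha=1.0, beta=1e-3):
    """Weighted sum of data and penalty terms, cf. eq. (9)."""
    return alpha * chi2(forward(image), vis_obs, sigma) + beta * total_variation(image)
```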
### Wavelets
Multiscalar approaches are based on multiscalar dictionaries. We proposed in (Muller & Lobanov 2022) the use of radially symmetric difference of Gaussian (DoG) wavelets and extended them to direction-dependent basis functions in (Muller & Lobanov 2023). Moreover, we introduced in (Muller & Lobanov 2023) steep, quasi-orthogonal basis functions to study the Fourier domain by difference of Bessel functions (DoB). Both dictionaries (DoG and DoB) are related to each other: the DoG wavelets approximate the central peak of the DoB wavelets, but do not contain the wider sidelobes of the latter. In what follows we quickly summarize these wavelet dictionaries. For more detailed information we refer to (Muller & Lobanov 2022, 2023).
Wavelets have a wide range of applications in image compression. The most widely used continuous wavelet is the Mexican-hat wavelet, which is a rescaled second-order derivative of a Gaussian (Laplacian of Gaussians) (Starck et al. 2015). The difference of Gaussian method offers a viable approximation to Mexican-hat wavelets. A DoG-wavelet is described by two width parameters \(\sigma_{1},\sigma_{2}\):
\[\Phi^{\sigma_{1},\sigma_{2}}_{\mathrm{DoG}}(x,y) =\frac{1}{2\pi\sigma_{1}^{2}}\exp\left(\frac{-r(x,y)^{2}}{2\sigma_ {1}^{2}}\right)-\frac{1}{2\pi\sigma_{2}^{2}}\exp\left(\frac{-r(x,y)^{2}}{2 \sigma_{2}^{2}}\right)\] \[=G_{\sigma_{1}}-G_{\sigma_{2}}. \tag{10}\]
The Fourier transforms of these DoG-wavelets define ring-like filters in the Fourier domain:
\[\mathcal{F}\Phi^{\sigma_{j},\sigma_{j+1}}_{\mathrm{DoG}}(u,v)\propto\exp \left(-2\pi^{2}\sigma_{j}^{2}q(u,v)^{2}\right)-\exp\left(-2\pi^{2}\sigma_{j+1} ^{2}q(u,v)^{2}\right). \tag{11}\]
The extension to DoB-wavelets is natural. We simply replace the Gaussians by spherical Bessel functions:
\[\Phi^{\sigma_{j},\sigma_{j+1}}_{\mathrm{DoB}}(x,y)=\] \[\frac{1}{\tilde{\sigma}_{j}r(x,y)}J_{1}(2\pi r(x,y)/\tilde{ \sigma}_{j})-\frac{1}{\tilde{\sigma}_{j+1}r(x,y)}J_{1}(2\pi r(x,y)/\tilde{ \sigma}_{j+1}). \tag{12}\]
Moreover, the extension of both wavelets to direction-dependent basis functions is straightforward as well. One just has to replace the radial coordinates by elliptical ones.
The wavelet decomposition is composed of the wavelet basis functions from a sequence of increasing widths \(\sigma_{0}\leq\sigma_{1}\leq...\leq\sigma_{J}\):
\[\Psi^{\text{DoG}}:I\mapsto\mathcal{I}=[\Phi^{\sigma_{0},\sigma_{1}}_{\text{DoG}}\ast I,\Phi^{\sigma_{1},\sigma_{2}}_{\text{DoG}}\ast I,...,G_{\sigma_{J}}\ast I], \tag{13}\] \[\Psi^{\text{DoB}}:I\mapsto\mathcal{I}=[\Phi^{\sigma_{0},\sigma_{1}}_{\text{DoB}}\ast I,\Phi^{\sigma_{1},\sigma_{2}}_{\text{DoB}}\ast I,...,J_{\sigma_{J}}\ast I]. \tag{14}\]
For direction dependent dictionaries, we use elliptical Gaussians and Bessel functions instead. For more details we refer to our discussion in Muller & Lobanov (2023). The multiscale dictionary is the adjoint of the multiscale decomposition (in what follows called \(\Gamma\)):
\[\Gamma:\mathcal{I}=\{I_{0},I_{1},I_{2},...,I_{J}\}\mapsto\sum_{i=0}^{J-1} \Phi^{\sigma_{i},\sigma_{i+1}}_{\text{DoG}}\ast I_{i}+G_{\sigma_{J}}\ast I_{J}, \tag{15}\]
with an analogous action for DoB-wavelets and multi-directional wavelets. The complete action of the multi-scalar and multi-directional wavelet decomposition is presented in the Appendix.
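A minimal sketch of the analysis operator \(\Psi^{\text{DoG}}\) of eq. (13) and of the dictionary \(\Gamma\) of eq. (15), based on Gaussian filtering from scipy, is given below. The filter widths are placeholder values; in DoG-HiT the widths are fitted to the uv-coverage:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_analysis(image, sigmas):
    """Decomposition Psi^DoG, eq. (13): differences of Gaussian-smoothed images
    for increasing widths, plus the remaining smooth scale."""
    smoothed = [gaussian_filter(image, s) for s in sigmas]
    bands = [smoothed[j] - smoothed[j + 1] for j in range(len(sigmas) - 1)]
    bands.append(smoothed[-1])
    return bands

def dog_synthesis(coeffs, sigmas):
    """Dictionary Gamma, eq. (15): convolve each coefficient map with its DoG atom
    (difference of two Gaussians) and sum, plus the widest Gaussian for the last scale."""
    out = np.zeros_like(coeffs[0])
    for j in range(len(sigmas) - 1):
        out += gaussian_filter(coeffs[j], sigmas[j]) - gaussian_filter(coeffs[j], sigmas[j + 1])
    out += gaussian_filter(coeffs[-1], sigmas[-1])
    return out

sigmas = [1, 2, 4, 8, 16, 32]  # widths in pixels (placeholder values)
image = np.random.default_rng(1).random((64, 64))
bands = dog_analysis(image, sigmas)
print(len(bands), bands[0].shape)
```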
### DoG-Hit
Our novel algorithm for doing dynamic polarimetric reconstructions is an extension of the DoG-HiT algorithm (Muller & Lobanov 2023). We summarize this algorithmic framework in this section. DoG-HiT models the image by a radially symmetric wavelet dictionary \(\Psi^{\text{DoG}}\). The Fourier transforms of the basis functions of the dictionary (atoms) are sensitivity filters in the Fourier domain. Hence, by fitting the widths of the Gaussians to the uv-coverage, we define wavelets that are most sensitive to measured Fourier coefficients and wavelets that are most sensitive to gaps in the uv-coverage. The signal of the former should be kept, while the missing information on the scales of the latter causes sidelobes in the image. In this way, the dictionary allows for a better separation between measured features (covered by baselines) and uncovered artifacts. We interpolate the signal in the gaps by the smooth nature of the basis functions, but suppress the signal in the gaps to a level at which overfitting is prohibited. All in all, we solve the minimization problem (Muller & Lobanov 2022):
\[\hat{\mathcal{I}}\in\operatorname{argmin}_{\mathcal{I}}\left[S_{\text{cph}}(\mathcal{F}\Gamma\mathcal{I},\mathcal{V})+S_{\text{cla}}(\mathcal{F}\Gamma\mathcal{I},\mathcal{V})+\beta\cdot\|\mathcal{I}\|_{0}+R_{\text{flux}}(\mathcal{I},f)\right], \tag{16}\]
where \(S_{\text{cph}}\) and \(S_{\text{cla}}\) denote the \(\chi^{2}\)-fit to the closure phases and closure amplitudes respectively. \(R_{\text{flux}}\) denotes a characteristic function on the total flux of the guess solution. We use the \(l_{0}\) pseudo-norm \(\|\cdot\|_{0}\) (i.e. the number of non-zero coefficients) as a sparsity promoting regularization term weighted with a regularization parameter \(\beta\). Eq. (16) is solved by a forward-backward splitting algorithm alternated with rescaling the emission to a predefined total flux (Muller & Lobanov 2022). The final recovered solution is:
\[\hat{I}=\Gamma\mathcal{I}. \tag{17}\]
The regularization parameter \(\beta\) is the only free parameter that needs to be chosen manually by the user. The number of free parameters is therefore much smaller than for RML methods such as ehtim (Chael et al. 2016, 2018) or SMILI (Akiyama et al. 2017b, a) since the penalty term is chosen data-driven. We demonstrated in Muller & Lobanov (2022) that although the optimization landscape is much simpler, the reconstructions obtained by DoG-HiT are competitive with RML reconstructions. Moreover, we only fit closure phases and closure amplitudes for DoG-HiT in Eq. (16), i.e. the reconstruction is robust against instrumental gain corruptions. Subsequently we use the model computed by DoG-HiT for self-calibration, i.e. we determine the gains.
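The structure of the forward-backward iteration used to minimize eq. (16) can be sketched as follows: a gradient step on the closure data terms is followed by scale-wise hard thresholding, which acts as the proximal operator of the \(l_{0}\) penalty. The gradient callable, the step size and the thresholds below are placeholders, not the actual DoG-HiT settings:

```python
import numpy as np

def hard_threshold(coeffs, thresholds):
    """Backward (proximal) step of the l0 penalty: zero out insignificant coefficients."""
    return [np.where(np.abs(c) >= t, c, 0.0) for c, t in zip(coeffs, thresholds)]

def forward_backward(coeffs, grad_data, thresholds, step=0.1, niter=100):
    """Forward-backward splitting sketch for eq. (16).

    coeffs     : list of wavelet coefficient arrays (the unknowns)
    grad_data  : callable returning the gradients of the closure data terms
                 with respect to each coefficient array (placeholder)
    thresholds : scale-dependent hard thresholds (playing the role of beta)
    """
    for _ in range(niter):
        grads = grad_data(coeffs)
        coeffs = [c - step * g for c, g in zip(coeffs, grads)]   # forward (gradient) step
        coeffs = hard_threshold(coeffs, thresholds)              # backward (proximal) step
    return coeffs
```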
### Multiresolution support
A specific property of the multiscalar decompositions is the multiresolution support. Mertens & Lobanov (2015) paved the way for the application of the multiresolution support in the analysis of AGN jets. The multiresolution support is a set of wavelet components that are statistically significant (Starck et al. 2015). We decompose a noisy image by a wavelet dictionary: \([I_{0},I_{1},I_{2},...,I_{J}]=\Psi I\). Moreover, we compute the scale-dependent noise-level \(s_{j}\) by decomposing a Gaussian white noise field with the same wavelet dictionary. Given some threshold \(k_{s}\), we can define a set of statistically significant wavelet coefficients with the criterion that \(|I_{j}(x,y)|\geq k_{s}s_{j}\), where the noise-level is approximated by the variance from an emission-free region of the image scale \(I_{j}\) (i.e. far away from the center). The multiresolution support for a celestial ground truth image from the EHT imaging challenges1 is illustrated in Fig. 1.
Footnote 1: [http://vlbimaging.csail.mit.edu/](http://vlbimaging.csail.mit.edu/)
The multiresolution support encodes two different types of prior information about the model. Firstly, it encodes a 'support constraint', i.e. it defines the position of significant emission spikes in the field of view.
Secondly, the multiresolution support contains information about the spatial scales that are present in the observation. In sparse VLBI arrays, this is dominated by the uv-coverage, i.e. by which spatial scales are covered by observed baselines in the Fourier domain. As various wavelet basis functions are most sensitive to various baselines or gaps in the uv-coverage, the information about which spatial scales are covered by observations is directly imprinted in the multiresolution support. This is especially true for the direction dependent DoG- and DoB-wavelets used for DoG-HiT that were fitted to the uv-coverage, i.e. that were developed to allow an optimal separation between covered features and gaps in the uv-coverage.
DoG-HiT solves the minimization of Eq. (16) with a forward-backward splitting algorithm. The backward projection step is the application of the proximal-point operator of the \(l^{0}\) penalization function, which is a hard thresholding (Muller & Lobanov 2022). Hence, all insignificant wavelet coefficients are set to zero. DoG-HiT therefore computes an approximation of the multiresolution support as a byproduct. This support was used for further refining rounds in the imaging (Muller & Lobanov 2022).
The computation of the multiresolution support as a byproduct of DoG-HiT highlights an essential improvement of DoG-HiT compared to CLEAN regarding supervision. The support of significant emission is found by DoG-HiT automatically, while it has to be selected in CLEAN by the user-defined CLEAN windows. DoG-HiT is therefore less user-biased and provides (compared to standard RML frameworks and CLEAN) an essential step towards unsupervised VLBI imaging.
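The computation of the multiresolution support itself amounts to a few lines of scale-wise thresholding. In the sketch below the scale-dependent noise level \(s_{j}\) is estimated from an (assumed) emission-free corner of each subband, and the threshold \(k_{s}\) is a placeholder value:

```python
import numpy as np

def multiresolution_support(coeffs, k_sigma=3.0, noise_box=16):
    """Boolean masks of statistically significant wavelet coefficients.

    coeffs    : list of wavelet subbands (e.g. from the DoG analysis sketched above)
    k_sigma   : significance threshold in units of the scale-dependent noise
    noise_box : size of an (assumed) emission-free corner used to estimate the noise s_j
    """
    support = []
    for band in coeffs:
        s_j = np.std(band[:noise_box, :noise_box])  # scale-dependent noise level
        support.append(np.abs(band) >= k_sigma * s_j)
    return support
```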
## 3 Algorithms
We outline in this section the algorithms used for static polarimetry, dynamic Stokes I imaging and dynamic polarimetry. In what follows, we will call these algorithms 'mr-support imaging'.
### Stokes I
Static Stokes I images are constructed with DoG-HiT with the five-round pipeline presented in (Muller & Lobanov 2022). However, in (Muller & Lobanov 2022) we used only radially symmetric wavelets. As an extension, we use the multi-directional dictionaries developed in (Muller & Lobanov 2023) in this work, i.e. we replace the circularly symmetric Gaussians by elliptical Gaussians. Moreover, we used a grid search in (Muller & Lobanov 2022) to find a proper starting point for the forward-backward splitting minimization iterations of DoG-HiT. Since the backward step in the minimization is essentially a hard thresholding, we tried different scale-dependent thresholds in an equidistant grid to minimize Eq. (16) and used the setting of the minimum as the starting point for the forward-backward iterations. For this manuscript, we use the same grid search, but apply the orthogonal DoB-wavelets in the grid search, while still using the DoG wavelets in the imaging rounds of the pipeline. We will not focus on the Stokes I reconstruction in this work as these extensions are rather straightforward and minor, and the focus of the manuscript is on an extension of DoG-HiT to polarimetry. We recall one of the main advantages of DoG-HiT: the algorithm works mainly unsupervised with a minimal set of free parameters, hence adding minimal human bias to the imaging procedure.
### Polarimetry
For polarimetric reconstructions we first reconstruct a Stokes I image with DoG-HiT and solve for the gains by self-calibrating to the final output (note that DoG-HiT relies on calibration independent closure quantities). As a second step, we solve for the polarimetric Stokes parameters \(Q,U\) and \(V\). We take the multiresolution support computed by DoG-HiT for the Stokes I imaging and constrain the space of free parameters to all wavelet coefficients in the multiresolution support. We then solve for \(Q,U,V\) by minimizing the fit to \(\mathcal{V}_{Q},\mathcal{V}_{U},\mathcal{V}_{V}\) with a gradient descent algorithm, but only allow coefficients in the multiresolution support to vary. In summary we solve the following problems:
\[\begin{split}\hat{\mathcal{Q}}&\in\operatorname{argmin}_{\mathcal{Q}=[Q_{0},...,Q_{J}]:\,Q_{j}(x,y)=0\,\text{whenever}\,\hat{I}_{j}(x,y)=0}\left[S_{Q}(\mathcal{F}\Gamma\mathcal{Q},\mathcal{V}_{Q})\right],\\ \hat{\mathcal{U}}&\in\operatorname{argmin}_{\mathcal{U}=[U_{0},...,U_{J}]:\,U_{j}(x,y)=0\,\text{whenever}\,\hat{I}_{j}(x,y)=0}\left[S_{U}(\mathcal{F}\Gamma\mathcal{U},\mathcal{V}_{U})\right],\end{split} \tag{18}\]
where \(\{\hat{I}_{0},...,\hat{I}_{J}\}=:\hat{\mathcal{I}}\) are the recovered wavelet coefficients for the Stokes I image as in Sec. 2.4. \(S_{Q}\) and \(S_{U}\) are the \(\chi^{2}\)-fit qualities to the Stokes Q and U visibilities. The side condition \(\hat{Q}_{j}(x,y)=0\) whenever \(\hat{I}_{j}(x,y)=0\) denotes the constraint that we only vary coefficients in the multiresolution support.
The multiresolution support is a well suited regularizer here: the support constraint encodes the side-condition Eq. (7) effectively, i.e. polarized emission is only allowed to appear at locations in the images in which we found relevant emission in total intensity. While this inequality (7) holds true theoretically in any case, in practice the pathological situation could occur that due to instrumental effects a non-detection of Stokes I does not rule out polarimetric structures. With this caveat in mind, we assume for the rest of the manuscript that inequality (7) holds true in observations as well. Moreover, the polarimetric visibilities have the same uv-coverage as the Stokes I visibility. The 'spatial constraint' of the multiresolution support describes which spatial scales are statistically significant to describe the emission in the image, which in case of sparse VLBI arrays is dominated by the uv-coverage (i.e. which spatial scales are compressed by which baselines and whether these baselines are measured). Hence, we already computed the multiresolution support as a byproduct in DoG-HiT to study the uv-coverage of the observation and get control over overfitting in the gaps of the uv-coverage by suppressing the respective atoms of the dictionary. This effective regularization can be copied over to the polarized visibilities as the uv-coverage is the same.
Moreover, we would like to stress once again that the multiresolution support is a completely data-driven property computed as a by-product by DoG-HiT. Hence, the reconstruction of polarimetric properties still relies on a minimal set of hyperparameters and remains largely unsupervised.
We fit complex polarimetric visibilities directly here. This requires that a good polarization calibration is already available. The method is, however, easy to adapt to more realistic situations
Figure 1: Left panels: true image and true image with additional Gaussian noise, middle panels: wavelet decomposition of the noised image with the DoG-wavelet dictionary computed with filter sizes \(\sigma_{0}=1,\sigma_{1}=2,\sigma_{2}=4,...,\sigma_{5}=32\) pixels, right panels: multiresolution support computed by thresholding the wavelet scales to the scale-dependent noise plotted as a mask with either value 1 (coefficient in the support) or 0 (coefficient not in the multiresolution support)
since it is (opposed to CLEAN) a forward-modeling technique. Firstly, instead of a constrained \(\chi^{2}\)-minimization to the complex visibilities, one could just optimize the fit to the visibility-domain polarization fraction as in (Johnson et al., 2015). Secondly, the minimization in Eq. (18) is done iteratively, where the most important features are recovered first and gradually more detailed features will be recovered at later iterations. Hence, with a similar philosophy to how self-calibration interacts with CLEAN, we could run the minimization for some iterations and do the calibration on the current model, then continue the minimization and calibration in an alternating manner.
### Dynamic Stokes I
For dynamic Stokes I imaging, we first reconstruct a static image with DoG-HiT. For this work we assume that the static image of a dynamically evolving source might be a good approximation to the mean image during the time of observation. This might be particularly true if the source contains some persistent structure during the complete observing run, as could be expected for Sgr A* in EHT observations with a persistent shadow in rotating hotspot models (Tiede et al., 2020). However, depending on the dynamics of the target, it may be difficult to recover a decent fit to the data with a static image. In this work we applied a procedure inspired by the strategy in Event Horizon Telescope Collaboration et al. (2022b), i.e. we added a systematic noise-floor on every baseline to account for variability. However, we did not repeat the sophisticated noise modeling applied in Event Horizon Telescope Collaboration et al. (2022b).
We compute the multiresolution support from the static mean image. Then, we cut the observation into single frames and reconstruct an image for every frame independently. All frames together make up the dynamic movie reconstruction. However, because single frames are short, their uv-coverage is too sparse for snapshot imaging. Again we propose to use the multiresolution support instead. We minimize the \(\chi^{2}\) for every single-frame observation independently with a gradient descent algorithm (using the mean image as an initial guess), but only allow coefficients in the multiresolution support to vary.
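A schematic Python sketch of this frame-by-frame procedure is given below: each frame is fitted to its own, very sparse set of visibilities, starting from the mean image (and subsequently from the preceding frame), while only coefficients inside the support are varied. The toy forward operators, the oscillating ground-truth coefficients, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_coeff, n_frames, n_vis_frame = 128, 8, 20

# toy ground-truth movie: a sparse static structure whose coefficients oscillate in time
base = rng.normal(size=n_coeff) * (rng.random(n_coeff) < 0.10)
phases = rng.uniform(0.0, 2.0 * np.pi, n_coeff)
movie = np.stack([base * (1.0 + 0.3 * np.sin(2.0 * np.pi * t / n_frames + phases))
                  for t in range(n_frames)])

support = (base != 0).astype(float)   # stand-in for the multiresolution support of the mean image
mean_model = movie.mean(axis=0)       # stand-in for the static DoG-HiT (mean) reconstruction

def fit_frame(vis, F, sigma, start, mask, lr=0.02, n_iter=1000):
    """Constrained chi^2 fit of a single frame: only masked coefficients are varied."""
    x = start.copy()
    for _ in range(n_iter):
        r = (F @ x - vis) / sigma ** 2
        grad = 2.0 * np.real(F.conj().T @ r) / len(vis)
        x -= lr * mask * grad
    return x

frames, previous = [], mean_model
for t in range(n_frames):
    # every frame has its own, very sparse uv-coverage (here: an independent toy operator)
    F_t = (rng.normal(size=(n_vis_frame, n_coeff)) +
           1j * rng.normal(size=(n_vis_frame, n_coeff))) / np.sqrt(n_coeff)
    vis_t = F_t @ movie[t]
    previous = fit_frame(vis_t, F_t, sigma=0.05, start=previous, mask=support)
    frames.append(previous)

print([round(float(np.linalg.norm(f - m)), 3) for f, m in zip(frames, movie)])
```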
The multiresolution support is a well-suited regularizer here as well: if the static image is a good approximation to the mean image, the static image contains all the locations of emission in the field of view. If at some time an emission spike occurs at a specific location, this emission spike should be visible in the mean as well. Hence, the 'support constraint' encodes information about the location of emission at single frames. This assumption comes with the caveat that short-lived, small-scale features may not be strong enough in the mean image and may later be excluded from the dynamic reconstructions due to the multiresolution support. However, we also doubt that such a feature would be visible with the much sparser uv-coverage of single scans, and it would therefore not be recovered anyway. Moreover, the uv-coverage of the complete observation is the sum of the coverages of the single frames. In single-frame observations there are three different categories of Fourier coefficients/baselines: the ones measured by observations in this single frame (very sparse), the ones that are not measured during the time of the single frame but will be measured at later (earlier) times in the observation, and the baselines that are not measured at all due to the sparsity of the array. By doing constrained optimization (constrained by the multiresolution support) on the single-frame observation we fit the first class of baselines, copy the solution over from the initial guess (mean image) for the second class of baselines, and suppress the last class of baselines by the multiresolution support. Hence, the 'spatial constraint' implemented by the multiresolution support is a well-suited prior for dynamic imaging.
The reasonable assumption of temporal correlation between scans, e.g. through a regularizer term favoring temporal smoothness, is not used explicitly for mr-support imaging. However, such assumptions could be included in the dynamic reconstruction straightforwardly: instead of fitting the visibilities with a constrained minimization approach alone, we minimize the sum of a quality metric for the fit to the visibilities and a temporal regularization term, but still only vary coefficients in the multiresolution support. For this work, however, we restrict ourselves to reconstructions without a penalization on the temporal evolution, such that no new regularization parameters are introduced and the reconstruction remains automatic and completely data-driven. Moreover, due to this fact all scans can be computed in parallel, allowing for fast computations.
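For illustration only (we do not use this in the present work), such a temporally regularized variant could, under the assumption of a quadratic smoothness penalty with a hypothetical weight \(\alpha\), take the form

\[\hat{\mathcal{I}}^{(1)},\dots,\hat{\mathcal{I}}^{(K)}\in\operatorname{argmin}_{\mathcal{I}^{(k)}\,\text{in the mr-support}}\;\sum_{k=1}^{K}S\!\left(F\Gamma\mathcal{I}^{(k)},\mathcal{V}^{(k)}\right)+\alpha\sum_{k=2}^{K}\bigl\|\Gamma\mathcal{I}^{(k)}-\Gamma\mathcal{I}^{(k-1)}\bigr\|_{2}^{2},\]

where \(\mathcal{V}^{(k)}\) denotes the visibilities of frame \(k\) and \(F\Gamma\) the forward operator as in Eq. (18). Setting \(\alpha=0\) recovers the purely data-driven reconstruction used here.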
### Dynamic polarimetry
We proposed the same procedure for polarized imaging and for dynamic Stokes I imaging: fitting the respective visibilities with a gradient descent approach while only varying coefficients in the multiresolution support computed by DoG-HiT. It is therefore natural to utilize this approach for dynamic polarimetry as well. In fact, we propose the following strategy. First, reconstruct a static Stokes I image with DoG-HiT and compute the multiresolution support. Then cut the observation into single frames and solve for dynamics and polarimetry together by fitting to \(\mathcal{V}_{I},\mathcal{V}_{Q},\mathcal{V}_{U},\mathcal{V}_{V}\) in every single frame independently, but only vary coefficients in the multiresolution support.
## 4 Synthetic data tests
### Synthetic observations
We tested the capabilities of mr-support imaging for polarimetric image reconstructions. We test three different source models (a static polarized Sgr A* model, a slowly rotating crescent, and a rapidly rotating crescent) with two different arrays (the EHT and a possible ngEHT configuration). A thorough comparison with existing imaging approaches for dynamic polarimetry is in preparation and will be deferred to a subsequent work. For more details we also refer to the ngEHT Analysis challenges (Roelofs et al. 2023), and in particular to the upcoming third challenge\({}^{2}\), in which we participate with mr-support imaging. We review our submission to the third challenge in Sec. 4.5.
Footnote 2: [https://challenge.ngeht.org/challenge3/](https://challenge.ngeht.org/challenge3/)
We observe the synthetic ground truth images and movies with the array of the EHT 2022 observations and add thermal noise according to the measured SEFDs of the 2017 observation campaign (Event Horizon Telescope Collaboration et al., 2019). We used ten-minute cycles of five minutes of continuous observation with an integration time of ten seconds and a five-minute off-source gap (mimicking calibration and pointing scans). This cycle time is of special interest when discussing dynamic reconstructions, as the five-minute gaps essentially limit the temporal resolution. The data sets were scan-averaged prior to the imaging procedure.
As ngEHT configuration we took the EHT 2022 array configuration (i.e. ALMA, APEX, GLT, IRAM-30 m, JCMT, KP,
LMT, NOEMA, SMA, SMT, SPT) and added ten additional antennas from the list of Raymond et al. (2021), as was done for the ngEHT Analysis challenges (Roelofs et al., 2023): HAY (34 m), OVRO (10.4 m), GAM (15 m), BAR, BAJA, NZ, SGO, CAT, GARS, CNI (all 6 m). We added instrumental noise according to the size of the telescopes, but did not add further calibration errors. As a ground truth we took the slowly rotating crescent model with a rotation period of one hour. As for the EHT 2022 coverage, the ground truth movie is observed with a cycle of five minutes on source and a five-minute gap and an integration time of ten seconds (ten minutes on source and two-minute gaps in the rapidly rotating crescent example).
As a static synthetic test image we took a synthetic Sgr A* image from the _ehtim_ software package (Chael et al., 2018). The true image model is presented in Fig. 2.
For the dynamic Stokes I imaging we used a crescent model (Tiede et al., 2022):
\[I(r,\theta)=I_{0}(1-s\cos(\theta-\xi))\frac{\delta(r-r_{0})}{2\pi r_{0}}. \tag{19}\]
We use the parameters \(I_{0}=0.6\,\mathrm{Jy}\), \(s=0.46\), and \(r_{0}=22\,\mu\)as. To account for dynamics roughly similar to rotating hotspot models (Tiede et al., 2020) we let the crescent rotate clockwise. One rotation period takes one hour, which is roughly comparable to the flux variability time-scale of the Sgr A* light curve (Wielgus et al., 2022). The synthetic ground truth movie is presented in Fig. 3. To illustrate the orientation of the crescent, we also show in Fig. 3 a green arrow from the image center to the location of the brightest pixel in the image. For polarized movies we have to add polarization. For the sake of simplicity we used a simpler model for testing the capabilities of dynamic polarimetry here: we added a constant linearly polarized structure at the 10% level (no circular polarization) with a rotating EVPA. To separate the dynamic polarimetric reconstruction from effects of the Stokes I imaging, the rotation of the EVPAs is counter-clockwise (the rotation of Stokes I was clockwise) and has a rotation period of two hours instead of the one hour used for the Stokes I images.
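A small Python sketch of this rotating crescent (Eq. 19) is given below; the radial delta function is approximated by a one-pixel-wide Gaussian ring, and the grid size, field of view, and ring width are illustrative assumptions of the sketch.

```python
import numpy as np

def crescent_frame(t_hr, npix=128, fov_uas=128.0,
                   I0=0.6, s=0.46, r0=22.0, period_hr=1.0):
    """Crescent of Eq. (19), I(r, theta) = I0 (1 - s cos(theta - xi)) delta(r - r0) / (2 pi r0),
    rotating clockwise with the given period; the radial delta function is approximated
    by a narrow Gaussian ring of one pixel width. Returns Jy per pixel."""
    pix = fov_uas / npix                              # micro-arcseconds per pixel
    x = (np.arange(npix) - npix / 2 + 0.5) * pix
    xx, yy = np.meshgrid(x, x)
    r = np.hypot(xx, yy)
    theta = np.arctan2(yy, xx)
    xi = -2.0 * np.pi * t_hr / period_hr              # decreasing angle: clockwise rotation
    ring = np.exp(-0.5 * ((r - r0) / pix) ** 2) / (np.sqrt(2.0 * np.pi) * pix)
    img = I0 * (1.0 - s * np.cos(theta - xi)) * ring / (2.0 * np.pi * r0)
    return img * pix ** 2

movie = np.stack([crescent_frame(t) for t in np.arange(0.0, 1.0, 1.0 / 6.0)])
print(movie.shape, movie.sum(axis=(1, 2)))            # total flux ~ I0 = 0.6 Jy in every frame
```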
As an additional model we also test a rapidly rotating crescent with an orbital period of twenty minutes. We show the ground truth movie in Fig. 4. The constant EVPA pattern rotates counter-clockwise in one hour. The gap time between scans, which is used for pointing and calibration, limits the temporal resolution. For an array as sensitive as the ngEHT a smaller gap time might be possible. We therefore synthetically observed the rapidly rotating movie with a cycle of ten minutes of scientific observation (ten-second integration time) and two-minute gaps.
### Static polarization with EHT coverage
We first fitted the scales to the uv-coverage with the procedure outlined in Muller & Lobanov (2022) and Muller & Lobanov (2023): we searched for jumps in the sorted distribution of uv-distances that exceed some threshold and selected the radial scales accordingly. We defined nine radial scales and used four different angles, resulting in 36 scales to represent the uv-coverage. The Stokes I image was recovered with DoG-HiT (Muller & Lobanov, 2022) using the multi-directional dictionaries introduced in Muller & Lobanov (2023), as described in Sec. 3.1. As presented in Sec. 3.2, we then computed the multiresolution support, which is shown in Fig. 5. Some scales that are most sensitive to gaps in the uv-coverage are suppressed completely, while other scales encode various parts of the emission structure, i.e. the ring-like emission (scales 34 and 35), the extended emission structure (scales 30 and 32), the fine crescent structure (among others scales 4, 7, 9, 14 and 24), or the bright spot to the left of the crescent (e.g. scales 0, 2 and 10). The minimization of the fit to the polarized visibilities was done with the limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm (Byrd et al., 1995), as implemented in SciPy (Jones et al., 2001). To help ensure global convergence, we blurred the Stokes Q and U images of the reconstruction to the nominal resolution and redid the minimization with a gradient descent procedure.
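The scale selection from the uv-coverage mentioned at the beginning of this subsection can be illustrated with the following toy Python sketch: the sorted uv-distances are scanned for jumps exceeding a threshold, and the uv-radii at those jumps are translated into angular scales. The specific jump criterion (a multiple of the median spacing) and the toy coverage are assumptions of this sketch, not the exact procedure of Muller & Lobanov (2022, 2023).

```python
import numpy as np

def select_scales(u, v, jump_factor=2.0):
    """Toy scale selection: sort the uv-distances, locate gaps larger than
    `jump_factor` times the median spacing, and convert the uv-radii at these
    jumps into angular scales (scale ~ 1/|uv|, in radians if u, v are in wavelengths)."""
    uvdist = np.sort(np.hypot(u, v))
    gaps = np.diff(uvdist)
    jumps = np.where(gaps > jump_factor * np.median(gaps))[0]
    return 1.0 / uvdist[jumps]

# toy uv-coverage with two clusters of baselines (short and long)
rng = np.random.default_rng(0)
u = np.concatenate([rng.normal(0, 5e8, 200), rng.normal(0, 5e9, 200)])
v = np.concatenate([rng.normal(0, 5e8, 200), rng.normal(0, 5e9, 200)])
print(np.degrees(select_scales(u, v)) * 3.6e9)   # selected radial scales in micro-arcseconds
```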
We show the final reconstruction result in Fig. 2. The reconstruction of the Stokes I image is relatively successful. The crescent-like shadow image is overall well recovered. However, there are some finer structures that are not recovered by DoG-HiT: the closing of the ring by a thin line towards the right and the fainter structure inside the ring. The linearly polarized emission is overall very well recovered. The total fraction of linearly polarized light and the overall direction of the electric vector position angles (EVPA) in the North-South direction are well recovered. The synthetic ground truth image contains some more complex, local structures, e.g. a rotation of the EVPA in the bottom left of the image towards an East-West direction. This shift is partly visible in the recovered image as well, although the amount of rotation is smaller.
All in all, this example demonstrates that even for a very challenging and sparse array such as the EHT 2022 array, the polarimetric reconstruction with support imaging is quite successful, both in the overall structure and in the reconstruction of more localized polarimetric structures with a size of \(\approx 5\,\mu\)as. Thus, similar to the DoG-HiT reconstruction for the Stokes I image, mr-support polarimetry seems to offer mild super-resolution. Interestingly, super-resolution and a good fit to the polarized visibilities are achieved without introducing artifacts in the image. This demonstrates the power of the regularization approach.
### Dynamic Stokes I
The synthetic slowly rotating crescent movie was observed as described in Sec. 4.1 with a ten-minute cycle and EHT coverage. According to this temporal resolution, we cut the observation into frames with a length of ten minutes for the dynamic reconstruction. The reconstruction was then done with the mr-support approach in the best time window \(t\in[10\,\mathrm{UT},14\,\mathrm{UT}]\) (Farah et al., 2022) as outlined in Sec. 3.3: as a first step we fitted a symmetric ring model to the data, created a mean image with DoG-HiT using the fitted ring model as an initial guess, and then solved sequentially for every frame by mr-support imaging with the support calculated from the mean. As an initial guess for the single-frame imaging with mr-support imaging we used the reconstruction of the respective preceding frame (or the mean in case of the first frame).
We present the reconstruction results in Fig. 7. The single frames all show a circular structure with a radius of \(\approx 22\,\mu\)as. Moreover, nearly all frames show the asymmetry of a crescent. However, the crescent asymmetry is less prominent than in the true image. As for the true dynamic movie, we illustrate the orientation of the crescent by an arrow from the center to the brightest pixel in the reconstruction. Following the orientations of the recovered crescents in Fig. 7, a clear rotation with an orbital period of one hour is visible. The orientation of the recovered crescents matches the synthetic ground truth in most frames, except for some notable exceptions at 11 UT (no asymmetry recovered at all) and 13.16 UT-13.5 UT (wrong orientations). In particular the latter could be a consequence of taking the reconstruction of the preceding frame as an initial guess for the next frame. The false recovery at 13.16 UT hence also affects all following frames.
We present in Fig. 8 the reconstruction result for a slowly rotating crescent with ngEHT coverage. The reconstruction of the crescent is excellent at every frame with high contrast images. The single-frame images do not show additional image artifacts. Although the additional ngEHT antennas have rather large thermal noise-levels, the much improved density of the array effectively stabilizes against thermal noise. Strikingly the orientation of the crescents matches the ground truth very well. We present in Fig. 6 a comparison between the true position angles and the
Figure 3: Synthetic ground truth dynamic movie (slowly rotating crescent) in the time interval between 10 UT and 14 UT. The green arrow ranges from the image center to the position of the brightest pixel in the frame, hence illustrating the orientation of the crescent.
Figure 2: Left panel: static polarization ground truth, middle panel: static reconstruction with mr-support imaging, right panel: uv-coverage of synthetic observation (EHT 2022 array).
recovered ones, with an error given by the temporal smearing due to the scan length.
The ngEHT array is much denser than the EHT configuration of 2022. This enhances the possible temporal resolutions. We therefore also studied the possibility to observe faster rotating structures at the event horizon with the fast rotating crescent model. The dynamic reconstruction was done in this case in frames of three minutes in length. The faster orbital period and the shorter frame length complicate the reconstruction procedure: there are fewer observation points per single frame which raises the problem of sparsity. Moreover, due to the shorter dynamical timescale and the smaller number of observing points per single frame, the scan-averaged visibility points worsen the signal to noise ratio by a factor of \(\sqrt{3}\) compared to the slower rotating crescent. The reconstruction results for dynamic Stokes I imaging with mr-support imaging are shown in Fig. 9. The crescent is observed at every frame. Additionally the overall orientation matches quite well. However, the quality of the reconstruction decreases compared to the slowly rotating crescent, as can be expected: the asymmetry of the crescents is less clear and the orientation is slightly off by roughly fifteen degrees in some frames.
All in all, we observe that with mr-support imaging we recover the correct image structure, including the overall shadow feature, the crescent asymmetry, and the orientation, very well for most frames in the observation. Again we would like to mention that these particularly successful reconstructions do not suffer from image artifacts despite the sparsity of the uv-coverage, especially in single-frame observations. This, once again, demonstrates the regularizing property of the mr-support approach.
### Dynamic polarimetry
As outlined in Sec. 3.4 we did the dynamic reconstruction of the Stokes I channel first. Hence, we copied over the reconstructions from Sec. 4.3. We then added polarization frame by frame by mr-support imaging. Similar to our procedure presented in Sec. 4.2 we first minimized the data terms (fit to polarized visibilities) with a BFGS minimization procedure, blurred the reconstructed polarized images with the nominal resolution, and minimized the fit with a gradient descent procedure starting from the blurred image as an initial guess.
The reconstruction results in the time window \(t\in[10\,\mathrm{UT},11\,\mathrm{UT}]\) are presented in Fig. 10 for the slowly rotating crescent model with EHT coverage. The relatively simple polarized structure is well recovered in each frame. While the recovered images show some local variation from the overall orientation, the larger-scale EVPA orientation matches for all frames. The fraction of linearly polarized light is surprisingly well recovered. Again, despite some local variations in the recovered EVPA, the challenging reconstruction does not show image artifacts.
In Fig. 11 we present the reconstruction of the slowly rotating crescent observed with the ngEHT. The quality of the reconstruction improves compared to the reconstructions presented in Fig. 10. The global orientation of the EVPA pattern is well recovered
Figure 4: True movie for fast rotating crescent.
for every frame. In the reconstructions with the EHT configuration we also observed some local variations from the overall polarimetric structure. These are no longer present in the reconstructions with ngEHT coverage.
We present the dynamic polarimetry reconstruction with mr-support imaging of the rapidly rotating crescent in Fig. 12. The reconstruction of the polarimetric structure, i.e. the rotation of the EVPAs, remains excellent. These results suggest that mr-support imaging could handle dynamic, polarimetric structural features at the event horizon with realistic dynamic time scales.
### ngEHT analysis challenge
In addition to the rather simple synthetic data tests presented in the previous subsections, we show here the reconstructions by mr-support imaging for the third ngEHT Analysis challenge\({}^{3}\). The ngEHT Analysis challenges are a series of semi-blind data challenges to evaluate the performance of algorithms for the planned ngEHT instrument (Roelofs et al., 2023). The ngEHT is a planned instrument to recover (polarimetric) movies at event horizon scales (Doeleman et al., 2019).
Footnote 3: [https://challenge.ngeht.org/](https://challenge.ngeht.org/)
The ground truth movies produced for the ngEHT Analysis challenge resemble the current theoretical state-of-the-art in simulations (Roelofs et al., 2023; Chatterjee et al., 2023). Here we present the reconstructions of a RIAF model of Sgr A* (Broderick et al., 2016) with a shearing hotspot (Tiede et al., 2020) with hotspot parameters inspired by GRAVITY Collaboration et al. (2018). The data sets were observed with the EHT array and ngEHT arrays that we used for the geometric data sets as well. In contrast to the proof of concept with geometric models, the ngEHT challenge data contain the full set of data corruptions
Figure 5: Multiresolution support for the reconstruction of the static polarization example with EHT coverage.
Figure 6: True position angle (blue) and the position angle recovered with mr-support imaging for an EHT configuration (red) and an ngEHT configuration (green) for the slowly rotating crescent model. The error bars reflect the change in position angle of the true source model within a 10-minute scan (the cycle length of the synthetic observation).
that may be expected from real observations (Roelofs et al. 2023), simulated with the SYMBA package (Roelofs et al. 2020), including atmospheric turbulence, atmospheric opacity, pointing offsets, a scattering screen, and thermal noise specific to each antenna. However, no polarization leakage was added to the data. For more information we refer to Roelofs et al. (2023) and the challenge website\({}^{4}\). The data sets were network calibrated, as is standard in the EHT data processing (Event Horizon Telescope Collaboration et al. 2022b). The ngEHT Analysis challenge is particularly well suited as a verification data set since the challenge was done blindly: neither the source files nor the specific data corruptions were made public to the analysis teams.
Footnote 4: [https://challenge.ngeht.org/](https://challenge.ngeht.org/)
We show the ground-truth movie in Fig. 13. A static (but not descattered) image was recovered by DoG-HiT with a systematic error budget of 2%. The static image is computed by DoG-HiT in a completely unsupervised way from closure quantities. We used this calibration-independent model to calibrate the data set on long time intervals (1 hour). Next we calculated the multiresolution support and cut the observation into frames of six minutes. The dynamic reconstruction was done with mr-support imaging. We self-calibrated the data set in every single observing frame during the procedure. Then we added polarization in every frame.
The recovered movie is presented in Fig. 14. Moreover, we show magnified panels of selected frames in Fig. 15. The single frames all show a ring-like structure with a central depression. Compared to the ground truth frames, the reconstructed images have a worse quality due to the rapid variability, systematics and sparse coverage. Moreover, an interstellar scattering screen was added to the data that was not removed during the imaging procedure. The reconstruction of the shearing hotspot motion is more challenging. We recover an approaching hotspot to the right of the ring at UT 11.3 (upper panels in Fig. 15), an extended (polarized) tail to the North-West (top right) from UT 11.3 until UT 11.6 (middle panels in Fig. 15), and a clearly visible arc of larger intensity within the ring to the South-East (bottom left) from UT 11.7-UT 11.9 (bottom panels in Fig. 15). These features are consistent with the hotspot motion of the ground truth movie. While we recover some motion related to the hotspot motion, a continuously evolving movie was not recovered. This is a result of the rather bad simulated weather conditions and the observation cadence for the third challenge: the source was (synthetically) observed for ten minutes followed by a gap of ten minutes. While mr-support imaging sufficiently recovers some (scattered) hotspot related features in the frames that have observed visibilities, the algorithm does not contain an interpolation scheme to the scans without observations (it just assumes the starting point, i.e. the preceding frame). Hence, we do not recover an evolving movie, but several frames (e.g. UT 11.5 and UT 11.6 or UT 11.7 until end) show the same image structure.
The synthetic ground truth polarization is less dynamic and hence easier to recover. We recover the overall radially-conic EVPA pattern in every frame with minor small scale perturbations from the ground truth (that may be also related to the different Stokes I images). Moreover, the recovered polarization fraction matches the true one. As a more detailed feature we successfully recover a larger fractional polarization for the shearing hotspots that follows the hotspot motion.
The presented data set mimics one of the most challenging VLBI data analysis problems so far, with various data corruptions, high frequencies (i.e. phase instabilities), fast dynamics and polarimetric structures, the need for super-resolution, and a sparse VLBI uv-coverage. As expected, the reconstruction quality with mr-support imaging is degraded compared to the rather simple geometric data tests that we discussed before. However, the application already highlights the potential of mr-support imaging to do unsupervised, super-resolving, dynamical and polarimetric imaging together. This presents a capability that is currently unique in the landscape of existing imaging algorithms, and in particular a domain of research in which CLEAN remains limited due to its lack of resolution, its high demand for human supervision and calibration, and its lacking support for dynamical reconstructions.
## 5 Conclusions and outlook
We presented in this manuscript a novel algorithmic approach to static polarimetry, dynamic imaging and, finally, dynamic polarimetry. The approach is based on our previous works on multiscalar imaging (Muller & Lobanov 2022, 2023) and on the multiresolution support in particular. The multiresolution support encodes important information about the emission structure on the one hand (which spatial scales are present where in the image?) and the uv-coverage on the other hand (which of these spatial scales are measured by the baselines?). Hence, the multiresolution support is well suited to introduce regularization for challenging extensions of the standard VLBI imaging problem in the spirit of constrained minimization: we optimize the fit to the respective data terms (\(\chi^{2}\) to frame-by-frame visibilities or to polarized visibilities), but vary wavelet coefficients in the multiresolution support only.
We demonstrated the power of this approach with applications to simple geometric synthetic observations. The mr-support constraint suppressed the introduction of image artifacts, hence providing effective regularization. Moreover, the approach is flexible enough to allow for the reconstruction of both dynamically evolving structures and polarimetric structures. Furthermore, the blind application to the more complex movies of the third ngEHT Analysis challenge demonstrated that the algorithm may also provide reasonable reconstructions with realistic data corruptions in one of the most challenging VLBI imaging problems, although the quality of the reconstruction is degraded.
Mr-support imaging shares the basic advantage of multiscalar approaches that are fitted to the uv-coverage. The static reconstructions are done with DoG-HiT, which is completely data-driven and largely automatic without many hyperparameters (Muller & Lobanov 2022). The same applies to the extension to dynamic and polarimetric quantities. There are no further, specific regularization terms (with corresponding weights) introduced; rather, the reconstruction is again regularized by the data-driven multiresolution support, which is determined by the uv-coverage and the baseline noise. Hence, mr-support imaging is blind and unbiased as well. However, we recognized an important bottleneck for the dynamic reconstructions with mr-support imaging: the static average image needs to approximate the true time-averaged image quite well.
An extension to RML approaches to dynamic imaging, i.e. the addition of temporal regularizers, is straightforward as well. Note that, due to the lack of regularization parameters controlling the temporal correlation, mr-support imaging basically calculates images with rich structures from the extreme sparsity of a single scan, independently of preceding and subsequent scans. This indicates that the multiresolution support is a rather strong prior that, once a reasonable static model is established, allows for the handling of extreme sparsity in the data.
The geometric test observations used throughout this study are rather simple. First, we neglected circular polarization for the purpose of simplicity. We note that we only added thermal noise to the observations and no phase and amplitude errors. This does not affect the reconstruction of the static Stokes I image (neither for a static source nor for a dynamically evolving source), since DoG-HiT uses the closure quantities as data terms only (Muller & Lobanov, 2022). However, phase and amplitude calibration errors could affect the subsequent mr-support imaging rounds, since for every frame the (polarized) visibilities are used instead of the closure quantities. Hence, we assume that one is able to solve for the (polarized) self-calibration with the time-averaged mean image. This does not necessarily have to be true, but might be a good approximation when the dynamic time-scale of the source and the time-scale of the gain variability are different, allowing a gain self-calibration with the mean image (e.g. compare Wielgus et al., 2022; Event Horizon Telescope Collaboration et al., 2022).
Moreover, while a rotating crescent movie might be a good first approximation to a rotating hotspot model, the model is only a rough approximation to the range of models for the dynamics at the horizon scale. The same applies to the rather simple polarization model used. We therefore tested the algorithm in the blind third ngEHT Analysis challenge as well. While, due to the systematic errors added to the synthetic data, the reconstructions are worse than in the previous data tests, mr-support imaging is, for the first time, able to recover super-resolved, polarized movies in an unsupervised way. This is a unique capability among all currently existing VLBI imaging algorithms. Furthermore, we expect further significant improvements from including a temporal regularizer in the dynamic imaging and from more sophisticated strategies for the static image reconstruction, in particular from frameworks that have already been demonstrated to recover fast dynamics such as _ehtim_ or _StarWarps_.
Finally, the application of the same ground truth movie to a possible ngEHT array configuration demonstrates the improvements that the ngEHT project will bring to dynamic reconstructions. The quality of the fits to Stokes I and to the polarimetric properties improves. With an ngEHT configuration it is even possible to recover structural patterns with dynamic timescales of about \(\sim 10-20\,\mathrm{min}\), and therefore of the order of what can be expected from real observations.
## Acknowledgments
We thank the team of the ngEHT Analysis challenge, led by Freek Roelofs, Lindy Blackburn and Greg Lindahl, for the chance to use and publish their synthetic data set for this work. Special thanks goes in particular to Paul Tiede for providing the RIAFSPOT model of Sgr A*. HM received financial support for this research from the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. This work was partially supported by the M2FINDERS project funded by the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 101018682).
Our imaging pipeline and our software are available online as the MrBeam software tool\({}^{5}\). Our software makes use of the publicly available ehtim (Chael et al., 2018), regpy (Regpy, 2019) and WISE (Mertens & Lobanov, 2015) software packages.
Footnote 5: [https://github.com/hmuellergoe/mrbeam](https://github.com/hmuellergoe/mrbeam)
## References
* Akiyama et al. (2017) Akiyama, K., Ikeda, S., Pleau, M., et al. 2017a, AJ, 153, 159
* Akiyama et al. (2017) Akiyama, K., Kuramochi, K., Ikeda, S., et al. 2017b, ApJ, 838, 1
* Arras et al. (2021) Arras, P., Bester, H. L., Perley, R. A., et al. 2021, A&A, 646, A84
* Arras et al. (2019) Arras, P., Frank, P., Leike, R., Westermann, R., & Enslin, T. A. 2019, A&A, 627, A134
* Bhatnagar & Cornwell (2004) Bhatnagar, S. & Cornwell, T. J. 2004, A&A, 426, 747
* Blandford & Znajek (1977) Blandford, R. D. & Znajek, R. L. 1977, MNRAS, 179, 433
* Bouman et al. (2018) Bouman, K. L., Johnson, M. D., Dalca, A. V., et al. 2018, IEEE Transactions on Computational Imaging, 4, 512
* Bower et al. (2015) Bower, G. C., Markoff, S., Devet, J., et al. 2015, ApJ, 802, 69
* Broderick et al. (2016) Broderick, A. E., Fish, V. L., Johnson, M. D., et al. 2016, ApJ, 820, 137
* Broderick et al. (2022) Broderick, A. E., Gold, R., Georgiev, B., et al. 2022, The Astrophysical Journal Letters, 930, 1
* Broderick et al. (2020) Broderick, A. E., Gold, R., Karami, M., et al. 2020a, ApJ, 897, 139
* Broderick et al. (2020) Broderick, A. E., Pesce, D. W., Tiede, P., Pu, H.-Y., & Gold, R. 2020b, ApJ, 898, 9
* Byrd et al. (1995) Byrd, R. H., Lu, P., & Nocedal, J. 1995, SIAM Journal on Scientific and Statistical Computing, 16, 1190
* Chael et al. (2022) Chael, A., Chan, C.-K., Bouman, K., et al. 2022, achael/eht-imaging: v1.2.4, Zenodo
* Chael et al. (2018) Chael, A. A., Johnson, M. D., Bouman, K. L., et al. 2018, ApJ, 857, 23
* Chael et al. (2016) Chael, A. A., Johnson, M. D., Narayan, R., et al. 2016, ApJ, 829, 11
* Chatterjee et al. (2023) Chatterjee, K., Chael, A., Tiede, P., et al. 2023, Galaxies, 11
* Clark (1980) Clark, B. G. 1980, A&A, 89, 377
* Cornwell (2008) Cornwell, T. J. 2008, IEEE Journal of Selected Topics in Signal Processing, 2, 793
* Doeleman et al. (2019) Doeleman, S., Blackburn, L., Dexter, J., et al. 2019, in Bulletin of the American Astronomical Society, Vol. 51, 256
* Emami et al. (2023) Emami, R., Tiede, P., Doeleman, S. S., et al. 2023, Galaxies, 11
* Event Horizon Telescope Collaboration et al. (2022a) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2022a, ApJ, 930, L14
* Event Horizon Telescope Collaboration et al. (2019a) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2019a, ApJ, 875, L1
* Event Horizon Telescope Collaboration et al. (2019b) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2019b, ApJ, 875, L4
* Event Horizon Telescope Collaboration et al. (2021a) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2021a, ApJ, 910, 43
* Event Horizon Telescope Collaboration et al. (2021b) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2021b, ApJ, 910, 43
* Farah et al. (2022) Farah, J., Galison, P., Akiyama, K., et al. 2022, The Astrophysical Journal Letters, 930, 1
* Garsden et al. (2015) Garsden, H., Girard, J. N., Starck, J. L., et al. 2015, A&A, 575, A90
* Gomez et al. (2016) Gomez, J. L., Lobanov, A. P., Bruni, G., et al. 2016, ApJ, 817, 96
* Gomez et al. (2011) Gomez, J. L., Roca-Sogorb, M., Agudo, I., Marscher, A. P., & Jorstad, S. G. 2011, ApJ, 733, 11
* GRAVITY Collaboration et al. (2018a) GRAVITY Collaboration, Abuter, R., Amorim, A., et al. 2018a, A&A, 615, L15
* GRAVITY Collaboration et al. (2018b) GRAVITY Collaboration, Abuter, R., Amorim, A., et al. 2018b, A&A, 618, L10
* Hardee et al. (2007) Hardee, P., Mizuno, Y., & Nishikawa, K.-I. 2007, Ap&SS, 311, 281
* Hogbom (1974) Hogbom, J. A. 1974, A&AS, 15, 417
* Hovatta et al. (2012) Hovatta, T., Lister, M. L., Aller, M. F., et al. 2012, AJ, 144, 105
* Ikeda et al. (2016) Ikeda, S., Tazaki, F., Akiyama, K., Hada, K., & Honma, M. 2016, PASJ, 68, 45
* Johnson & the ngEHT Project (2023) Johnson, M. & the ngEHT Project (2023), Key Science Goals for the Next-Generation Event Horizon Telescope, to appear in Galaxies
* Johnson et al. (2017) Johnson, M. D., Bouman, K. L., Blackburn, L., et al. 2017, ApJ, 850, 172
* Johnson et al. (2015) Johnson, M. D., Fish, V. L., Doeleman, S. S., et al. 2015, Science, 350, 1242
* Jones et al. (2001) Jones, E., Oliphant, T., Peterson, P., et al. 2001, SciPy: Open source scientific tools for Python, (Online; accessed 2015-08-25)
* Kramer & MacDonald (2021) Kramer, J. A. & MacDonald, R. N. 2021, A&A, 656, A143
* Mertens & Lobanov (2015) Mertens, F. & Lobanov, A. 2015, A&A, 574, A67
* Muller & Lobanov (2023) Muller, H. & Lobanov, A. 2023, accepted for publication by A&A, arXiv:2301.11681
* Muller & Lobanov (2022) Muller, H. & Lobanov, A. P. 2022, A&A, 666, A137
* Narayan & Nityananda (1986) Narayan, R. & Nityananda, R. 1986, ARA&A, 24, 127
* Palumbo et al. (2019) Palumbo, D. C. M., Doeleman, S. S., Johnson, M. D., Bouman, K. L., & Chael, A. A. 2019, ApJ, 881, 62
* Potzl et al. (2021) Potzl, F. M., Lobanov, A. P., Ros, E., et al. 2021, A&A, 648, A82
* Rau & Cornwell (2011) Rau, U. & Cornwell, T. J. 2011, A&A, 532, A71
* Raymond et al. (2021) Raymond, A. W., Palumbo, D., Paine, S. N., et al. 2021, ApJS, 253, 5
* Regpy (2019) Regpy. 2019, regpy: Python tools for regularization methods
Figure 10: True (upper panels) and recovered (lower panels) test images with full Stokes polarization for the slowly rotating crescent. The mr-support imaging approach succeeds in recovering the true large scale orientation of the EVPA.
Figure 11: Same as Fig. 10 but with ngEHT coverage: slowly rotating crescent observed with the ngEHT.
## Appendix A
Fig. 12: Polarimetric reconstruction of fast rotating crescent with ngEHT coverage.
Fig. 13: Synthetic ground truth movie of Sgr A* used for the third ngEHT Analysis challenge. The model is a RIAF model with a semianalytic shearing hotspot.
Figure 14: Reconstruction of the movie plotted in Fig. 13 with mr-support imaging for the third ngEHT Analysis challenge. |
2307.04033 | Probabilistic Test-Time Generalization by Variational Neighbor-Labeling | This paper strives for domain generalization, where models are trained
exclusively on source domains before being deployed on unseen target domains.
We follow the strict separation of source training and target testing, but
exploit the value of the unlabeled target data itself during inference. We make
three contributions. First, we propose probabilistic pseudo-labeling of target
samples to generalize the source-trained model to the target domain at test
time. We formulate the generalization at test time as a variational inference
problem, by modeling pseudo labels as distributions, to consider the
uncertainty during generalization and alleviate the misleading signal of
inaccurate pseudo labels. Second, we learn variational neighbor labels that
incorporate the information of neighboring target samples to generate more
robust pseudo labels. Third, to learn the ability to incorporate more
representative target information and generate more precise and robust
variational neighbor labels, we introduce a meta-generalization stage during
training to simulate the generalization procedure. Experiments on seven
widely-used datasets demonstrate the benefits, abilities, and effectiveness of
our proposal. | Sameer Ambekar, Zehao Xiao, Jiayi Shen, Xiantong Zhen, Cees G. M. Snoek | 2023-07-08T18:58:08Z | http://arxiv.org/abs/2307.04033v3 | # Learning Variational Neighbor Labels
###### Abstract
This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed at unseen target domains. We follow the strict separation of source training and target testing but exploit the value of the unlabeled target data itself during inference. We make three contributions. First, we propose probabilistic pseudo-labeling of target samples to generalize the source-trained model to the target domain at test time. We formulate the generalization at test time as a variational inference problem by modeling pseudo labels as distributions to consider the uncertainty during generalization and alleviate the misleading signal of inaccurate pseudo labels. Second, we learn variational neighbor labels that incorporate the information of neighboring target samples to generate more robust pseudo labels. Third, to learn the ability to incorporate more representative target information and generate more precise and robust variational neighbor labels, we introduce a meta-generalization stage during training to simulate the generalization procedure. Experiments on six widely-used datasets demonstrate the benefits, abilities, and effectiveness of our proposal.
## 1 Introduction
As soon as test data distributions differ from the ones experienced during training, deep neural networks start to exhibit generalizability problems and accompanying performance degradation [21; 50]. To deal with the distribution shift, domain generalization [31; 34; 45; 46] has emerged as a promising tactic for generalizability to unseen target domains. However, as the methods are only trained on source domains, this may still lead to overfitting and limited guarantees for good performance on unseen target domains.
To better adapt models to target domains, without relying on target data during training, test-time adaptation [37; 58; 59; 61] was introduced. It provides an alternative learning paradigm, by training a model on source data and further adjusting the model according to the unlabeled target data at test time. Different settings for test-time adaptation have emerged. Test-time training by [58] and test-time adaptation by [61] attack image corruptions with a model trained on the original uncorrupted image distribution. The trained model is fine-tuned with self-supervised learning or entropy minimization to adapt to different corruptions in an online manner. The paradigm is also employed under the domain generalization setting using multiple source domains during training [14; 28; 29; 67], where the domain shifts are typically manifested in varying image styles and scenes, rather than corruptions. In this paper, we focus on the latter setting and refer to it as test-time domain generalization.
One widely applied strategy for updating models at test time is by optimizing or adjusting the model with target pseudo labels based on the source-trained model [28; 29]. However, due to domain shifts, the source-model predictions of the target samples can be uncertain and inaccurate, leading to updated models that are overconfident on mispredictions [71]. As a result, the obtained model becomes unreliable and misspecified to the target data [65].
In this paper, we make three contributions to attack the unreliability of test-time domain generalization by pseudo labels. First, we define pseudo labels as stochastic variables and estimate the distributions over them. By doing so, the uncertainty in predictions of the source-trained model is incorporated into the generalization to the target data at test time, alleviating the misleading effects of uncertain and inaccurate pseudo labels. Second, due to the proposed probabilistic formalism, it is natural and convenient to utilize variational distributions to leverage extra information. By hinging on this benefit, we design variational neighbor labels that leverage the neighboring information of target samples into the inference of the pseudo-label distributions. This makes the variational labels more accurate, which enables the source-trained model to be better specified to target data and therefore conducive to model generalization on the target domain. Third, to learn the ability to incorporate more representative target information in the variational neighbor labels, we simulate the test-time generalization procedure across domains by meta-learning. Beyond the well-known meta-source and meta-target stages [1; 11; 67], we introduce a meta-generalization stage in between the meta-source and meta-target stages to mimic the target generalization procedure. Based on the multiple source domains seen during training the model is exposed to different domain shifts iteratively and optimized to learn the ability to generalize to unseen domains. Our experiments on six widely-used domain generalization benchmarks demonstrate the promise and effectiveness of our proposal.
## 2 Methodology
### Preliminary
We are given data from different domains defined on the joint space \(\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) and \(\mathcal{Y}\) denote the data space and label space, respectively. The domains are split into several source domains \(\mathcal{D}_{s}\!\!=\!\!\big{\{}(\mathbf{x}_{s},\mathbf{y}_{s})^{i}\big{\}}_{ i=1}^{N_{s}}\) and the target domain \(\mathcal{D}_{t}\!\!=\!\!\big{\{}(\mathbf{x}_{t},\mathbf{y}_{t})^{i}\big{\}}_{ i=1}^{N_{t}}\). Our goal is to train a model on source domains that is expected to generalize well on the (unseen) target domain.
We follow the test-time domain generalization setting [14; 28; 67], where a source-trained model is generalized to target domains by adjusting the model parameters at test time. A common strategy for adjusting the model parameters is that the model \(\mathbf{\theta}\) is first trained on source data \(\mathcal{D}_{s}\) by minimizing a supervised loss \(\mathcal{L}_{train}(\mathbf{\theta})\!\!=\!\!\mathbb{E}_{(\mathbf{x}_{s},\mathbf{ y}_{s})^{i}\in\mathcal{D}_{s}}[L_{\mathrm{CE}}(\mathbf{x}_{s},\mathbf{y}_{s}; \mathbf{\theta})]\); and then at test time the source-trained model \(\mathbf{\theta}_{s}\) is generalized to the target domain by optimization with certain surrogate losses, e.g., entropy minimization, based on the online unlabeled test data, which is formulated as:
\[\mathcal{L}_{test}(\mathbf{\theta})=\mathbb{E}_{\mathbf{x}_{t}\in\mathcal{D}_{t}}[ L_{E}(\mathbf{x}_{t};\mathbf{\theta}_{s})], \tag{1}\]
where the entropy is calculated on the source model predictions. However, test samples from the target domain could be largely misclassified by the source model due to the domain shift, resulting in large uncertainty in the predictions. Moreover, the entropy minimization tends to update the model with high confidence even for the wrong predictions, which would cause a misspecified model for the target domain. To solve those problems, we address test-time domain generalization from a probabilistic perspective and further propose variational neighbor labels to incorporate more target information. A graphical illustration to highlight the differences between common test-time domain generalization and our proposals is shown in Figure 1.
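For reference, a minimal PyTorch sketch of test-time adaptation by entropy minimization as in eq. (1) is shown below. Following common practice we only update the affine normalization parameters; this choice, the toy network, and the optimizer settings are illustrative assumptions of the sketch rather than part of our method.

```python
import torch
import torch.nn as nn

def entropy_minimization_step(model, x_t, lr=1e-3):
    """One online test-time update: minimize the prediction entropy of a batch of
    unlabeled target samples, updating only the affine BatchNorm parameters."""
    model.train()                                   # use the batch statistics of the target batch
    params = [p for m in model.modules() if isinstance(m, nn.BatchNorm2d)
              for p in (m.weight, m.bias) if p is not None]
    optimizer = torch.optim.SGD(params, lr=lr)

    logits = model(x_t)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# toy usage with a random "source-trained" network and a fake target batch
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 7))
x_t = torch.randn(16, 3, 32, 32)
print(entropy_minimization_step(model, x_t).argmax(dim=1))
```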
Figure 1: **Test-time domain generalization.** (a) The original test-time domain generalization algorithm [28; 29] obtains the target model \(\mathbf{\theta}_{t}\) by self learning of the unlabeled target data \(\mathbf{x}_{t}\) on source-trained model \(\mathbf{\theta}_{s}\). (b) Our probabilistic formulation models the uncertainty of pseudo labels \(p(\hat{\mathbf{y}}_{t})\) for more robust generalization on the unseen target data. (c) Furthermore, we propose variational neighbor labels to incorporate neighboring target information into the generation of pseudo labels. The variational model is further trained with meta-generalization.
### Probabilistic pseudo-labeling
We first provide a probabilistic formulation for test-time domain generalization. Given the target sample \(\mathbf{x}_{t}\) and the source-trained model \(\mathbf{\theta}_{s}\), we would like to make predictions on the target sample. To this end, we formulate the predictive likelihood as:
\[p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s})=\int p(\mathbf{y}_{t}|\mathbf{ x}_{t},\mathbf{\theta}_{t})p(\mathbf{\theta}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s})d\mathbf{ \theta}_{t}\approx p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{t}^{*}), \tag{2}\]
where we use the value \(\mathbf{\theta}_{t}^{*}\) obtained by the maximum a posteriori (MAP) estimate to approximate the integration over \(\mathbf{\theta}_{t}\)[18]. Intuitively, the MAP approximation is interpreted as inferring the posterior over \(\mathbf{\theta}_{t}\): \(p(\mathbf{\theta}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s})\approx\delta(\mathbf{\theta}_{t}{=}\mathbf{\theta}_{t}^{*})\), which we obtain by fine-tuning \(\mathbf{\theta}_{s}\) using the target data \(\mathbf{x}_{t}\).
**Pseudo labels as stochastic variables.** To model the uncertainty of predictions for more robust test-time generalization, we treat pseudo labels as stochastic variables in the probabilistic framework as shown in Figure 1 (b). The pseudo labels are obtained from the source model predictions, which follow categorical distributions. Then we reformulate eq. (2) as:
\[p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s}) =\int p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{t})\Big{[}\int p (\mathbf{\theta}_{t}|\hat{\mathbf{y}}_{t},\mathbf{x}_{t},\mathbf{\theta}_{s})p(\hat{ \mathbf{y}}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s})d\hat{\mathbf{y}}_{t}\Big{]}d \mathbf{\theta}_{t} \tag{3}\] \[\approx\mathbb{E}_{p(\hat{\mathbf{y}}_{t}|\mathbf{x}_{t},\mathbf{ \theta}_{s})}[p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s}^{*})],\]
where \(\mathbf{\theta}_{t}^{*}\) is the MAP value of \(p(\mathbf{\theta}_{t}|\hat{\mathbf{y}}_{t},\mathbf{x}_{t},\mathbf{\theta}_{s})\), obtained via gradient descent on the data \(\mathbf{x}_{t}\) and the corresponding pseudo labels \(\hat{\mathbf{y}}_{t}\) starting from \(\mathbf{\theta}_{s}\). The formulation allows us to sample different pseudo labels from the categorical distribution \(p(\hat{\mathbf{y}}_{t})\) to update the model \(\mathbf{\theta}_{t}^{*}\), which takes into account the uncertainty of the source-trained predictions.
The common pseudo-labeling method can be treated as a specific case of eq. 3, which approximates the expectation of \(p(\hat{\mathbf{y}}_{t})\) by utilizing the \(\mathtt{argmax}\) function on \(p(\hat{\mathbf{y}}_{t})\), generating the hard pseudo labels. \(\mathbf{\theta}_{t}^{*}\) is then obtained by a point estimation of the hard pseudo labels. However, due to domain shifts, the \(\mathtt{argmax}\) value of \(p(\hat{\mathbf{y}}_{t})\) is not guaranteed to always be correct. The optimization of the source-trained model then is similar to entropy minimization (eq. 1), where the updated model can achieve high confidence but wrong predictions of some target samples due to domain shifts. For example, consider a toy binary classification task, where the predicted probability is \([0.4,0.6]\) with the ground-truth label \([1,0]\). The pseudo label generated by selecting the maximum probability is \([0,1]\), which is inaccurate. Optimization based on these labels would give rise to a model misspecified to target data, failing to generalize to the target domain. In contrast, our probabilistic formulation allows us to sample pseudo labels from the categorical distribution \(p(\hat{\mathbf{y}}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s})\), which incorporates the uncertainty of the pseudo label \(\hat{\mathbf{y}}_{t}\) in a principled way. Continuing the example, the pseudo labels sampled from the predicted distribution have a probability of \(40\%\) to be the true label, which leads to the generalization of the model in the correct direction. Therefore, the formulation improves generalization by accessing accurate pseudo labels.
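A hedged PyTorch sketch of this probabilistic treatment is given below: pseudo labels are sampled from the categorical predictive distribution of the source model (rather than taken as the \(\mathtt{argmax}\)), each sample is used to fine-tune a copy of the model towards \(\mathbf{\theta}_{t}^{*}\), and the resulting predictions are averaged as in eq. (3). The toy architecture, the number of samples, and the optimizer settings are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def probabilistic_pseudo_label_update(source_model, x_t, n_samples=5, lr=1e-3, steps=1):
    """Monte Carlo approximation of E_{p(y_hat | x_t, theta_s)}[ p(y_t | x_t, theta_t*) ]:
    sample pseudo labels from the source model's categorical predictions, fine-tune a
    copy of the model on each sample, and average the resulting predictions."""
    with torch.no_grad():
        probs_s = source_model(x_t).softmax(dim=1)        # p(y_hat | x_t, theta_s)

    avg_probs = 0.0
    for _ in range(n_samples):
        y_hat = torch.multinomial(probs_s, 1).squeeze(1)  # one categorical sample per target point
        model_t = copy.deepcopy(source_model)             # theta_t initialized at theta_s
        opt = torch.optim.SGD(model_t.parameters(), lr=lr)
        for _ in range(steps):                            # MAP-style fine-tuning on the pseudo labels
            loss = F.cross_entropy(model_t(x_t), y_hat)
            opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            avg_probs = avg_probs + model_t(x_t).softmax(dim=1) / n_samples
    return avg_probs

# toy usage
source_model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4))
x_t = torch.randn(8, 10)
print(probabilistic_pseudo_label_update(source_model, x_t).argmax(dim=1))
```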
### Variational neighbor labels
Based on the probabilistic formulation, we further propose variational neighbor labels that incorporate information of the neighboring target samples to estimate pseudo-label distributions that are more robust against domain shifts [28; 29]. On the one hand, introducing variational inference into pseudo-labeling is natural and convenient under the proposed probabilistic formulation. On the other hand, to generate pseudo labels that are more accurate and calibrated for more robust generalization, it is necessary to incorporate more target information. Assuming that we have a mini-batch of target data \(\mathbf{X}_{t}{=}\left\{\mathbf{x}_{t}^{i}\right\}_{i=1}^{M}\), we reformulate eq. (3) as:
\[p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s},\mathbf{X}_{t}) =\int p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{t})\Big{[}\int \int p(\mathbf{\theta}_{t}|\hat{\mathbf{y}}_{t},\mathbf{x}_{t},\mathbf{\theta}_{s})p( \hat{\mathbf{y}}_{t},\mathbf{w}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s},\mathbf{X}_{ t})d\hat{\mathbf{y}}_{t}d\mathbf{w}_{t}\Big{]}d\mathbf{\theta}_{t} \tag{4}\] \[=\int\int p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{t}^{*})p( \hat{\mathbf{y}}_{t},\mathbf{w}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s},\mathbf{X}_ {t})d\hat{\mathbf{y}}_{t}d\mathbf{w}_{t}.\]
As in eq. (3), \(\mathbf{\theta}_{t}^{*}\) is the MAP value of \(p(\mathbf{\theta}_{t}|\hat{\mathbf{y}}_{t},\mathbf{x}_{t},\mathbf{\theta}_{s})\). We introduce the latent variable \(\mathbf{w}_{t}\) to integrate the information of the neighboring target samples \(\mathbf{X}_{t}\) as shown in Figure 1. To facilitate the estimation of the variational neighbor labels, we set the prior distribution as:
\[p(\hat{\mathbf{y}}_{t},\mathbf{w}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s},\mathbf{X }_{t})=p(\hat{\mathbf{y}}_{t}|\mathbf{w}_{t},\mathbf{x}_{t})p_{\mathbf{\phi}}( \mathbf{w}_{t}|\mathbf{\theta}_{s},\mathbf{X}_{t}), \tag{5}\]
where \(p_{\mathbf{\phi}}(\mathbf{w}_{t}|\mathbf{\theta}_{s},\mathbf{X}_{t})\) is generated by the features of \(\mathbf{X}_{t}\) together with their output values based on \(\mathbf{\theta}_{s}\). In detail, to explore the information of neighboring target samples, we first generate the predictions of \(\mathbf{X}_{t}\) by the source-trained model \(\mathbf{\theta}_{s}\). Then we estimate the averaged target features of each category according to the source-model predictions. The latent variable \(\mathbf{w}_{t}\) is obtained by the model \(\mathbf{\phi}\) with the averaged features as the input. This procedure is presented by the distribution \(p_{\mathbf{\phi}}(\mathbf{w}_{t}|\mathbf{\theta}_{s},\mathbf{X}_{t})\) in eq. (5). The variational neighbor labels \(\hat{\mathbf{y}}_{t}\) are obtained by classifying the target samples using \(\mathbf{w}_{t}\). Rather than directly using the source model \(\mathbf{\theta}_{s}\), we estimate \(\hat{\mathbf{y}}_{t}\) from the latent variable \(\mathbf{w}_{t}\), which integrates the information of neighboring target samples to be more accurate and reliable.
To approximate the true posterior of the joint distribution \(p(\hat{\mathbf{y}}_{t},\mathbf{w}_{t})\) and incorporate more representative target information, we design a variational posterior \(q(\hat{\mathbf{y}}_{t},\mathbf{w}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s},\mathbf{ X}_{t},\mathbf{Y}_{t})\) to supervise the prior distribution \(p(\hat{\mathbf{y}}_{t},\mathbf{w}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s},\mathbf{ X}_{t})\) during training:
\[q(\hat{\mathbf{y}}_{t},\mathbf{w}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s},\mathbf{ X}_{t},\mathbf{Y}_{t})=p(\hat{\mathbf{y}}_{t}|\mathbf{w}_{t},\mathbf{x}_{t})q_{ \mathbf{\phi}}(\mathbf{w}_{t}|\mathbf{\theta}_{s},\mathbf{X}_{t},\mathbf{Y}_{t}). \tag{6}\]
The variational posterior has similar implementations with the prior distribution and the parameters \(\mathbf{\phi}\) are shared by both the prior and posterior distributions. \(\mathbf{Y}_{t}{=}\left\{\mathbf{y}_{t}^{i}\right\}_{i=1}^{M}\) denotes the actual labels of the target data \(\mathbf{X}_{t}\). Since the labels of the target data \(\mathbf{Y}_{t}\) are inaccessible, we can only utilize the prior distribution \(p(\hat{\mathbf{y}}_{t},\mathbf{w}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{s},\mathbf{ X}_{t})\) at test time. Therefore, we introduce the variational posterior under the meta-learning framework [13; 17; 67], where we mimic domain shifts and the test-time generalization procedure during training to learn the variational neighbor labels. In this case, according to the variational posterior distribution, the prior distribution \(p(\hat{\mathbf{y}}_{t}|\mathbf{w}_{t},\mathbf{x}_{t})p_{\mathbf{\phi}}(\mathbf{w} _{t}|\mathbf{\theta}_{s},\mathbf{X}_{t})\) learns the ability to incorporate more representative target information and generate more accurate neighbor labels.
### Meta-generalization with variational neighbor labels
To train the ability of the model to generate more reliable variational neighbor labels and to fully utilize the pseudo-label distributions for generalization, we adopt meta-learning to simulate domain shifts and test-time generalization procedures [13; 17; 67]. We split the source domains \(\mathcal{D}_{s}\) into meta-source domains \(\mathcal{D}_{s^{\prime}}\) and a meta-target domain \(\mathcal{D}_{t^{\prime}}\) during training. The meta-target domain is selected randomly in each iteration to mimic diverse domain shifts. Moreover, we divide each iteration into meta-source, meta-generalization, and meta-target stages to simulate the training stage on source domains, test-time generalization, and test stage on target data, respectively.
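As a small illustration of this episodic setup, the following sketch randomly holds out one source domain as the meta-target per iteration; the container and function names are hypothetical bookkeeping, not part of the method itself.

```python
import random

def split_meta_domains(source_domains):
    """Randomly hold out one source domain as the meta-target domain for this
    iteration; the remaining domains act as meta-source domains."""
    idx = random.randrange(len(source_domains))
    meta_target = source_domains[idx]
    meta_source = [d for i, d in enumerate(source_domains) if i != idx]
    return meta_source, meta_target
```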
**Meta-source.** To obtain the meta-source model \(\mathbf{\theta}_{s^{\prime}}\), we train the model on meta-source domains by minimizing the supervised loss:
\[\mathbf{\theta}_{s^{\prime}}=\min_{\mathbf{\theta}}\mathbb{E}_{(\mathbf{x}_{s^{\prime} },\mathbf{y}_{s^{\prime}}))\in\mathcal{D}_{s^{\prime}}}[L_{\mathrm{CE}}( \mathbf{x}_{s^{\prime}},\mathbf{y}_{s^{\prime}};\mathbf{\theta})], \tag{7}\]
where \((\mathbf{x}_{s^{\prime}},\mathbf{y}_{s^{\prime}})\) denotes the input-label pairs of samples from the meta-source domains.
**Meta-generalization.** Once the meta-source model \(\mathbf{\theta}_{s^{\prime}}\) is obtained, to mimic test-time generalization and prediction, our goal is to optimize the meta-source model by the meta-target data and make predictions with the meta-target-generalized model, which is formulated as:
\[p(\mathbf{y}_{t^{\prime}}|\mathbf{x}_{t^{\prime}},\mathbf{\theta}_{s^{\prime}})= \mathbb{E}_{p(\hat{\mathbf{y}}_{t^{\prime}}|\mathbf{x}_{t^{\prime}},\mathbf{ \theta}_{s^{\prime}})}[p(\mathbf{y}_{t^{\prime}}|\mathbf{x}_{t^{\prime}},\mathbf{ \theta}_{t^{\prime}})] \tag{8}\]
where \(p(\hat{\mathbf{y}}_{t^{\prime}}|\mathbf{x}_{t^{\prime}},\mathbf{\theta}_{s^{\prime}})\) denotes the pseudo-label distribution of the meta-target data, and \(\mathbf{\theta}_{t^{\prime}}^{*}\) is the MAP value of \(p(\mathbf{\theta}_{t^{\prime}}|\hat{\mathbf{y}}_{t^{\prime}},\mathbf{x}_{t^{\prime}},\mathbf{\theta}_{s^{\prime}})\), similar to eq. (3). Moreover, by introducing the variational neighbor labels, we reformulate eq. (8) as:
\[p(\mathbf{y}_{t^{\prime}}|\mathbf{x}_{t^{\prime}},\mathbf{\theta}_{s^{\prime}}, \mathbf{X}_{t^{\prime}})=\int\int p(\mathbf{y}_{t^{\prime}}|\mathbf{x}_{t^{ \prime}},\mathbf{\theta}_{t^{\prime}}^{*})p(\hat{\mathbf{y}}_{t^{\prime}},\mathbf{w }_{t^{\prime}}|\mathbf{x}_{t^{\prime}},\mathbf{\theta}_{s^{\prime}},\mathbf{X}_{t^{ \prime}})d\hat{\mathbf{y}}_{t^{\prime}}d\mathbf{w}_{t^{\prime}}, \tag{9}\]
where \(p(\hat{\mathbf{y}}_{t^{\prime}},\mathbf{w}_{t^{\prime}}|\mathbf{x}_{t^{\prime}},\mathbf{\theta}_{s^{\prime}},\mathbf{X}_{t^{\prime}}){=}p(\hat{\mathbf{y}}_{t^{ \prime}}|\mathbf{w}_{t^{\prime}},\mathbf{x}_{t^{\prime}})p_{\mathbf{\phi}}(\mathbf{ w}_{t^{\prime}}|\mathbf{\theta}_{s^{\prime}},\mathbf{X}_{t^{\prime}})\) is the joint prior distribution of the meta-target neighbor labels \(\hat{\mathbf{y}}_{t^{\prime}}\) and latent variable \(\mathbf{w}_{t^{\prime}}\) similar with eq. (4). The joint variational posterior is designed as \(q(\hat{\mathbf{y}}_{t^{\prime}},\mathbf{w}_{t^{\prime}}|\mathbf{x}_{t^{\prime}}, \mathbf{\theta}_{s^{\prime}},\mathbf{X}_{t^{\prime}},\mathbf{Y}_{t^{\prime}}){=}p( \hat{\mathbf{y}}_{t^{\prime}}|\mathbf{w}_{t^{\prime}},\mathbf{x}_{t^{\prime}})q_{ \mathbf{\phi}}(\mathbf{w}_{t^{\prime}}|\mathbf{\theta}_{s^{\prime}},\mathbf{X}_{t^{ \prime}},\mathbf{Y}_{t^{\prime}})\) to learn more reliable neighbor labels by considering the actual labels \(\mathbf{Y}_{t^{\prime}}\) of the meta-target data. Under the meta-learning setting, the actual labels \(\mathbf{Y}_{t^{\prime}}\) of the meta-target data are accessible since source data are fully labeled. Thus, the variational distribution utilizes both the domain and categorical information of the neighboring samples and models the meta-target distribution more reliably, generating more accurate neighbor labels \(\hat{\mathbf{y}}_{t^{\prime}}\) of the meta-target samples.
With the variational neighbor labels \(\hat{\mathbf{y}}_{t^{\prime}}\) generated by the posterior \(q(\hat{\mathbf{y}}_{t^{\prime}},\mathbf{w}_{t^{\prime}}|\mathbf{x}_{t^{\prime}},\boldsymbol{\theta}_{s^{\prime}},\mathbf{X}_{t^{\prime}},\mathbf{Y}_{t^{\prime}})\), the test-time domain generalization procedure is simulated by obtaining \(\boldsymbol{\theta}_{t^{\prime}}^{*}\) from:
\[\boldsymbol{\theta}_{t^{\prime}}^{*}=\boldsymbol{\theta}_{s^{\prime}}-\lambda_ {1}\nabla_{\boldsymbol{\theta}}L_{\mathrm{CE}}(\mathbf{x}_{t^{\prime}},\hat{ \mathbf{y}}_{t^{\prime}};\boldsymbol{\theta}_{s^{\prime}}),\ \ \ \hat{\mathbf{y}}_{t^{\prime}}\sim p(\hat{\mathbf{y}}_{t^{\prime}}|\mathbf{w}_{t^ {\prime}},\mathbf{x}_{t^{\prime}}),\ \ \ \mathbf{w}_{t^{\prime}}\sim q_{\boldsymbol{\phi}}(\mathbf{w}_{t^{ \prime}}|\boldsymbol{\theta}_{s^{\prime}},\mathbf{X}_{t^{\prime}},\mathbf{Y}_{t ^{\prime}}), \tag{10}\]
where \(\lambda_{1}\) denotes the learning rate of the optimization in the meta-generalization stage.
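A minimal sketch of this inner-loop update is given below, written functionally so that the step remains differentiable for the later meta-target loss. Here `f(params, x)` is an assumed purely functional forward pass returning logits, `theta_sprime` is a list of parameter tensors with `requires_grad=True`, and `neighbor_probs` are soft neighbor labels sampled as in eq. (10).

```python
import torch
import torch.nn.functional as F

def meta_generalization_step(f, theta_sprime, x_tprime, neighbor_probs, lr_inner):
    """One gradient step of eq. (10) on meta-target samples with their
    variational neighbor labels."""
    logits = f(theta_sprime, x_tprime)
    # Cross-entropy against soft (sampled) neighbor labels.
    loss = -(neighbor_probs * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    # create_graph=True keeps the update differentiable, so the meta-target
    # loss in eq. (12) can back-propagate to theta_sprime through this step.
    grads = torch.autograd.grad(loss, theta_sprime, create_graph=True)
    return [p - lr_inner * g for p, g in zip(theta_sprime, grads)]
```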
**Meta-target.** Since our final goal is to obtain good performance on the target data after optimization, we further mimic the test-time inference on the meta-target domain and supervise the meta-target prediction on \(\boldsymbol{\theta}_{t^{\prime}}^{*}\) by maximizing the log-likelihood:
\[\log p(\mathbf{y}_{t^{\prime}}|\mathbf{x}_{t^{\prime}}, \boldsymbol{\theta}_{s^{\prime}},\mathbf{X}_{t^{\prime}})=\log\int\int p( \mathbf{y}_{t^{\prime}}|\mathbf{x}_{t^{\prime}},\boldsymbol{\theta}_{t^{ \prime}}^{*})p(\hat{\mathbf{y}}_{t^{\prime}},\mathbf{w}_{t^{\prime}}|\mathbf{ x}_{t^{\prime}},\boldsymbol{\theta}_{s^{\prime}},\mathbf{X}_{t^{\prime}})d\hat{ \mathbf{y}}_{t^{\prime}}d\mathbf{w}_{t^{\prime}} \tag{11}\] \[\geq\mathbb{E}_{\mathbf{q}_{\boldsymbol{\phi}}(\mathbf{w}_{t^{ \prime}})}\mathbb{E}_{p(\hat{\mathbf{y}}_{t^{\prime}}|\mathbf{w}_{t^{\prime}},\mathbf{x}_{t^{\prime}})}[\log p(\mathbf{y}_{t^{\prime}}|\mathbf{x}_{t^{ \prime}},\boldsymbol{\theta}_{t^{\prime}}^{*})]-\mathbb{D}_{KL}[q_{\boldsymbol {\phi}}(\mathbf{w}_{t^{\prime}}|\boldsymbol{\theta}_{s^{\prime}},\mathbf{X}_{t ^{\prime}},\mathbf{Y}_{t^{\prime}})||p_{\boldsymbol{\phi}}(\mathbf{w}_{t^{ \prime}}|\boldsymbol{\theta}_{s^{\prime}},\mathbf{X}_{t^{\prime}})],\]
where \(p_{\boldsymbol{\phi}}(\mathbf{w}_{t^{\prime}}|\boldsymbol{\theta}_{s^{\prime}},\mathbf{X}_{t^{\prime}})\) is generated by the features of \(\mathbf{X}_{t^{\prime}}\) together with their output values based on \(\boldsymbol{\theta}_{s^{\prime}}\), and \(q_{\boldsymbol{\phi}}(\mathbf{w}_{t^{\prime}}|\boldsymbol{\theta}_{s^{\prime}},\mathbf{X}_{t^{\prime}},\mathbf{Y}_{t^{\prime}})\) is obtained from the features of \(\mathbf{X}_{t^{\prime}}\) considering the actual labels \(\mathbf{Y}_{t^{\prime}}\). The detailed formulation is provided in the Appendix.
As aforementioned, the actual labels \(\mathbf{y}_{t^{\prime}}\) of the meta-target data are accessible during training. We can further supervise the updated model \(\boldsymbol{\theta}_{t^{\prime}}^{*}\) on its meta-target predictions by the actual labels. Maximizing the log-likelihood \(\log p(\mathbf{y}_{t^{\prime}}|\mathbf{x}_{t^{\prime}},\boldsymbol{\theta}_{s^ {\prime}},\mathbf{X}_{t^{\prime}})\) is equal to minimizing:
\[\mathcal{L}_{meta}=\mathbb{E}_{(\mathbf{x}_{t^{\prime}},\mathbf{y}_{t^{\prime}})}[\mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{w}_{t^{\prime}})}\mathbb{E}_{p(\hat{\mathbf{y}}_{t^{\prime}}|\mathbf{w}_{t^{\prime}},\mathbf{x}_{t^{\prime}})}L_{\mathrm{CE}}(\mathbf{x}_{t^{\prime}},\mathbf{y}_{t^{\prime}};\boldsymbol{\theta}_{t^{\prime}}^{*})]+\mathbb{D}_{KL}[q_{\boldsymbol{\phi}}(\mathbf{w}_{t^{\prime}})||p_{\boldsymbol{\phi}}(\mathbf{w}_{t^{\prime}})]. \tag{12}\]
The parameters \(\boldsymbol{\theta}\) are finally updated by:
\[\boldsymbol{\theta}=\boldsymbol{\theta}_{s^{\prime}}-\lambda_{2}\nabla_{ \boldsymbol{\theta}}\mathcal{L}_{meta}, \tag{13}\]
where \(\lambda_{2}\) denotes the learning rate for the meta-target stage. Note that the loss in eq. (12) is computed on the parameters \(\boldsymbol{\theta}_{t^{\prime}}^{*}\) obtained by eq. (10), while the optimization is performed over the meta-source-trained parameters \(\boldsymbol{\theta}_{s^{\prime}}\) obtained by eq. (7). Intuitively, the model updated by meta-target neighbor labels is trained to achieve good performance on the meta-target data. Thus, it learns the ability to generate more accurate variational neighbor labels with neighboring target samples and to achieve good generalization across domains with these neighbor labels of new unseen domains.
The parameters in the variational inference model \(\boldsymbol{\phi}\) are jointly trained with \(\boldsymbol{\theta}\). To guarantee that the variational neighbor labels do extract the neighboring information for discrimination, we add a cross-entropy loss \(\mathcal{L}_{\hat{c}\hat{c}}\) on the variational neighbor labels and the corresponding actual labels during training. Thus, \(\boldsymbol{\phi}\) is updated with a learning rate \(\lambda_{3}\) by:
\[\boldsymbol{\phi}=\boldsymbol{\phi}-\lambda_{3}(\nabla_{\boldsymbol{\phi}}\mathcal{L}_{\hat{c}\hat{c}}-\nabla_{\boldsymbol{\phi}}\mathcal{L}_{meta}). \tag{14}\]
### Test-time generalization
At test time, the model trained on the source domains with the meta-learning strategy \(\boldsymbol{\theta}_{s}\) is generalized to \(\boldsymbol{\theta}_{t}^{*}\) by further optimization:
\[\boldsymbol{\theta}_{t}^{*}=\boldsymbol{\theta}_{s}-\lambda_{1}\nabla_{ \boldsymbol{\theta}}L_{\mathrm{CE}}(\mathbf{x}_{t},\hat{\mathbf{y}}_{t}; \boldsymbol{\theta}_{s}),\ \ \ \hat{\mathbf{y}}_{t}\sim p(\hat{\mathbf{y}}_{t}|\mathbf{w}_{t},\mathbf{x}_{t}),\ \ \ \mathbf{w}_{t}\sim p_{ \boldsymbol{\phi}}(\mathbf{w}_{t}|\boldsymbol{\theta}_{s},\mathbf{X}_{t}). \tag{15}\]
Since the target labels \(\mathbf{Y}_{t}\) are inaccessible, we generate neighbor labels \(\hat{\mathbf{y}}_{t}\) and latent variables \(\mathbf{w}_{t}\) from the prior distribution \(p(\hat{\mathbf{y}}_{t},\mathbf{w}_{t}|\mathbf{x}_{t},\boldsymbol{\theta}_{s}, \mathbf{X}_{t})\)\(=\)\(p(\hat{\mathbf{y}}_{t}|\mathbf{w}_{t},\mathbf{x}_{t})p_{\boldsymbol{\phi}}( \mathbf{w}_{t}|\boldsymbol{\theta}_{s},\mathbf{X}_{t})\). \(\boldsymbol{\theta}_{t}^{*}\) is then utilized to make predictions on the (unseen) target data \(\mathcal{D}_{t}\), which is formulated as:
\[\begin{split} p(\mathbf{y}_{t}|\mathbf{x}_{t},\boldsymbol{\theta}_{s},\mathbf{X}_{t})&=\int p(\mathbf{y}_{t}|\mathbf{x}_{t},\boldsymbol{\theta}_{t})\Big{[}\int\int p(\boldsymbol{\theta}_{t}|\hat{\mathbf{y}}_{t},\mathbf{x}_{t},\boldsymbol{\theta}_{s})p(\hat{\mathbf{y}}_{t},\mathbf{w}_{t}|\mathbf{x}_{t},\boldsymbol{\theta}_{s},\mathbf{X}_{t})d\hat{\mathbf{y}}_{t}d\mathbf{w}_{t}\Big{]}d\boldsymbol{\theta}_{t}\\ &=\mathbb{E}_{p_{\boldsymbol{\phi}}(\mathbf{w}_{t})}\mathbb{E}_{p(\hat{\mathbf{y}}_{t}|\mathbf{w}_{t},\mathbf{x}_{t})}[p(\mathbf{y}_{t}|\mathbf{x}_{t},\boldsymbol{\theta}_{t}^{*})].\end{split} \tag{16}\]
We provide both the training algorithm and test-time generalization algorithm in the Appendix.
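As a rough sketch of this test-time procedure on a single incoming target batch (the full algorithm is in the Appendix), the snippet below performs one update with the neighbor labels drawn from the prior and then predicts with the updated parameters; `f(params, x)` and `neighbor_probs` are assumptions carried over from the earlier sketches.

```python
import torch
import torch.nn.functional as F

def test_time_generalize_and_predict(f, theta_s, x_t, neighbor_probs, lr_inner):
    """Sketch of eqs. (15)-(16): update the source-trained parameters with the
    neighbor labels of the current target batch, then predict with the
    updated parameters. theta_s is a list of tensors with requires_grad=True."""
    logits = f(theta_s, x_t)
    loss = -(neighbor_probs * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    grads = torch.autograd.grad(loss, theta_s)                     # eq. (15)
    theta_t = [p - lr_inner * g for p, g in zip(theta_s, grads)]
    with torch.no_grad():
        preds = f(theta_t, x_t).softmax(dim=-1)                    # eq. (16)
    return theta_t, preds
```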
## 3 Experiments
### Datasets
**Six datasets.** We demonstrate the effectiveness of our method on image classification problems. We evaluate our method on six widely used domain generalization datasets. _PACS_[31] consists of 7
classes and 4 domains: Photo, Art painting, Cartoon, and Sketch with 9,991 samples. _VLCS_[16] consists of 5 classes from 4 different datasets: Pascal, LabelMe, Caltech, and SUN with 10,729 samples. _Office-Home_[60] contains 15,500 images of 65 categories. The images are from four domains, i.e., Art, Clipart, Product, and Real-World. _TerraIncognita_[4] has 4 domains from different locations: Location 100, Location 38, Location 43, and Location 46. The dataset includes 24,778 samples of 10 categories. We follow the training and validation split in [31] and evaluate the model according to the "leave-one-out" protocol [32; 7]. We also evaluate our method on the _Rotated MNIST_ and _Fashion-MNIST_ datasets following [48], where the images are rotated by different angles as different domains. We use the subsets with rotation angles from \(15^{\circ}\) to \(75^{\circ}\) in intervals of \(15^{\circ}\) as five source domains, and images rotated by \(0^{\circ}\) and \(90^{\circ}\) as the target domains.
**Implementation details.** We utilize ResNet-18 for all our experiments and ablation studies, and also report accuracies with ResNet-50 for comparison. We evaluate the method in the online test-time domain generalization setting [28], where we receive the target data incrementally and keep updating and evaluating the model. The backbones are pretrained on ImageNet, as in previous methods. During training, we use a varied learning rate throughout the model and train for 10,000 iterations. In the meta-generalization procedure, we set the learning rate \(\lambda_{1}\) to 0.0001 for all layers. During the meta-target stage, we set the learning rate for the pretrained ResNet (\(\lambda_{2}\)) to 5e-5 and the learning rate of the variational module (\(\lambda_{3}\)) and classifiers to 1e-4 for all datasets. The batch size is set to 70 during training and to 20 during the test-time generalization procedure. At test time, we use a learning rate of 0.0001 for all layers and update all parameters. All hyperparameters for source training and test time are selected using the validation set, as in [28]. We provide more implementation details and computational costs in the Appendix. We will release the code.
### Results
**Ablations on variational neighbor labels.** To show the benefits of the proposed method, we conduct an ablation on PACS and TerraIncognita. We first compare the probabilistic test-time domain generalization (eq. 3) with the common one (eq. 1). As shown in the first two rows in Table 1, the probabilistic formulation performs better, which demonstrates the benefits of modeling the uncertainty of the pseudo labels during generalization at test time. By incorporating more target information from the neighboring samples, in row 3, the variational neighbor labels (eq. 4) become more reliable, which benefits the generalization on the target data. Moreover, when learned by meta-generalization, the variational neighbor labels further improve the performance (row 4). With the meta-generalization strategy, the variational neighbor labels learn the ability to incorporate more representative target information by introducing the variational posterior of meta-target data during training (eq. 12).
**Calibration ability.** To further show the benefits of the variational neighbor labels, we also investigate the calibration ability by measuring the Expected Calibration Error (ECE) [25]. We report hard pseudo-labeling and soft pseudo-labeling as baselines. As shown in Figure 2(a), the ECE of our method is lower than the baselines on all domains, demonstrating a better ability to model uncertainty at test time. We also vary the number of bins when calculating the ECE to demonstrate the insensitivity to this hyperparameter. Figure 2(b) shows our variational neighbor labels consistently have lower ECE independent of the number of bins. By incorporating pseudo labels as latent variables with variational inference and considering neighboring target information, the proposed method models the uncertainty of the target samples more accurately. With the better-calibrated labels, the model achieves more robust generalization on the target domain at test time.
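For reference, the ECE we report can be computed along the lines of the sketch below; the equal-width binning scheme is a standard choice and an assumption on our part rather than a detail specified in the text.

```python
import numpy as np

def expected_calibration_error(confidences, is_correct, num_bins=10):
    """Bin predictions by confidence and average the |accuracy - confidence|
    gap per bin, weighted by the fraction of samples in the bin."""
    confidences = np.asarray(confidences, dtype=float)
    is_correct = np.asarray(is_correct, dtype=float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(is_correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```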
|  | **PACS** | **TerraIncognita** |
| :-- | :--: | :--: |
| Pseudo-labeling baseline (eq. 1) | 81.3 \(\pm\)0.3 | 41.2 \(\pm\)0.4 |
| Probabilistic pseudo-labeling (eq. 3) | 82.0 \(\pm\)0.2 | 42.5 \(\pm\)0.5 |
| Variational neighbor-labeling (eq. 4) | 82.4 \(\pm\)0.3 | 43.8 \(\pm\)0.5 |
| Meta variational neighbor-labeling (eq. 12) | 83.5 \(\pm\)0.4 | 46.2 \(\pm\)0.6 |

Table 1: **Ablations on variational neighbor labels.** Results on PACS and TerraIncognita with ResNet-18. Our probabilistic formulation performs better than the common pseudo-labeling baseline for test-time domain generalization by considering the uncertainty. Incorporating more target information by the variational neighbor labels improves results further, especially when used in concert with meta-generalization. We provide per-domain results in the Appendix.
**Generalization with limited data.** Test-time generalization and adaptation methods usually require large batches of target samples to update the source-trained model. However, during real-world deployment, the number of available target samples may be limited. In this case, the performance of test-time generalization methods will be constrained. We compare our method with Tent [61] on PACS with different batch sizes during generalization. The results are shown in Figure 3. Tent performs well with large batch sizes, but with smaller batch sizes, e.g., 16, the performance suffers and is worse than the baseline model. By contrast, our method consistently achieves good results even with small target batch sizes. We provide detailed results in the Appendix, as well as other experiments with limited target data. We conclude that by incorporating the uncertainty and representative neighboring information, our method is more robust to small batch sizes.
**Generalization along with inference.** For more insights into the variational neighbor labels, we provide the online performance along with generalization steps. As shown in Figure 4, starting from the same baseline accuracy, the gap between the results of variational neighbor labels and the hard pseudo labels becomes larger and larger along with the generalization steps. Variational neighbor labels achieve faster generalization of the source-trained model. After 50 iterations, the performance of the hard pseudo labels is saturated and even drops due to the error accumulation resulting from inaccurate pseudo labels during model updating. By considering the uncertainty and neighboring target information, our variational neighbor labels improve performance and are less prone to saturation, leading to better accuracy.
**Orthogonality.** Since the proposed meta-learned variational neighbor labels focus on generating pseudo labels at test time, the method is orthogonal to other deployment techniques, e.g., data augmentation for generalization at test time [72]. Combining our test-time domain generalization with these techniques can further improve performance. To demonstrate this, we conduct test-time generalization with augmented target samples on PACS, without altering the source training strategy. When adding augmentation similar to [72], we increase our result from 83.5% (Table 1) to 84.9% overall accuracy. We provide the complete table, including per-domain results, in the Appendix. In the following, we report the results of our method in conjunction with augmentations.
Figure 4: **Generalization along with inference.** Variational neighbor labels achieve faster generalization and the generalization is less prone to saturation.
Figure 3: **Generalization with limited data.** Variational neighbor labels outperform Tent [61] on PACS, independent of batch size. Largest improvement for small batch sizes.
Figure 2: **Calibration ability** on PACS. (a) Variational neighbor labels consistently have a lower Expected Calibration Error (ECE) compared to pseudo label baselines, independent of the target domains and (b) our proposal is insensitive to the number of bins used for the calculation of the ECE.
**State-of-the-art comparisons.** We compare our proposal with state-of-the-art test-time domain generalization methods, as well as some standard domain generalization and test-time adaptation methods. Note that the latter methods are designed for single-source image corruption settings, so we report the reimplemented results from [29]. Table 2 shows the results on PACS, VLCS, Office-Home, and TerraIncognita for both ResNet-18 and ResNet-50 backbones. Our method is competitive on most of the datasets, except for Office-Home, where the sample-wise generalization of [67] performs better. One reason may be that representative neighboring information is more difficult to incorporate with a larger number of categories (e.g., 65 in Office-Home), which would require a larger-capacity model \(\boldsymbol{\phi}\). Note that our method still outperforms other recent methods [10; 28; 29; 61] on Office-Home. Moreover, since we consider the uncertainty of the variational neighbor labels, the proposed method handles some hard cases of single-sample generalization reported in [67]. As shown in Figure 5, our method has low confidence on uncertain samples, e.g., those with different objects or limited information, showing good calibration. After generalization at test time, the confidence of the correct category improves and the model predicts correctly, showing the effectiveness of test-time generalization with the meta-generalized variational neighbor labels in complex scenes. In addition, some recent standard domain generalization methods also achieve good performance. For instance, [19] achieve 86.4, 78.9, 69.3, and 51.0 on PACS, VLCS, Office-Home, and TerraIncognita based on ResNet-50. Different from our method, which only utilizes the data from one dataset, they first meta-learn the loss function with an extra dataset before training, suggesting that our method may further improve as well when relying on more source datasets during training. We also conduct experiments on Rotated MNIST and Fashion-MNIST, which are provided in the Appendix. Our method also achieves competitive performance on these datasets.
## 4 Related Work
**Domain generalization.** Domain generalization is introduced to learn a model on one or several source domains that can generalize well on any out-of-distribution target domain [5; 46; 76]. Different from domain adaptation [41; 42; 62], domain generalization methods do not access any target data during training. One of the most widely-used methods for domain generalization is domain-invariant learning [2; 22; 36; 44; 46; 74], which learns invariant feature representations across source domains. As an alternative, source domain augmentation methods [30; 49; 54; 77; 78] try to generate more
source domains during training. Recently, meta-learning-based methods [3; 9; 11; 12; 33] have been explored to learn the ability to handle domain shifts.

| Method | PACS (RN-18) | PACS (RN-50) | VLCS (RN-18) | VLCS (RN-50) | Office-Home (RN-18) | Office-Home (RN-50) | TerraIncognita (RN-18) | TerraIncognita (RN-50) |
| :-- | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| _Standard domain generalization_ | | | | | | | | |
| ERM [24] | 79.8 | 85.5 | 74.9 | 77.5 | 59.9 | 66.5 | 42.0 | 46.1 |
| Arjovsky et al. [2] | 80.9 | 83.5 | 75.1 | 78.5 | 58.0 | 64.3 | 38.4 | 47.6 |
| Huang et al. [27] | 80.5 | 85.2 | 75.4 | 77.1 | 58.4 | 65.5 | 39.4 | 46.6 |
| Shi et al. [55] | 82.0 | 85.5 | 76.9 | 77.8 | 62.0 | 68.6 | 40.2 | 45.1 |
| _Test-time adaptation on domain generalization_ | | | | | | | | |
| Wang et al. [61] | 80.8 | 83.7 | 67.0 | 69.7 | 60.9 | 66.3 | 39.9 | 43.9 |
| Liang et al. [38] | 82.4 | 84.1 | 65.2 | 67.0 | 62.6 | 67.7 | 33.6 | 35.2 |
| _Test-time domain generalization_ | | | | | | | | |
| Iwasawa & Matsuo [28] | 81.7 | 84.5 | 76.5 | 78.3 | 57.0 | 66.5 | 41.6 | 45.9 |
| Dubey et al. [14] | - | 84.1 | - | 78.0 | - | 67.9 | - | 47.3 |
| Jang et al. [29] | 81.9 | 84.1 | 77.3 | 77.6 | 63.7 | 68.6 | 42.6 | 47.4 |
| Chen et al. [10] | 83.8 | - | 76.9 | - | 62.0 | - | 43.2 | - |
| Xiao et al. [67] | 84.1 | 87.5 | - | - | **66.0** | **71.0** | - | - |
| _This paper_ | **85.0** \(\pm\)0.4 | **87.9** \(\pm\)0.3 | **78.2** \(\pm\)0.3 | **79.1** \(\pm\)0.4 | 64.3 \(\pm\)0.3 | 69.1 | 49.4 | 49.4 |

Table 2: **State-of-the-art comparisons** on PACS, VLCS, Office-Home, and TerraIncognita with ResNet-18 and ResNet-50 backbones.
**Test-time adaptation.** Another solution to address distribution shifts without target data during training is adapting the model at test time. Source-free adaptation [15; 38] was first proposed to adapt the source-trained model to the entire target set. Differently, test-time adaptation [20; 23; 26; 40; 53; 58; 61; 75] achieves adaptation and prediction in an online manner, without halting inference. One common test-time adaptation is fine-tuning by entropy minimization [61], which is followed by many works [23; 29; 47; 72]. Since entropy minimization does not consider the uncertainty of source model predictions, there are also some probabilistic algorithms [6; 75] based on Bayesian semi-supervised learning and models fine-tuned on soft pseudo labels [52; 79]. Different from these works, we introduce the uncertainty by considering pseudo labels as latent variables and estimate the distributions by variational inference. Our models consider uncertainty within the same probabilistic framework, without introducing extra models or knowledge distillation operations.
**Test-time domain generalization.** The test-time adaptation methods [58; 61] are mainly utilized to adjust models to corrupted data distributions, with a single source distribution during training. The idea of adjusting the source-trained model at test time is further explored under the domain generalization setting [14; 28; 66; 73] to consider target information for better generalization. We refer to these methods as test-time domain generalization. Dubey et al. [14] generate domain-specific classifiers for the target domain with the target domain embeddings. Iwasawa and Matsuo [28] adjust the prototypical classifier online according to the pseudo labels of the target data. Some methods [1; 67; 13] also investigated meta-learning for test-time domain generalization. These methods mimic the domain shifts during training to make better use of the multiple source domains. Du et al. [13] meta-learn to estimate the batch normalization statistics from each target sample to adjust the source-trained model. Xiao et al. [67] learn to adapt the classifier to each individual target sample by mimicking domain shifts during training. Our method also learns the ability to adjust the model by unseen data under the meta-learning setting. We utilize multiple source domains to mimic domain shifts during training. Differently, we design meta-generalization and meta-target stages during training to simulate both the generalization and inference procedures at test time. The entire algorithm is explored under a probabilistic framework.
**Pseudo-label learning.** Pseudo-label learning uses the predictions of the model for retraining on downstream tasks, which is often applied to unlabeled data and self-training [35; 43; 68; 69]. The pseudo labels are also utilized for unsupervised domain adaptation [39; 57; 79], test-time adaptation [8; 52; 63], and test-time domain generalization [28; 29; 64] to make better use of the information of the unlabeled data from the target distributions. As the pseudo labels can be noisy and overconfident [79], several recent studies focus on the appropriate selection and uncertainty of the pseudo labels [47; 51; 52; 79]. These works either select pseudo labels with criteria such as the entropy or consistency score of the model predictions [39; 47; 56], or use soft pseudo labels to take the uncertainty into account [52; 70; 79]. Our work is related to these works since we also use pseudo labels to generalize the source-trained model to the target domain. Different from the previous methods, we are the first to introduce pseudo labels as latent variables in a probabilistic parameterized framework for test-time domain generalization, where we incorporate uncertainty and generate pseudo labels with neighboring information through variational inference and meta-learning.
## 5 Conclusion and Discussion
We propose to cast test-time domain generalization as a probabilistic inference problem and model pseudo labels as latent variables in the formulation. By incorporating the uncertainty of the pseudo labels, the probabilistic formulation mitigates updating the source-trained model with inaccurate supervision, which arises due to domain shifts and leads to misspecified models. Based on the probabilistic formulation, we further propose the variational neighbor labels under the designed meta-generalization setting, which estimates the pseudo labels by incorporating neighboring target information through variational inference and learns the ability to generalize the source-trained model.
Ablation studies and further comparisons show the benefits, abilities, and effectiveness of our method on six common domain generalization datasets. Since the method utilizes meta-learning and neighboring target information, it requires multiple source domains during training and small batches of target samples at test time, which can be a limitation in some environments. We consider a single-source and single-target-sample variant as a valuable investigation for future work.
## Acknowledgments
This work is financially supported by the Inception Institute of Artificial Intelligence, the University of Amsterdam and the allowance Top consortia for Knowledge and Innovation (TKIs) from the Netherlands Ministry of Economic Affairs and Climate Policy.
|
2302.10840 | Valid Inference for Machine Learning Model Parameters | The parameters of a machine learning model are typically learned by
minimizing a loss function on a set of training data. However, this can come
with the risk of overtraining; in order for the model to generalize well, it is
of great importance that we are able to find the optimal parameter for the
model on the entire population -- not only on the given training sample. In
this paper, we construct valid confidence sets for this optimal parameter of a
machine learning model, which can be generated using only the training data
without any knowledge of the population. We then show that studying the
distribution of this confidence set allows us to assign a notion of confidence
to arbitrary regions of the parameter space, and we demonstrate that this
distribution can be well-approximated using bootstrapping techniques. | Neil Dey, Jonathan P. Williams | 2023-02-21T17:46:08Z | http://arxiv.org/abs/2302.10840v2 | # Valid inference for machine learning model parameters
###### Abstract
The parameters of a machine learning model are typically learned by minimizing a loss function on a set of training data. However, this can come with the risk of overtraining; in order for the model to generalize well, it is of great importance that we are able to find the optimal parameter for the model on the entire population--not only on the given training sample. In this paper, we construct valid confidence sets for this optimal parameter of a machine learning model, which can be generated using only the training data without any knowledge of the population. We then show that studying the distribution of this confidence set allows us to assign a notion of confidence to arbitrary regions of the parameter space, and we demonstrate that this distribution can be well-approximated using bootstrapping techniques.
PAC-Learning, Glivenko-Cantelli Classes, Random Sets, Imprecise Probability, Hypothesis Testing
## 1 Introduction
Many machine learning applications train a model by finding parameters that minimize a loss function (such as the zero-one loss or squared loss) on some training data. This approach generalizes well outside the training data if the average loss on the training data is sufficiently close to the expected loss (i.e. "risk") for the entire population. Classical _probably approximately correct_ (PAC) learning theory (e.g. Valiant, 1984; Kearns, Schapire and Sellie, 1994; Mohri, Rostamizadeh and Talwalkar, 2018) provides theoretical rates at which the empirical risk function converges to the population risk in various settings, but provides no theoretical guarantees about estimation properties of the risk _minimizer_. In general, little work has been done to quantify the uncertainty in an empirical risk minimizer (ERM).
Given that an ERM is a function of the data, aleatory uncertainty is a natural quantification of the inherent randomness of the ERM: It is desirable to be able to construct confidence sets and hypothesis tests for risk minimizers of machine learning models that are "valid" in the sense that they possess frequentist repeated sampling guarantees. Of the statistical literature that exists for uncertainty quantification for ERMs, much of it falls under the umbrella of the more general framework of M-estimation (Huber, 1981). The statistical properties of M-estimators are well studied (e.g. Maronna, Martin and Yohai, 2006; van de Geer, 2000; Boos and Stefanski, 2018); however, the theory of M-estimation typically only yields asymptotic frequentist guarantees that require strong consistency conditions or distributional assumptions. Another approach is found in Hudson, Carone and Shojaie (2021), which studies hypothesis testing for the risk minimizer using a nonparametric extension of the classical score test. Unfortunately, the theory from Hudson, Carone and Shojaie (2021) requires smoothness conditions on the risk function, and again only yields asymptotic frequentist guarantees. Cella and Martin (2022) also studies this problem via the framework of generalized inferential models, but is once again only able to provide approximate validity due to the need to appeal to asymptotic theory. Bayesian approaches to uncertainty quantification of the risk minimizer, such as the penalized exponentially tilted empirical likelihood posterior of Tang and Yang (2021) or the Gibbs-posterior inference approaches of Martin and Liu (2013) and Bhattacharya and Martin (2022), are similarly only capable of giving asymptotic coverage guarantees. It thus remains an open question: Can we obtain valid confidence
sets and hypothesis tests for these risk minimizers? In this paper, we answer this question in the affirmative for all machine learning models satisfying a relatively weak _uniform convergence_ assumption.
The primary insight of this paper is illustrated in Figure 1: The risk minimizer \(\theta_{0}\) is typically estimated via an ERM \(\widehat{\theta}_{S}\) that depends on a random sample \(S\). Our insight is that uncertainty in the ERM can be quantified by examining a neighborhood \(\widehat{\Theta}_{S}^{\varepsilon}\) around \(\widehat{\theta}_{S}\) of some size \(\varepsilon\); ideally, this neighborhood would contain \(\theta_{0}\), but it is typically unclear how large \(\varepsilon\) needs to be to do so. The primary reason this is difficult is that the empirical risk may behave very differently from the population risk. A simple example of this appears when using a machine learning model that always overfits to match the data perfectly, so that the empirical risk on a sample is always zero despite the population risk being rather high--this may cause the distance between the ERM and risk minimizer to vary unpredictably across samples. However, if we can ensure that the empirical risk is, with high probability, sufficiently close to the population risk in a uniform manner over the parameter space \(\Theta\), then this issue is solved and it is possible to find an \(\varepsilon\) such that \(\widehat{\Theta}_{S}^{\varepsilon}\) contains \(\theta_{0}\) with high probability.
The contributions of our paper are as follows:
* We provide a new framework for statistical inference that can be used to provide uncertainty quantification for a parameter of interest without knowledge of the data-generating distribution.
* We use this framework to create confidence sets that contain the "optimal" (i.e. risk minimizing) parameter of a machine learning model with any pre-specified level of confidence.
* We demonstrate that knowledge of the theoretical distribution of valid confidence sets can be used to determine the location of the risk minimizer.
* We draw on the ideas of imprecise probability theory (Dempster, 1967; Shafer, 1976) to study this theoretical distribution of confidence sets. This is in part inspired by the growing use of imprecise probability in the study of both empirical risk minimization and statistical inference (e.g. Kusunoki et al., 2021; Hullermeier, 2014; Jacob et al., 2021; Gong et al., 2021).
* We show that under weak conditions, bootstrapping can be used to estimate the distribution of these confidence sets. This allows us to assign a notion of confidence to arbitrary regions of the parameter space, even when the data-generating distribution is completely unknown.
* We illustrate the utility of our framework in both synthetic and real data examples1.
Footnote 1: The code to reproduce the experiments presented in this paper is available at [https://github.com/neil-dey/valid-inference-ml-estimators](https://github.com/neil-dey/valid-inference-ml-estimators)
The remainder of the paper is structured as follows: In Section 2, we give context for the problem that machine learning aims to solve and background on classical machine learning theory. Section 3 discusses the notion of validity and briefly covers the related work of conformal prediction. In Section 4, we propose our new inferential framework and demonstrate how to construct valid confidence sets for the risk minimizer of a machine learning model. In Section 5, we give background on the theory of random sets and imprecise probability, and use this theory to study the distribution of our valid confidence sets. Section 6 studies the use of bootstrapping to estimate the distribution of our confidence sets and the practical consequences of our work. Section 7 briefly discusses the related work of generalized Inferential Models and gives a comparison between the two approaches. We finish with concluding remarks and directions for future work in Section 8.
## 2 The Supervised Learning Problem
For a machine learning task, one receives a sequence of examples \(X_{1},\ldots,X_{m}\) from an example domain \(\mathcal{X}\) and corresponding labels \(Y_{1},\ldots,Y_{m}\) from a label space \(\mathcal{Y}\); the task of interest is being able to predict the label \(Y\) associated with a new, unseen example \(X\). To do so, the practitioner selects a _hypothesis class_\(\mathcal{H}\) consisting of functions (referred to as hypotheses) \(\mathcal{X}\to\mathcal{Y}\) and a loss function \(L:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}^{+}\). The supervised learning problem is to choose the hypothesis that minimizes the expected loss over the (unknown) data-generating distribution.
A natural question is to ask when the supervised learning problem is tractable to solve, and what it formally means to solve it in the first place. One notion is given by the probably approximately correct (PAC) learning framework, introduced by Valiant (1984). In this framework, we suppose that there exists a _true concept_\(c\in\mathcal{H}\) such that for any example \(X\), its corresponding label is given by \(c(X)\). The randomness in the training sample2\(S=((X_{1},c(X_{1})),\ldots,(X_{m},c(X_{m})))\) typically prevents a guarantee that \(c\) can be found exactly. Hence, the PAC-learning framework claims that a procedure "successfully learns" if it can find, with high probability, a hypothesis "close enough" to \(c\) without needing an excessively large sample size. That is, PAC-learning requires that there exist an algorithm that probably finds a hypothesis that is approximately correct. Formally, we have the following definition:
Footnote 2: In this paper, we define samples to consist of independent and identically distributed observations.
Definition 1: A hypothesis class \(\mathcal{H}\) is PAC-learnable with respect to a fixed loss function \(L\) if there exists a learning process3 and a polynomial \(p\) such that for every distribution \(\mathcal{D}\) over \(\mathcal{X}\), every \(\varepsilon>0\), every \(\delta>0\), and every concept \(c\in\mathcal{H}\),
Footnote 3: formally, an algorithm \(A:\bigcup_{i=1}^{\infty}(\mathcal{X}\times\mathcal{Y})^{i}\to\mathcal{H}\), where the domain is the set of all samples of finite size
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\operatorname*{E}_{X\sim\mathcal{D}}\left[L( \widehat{h}_{S}(X),c(X))\right]\leq\varepsilon\right]\geq 1-\delta\]
whenever the sample size \(m\) is at least \(p(\varepsilon^{-1},\delta^{-1})\), where \(\widehat{h}_{S}\) is the output of the learning process when given the sample \(S\) as an input.
The PAC-learning framework was later extended to the agnostic PAC-learning model introduced by Kearns, Schapire and Sellie (1994), generalizing to no longer require that there exist a true concept \(c\) that generates the labels. Instead, samples \(S=((X_{1},Y_{1}),\ldots,(X_{m},Y_{m}))\) can be generated from any distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\). Although there need not be a true concept in \(\mathcal{H}\), there still exists some lower bound on how badly the hypotheses in \(\mathcal{H}\) perform; this lower bound now acts as the target to aim for. Thus, the agnostic PAC-learning framework defines a procedure to "successfully learn" if it can probably find a hypothesis that is approximately as good as the best that \(\mathcal{H}\) can possibly perform. Formally:
Definition 2: Let \(\mathcal{H}\) be a hypothesis class and \(L\) a loss function. The risk of a hypothesis \(h\in\mathcal{H}\) on a distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\) is the expected loss of \(h\):
\[R(h)=\mathop{\mathrm{E}}_{(X,Y)\sim\mathcal{D}}[L(h(X),Y)].\]
Definition 3: A hypothesis class \(\mathcal{H}\) is agnostically PAC-learnable with respect to a fixed loss function \(L\) if there exists a learning process and a polynomial \(p\) such that for every distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\), every \(\varepsilon>0\), and every \(\delta>0\),
\[\Pr_{S\sim\mathcal{D}^{m}}\left[R(\widehat{h}_{S})\leq\inf_{h\in\mathcal{H}}R (h)+\varepsilon\right]\geq 1-\delta\]
whenever \(m\geq p(\varepsilon^{-1},\delta^{-1})\), where \(\widehat{h}_{S}\) is the output of the learning process when given the sample \(S\) as an input.
The PAC-learning and agnostic PAC-learning frameworks thus give criteria to determine when the supervised learning problem is tractable. However, they do not tell us how to solve the problem. Although we wish to find \(h\in\mathcal{H}\) that minimizes \(R(h)\), we do not know the data-generating distribution \(\mathcal{D}\) to directly minimize \(R\); hence, practitioners will instead minimize the empirical risk on the given sample \(S\):
\[\widehat{R}_{S}(h)=\frac{1}{m}\sum_{i=1}^{m}L(h(X_{i}),Y_{i}).\]
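As a purely illustrative sketch (not part of the original development), the empirical risk and its minimizer over a finite collection of candidate hypotheses can be computed as follows; the hypothesis callables and the loss function are placeholders.

```python
import numpy as np

def empirical_risk(h, L, X, Y):
    """R_hat_S(h): the average loss of hypothesis h on the sample S."""
    return float(np.mean([L(h(x), y) for x, y in zip(X, Y)]))

def empirical_risk_minimizer(hypotheses, L, X, Y):
    """Pick the hypothesis with the smallest empirical risk from a finite
    collection; in practice this search is carried out by a learning algorithm."""
    risks = [empirical_risk(h, L, X, Y) for h in hypotheses]
    return hypotheses[int(np.argmin(risks))]
```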
The hypothesis that minimizes \(\widehat{R}_{S}\) is then used as an approximation to the hypothesis that minimizes \(R\); if one can guarantee that \(\widehat{R}_{S}\) is approximately the same as \(R\) for large enough sample sizes, this yields an alternative criterion for the existence of a solution to the supervised learning problem. This idea is formalized by the notion of the Glivenko-Cantelli property:
Definition 4: A hypothesis class \(\mathcal{H}\) is a strong uniform Glivenko-Cantelli class with respect to a fixed loss function if for every \(\varepsilon>0\),
\[\lim_{n\to\infty}\sup_{\mathcal{D}}\Pr_{S}\left[\sup_{m\geq n}\sup_{h\in \mathcal{H}}\left|\widehat{R}_{S}(h)-R(h)\right|>\varepsilon\right]=0,\]
where the outermost supremum is understood to be over all possible distributions over \(\mathcal{X}\times\mathcal{Y}\), and \(S\) is a sample of size \(m\) from \(\mathcal{D}\).
Remark 1: All these notions of learning turn out to be related. Consider the class4
Footnote 4: We assume in this discussion that \(L\circ\mathcal{H}\) is image-admissible Suslin; c.f. Dudley, Kunita and Ledrappier (1984, page 101)
\[L\circ\mathcal{H}=\{(x,y)\mapsto L(h(x),y)\mid h\in\mathcal{H}\}.\]
If \(\mathcal{H}\) is a binary hypothesis class and \(L\) is the zero-one loss function, then the works of Vapnik and Chervonenkis (1971) and Assouad and Dudley (1989) (see also Dudley, Gine and Zinn 1991 for an alternative, unified proof) give that \(\mathcal{H}\) is a strong uniform Glivenko-Cantelli class with respect to \(L\) if and only if \(L\circ\mathcal{H}\) has finite VC dimension. Furthermore, Vapnik and Chervonenkis (1971) and Blumer et al. (1989) together show that in this setting, \(\mathcal{H}\) with respect to \(L\) is agnostically PAC-learnable if and only if it is PAC-learnable if and only if it has finite VC dimension; that is to say that all of these notions are equivalent for binary hypothesis classes. This result was later generalized to real-valued hypothesis classes \(\mathcal{H}\) and general loss functions \(L\) by Alon et al. (1997), stating that \(\mathcal{H}\) is a strong uniform Glivenko-Cantelli class with respect to \(L\) if and only if \(L\circ\mathcal{H}\) has finite \(\gamma\)-fat shattering dimension at all scales \(\gamma>0\), which in turn implies agnostic PAC-learnability.
## 3 Notions of Validity
In any statistical problem, the use of probabilistic reasoning is necessary to form conclusions; however, there is always a risk that this conclusion fails to correspond to the real world due to randomness in the sample. This does not, however, make probabilistic reasoning useless--indeed, if the practitioner claims that their conclusion will be correct (e.g.) 95% of the time, and we can actually verify this claim, then their resulting conclusions are clearly still meaningful. This notion of accountability in probabilistic reasoning is referred to as _validity_.
To illustrate a common failure of validity, suppose two practitioners perform Bayesian inference for the mean of normally distributed data, with both practitioners yielding 95% credibility intervals. These two practitioners may very well give completely different conclusions about the mean depending on the priors chosen by each--and neither claim is falsifiable, as the conclusion depends on individual prior beliefs. Without the ability for an independent party to verify these claims, the meaningfulness of these intervals to others is questionable. On the other hand, frequentist inference for the same mean will yield a 95% confidence interval that is again different from the Bayesian credibility intervals, but provides a validity guarantee: By repeatedly performing the same data collection and inferential procedure, one can check that the correct conclusion is actually obtained 95% of the time. That is not to say that frequentist inference procedures are always valid, as asymptotic confidence intervals or tests are often used in practice. Once again, this fails a falsifiability criterion: if the 95% threshold is not reached upon repeated sampling by an independent party, the original practitioner may simply claim that the sample size was not large enough (though they may not have had a way to know this a priori).
The importance of validity has been recognized in machine learning contexts as well. As an example, the definition of an agnostically PAC-learnable hypothesis class (Definition 3) guarantees control of the type I error rate when concluding that \(R(\widehat{h}_{S})\leq\inf_{h\in\mathcal{H}}R(h)+\varepsilon\) at level \(\delta\). This is precisely the usual frequentist repeated-sampling guarantee: Upon repeatedly training a machine learning model over independent training samples, we can conclude that the risk of \(\widehat{h}_{S}\) is within \(\varepsilon\) of the risk of the best possible hypothesis, and we would only be incorrect at a rate bounded by \(\delta\). Without such a guarantee, it would be questionable whether or not we could trust that the results of a properly trained machine learning model are useful at all.
Another idea demonstrating the importance of validity in machine learning contexts is given by conformal prediction, introduced by Vovk et al. (2005). Conformal prediction can take a point prediction method (for classification or regression problems) and yield valid prediction regions for new data, in the sense that the generated prediction region will contain the true label with any prespecified level of confidence. Thus, there exists a very useful validity guarantee on the prediction regions generated by conformal prediction-enhanced machine learning algorithms. Because this validity property is so desirable, conformal prediction has found use in a variety of applications, such as online regression forests (Vasiloudis et al., 2019), QSAR modelling for drug development (Eklund et al., 2015), convolutional neural networks for image classification (Matiz and Barner, 2019), and various deep-learning architectures (Messoudi et al., 2020), among others.
## 4 An Inferential Framework for Machine Learning
We propose the following machine learning (ML) framework for inference: Data \((X_{i},Y_{i})_{i=1}^{m}\) are generated i.i.d. from an unknown _data-generating_ distribution \(\mathcal{D}\) over the sample space \(\mathcal{X}\times\mathcal{Y}\). We then fit a _working model_\(Y=h(X;\,\theta)\), where the set of candidate _hypotheses_\(\mathcal{H}=\{x\mapsto h(x;\,\theta)\mid\theta\in\Theta\}\) is known. In contexts where the model is not presented any "inputs" to learn labels from,
we take the convention5 that \(\mathcal{X}=\{\varnothing\}\). In an ML context, \(\theta\) can be thought of as the vector of parameters that is to be "learned" during the training of the model. As an example, if \(\mathcal{H}\) were a class of neural networks, \(\theta\) would be a vector of weights and biases corresponding to a particular trained model. We wish to find \(\theta_{0}\) that best fits the generating distribution, in the sense that
Footnote 5: One may expect the convention \(\mathcal{X}=\varnothing\) to be more natural, but the absence of examples is not the same as an empty example space.
\[\theta_{0}:=\operatorname*{arg\,min}_{\theta\in\Theta}R(\theta)\equiv \operatorname*{arg\,min}_{\theta\in\Theta}\operatorname*{E}_{(X,Y)\sim \mathcal{D}}[L(h(X;\,\theta),Y)]\]
for some fixed loss function \(L:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}\). Such a \(\theta_{0}\) is referred to as a _risk minimizer_. Together, we call the pair \((\mathcal{H},L)\) the ML model.
It is typical to estimate \(\theta_{0}\) via an estimator \(\widehat{\theta}_{S}\) found by empirical risk minimization:
\[\widehat{\theta}_{S}:=\operatorname*{arg\,min}_{\theta\in\Theta}\widehat{R}_{ S}(\theta)\equiv\operatorname*{arg\,min}_{\theta\in\Theta}\frac{1}{m}\sum_{i=1}^{m}L (h(X_{i};\,\theta),Y_{i})\]
where \(S=((X_{1},Y_{1}),\ldots,(X_{m},Y_{m}))\sim\mathcal{D}^{m}\) is the observed sample. We call such a \(\widehat{\theta}_{S}\) an _empirical risk minimizer_ (ERM). A general property of "learnable" ML models relating the risk and empirical risk is the _uniform convergence property_:
**Definition 5**.: An ML model \((\mathcal{H},L)\) has the uniform convergence property with respect to the data-generating distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\) if there exists a function \(f:\mathbb{R}^{+}\times\mathbb{R}^{+}\to\mathbb{R}\) such that for any \(\varepsilon>0\) and \(\alpha>0\), if \(m\geq f(\varepsilon,\alpha)\) then
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\theta\in\Theta}|R(\theta)-\widehat{R}_ {S}(\theta)|\leq\varepsilon\right]\geq 1-\alpha.\]
We call any such \(f\) a _witness_ to the uniform convergence property.
**Remark 2**.: Notice that Definition5 is a much weaker assumption than that of being a strong uniform Glivenko-Cantelli class (Definition4); it follows that all image-admissible Suslin ML models with finite VC dimension or \(\gamma\)-fat-shattering dimension for all \(\gamma>0\) have the uniform convergence property with respect to _any_ distribution \(\mathcal{D}\). We thus see that a broad class of models of interest fulfill this requirement.
**Definition 6**.: Let \((\mathcal{H},L)\) have the uniform convergence property, and let \(\mathcal{W}\) be the set of all corresponding witnesses. The uniform convergence function of \((\mathcal{H},L)\) is defined to be
\[f(\varepsilon,\alpha)=\inf_{w\in\mathcal{W}}\lceil w(\varepsilon,\alpha)\rceil,\]
where \(\lceil\,\cdot\,\rceil\) is the ceiling function.
In other words, the uniform convergence function of \((\mathcal{H},L)\) is the smallest integer-valued function that witnesses the uniform convergence property.
**Remark 3**.: For image-admissible Suslin classes of finite VC or \(\gamma\)-fat-shattering dimension, upper bounds on the worst-possible asymptotic behavior of the uniform convergence function are well-known, since this is simply the rate of convergence for the strong Glivenko-Cantelli class. For finite samples, these bounds can be obtained via tools such as the Rademacher complexity or covering numbers.
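As one concrete, purely illustrative route to such finite-sample bounds, the empirical Rademacher complexity of a finite collection of hypotheses can be estimated by Monte Carlo as sketched below; the finite hypothesis set and the layout of `loss_matrix` are our assumptions, not part of the text.

```python
import numpy as np

def empirical_rademacher_complexity(loss_matrix, num_draws=1000, seed=None):
    """Monte Carlo estimate of the empirical Rademacher complexity of
    L o H on a fixed sample. loss_matrix[j, i] = L(h_j(X_i), Y_i) for the
    j-th hypothesis and i-th sample point."""
    rng = np.random.default_rng(seed)
    _, m = loss_matrix.shape
    estimates = []
    for _ in range(num_draws):
        sigma = rng.choice([-1.0, 1.0], size=m)      # Rademacher signs
        estimates.append(np.max(loss_matrix @ sigma) / m)
    return float(np.mean(estimates))
```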
Though it is typical to only examine ERMs for prediction problems, we find that for inference problems, it is useful to look at parameters that are _almost_ ERMs:
Definition 7: Let \((\mathcal{H},L)\) be an ML model and \(S\) a given sample. The set of \(\varepsilon\)-almost ERMs (\(\varepsilon\)-AERMs) is defined to be
\[\widehat{\Theta}_{S}^{\varepsilon}=\{\theta\in\Theta\mid\widehat{R}_{S}(\theta) \leq\inf_{\vartheta\in\Theta}\widehat{R}_{S}(\vartheta)+\varepsilon\}.\]
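A minimal sketch of constructing this set over a finite grid of candidate parameters is given below; the grid, the hypothesis map `h`, and the loss `L` are placeholders for whatever working model is being fit, and the grid search is only an approximation to the continuous definition.

```python
import numpy as np

def aerm_confidence_set(theta_grid, h, L, X, Y, eps):
    """All candidate parameters whose empirical risk is within eps of the
    smallest empirical risk found on the grid (the eps-AERM set)."""
    risks = np.array([np.mean([L(h(x, theta), y) for x, y in zip(X, Y)])
                      for theta in theta_grid])
    return [theta for theta, r in zip(theta_grid, risks) if r <= risks.min() + eps]
```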
It turns out that this set of \(\varepsilon\)-AERMs acts as a valid confidence set for \(\theta_{0}\):
**Theorem 1**: _Let \((\mathcal{H},L)\) have uniform convergence function \(f\). Suppose that the risk minimizer \(\theta_{0}\) exists. Then \(\widehat{\Theta}_{S}^{\varepsilon}\) is a \(1-\alpha\) level confidence set for \(\theta_{0}\) if \(m\geq f(\varepsilon/2,\alpha)\)._
Proof.: We have by definition that
\[\Pr\Bigl{[}\theta_{0}\in\widehat{\Theta}_{S}^{\varepsilon}\Bigr{]}=\Pr\biggl{[} \widehat{R}_{S}(\theta_{0})\leq\inf_{\vartheta\in\Theta}\widehat{R}_{S}( \vartheta)+\varepsilon\biggr{]}.\]
Next, note by definition of the infimum that for every \(\zeta>0\) there exists \(\theta_{\zeta}\in\Theta\) such that \(\widehat{R}_{S}(\theta_{\zeta})-\inf_{\vartheta\in\Theta}\widehat{R}_{S}( \vartheta)\leq\zeta\). Furthermore, we have by the uniform convergence property that if \(m\geq f(\varepsilon/2,\alpha)\), then
\[\Pr\biggl{[}\sup_{\vartheta\in\Theta}|\widehat{R}_{S}(\vartheta)-R(\vartheta) |\leq\frac{\varepsilon}{2}\biggr{]}\geq 1-\alpha.\]
Now if \(\sup_{\vartheta\in\Theta}|\widehat{R}_{S}(\vartheta)-R(\vartheta)|\leq \varepsilon/2\), then for every \(\theta\in\Theta\), \(|\widehat{R}_{S}(\theta_{0})-R(\theta_{0})|+|\widehat{R}_{S}(\theta)-R( \theta)|\leq\varepsilon\); this is true in particular for every \(\theta_{\zeta}\), so
\[\Pr\biggl{[}|\widehat{R}_{S}(\theta_{0})-R(\theta_{0})|+\inf_{\zeta>0}| \widehat{R}_{S}(\theta_{\zeta})-R(\theta_{\zeta})|\leq\varepsilon\biggr{]} \geq 1-\alpha. \tag{1}\]
Next, we have for every \(\zeta>0\) that
\[\widehat{R}_{S}(\theta_{0})-\inf_{\vartheta\in\Theta}\widehat{R} _{S}(\vartheta)\] \[=\widehat{R}_{S}(\theta_{0})-R(\theta_{0})+R(\theta_{0})-R( \theta_{\zeta})+R(\theta_{\zeta})-\widehat{R}_{S}(\theta_{\zeta})+\widehat{R} _{S}(\theta_{\zeta})-\inf_{\vartheta\in\Theta}\widehat{R}_{S}(\vartheta)\] \[=\Bigl{[}\widehat{R}_{S}(\theta_{0})-R(\theta_{0})\Bigr{]}-[R( \theta_{\zeta})-R(\theta_{0})]+\Bigl{[}R(\theta_{\zeta})-\widehat{R}_{S}( \theta_{\zeta})\Bigr{]}+\biggl{[}\widehat{R}_{S}(\theta_{\zeta})-\inf_{ \vartheta\in\Theta}\widehat{R}_{S}(\vartheta)\biggr{]}\] (Regrouping) \[\leq|\widehat{R}_{S}(\theta_{0})-R(\theta_{0})|-0+|R(\theta_{ \zeta})-\widehat{R}_{S}(\theta_{\zeta})|+\biggl{[}\widehat{R}_{S}(\theta_{ \zeta})-\inf_{\vartheta\in\Theta}\widehat{R}_{S}(\vartheta)\biggr{]}\] ( \[\theta_{0}\] minimizes \[R\] ) \[\leq|\widehat{R}_{S}(\theta_{0})-R(\theta_{0})|+|R(\theta_{\zeta} )-\widehat{R}_{S}(\theta_{\zeta})|+\zeta.\] (Definition of \[\theta_{\zeta}\] )
Taking the infimum over \(\zeta>0\), we thus have that
\[\widehat{R}_{S}(\theta_{0})-\inf_{\vartheta\in\Theta}\widehat{R}_{S}( \vartheta)\leq|\widehat{R}_{S}(\theta_{0})-R(\theta_{0})|+\inf_{\zeta>0}|R( \theta_{\zeta})-\widehat{R}_{S}(\theta_{\zeta})|. \tag{2}\]
Hence, we have by combining inequalities (1) and (2) that
\[\Pr\biggl{[}\widehat{R}_{S}(\theta_{0})-\inf_{\vartheta\in\Theta}\widehat{R}_ {S}(\vartheta)\leq\varepsilon\biggr{]}\geq\Pr\biggl{[}|\widehat{R}_{S}(\theta_ {0})-R(\theta_{0})|+\inf_{\zeta>0}|R(\theta_{\zeta})-\widehat{R}_{S}(\theta_{ \zeta})|\leq\varepsilon\biggr{]}\geq 1-\alpha.\]
as desired.
The above theorem has the following intuition: The ERM should be close to the risk minimizer with high probability due to the uniform convergence property, so looking at some
sufficiently large \(\varepsilon\)-neighborhood of the ERM ought to capture the risk minimizer with high probability.
While it is often the case that \(\theta_{0}=\arg\min_{\theta\in\Theta}R(\theta)\) exists, this is not always the case. If \(\Theta\) is compact (with respect to the topology induced by the pseudometric \(d(x,y)=|R(x)-R(y)|\)), then the risk minimizer \(\theta_{0}\) necessarily exists. However, when \(\Theta\) is not compact, \(\theta_{0}\) may not exist--the infimum of the risk may only be approached along a sequence of parameters escaping toward the boundary of \(\Theta\). Thus, we instead aim to cover the neighborhood \(\Theta_{0}^{\delta}=\{\theta\in\Theta\mid R(\theta)\leq\inf_{\vartheta\in\Theta}R(\vartheta)+\delta\}\) for some \(\delta\geq 0\). Note that the case \(\delta=0\) is of interest when \(\theta_{0}\) does exist, since then \(\Theta_{0}^{0}\) is exactly the set of risk minimizers (a singleton when \(\theta_{0}\) is unique). Figure 2 illustrates this phenomenon.
The following theorem demonstrates that \(\widehat{\Theta}_{S}^{\varepsilon}\) does indeed remain a valid confidence set for any such \(\Theta_{0}^{\delta}\) (so long as \(\varepsilon\) is chosen large enough).
**Theorem 2**: _Let \((\mathcal{H},L)\) have uniform convergence function \(f\). Then \(\widehat{\Theta}_{S}^{\varepsilon}\) is a \(1-\alpha\) level confidence set for \(\Theta_{0}^{\delta}\) if \(\delta\leq\varepsilon\) and \(m\geq f((\varepsilon-\delta)/2,\alpha)\)._
Proof.: We have that
\[\Pr\left[\Theta_{0}^{\delta}\subseteq\widehat{\Theta}_{S}^{\varepsilon}\right]=\Pr\Bigg{[}\sup_{\theta_{0}\in\Theta_{0}^{\delta}}\widehat{R}_{S}(\theta_{0})\leq\inf_{\vartheta\in\Theta}\widehat{R}_{S}(\vartheta)+\varepsilon\Bigg{]}.\]
As in the proof of Theorem 1, we have by the definition of the infimum that for every \(\zeta>0\) there exists \(\theta_{\zeta}\in\Theta\) such that \(\widehat{R}_{S}(\theta_{\zeta})-\inf_{\vartheta\in\Theta}\widehat{R}_{S}( \vartheta)\leq\zeta\). Furthermore, by definition of the supremum we have that for every \(\eta>0\) there exists \(\theta_{\eta}\in\Theta_{0}^{\delta}\) such that \(\sup_{\theta_{0}\in\Theta_{0}^{\delta}}\widehat{R}_{S}(\theta_{0})-\widehat{R }_{S}(\theta_{\eta})\leq\eta\). Then similarly to the argument in Theorem 1, we have by the uniform convergence property that
\[\Pr\left[\inf_{\eta>0}|\widehat{R}_{S}(\theta_{\eta})-R(\theta_{\eta})|+\inf_{\zeta>0}|\widehat{R}_{S}(\theta_{\zeta})-R(\theta_{\zeta})|\leq\varepsilon-\delta\right]\geq 1-\alpha. \tag{3}\]
Then for every \(\eta>0\) and \(\zeta>0\),
\[\sup_{\theta_{0}\in\Theta_{0}^{\delta}}\widehat{R}_{S}(\theta_{0 })-\inf_{\vartheta\in\Theta}\widehat{R}_{S}(\vartheta)\] \[=\sup_{\theta_{0}\in\Theta_{0}^{\delta}}\widehat{R}_{S}(\theta_{ 0})-\widehat{R}_{S}(\theta_{\eta})+\widehat{R}_{S}(\theta_{\eta})-\widehat{R }_{S}(\theta_{\zeta})+\widehat{R}_{S}(\theta_{\zeta})-\inf_{\vartheta\in \Theta}\widehat{R}_{S}(\vartheta)\] \[\leq\eta+\zeta+\widehat{R}_{S}(\theta_{\eta})-\widehat{R}_{S}( \theta_{\zeta})\]
\[= \Pr\left[\max\left\{\left|p-\frac{1}{m}\sum_{i=1}^{m}I(Y_{i}\neq 0) \right|,\left|1-p-\frac{1}{m}\sum_{i=1}^{m}I(Y_{i}\neq 1)\right|\right\}\leq\varepsilon\right]\] \[= \Pr\left[\left|\frac{1}{m}\sum_{i=1}^{m}Y_{i}-p\right|\leq \varepsilon\right]\]
\[=\sum_{i=\lceil m(p-\varepsilon)\rceil}^{\lfloor m(p+\varepsilon)\rfloor} \binom{m}{i}p^{i}(1-p)^{m-i}\]
Setting the infimum of this quantity over \(p\in[0,1]\) to be at least \(1-\alpha\), we can numerically solve for \(m\) or \(\varepsilon\) to find valid confidence sets for the risk-minimizer \(\theta_{0}\).
Note that due to the discreteness of Bernoulli data, it is often impossible to obtain a coverage of exactly \(1-\alpha\), so the confidence set will be conservative in general; this is not an issue present for continuous data.
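As a minimal numerical sketch of this solve (assuming SciPy is available; the helper names, the probability grid, and the illustrative inputs are ours), one can search for a sample size whose worst-case coverage meets a target level at a fixed tolerance:

```python
import numpy as np
from scipy.stats import binom

def worst_case_coverage(m, eps, grid=np.linspace(0.0, 1.0, 1001)):
    """inf over p of Pr[|mean(Y) - p| <= eps] for Y_1, ..., Y_m i.i.d. Bernoulli(p),
    computed from the binomial expression above."""
    lo = np.ceil(m * (grid - eps)).astype(int)
    hi = np.floor(m * (grid + eps)).astype(int)
    cov = binom.cdf(hi, m, grid) - binom.cdf(lo - 1, m, grid)
    return cov.min()

def smallest_m(eps, alpha, m_max=10_000):
    """First m whose worst-case coverage reaches 1 - alpha (a heuristic: coverage is
    not monotone in m because of discreteness, so larger m should be re-checked)."""
    for m in range(1, m_max + 1):
        if worst_case_coverage(m, eps) >= 1 - alpha:
            return m
    raise ValueError("no m <= m_max achieves the requested coverage")

print(smallest_m(eps=0.05, alpha=0.05))
```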
**Example 2**: Consider LASSO estimation (without intercept):
\[\widehat{\beta}_{LASSO}=\operatorname*{arg\,min}_{\beta\::\:\|\beta\|_{1} \leq t}\frac{1}{m}\sum_{i=1}^{m}(y_{i}-x_{i}^{\top}\beta)^{2}\]
where \(t\) is a regularization parameter typically chosen via cross-validation. We can consider the corresponding ML model \(\mathcal{X}=\mathbb{R}^{p}\), \(\mathcal{Y}=\mathbb{R}\), \(\mathcal{H}=\{x\mapsto x^{\top}\beta\::\:\|\beta\|_{1}\leq t\}\), and \(L(y,y^{\prime})=(y-y^{\prime})^{2}\).
Suppose that the examples are generated as \(X\sim\mathcal{D}_{x}\), \(Y=X^{\top}\beta_{0}+U\) for \(U\sim\mathcal{D}_{u}\) independent of \(X\) with \(\operatorname{E}[U]=0\). Then assuming that \(\|\beta_{0}\|_{1}\) is bounded above by some known \(t^{\prime}\), as well as bounds on the fourth moments of \(X\) and \(U\), our inferential framework allows us to construct a valid confidence set for \(\beta_{0}\).
To this end, we first note that the risk of a parameter \(\beta\) is given by
\[R(\beta)=\operatorname{E}[(X^{\top}\beta_{0}+U-X^{\top}\beta)^{2}]= \operatorname{E}[(X^{\top}(\beta_{0}-\beta)+U)^{2}].\]
We can then find a uniform convergence bound:
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\beta}|R(\beta)-\widehat{R}_{S}(\beta) |\leq\varepsilon\right]=\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\beta}\left|R( \beta)-\frac{1}{m}\sum_{i=1}^{m}(X_{i}^{\top}(\beta_{0}-\beta)+U_{i})^{2} \right|\leq\varepsilon\right]\]
Since our parameter space is compact and the function we are taking the supremum over is continuous, we can substitute \(\beta_{0}-\beta=\widetilde{\beta}\) for some \(\widetilde{\beta}\) a function of \(\beta_{0}\), \((X_{1},\ldots,X_{m})\), and \((U_{1},\ldots,U_{m})\) that attains that supremum:
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\beta}|R(\beta)-\widehat{R}_{S}(\beta) |\leq\varepsilon\right]=\Pr_{S\sim\mathcal{D}^{m}}\left[\left|\operatorname{ E}[(X^{\top}\widetilde{\beta}+U)^{2}]-\frac{1}{m}\sum_{i=1}^{m}(X_{i}^{\top} \widetilde{\beta}+U_{i})^{2}\right|\leq\varepsilon\right]\]
In general, this cannot be computed explicitly. However, we can lower bound this using Chebyshev's Inequality, as the first term is the expectation of the last term:
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\beta}|R(\beta)-\widehat{R}_{S}(\beta) |\leq\varepsilon\right]\geq 1-\frac{1}{m\varepsilon^{2}}\operatorname{Var}[(X^{ \top}\widetilde{\beta}+U)^{2}]\]
For uniform convergence, we require this to be at least \(1-\alpha\) for all \(\widetilde{\beta}\) with \(1\)-norm at most \(t+t^{\prime}\). Thus, we only need to have bounds on the fourth moments of \(X\) and \(U\) in order to obtain valid confidence sets for \(\beta_{0}\). Notice that if \(\|\beta_{0}\|_{1}\leq t\) (so that the hypothesis class contains \(\beta_{0}\)), then bounding \(\operatorname{Var}[(X^{\top}\widetilde{\beta}+U)^{2}]\) is exactly the same as bounding \(\operatorname{Var}[Y^{2}]\) when the \(\beta_{0}\) generating \(Y\) is allowed to be "twice as big" as our model thinks it should be.
Note that one can always obtain stronger bounds with more assumptions on the model. For example, a common assumption in regression is that the observed data is \(\sigma^{2}\)-subgaussian. Recall that a random variable \(Z\) is \(\sigma^{2}\)-subgaussian if
\[\operatorname{E}\left[\exp(t(Z-\operatorname{E}[Z]))\right]\leq\exp\biggl{(} \frac{t^{2}\sigma^{2}}{2}\biggr{)}\]
for all \(t\in\mathbb{R}\). The square of a \(\sigma^{2}\)-subgaussian random variable is \((32\sigma^{4},4\sigma^{2})\)-subexponential (Honorio and Jaakkola, 2014, see Supplementary Material B), in the sense that
\[\mathrm{E}\left[\exp\bigl{(}t(Z^{2}-\mathrm{E}[Z^{2}])\bigr{)}\right]\leq\exp \biggl{(}\frac{t^{2}\cdot 32\sigma^{4}}{2}\biggr{)}\]
for all \(|t|\leq 1/(4\sigma^{2})\). Using standard concentration inequalities for subexponential random variables, we then have that for i.i.d. \(\sigma^{2}\)-subgaussian random variables \(Z_{1},\ldots,Z_{m}\),
\[\Pr\Biggl{[}\Biggl{|}\mathrm{E}[Z^{2}]-\frac{1}{m}\sum_{i=1}^{m}Z_{i}^{2} \Biggr{|}\leq\varepsilon\Biggr{]}\geq 1-2\exp\biggl{(}-\min\left(\frac{m \varepsilon^{2}}{64\sigma^{4}},\frac{m\varepsilon}{8\sigma^{2}}\right)\biggr{)}.\]
Thus, by assuming \(\sigma^{2}\)-subgaussianity of \(X_{i}^{\top}\widetilde{\beta}+U_{i}\), we can arrive at much sharper bounds for inference on \(\beta_{0}\) that decay exponentially with the sample size.
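To make the gain concrete, the sketch below (our own helper names and purely illustrative moment bounds, not values taken from the text) computes the sample sizes implied by the Chebyshev bound and by the subexponential bound for a given tolerance and level:

```python
import numpy as np

def m_chebyshev(var_bound, eps, alpha):
    """Sample size from the fourth-moment / Chebyshev bound: var_bound / (m eps^2) <= alpha."""
    return int(np.ceil(var_bound / (alpha * eps**2)))

def m_subexponential(sigma2, eps, alpha):
    """Sample size from 2 exp(-min(m eps^2 / (64 sigma^4), m eps / (8 sigma^2))) <= alpha."""
    log_term = np.log(2 / alpha)
    return int(np.ceil(max(64 * sigma2**2 * log_term / eps**2,
                           8 * sigma2 * log_term / eps)))

# At a stringent significance level, the Chebyshev requirement grows like 1/alpha
# while the subexponential requirement grows only like log(1/alpha):
print(m_chebyshev(var_bound=3.0, eps=0.1, alpha=1e-6))
print(m_subexponential(sigma2=1.0, eps=0.1, alpha=1e-6))
```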
**Example 3**.: Suppose we wish to find the \(\tau\) quantile of a random variable with distribution \(\mathcal{D}\). We can model this via \(\mathcal{X}=\{\varnothing\}\), \(\mathcal{Y}=\mathbb{R}\), \(\mathcal{H}=\{x\mapsto\theta\mid\theta\in\Theta\}\), and \(L(y,y^{\prime})=(y-y^{\prime})(\tau-I(y<y^{\prime}))\); the risk minimizer is precisely the \(\tau\) quantile of the distribution. We have that
\[R(\theta)=(\mathrm{E}[Y]-\theta)\tau-\mathrm{E}[(Y-\theta)\mid Y<\theta]\cdot \Pr[Y<\theta]\]
On the other hand,
\[\widehat{R}_{S}(\theta)=\frac{1}{m}\sum_{i=1}^{m}L(y_{i},\theta)=(\overline{y}-\theta)\tau-\frac{1}{m}\sum_{y_{i}<\theta}(y_{i}-\theta),\]
where \(\overline{y}\) is the sample mean \(\sum_{i=1}^{m}y_{i}/m\). If \(Y\) is a continuous random variable, then \(R(\theta)-\widehat{R}_{S}(\theta)\) is continuous in \(\theta\), and so if \(\Theta\) is compact, then for some \(\widetilde{\theta}\) a function of \(y_{1},\ldots,y_{m}\):
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\theta}|R(\theta)-\widehat {R}_{S}(\theta)|\leq\varepsilon\right]\] \[= \Pr_{S\sim\mathcal{D}^{m}}\left[\Biggl{|}\mathrm{E}[Y]\tau- \mathrm{E}[(Y-\widetilde{\theta})\mid Y<\widetilde{\theta}]\cdot\Pr\Bigl{[} Y<\widetilde{\theta}\Bigr{]}-\overline{y}\tau+\frac{1}{m}\sum_{y_{i}< \widetilde{\theta}}(y_{i}-\widetilde{\theta})\Biggr{|}\leq\varepsilon\right]\] \[\geq 1-\frac{1}{\varepsilon^{2}}\operatorname{Var}\left[\overline{y} \tau-\frac{1}{m}\sum_{y_{i}<\widetilde{\theta}}(y_{i}-\widetilde{\theta})\right]\] \[= 1-\frac{1}{m\varepsilon^{2}}\operatorname{Var}\Big{[}\tau Y-(Y- \widetilde{\theta})\cdot I(Y<\widetilde{\theta})\Big{]}.\]
It suffices for this to be at least \(1-\alpha\) for all \(\widetilde{\theta}\) in the parameter space. Thus, solving for \(m\), a uniform convergence function for quantile estimation is given by
\[f(\varepsilon,\alpha)=\frac{1}{\alpha\varepsilon^{2}}\sup_{\theta\in\Theta} \operatorname{Var}\left[\tau Y-(Y-\theta)\cdot I(Y<\theta)\right].\]
We see that we only require bounds on the first and second moments and conditional moments of the distribution in question to construct valid confidence sets for the quantile. Similarly to the previous example, a practitioner that is willing to make stronger assumptions can strengthen the result to yield tighter bounds.
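A short sketch of how this yields a confidence set in practice follows; the helper names, the candidate grid, and the variance bound of 1.0 are our own illustrative assumptions (the bound is deliberately conservative), not quantities taken from the text.

```python
import numpy as np

def pinball_risk(theta, y, tau):
    """Empirical risk: mean of (y - theta) * (tau - I(y < theta))."""
    return np.mean((y - theta) * (tau - (y < theta)))

def quantile_confidence_set(y, tau, alpha, var_bound, grid):
    """eps-AERM confidence set for the tau quantile, with eps chosen so that the
    sample size satisfies m >= f(eps/2, alpha) for the Chebyshev-based uniform
    convergence function above, i.e. eps = 2 * sqrt(var_bound / (alpha * m))."""
    m = y.size
    eps = 2 * np.sqrt(var_bound / (alpha * m))
    risks = np.array([pinball_risk(t, y, tau) for t in grid])
    return grid[risks <= risks.min() + eps]

rng = np.random.default_rng(0)
y = rng.normal(size=5000)  # illustrative data; the median is the 0.5 quantile
print(quantile_confidence_set(y, tau=0.5, alpha=0.05, var_bound=1.0,
                              grid=np.linspace(-2.0, 2.0, 401)))
```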
## 5 The Distribution of AERMs
Evidently, understanding the properties of the set \(\widehat{\Theta}_{S}^{\varepsilon}\) of \(\varepsilon\)-AERMs, is critical to understanding validity of ML models. It is of note that \(\widehat{\Theta}_{S}^{\varepsilon}\) is random due to the dependence on the random sample \(S\), but differs from usual random variables or random vectors by being set-valued. Thus, to fully understand the properties of \(\widehat{\Theta}_{S}^{\varepsilon}\), we first need to understand random sets.
### Random Sets and Imprecise Probability
We begin with the formal definition of a random set (Molchanov, 2017):
**Definition 8**: Let \((\Theta,\tau)\) be a Polish space and \((\Omega,\Sigma)\) a measurable space. A function \(X:\Omega\to 2^{\Theta}\) is a closed random set if \(X(\omega)\) is closed for every \(\omega\in\Omega\) and for every \(U\in\tau\),
\[\{\omega\,|\,X(\omega)\cap U\neq\varnothing\}\in\Sigma.\]
This definition can be generalized to non-Polish spaces as well as to non-closed random sets.
Whereas the distribution of a random variable is given by the cumulative distribution function, the distribution of a closed random set \(X\) is given by the _belief_ and _plausibility_ functions
\[\operatorname{bel}(A) =\Pr[X\subseteq A]\] \[\operatorname{pl}(A) =\Pr[X\cap A\neq\varnothing]\]
where \(A\) is a Borel set (Molchanov, 2017). It is straightforward to check that these two functions are duals of each other, in the sense that \(\operatorname{bel}(A)=1-\operatorname{pl}(\Theta\setminus A)\) and \(\operatorname{pl}(A)=1-\operatorname{bel}(\Theta\setminus A)\). As such, knowing either one of the belief or plausibility functions immediately yields the other. A very useful property of belief and plausibility functions is given by Choquet's Theorem (Matheron, 1974):
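As a small illustration of these two functions (the random set, the target set, and the Monte-Carlo setup below are all our own choices), consider the random closed interval \(X=[U,U+1]\) with \(U\sim\mathrm{Unif}(0,1)\) and the Borel set \(A=[1.2,1.5]\):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.uniform(0.0, 1.0, size=100_000)  # realizations of X(omega) = [U, U + 1]
lo, hi = 1.2, 1.5                         # the Borel set A = [lo, hi]

contained = (U >= lo) & (U + 1 <= hi)     # X subseteq A (impossible here: X has length 1)
intersects = (U + 1 >= lo) & (U <= hi)    # X cap A nonempty, i.e. U >= 0.2

bel_A = contained.mean()                  # belief of A, approximately 0
pl_A = intersects.mean()                  # plausibility of A, approximately 0.8
pl_not_A = (~contained).mean()            # X meets the complement of A iff X is not inside A
print(bel_A, pl_A, 1 - pl_not_A)          # duality: bel(A) = 1 - pl(complement of A)
```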
**Theorem 3**: _Let \(\Theta\) be a locally compact Hausdorff second countable topological space. Then the plausibility function induced by a random closed set in \(\Theta\) is \(\infty\)-alternating and upper semicontinuous. That is,_
\[\operatorname{pl}\left(\bigcup_{i=1}^{k}A_{i}\right)\leq\sum_{\varnothing\neq I\subseteq\{1,\ldots,k\}}(-1)^{|I|+1}\operatorname{pl}\left(\bigcap_{i\in I}A_{i}\right)\]
_for every positive integer \(k\), and \(\operatorname{pl}(K_{n})\to\operatorname{pl}(K)\) for every sequence of compact sets \((K_{n})\) converging from above to a compact set \(K\)._
Similarly, we would have that belief functions are \(\infty\)-monotone (reversing the inequality) and lower semicontinuous. Note that probability measures are both \(\infty\)-alternating and \(\infty\)-monotone; this fact is often referred to as the principle of inclusion-exclusion.
The belief and plausibility functions are often additionally conditioned on the event \(X\neq\varnothing\) so that we have the convenient properties that \(\operatorname{bel}(\varnothing)=\operatorname{pl}(\varnothing)=0\) and \(\operatorname{bel}(\Theta)=\operatorname{pl}(\Theta)=1\); note that these properties trivially hold if the random set is almost surely nonempty. If, in addition, the \(\infty\)-monotone and \(\infty\)-alternating conclusions from Choquet's Theorem also hold, then the belief and plausibility functions fall under the purview of the Dempster-Shafer theory of evidence (Dempster, 1967; Shafer, 1976). Within this framework, Dempster (1967) suggests that given a random set \(X\) that corresponds to a quantity of interest \(\theta_{0}\in\Theta\), we can interpret \(\operatorname{bel}(A)\) and \(\operatorname{pl}(A)\) as _lower_ and _upper_ probabilities for the assertion \(\theta_{0}\in A\). That is to say that \(\operatorname{bel}(A)\) can be interpreted as the probability that \(\theta_{0}\in A\) is "known to be true," \(1-\operatorname{pl}(A)\) can be interpreted as the probability that \(\theta_{0}\in A\) is "known to be false,"
and the remaining probability \(\operatorname{pl}(A)-\operatorname{bel}(A)\) can be interpreted as a residual "don't know" probability (Dempster, 2008). As a simple example, consider the trivial random set \(X=\Theta\); then \(\operatorname{bel}(A)=0\) and \(\operatorname{pl}(A)=1\) for every nonempty proper subset \(A\) of \(\Theta\)--Dempster's interpretation would be that we have no evidence for the truth or falsity of \(\theta_{0}\in A\) for any such \(A\), and so have absolutely no knowledge of the location of \(\theta_{0}\).
A formal theory of _imprecise probability_ can be worked out from belief and plausibility functions. For example, Shafer (1979) demonstrates how to extend belief and plausibility functions from union- or intersection-closed subsets of \(\Theta\) (e.g. the Borel sets) to the entire set of subsets of \(\Theta\) via the notions of allocations and allowments of probability; this is analogous to how the Lebesgue measure is extended from open sets to the Lebesgue measurable sets. Shafer (2016) briefly discusses the theory of Choquet integration for beliefs and plausibilities (analogous to Lebesgue integration for probabilities) as well as the notions of independence, joint imprecise probability distributions, and conditional imprecise probability. In particular, Shafer (2016) proves that Bayes's rule holds for plausibility functions even when \(\Theta\) is infinite; this theorem is known as Dempster's rule of conditioning.
The Dempster-Shafer framework is not the only approach to imprecise probability. An alternative to working with belief and plausibility functions is to work with _necessity_ and _possibility_ functions. Possibility functions still keep the properties \(\operatorname{pos}(\varnothing)=0\) and \(\operatorname{pos}(\Theta)=1\), but impose a maxitivity assumption:
\[\operatorname{pos}\left(\bigsqcup_{i\in I}A_{i}\right)=\sup_{i\in I} \operatorname{pos}(A_{i})\]
for every index set \(I\)(Dubois and Prade, 1980, Definition II.5.\(\eta\)). The necessity function is once again the dual of the possibility function. The necessity and possibility function approach to imprecise probability has a variety of nice properties due to the maxitivity assumption, but is limited in its ability to describe the behavior of random sets: It can be shown that the plausibility function induced by a random set \(X\) is a possibility function if and only if the realizations of \(X\) are nested (i.e. \(X(\omega_{1})\subseteq X(\omega_{2})\) or \(X(\omega_{2})\subseteq X(\omega_{1})\) for all \(\omega_{1},\omega_{2}\in\Omega\)) (Shafer, 1976, Theorem 10.1).
Another approach to developing imprecise probability theory is via previsions; this theory is discussed in detail in Walley (1991). Given a linear space \(\mathcal{F}\) of functions \(f:\Theta\to\mathbb{R}\) (called "gambles"), the lower and upper previsions defined on \(\mathcal{F}\) are
\[\underline{E}[f] =\sup\{\mu\in\mathbb{R}\mid f-\mu\text{ is desirable}\}\] \[\overline{E}[f] =\inf\{\mu\in\mathbb{R}\mid\mu-f\text{ is desirable}\},\]
where we call a gamble "desirable" if one would accept the gamble if offered. Note that previsions can then be naturally extended to larger classes of gambles (e.g. to include all indicator functions). These previsions act analogously to expectations in probability theory, and so upper and lower probabilities can be defined as the upper and lower previsions of indicator functions.
### Validity of ML Models
Since the set \(\widehat{\Theta}^{\varepsilon}_{S}\) of \(\varepsilon\)-AERMs is a random set, its distribution is determined by the induced belief and plausibility functions:
\[\operatorname{bel}_{\varepsilon}(A) =\Pr_{S\sim\mathcal{D}^{m}}[\widehat{\Theta}^{\varepsilon}_{S} \subseteq A\mid\widehat{\Theta}^{\varepsilon}_{S}\neq\varnothing]\] \[\operatorname{pl}_{\varepsilon}(A) =\Pr_{S\sim\mathcal{D}^{m}}[\widehat{\Theta}^{\varepsilon}_{S} \cap A\neq\varnothing\mid\widehat{\Theta}^{\varepsilon}_{S}\neq\varnothing].\]
Notice that these functions are not well-defined when \(\widehat{\Theta}^{\varepsilon}_{S}\) is almost surely empty; this is only possible when \(\varepsilon=0\) and the ERM almost surely does not exist.
**Example 4**.: _Consider \(\mathcal{X}=\{\varnothing\}\), \(\mathcal{Y}=\mathbb{R}\), and \(\mathcal{H}=\{x\mapsto\theta\mid\theta\in\mathbb{Q}\}\) with \(L(y,y^{\prime})=|y-y^{\prime}|\), where the data-generating distribution is a point mass at an irrational number (e.g. \(\Pr(Y=\pi)=1\)). In this example, \(\widehat{\theta}_{S}\) never exists, since there always exists a closer rational approximation to an irrational number. Thus, \(\widehat{\Theta}_{S}^{0}\) is always empty, and the belief and plausibility for \(\varepsilon=0\) are not well-defined._
Because this situation can be circumvented in practice by instead considering \(\varepsilon\)-plausibilities for any choice of \(\varepsilon>0\), we assume for the remainder of this paper that the \(\varepsilon\)-plausibility is well defined.
Knowledge of the distribution of \(\varepsilon\)-AERMs can be used to assign a confidence to the proposition that a given region of the parameter space contains the risk minimizer. In particular, we will show in this section that sets of low plausibility cannot contain the risk minimizer. To this end, we first define the notion of validity for an ML model:
**Definition 9**.: _At a fixed sample size, the model \((\mathcal{H},L)\) is valid at level \(\alpha\) and tolerance \(\varepsilon\) if for every Borel set \(A\subseteq\Theta\) such that there exists \(\delta\geq 0\) so that \(\Theta_{0}^{\delta}\subseteq A\) and \(\Theta_{0}^{\delta}\neq\varnothing\), we have that \(\operatorname{pl}_{\varepsilon}(A)\geq 1-\alpha\)._
In other words, an ML model is valid if every nonempty \(\delta\)-neighborhood of the risk minimizer has a high plausibility. Once again, we note that when the risk minimizer \(\theta_{0}\) exists, it is sufficient to only consider \(\delta=0\), as any \(A\) containing \(\Theta_{0}^{\delta}\) for \(\delta>0\) must contain \(\theta_{0}\in\Theta_{0}^{0}\) itself.
**Lemma 1**.: _Let \((\mathcal{H},L)\) have the uniform convergence function \(f\). Then \(f\) is non-increasing in its first argument._
Proof.: Fix \(\varepsilon_{1},\alpha>0\). Then
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\theta\in\Theta}|R(\theta)-\widehat{R} _{S}(\theta)|\leq\varepsilon_{1}\right]\geq 1-\alpha\ \ \ \text{if }m\geq f(\varepsilon_{1},\alpha).\]
Now let \(\varepsilon_{2}>\varepsilon_{1}\). Then we have that
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\theta\in\Theta}|R(\theta)-\widehat{R} _{S}(\theta)|\leq\varepsilon_{1}\right]\leq\Pr_{S\sim\mathcal{D}^{m}}\left[ \sup_{\theta\in\Theta}|R(\theta)-\widehat{R}_{S}(\theta)|\leq\varepsilon_{2}\right]\]
and so
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\theta\in\Theta}|R(\theta)-\widehat{R} _{S}(\theta)|\leq\varepsilon_{2}\right]\geq 1-\alpha\ \ \ \text{if }m\geq f(\varepsilon_{1},\alpha).\]
We hence have that
\[w(\varepsilon,\alpha)=\begin{cases}f(\varepsilon,\alpha)&\text{if } \varepsilon\neq\varepsilon_{2}\\ f(\varepsilon_{1},\alpha)&\text{if }\varepsilon=\varepsilon_{2}\end{cases}\]
witnesses the uniform convergence property of \((\mathcal{H},L)\). Therefore, by the definition of the uniform convergence function (Definition 6),
\[f(\varepsilon_{2},\alpha)\leq\lceil w(\varepsilon_{2},\alpha)\rceil=\lceil f (\varepsilon_{1},\alpha)\rceil=f(\varepsilon_{1},\alpha)\]
since \(f(\varepsilon_{1},\alpha)\) is an integer, as desired.
**Corollary 1**.: _Let \((\mathcal{H},L)\) have uniform convergence function \(f\). If the risk minimizer \(\theta_{0}\) exists, then \((\mathcal{H},L)\) is valid at level \(\alpha\) and tolerance \(\varepsilon\) if \(m\geq f(\varepsilon/2,\alpha)\). Otherwise, \((\mathcal{H},L)\) is valid at level \(\alpha\) and tolerance \(\varepsilon\) if \(m\geq\inf_{\delta>0}f((\varepsilon-\delta)/2,\alpha)\)._
Proof.: Suppose that the risk minimizer \(\theta_{0}\) exists. Then if \(A\) is a Borel set and \(\theta_{0}\in A\), we have that
\[\mathrm{pl}_{\varepsilon}(A) = \Pr\Bigl{[}\widehat{\Theta}_{S}^{\varepsilon}\cap A\neq\varnothing \mid\widehat{\Theta}_{S}^{\varepsilon}\neq\varnothing\Bigr{]}\] \[\geq \Pr\Bigl{[}\widehat{\Theta}_{S}^{\varepsilon}\cap\{\theta_{0}\} \neq\varnothing\mid\widehat{\Theta}_{S}^{\varepsilon}\neq\varnothing\Bigr{]}\] \[= \Pr\Bigl{[}\widehat{\Theta}_{S}^{\varepsilon}\ni\theta_{0}\text{ and } \widehat{\Theta}_{S}^{\varepsilon}\neq\varnothing\Bigr{]}/\Pr\Bigl{[}\widehat{ \Theta}_{S}^{\varepsilon}\neq\varnothing\Bigr{]}\] \[\geq \Pr\Bigl{[}\widehat{\Theta}_{S}^{\varepsilon}\ni\theta_{0}\Bigr{]} /1,\]
which has probability at least \(1-\alpha\) by the result of Theorem 1.
Now suppose that the risk minimizer \(\theta_{0}\) does not exist. Then let \(\zeta>0\) be such that \(\Theta_{0}^{\zeta}\subseteq A\). Without loss of generality, we also let \(\zeta<\varepsilon\). Then for any \(\delta\in(0,\zeta)\), we have that
\[\mathrm{pl}_{\varepsilon}(A)=\Pr\Bigl{[}\widehat{\Theta}_{S}^{\varepsilon} \cap A\neq\varnothing\Bigr{]}\geq\Pr\Bigl{[}\Theta_{0}^{\delta}\subseteq \widehat{\Theta}_{S}^{\varepsilon}\Bigr{]}\]
which has probability at least \(1-\alpha\) by the result of Theorem 2 if \(m\geq f((\varepsilon-\delta)/2,\alpha)\). Since this is true for every \(\delta\in(0,\zeta)\) and \(f\) is non-increasing in its first argument by Lemma 1, we have validity if \(m\geq\inf_{\delta\in(0,\zeta)}f((\varepsilon-\delta)/2,\alpha)=\inf_{\delta>0}f((\varepsilon-\delta)/2,\alpha)\), as desired.
Note that when \(f(\cdot,\alpha)\) is left-continuous at \(\varepsilon\), \(\inf_{\delta>0}f((\varepsilon-\delta)/2,\alpha)=f(\varepsilon/2,\alpha)\), and so the sample complexity is the same regardless of the existence of the risk minimizer. However, this is not the case in general, so the sample size necessary for validity is generally easier to attain when the risk minimizer does exist.
The contrapositive of the above corollary is that if a model is valid at level \(\alpha\) and tolerance \(\varepsilon\), then \(\mathrm{pl}_{\varepsilon}(A)<1-\alpha\) implies that \(\theta_{0}\not\in A\). Thus, knowledge of the distribution of the set of \(\varepsilon\)-AERMs yields important knowledge about the location of the risk minimizer. We restate this contrapositive as a theorem in its own right:
**Theorem 4**.: _Let \((\mathcal{H},L)\) be valid at level \(\alpha\) and tolerance \(\varepsilon\). If \(\mathrm{pl}_{\varepsilon}(A)<1-\alpha\) then \(\theta_{0}\not\in A\)._
This idea of determining optimal values for parameters via Theorem 4 can be useful for hypothesis classes that make use of a tuning parameter \(\gamma\), as calculating the belief and plausibility of sets of the form \(A=\{\gamma\in[a,b]\}\) may help reduce the search space for the tuning parameter.
**Example 5**.: _In continuation of Example 2, consider LASSO estimation with data generated by \(Y=X\beta_{0}+U\). Our hypothesis class is \(\mathcal{H}=\bigcup_{t^{\prime}\leq t}\mathcal{H}_{t^{\prime}}\), where \(\mathcal{H}_{t^{\prime}}=\{x\mapsto x^{\top}\beta:\left\|\beta\right\|_{1}\leq t^{\prime}\}\) and \(t\) is an upper bound on the LASSO tuning parameter provided by the practitioner. The value of the tuning parameter \(t^{\prime}\) is typically chosen via cross-validation over the interval \([0,t]\), since the optimal value for \(t^{\prime}\) is \(t_{0}=\min(t,\left\|\beta_{0}\right\|_{1})\) but \(\left\|\beta_{0}\right\|_{1}\) is typically unknown._
We draw examples \(X\sim\mathrm{Unif}(-1,1)^{p}\) and \(U\sim\mathrm{Unif}(-1,1)\). The uniform convergence probability is then bounded below by
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\beta}|R(\beta)-\widehat{R}_{S}(\beta)|\leq\varepsilon\right]\geq 1-2\exp\biggl{(}-\frac{m\varepsilon}{8(\left\|\beta_{0}\right\|_{1}+1)^{2}}\biggr{)}.\]
This then allows us to compute an \(\varepsilon\) necessary for validity at a given sample size \(m\) and significance level \(\alpha\), given (an upper bound on) the magnitude of \(\left\lVert\beta_{0}\right\rVert_{1}\).
We randomly select a \(\beta\) from \(\operatorname{Unif}(-1,1)^{p}\) and generate \(m=1000\) training examples, computing \(\operatorname{pl}_{\varepsilon}(A)\) for \(A=\left\{\beta\,:\,\left\lVert\beta\right\rVert_{1}\leq t^{\prime}\right\}\) for various choices of tuning parameter \(t^{\prime}\). That is, we conduct hypothesis tests for \(H_{0}:t_{0}\leq t^{\prime}\) for \(t^{\prime}\) ranging over an interval. In Figure 3, we show the results in an example where \(p=10\), \(t=10\), and \(\left\lVert\beta_{0}\right\rVert_{1}\approx 3.34\) (so that the optimal tuning parameter is \(t_{0}\approx 3.34\)). We see that for tuning parameters \(t^{\prime}\) less than about \(1.3\), the plausibility is less than \(0.95\), and so such \(t^{\prime}\) cannot possibly be optimal. With access to this information, a practitioner now knows that it is only worthwhile to cross-validate over the range \([1.3,10]\) rather than the entire range \([0,10]\) since all tuning parameters less than \(1.3\) are provably suboptimal.
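A sketch of how such plausibility estimates can be computed is given below. It assumes the cvxpy package for the L1-constrained least-squares solves; the function names, the replicate count, and the fact that the tolerance \(\varepsilon\) is passed in directly (rather than derived inside the function) are our own choices.

```python
import numpy as np
import cvxpy as cp

def min_constrained_risk(X, y, radius):
    """Minimum empirical squared-error risk over the L1 ball of the given radius."""
    m, p = X.shape
    beta = cp.Variable(p)
    problem = cp.Problem(cp.Minimize(cp.sum_squares(X @ beta - y) / m),
                         [cp.norm1(beta) <= radius])
    problem.solve()
    return problem.value

def plausibility_of_ball(t_prime, t, eps, beta0, m, n_rep=200, rng=None):
    """Monte-Carlo estimate of pl_eps({beta : ||beta||_1 <= t_prime}): the fraction of
    simulated samples on which some eps-AERM lies in the smaller L1 ball, i.e. the
    constrained minimum over radius t_prime is within eps of the minimum over radius t."""
    rng = np.random.default_rng() if rng is None else rng
    p = beta0.size
    hits = 0
    for _ in range(n_rep):
        X = rng.uniform(-1, 1, size=(m, p))
        y = X @ beta0 + rng.uniform(-1, 1, size=m)
        if min_constrained_risk(X, y, t_prime) <= min_constrained_risk(X, y, t) + eps:
            hits += 1
    return hits / n_rep
```

Sweeping \(t^{\prime}\) over a grid and plotting the resulting estimates produces a curve of the kind shown in Figure 3.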
## 6 Bootstrapping Belief and Plausibility
Actually calculating the plausibility given a single sample is not particularly straightforward, since typically the sampling distribution \(\mathcal{D}\) is completely unknown and the "sample plausibility" \(\widehat{\operatorname{pl}}_{\varepsilon}(A)=I(\widehat{\Theta}_{S}^{ \varepsilon}\cap A\neq\varnothing)\) is always either zero or one. Hence, it makes sense to estimate belief and plausibility via bootstrapping:
\[\widehat{\operatorname{pl}}_{\varepsilon}^{\operatorname{boot},S}(A)=\frac{1} {B}\sum_{i=1}^{B}I(\widehat{\Theta}_{S_{i}}^{\varepsilon}\cap A\neq\varnothing)\]
where \(S_{1},\ldots,S_{B}\) are subsamples drawn uniformly and independently from the observed sample \(S\) such that \(\widehat{\Theta}_{S_{i}}^{\varepsilon}\) is nonempty. Let us define
\[\operatorname{pl}_{\varepsilon}^{\operatorname{boot},S}(A)=\lim_{B\to\infty} \frac{1}{B}\sum_{i=1}^{B}I(\widehat{\Theta}_{S_{i}}^{\varepsilon}\cap A\neq\varnothing)\]
and similar for the belief. Notice that the quantity \(\operatorname{pl}_{\varepsilon}^{\operatorname{boot},S}\) is the expected value of \(\widehat{\operatorname{pl}}_{\varepsilon}^{\operatorname{boot},S}\) over the resampling distribution. Thus, it is convenient for theoretical purposes to consider \(\operatorname{pl}_{\varepsilon}^{\operatorname{boot},S}\) as "the" bootstrapped plausibility. However, it is \(\widehat{\operatorname{pl}}_{\varepsilon}^{\operatorname{boot},S}\) that is actually used by the practitioner. Luckily, these two quantities are quite close to each other even for modestly large \(B\) (as we will see in Theorem 6), so the practitioner (approximately) enjoys the guarantees of the bootstrapped plausibility.

Figure 3: Estimates for the plausibility of the set \(\left\{\beta\,:\,\left\lVert\beta\right\rVert_{1}\leq t^{\prime}\right\}\) at significance level \(0.05\) for different tuning parameters \(t^{\prime}\). A vertical line is plotted at the \(t^{\prime}\) that maintains plausibility at least 0.95. Plausibilities were estimated via Monte-Carlo simulation with \(10000\) replicates.
Once again, these bootstrapped quantities are not well-defined when \(\widehat{\Theta}^{\varepsilon}_{S_{i}}\) is empty with probability \(1\). This happens if and only if \(\varepsilon=0\) and the ERM on each subsample almost surely does not exist. Because a practitioner can always avoid such an event by simply choosing any \(\varepsilon>0\) when they notice that the ERMs never exist, we will assume throughout this section that the model and sampling distribution are such that these bootstrapping quantities are well-defined.
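A generic sketch of the estimator \(\widehat{\operatorname{pl}}_{\varepsilon}^{\operatorname{boot},S}\) is given below; the function names and the callback interface are ours, and the sketch assumes the \(\varepsilon\)-AERM set is nonempty on every resample.

```python
import numpy as np

def bootstrap_plausibility(sample, intersects_A, eps, B, rng=None):
    """Estimate pl_eps(A) by resampling: the fraction of bootstrap resamples S_i on
    which the eps-AERM set intersects A. The predicate intersects_A(resample, eps)
    implements that check for the model at hand."""
    rng = np.random.default_rng() if rng is None else rng
    sample = np.asarray(sample)
    m = sample.shape[0]
    hits = []
    for _ in range(B):
        resample = sample[rng.integers(0, m, size=m)]
        hits.append(intersects_A(resample, eps))
    return float(np.mean(hits))

# For the Bernoulli model of Example 1 with A = {0}, the eps-AERM set contains 0
# exactly when the resample mean is at most (1 + eps) / 2:
bernoulli_contains_zero = lambda resample, eps: resample.mean() <= (1 + eps) / 2
```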
In order for bootstrapping to be useful, we require slightly stronger conditions on our ML model:
**Definition 10**.: An ML model \((\mathcal{H},L)\) has the _strong_ uniform convergence property with respect to the data-generating distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Y}\) if there exists a function \(f:\mathbb{R}^{+}\times\mathbb{R}^{+}\to\mathbb{R}\) such that for any \(\varepsilon>0\) and \(\alpha>0\), if \(m\geq f(\varepsilon,\alpha)\) then
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\sup_{\theta\in\Theta}|R(\theta)-\widehat{R}_ {S}(\theta)|\leq\varepsilon\right]\geq 1-\alpha\]
(i.e. Definition 5 holds) and
\[\inf_{\widetilde{\mathcal{D}}\in\mathcal{B}}\Pr_{S\sim\widetilde{\mathcal{D} }^{m}}\left[\sup_{\theta\in\Theta}|R_{\widetilde{\mathcal{D}}}(\theta)- \widehat{R}_{S}(\theta)|\leq\varepsilon\right]\geq 1-\alpha,\]
where \(\mathcal{B}\) denotes the set of bootstrap resampling distributions over \(\mathcal{D}\); that is, the set of \(\mathrm{Unif}(S)^{m}\) for every possible \(S\sim\mathcal{D}^{m}\). We call any such \(f\) a _witness_ to the strong uniform convergence property.
In other words, the strong uniform convergence property requires that the uniform convergence property (Definition 5) hold over both the data-generating distribution and the bootstrap resampling distribution. Once again, we note that this condition remains much weaker than that of being a strong uniform Glivenko-Cantelli class (Definition 4).
**Definition 11**.: Let \((\mathcal{H},L)\) have the strong uniform convergence property, and let \(\mathcal{W}\) be the set of all corresponding witnesses. The strong uniform convergence function of \((\mathcal{H},L)\) is defined to be
\[f(\varepsilon,\alpha)=\inf_{w\in\mathcal{W}}\lceil w(\varepsilon,\alpha)\rceil,\]
where \(\lceil\cdot\rceil\) is the ceiling function.
Due to the reliance on a random sample, bootstrapped plausibilities do not yield exact knowledge about the location of the risk minimizer. However, we find that bootstrapped plausibilities do yield this knowledge with high probability:
**Theorem 5**.: _Suppose \(A\) is a Borel set and \(\Theta_{0}^{\delta}\subseteq A\) for some \(\delta>0\). Suppose that \((\mathcal{H},L)\) has the strong uniform convergence function \(f\). If either_
1. _the risk minimizer_ \(\theta_{0}\) _exists, the empirical risk minimizer_ \(\widehat{\theta}_{S}\) _exists for every sample_ \(S\)_, and we define_ \(\varepsilon(m,\alpha)=\inf\{\varepsilon\mid m\geq f(\varepsilon/2,\alpha)\}\)__
2. _we define_ \(\varepsilon(m,\alpha)=\inf\{\varepsilon\mid m\geq\inf_{\delta>0}f(( \varepsilon-\delta)/2,\alpha)\}\)__
_then bootstrapping the plausibility function is asymptotically valid, in the sense that_
\[\liminf_{m\to\infty}\Pr_{S\sim\mathcal{D}^{m}}[\mathrm{pl}^{\mathrm{ boot},S}_{\varepsilon(m,\alpha)}(A)\geq 1-\alpha]\geq 1-\alpha.\]
_In particular, in (a) the inequality holds for all \(m\) such that \(\varepsilon(m,\alpha)\leq\delta\), and in (b) the inequality holds for all \(m\) such that \(\varepsilon(m,\alpha)\leq\delta/2\)_
Proof.: We first prove (a). By the strong law of large numbers, we have that
\[\mathrm{pl}^{\mathrm{boot},S}_{\varepsilon(m,\alpha)}(A)\stackrel{{ a.s.}}{{=}}\Pr_{S_{i}\sim\mathrm{Unif}(S)^{m}}[\widehat{\Theta}^{\varepsilon(m, \alpha)}_{S_{i}}\cap A\neq\varnothing\mid\widehat{\Theta}^{\varepsilon(m, \alpha)}_{S_{i}}\neq\varnothing].\]
The right hand side is simply \(\mathrm{pl}_{\varepsilon(m,\alpha)}(A)\) with respect to the uniform distribution on \(S\); we know that this is at least \(1-\alpha\) so long as \(\widehat{\theta}_{S}\in A\) by Corollary 1. We thus have that
\[\Pr_{S\sim\mathcal{D}^{m}}[\mathrm{pl}^{\mathrm{boot},S}_{\varepsilon(m, \alpha)}(A)\geq 1-\alpha]\geq\Pr\Bigl{[}\widehat{\theta}_{S}\in A\Bigr{]}. \tag{5}\]
By hypothesis, \(\Theta^{\delta}_{0}\subseteq A\). Hence,
\[\Pr\Bigl{[}\widehat{\theta}_{S}\in A\Bigr{]}\geq\Pr\Bigl{[}\widehat{\theta}_{ S}\in\Theta^{\delta}_{0}\Bigr{]}=\Pr\Bigl{[}R(\widehat{\theta}_{S})\leq R( \theta_{0})+\delta\Bigr{]}. \tag{6}\]
We know that for any \(m\), we have with probability at least \(1-\alpha\) that
\[R(\widehat{\theta}_{S}) \leq\widehat{R}_{S}(\widehat{\theta}_{S})+\frac{\varepsilon(m,\alpha)}{2}\] (uniform convergence) \[\leq\widehat{R}_{S}(\theta_{0})+\frac{\varepsilon(m,\alpha)}{2}\] (definition of \(\widehat{\theta}_{S}\)) \[\leq R(\theta_{0})+\varepsilon(m,\alpha)\] (uniform convergence)
Furthermore, as \(m\to\infty\), we have that \(\varepsilon(m,\alpha)\to 0\). Thus, there exists \(M\in\mathbb{N}\) such that \(\varepsilon(m,\alpha)\leq\delta\) for any \(m\geq M\). Hence, we have for all \(m\geq M\) that
\[\Pr_{S\sim\mathcal{D}^{m}}[R(\widehat{\theta}_{S})\leq R(\theta_{0})+\delta)] \geq\Pr_{S\sim\mathcal{D}^{m}}[R(\widehat{\theta}_{S})\leq R(\theta_{0})+ \varepsilon(m,\alpha)]\geq 1-\alpha.\]
Combining this with equations (5) and (6), we arrive at
\[\liminf_{m\to\infty}\Pr_{S\sim\mathcal{D}^{m}}[\mathrm{pl}^{\mathrm{ boot},S}_{\varepsilon(m,\alpha)}(A)\geq 1-\alpha]\geq 1-\alpha\]
as desired.
We now prove (b). As in the previous part, we have that \(\mathrm{pl}^{\mathrm{boot},S}_{\varepsilon(m,\alpha)}(A)\geq 1-\alpha\) so long as \(\widehat{\Theta}^{\delta/4}_{S}\subseteq A\). Hence,
\[\Pr_{S\sim\mathcal{D}^{m}}[\mathrm{pl}^{\mathrm{boot},S}_{\varepsilon(m, \alpha)}(A)\geq 1-\alpha]\geq\Pr\Bigl{[}\widehat{\Theta}^{\delta/4}_{S}\subseteq A \Bigr{]}\geq\Pr\Bigl{[}\widehat{\Theta}^{\delta/4}_{S}\subseteq\Theta^{ \delta}_{0}\Bigr{]}.\]
Let \(\vartheta_{0}\in\Theta^{\delta/4}_{0}\). By uniform convergence, we have with probability at least \(1-\alpha\) that for all \(\widehat{\vartheta}_{S}\in\widehat{\Theta}^{\delta/4}_{S}\),
\[R(\widehat{\vartheta}_{S}) \leq\widehat{R}_{S}(\widehat{\vartheta}_{S})+\frac{\varepsilon(m,\alpha)}{2}\] (uniform convergence) \[\leq\inf_{\theta\in\Theta}\widehat{R}_{S}(\theta)+\frac{\delta}{4}+\frac{\varepsilon(m,\alpha)}{2}\] (definition of \(\widehat{\vartheta}_{S}\in\widehat{\Theta}^{\delta/4}_{S}\)) \[\leq\widehat{R}_{S}(\vartheta_{0})+\frac{\delta}{4}+\frac{\varepsilon(m,\alpha)}{2}\] (definition of the infimum) \[\leq R(\vartheta_{0})+\varepsilon(m,\alpha)+\frac{\delta}{4}\] (uniform convergence) \[\leq\inf_{\theta\in\Theta}R(\theta)+\varepsilon(m,\alpha)+\frac{\delta}{2}\] (definition of \(\vartheta_{0}\in\Theta_{0}^{\delta/4}\))
As \(m\to\infty\), \(\varepsilon(m,\alpha)\to 0\), so for large enough \(m\), we have that \(\varepsilon(m,\alpha)\leq\delta/2\), so
\[\Pr_{S\sim\mathcal{D}^{m}}\left[\widehat{\Theta}_{S}^{\delta/4}\subseteq\Theta _{0}^{\delta}\right]=\Pr_{S\sim\mathcal{D}^{m}}\left[\bigcap_{\widehat{ \vartheta}_{S}\in\widehat{\Theta}_{S}^{\delta/4}}\left\{R(\widehat{\vartheta} _{S})\leq\inf_{\theta\in\Theta}R(\theta)+\delta\right\}\right]\geq 1-\alpha\]
for large \(m\) as desired.
It is worth noting that the sample size necessary for the validity of bootstrapping depends on the "size" of \(A\)--i.e. the magnitude of the largest \(\delta\) such that \(\Theta_{0}^{\delta}\subseteq A\). If \(\delta\) (and thus \(A\)) is large, only a small sample size \(m\) is necessary for \(\varepsilon(m,\alpha)\leq\delta/2\). Inversely, a small \(\delta\) (and thus small \(A\)) requires a larger sample size for validity to hold. One might thus think that given a very large sample size, it would be reasonable to bootstrap on finite sets \(A\); however, if \(A\) is too small then no \(\Theta_{0}^{\delta}\) will be a subset of \(A\) for any \(\delta>0\), possibly hampering validity at every sample size.
The primary consequence of Theorem 5 is that \(1-\operatorname{pl}_{\varepsilon(m,\cdot)}^{\text{boot},S}(A)\) acts as an asymptotic \(p\)-value for the hypothesis \(H_{0}:\theta_{0}\in A\). Hence, we can assign an (asymptotic) confidence to the proposition that the risk minimizer lies in \(A\):
\[\operatorname{Conf}(A)\approx \sup\{1-\alpha\mid\operatorname{pl}_{\varepsilon(m,\alpha)}^{ \text{boot},S}(A)\geq 1-\alpha\}.\]
At first glance, it appears that assigning confidence levels via bootstrapping is unnecessary, as one can also determine a valid confidence level by using the valid confidence set:
\[\operatorname{Conf}(A)= \sup\{1-\alpha\mid\widehat{\Theta}_{S}^{\varepsilon(m,\alpha)} \supseteq A\}.\]
However, assigning confidence levels to arbitrary regions of the parameter space using \(\widehat{\Theta}_{S}^{\varepsilon}\) is clearly inefficient, as illustrated in Figure 4. Consequently, hypothesis testing for \(H_{0}:\theta_{0}\in A\) via bootstrapping is more efficient than the standard method of hypothesis testing via the inversion of the confidence set.
Figure 4: When using \(\widehat{\Theta}_{S}^{\varepsilon(m,\alpha)}\) to determine our confidence in arbitrary regions of the parameter space, both regions \(A\) and \(B\) are assigned the same confidence \(1-\alpha\). However, since \(B\subseteq A\), it is clear that we should have \(\operatorname{Conf}(B)\leq\operatorname{Conf}(A)\), as would occur when using bootstrapped plausibilities to determine our confidence in these regions.
**Theorem 6**: _Suppose that either set of hypotheses in Theorem 5 holds. Then for any \(\gamma\in(0,\alpha)\),_
\[\liminf_{m\to\infty}\Pr_{S\sim\mathcal{D}^{m}}[\widehat{\mathrm{p} \mathrm{l}}^{\mathrm{boot},S}_{\varepsilon(m,\alpha)}(A)\geq 1-\alpha-\gamma] \geq 1-\alpha-\exp\biggl{(}-\frac{6B\gamma^{2}}{4\gamma+3}\biggr{)}.\]
_In particular, we have that_
\[\liminf_{m\to\infty}\Pr_{S\sim\mathcal{D}^{m}}\left[\widehat{\mathrm{p} \mathrm{l}}^{\mathrm{boot},S}_{\varepsilon(m,\alpha-\gamma)}(A)\geq 1-\alpha \right]\geq 1-\alpha\]
_if \(B\geq(4\gamma+3)\log\bigl{(}\gamma^{-1}\bigr{)}/(6\gamma^{2})\)._
Proof.: First note that
\[\widehat{\mathrm{p}\mathrm{l}}^{\mathrm{boot},S}_{\varepsilon(m, \alpha)}\sim\frac{1}{B}\sum_{i=1}^{B}\mathrm{Bernoulli}(\mathrm{pl}^{\mathrm{ boot},S}_{\varepsilon(m,\alpha)}).\]
Hence, by Bernstein's Inequality, we have that
\[\Pr_{S\sim\mathcal{D}^{m}}[\widehat{\mathrm{p}\mathrm{l}}^{\mathrm{boot},S}_{ \varepsilon(m,\alpha)}\geq\mathrm{pl}^{\mathrm{boot},S}_{\varepsilon(m,\alpha )}-\gamma]\geq 1-\exp\biggl{(}-\frac{6B\gamma^{2}}{4\gamma+3}\biggr{)}.\]
We know from Theorem 5 that for large enough \(m\),
\[\Pr_{S\sim\mathcal{D}^{m}}[\mathrm{pl}^{\mathrm{boot},S}_{\varepsilon(m, \alpha)}\geq 1-\alpha]\geq 1-\alpha.\]
We thus have that
\[\Pr\left[\widehat{\mathrm{p}\mathrm{l}}^{\mathrm{boot},S}_{ \varepsilon(m,\alpha)}\geq 1-\alpha-\gamma\right] \geq\Pr\left[\widehat{\mathrm{p}\mathrm{l}}^{\mathrm{boot},S}_{ \varepsilon(m,\alpha)}\geq\mathrm{pl}^{\mathrm{boot},S}_{\varepsilon(m,\alpha )}-\gamma\text{ and }\mathrm{pl}^{\mathrm{boot},S}_{\varepsilon(m,\alpha)}\geq 1- \alpha\right]\] \[\geq\Pr\left[\widehat{\mathrm{p}\mathrm{l}}^{\mathrm{boot},S}_{ \varepsilon(m,\alpha)}\geq\mathrm{pl}^{\mathrm{boot},S}_{\varepsilon(m,\alpha )}-\gamma\right]+\Pr\left[\mathrm{pl}^{\mathrm{boot},S}_{\varepsilon(m,\alpha )}\geq 1-\alpha\right]-1\] \[\geq 1-\exp\biggl{(}-\frac{6B\gamma^{2}}{4\gamma+3}\biggr{)}+1- \alpha-1\] \[=1-\alpha-\exp\biggl{(}-\frac{6B\gamma^{2}}{4\gamma+3}\biggr{)}\]
as desired. We may then substitute \(\alpha-\gamma\) for \(\alpha\) to arrive at
\[\Pr\left[\widehat{\mathrm{p}\mathrm{l}}^{\mathrm{boot},S}_{\varepsilon(m, \alpha-\gamma)}\geq 1-\alpha\right]\geq 1-\alpha+\gamma-\exp\biggl{(}-\frac{6B \gamma^{2}}{4\gamma+3}\biggr{)},\]
and the right hand side is at least \(1-\alpha\) if \(\gamma-\exp\bigl{(}-6B\gamma^{2}/(4\gamma+3)\bigr{)}\geq 0\), or equivalently if \(B\geq(4\gamma+3)\log\bigl{(}\gamma^{-1}\bigr{)}/(6\gamma^{2})\).
The practical consequence of Theorem 6 is as follows: When estimating plausibility by bootstrapping, the practitioner has to make a choice between the following:
1. The confidence set \(\widehat{\Theta}^{\varepsilon(m,\alpha)}_{S}\) keeps its usual tolerance (i.e. size). In exchange, rather than validity at level \(\alpha\) with high probability, we have validity at level \(\alpha+\gamma\) with a slightly smaller probability, where \(\gamma\) decreases and the probability increases with the number of bootstrap samples. This is a reasonable choice to make when the number of bootstrap samples that can be taken is limited (e.g. due to computational reasons). If one were to use this bootstrapping methodology to conduct hypothesis tests, one can minimize the type I error bound on the hypothesis test \(H_{0}:\theta_{0}\in A\) by selecting \(\alpha\) and \(\gamma\) appropriately for fixed \(B\).
2. We maintain validity at level \(\alpha\) with high probability; thus, \(\widehat{\mathsf{pl}}_{\varepsilon}^{\text{boot},S}\) remains an asymptotic \(p\)-value. However, our confidence set \(\widehat{\Theta}_{S}^{\varepsilon(m,\alpha-\gamma)}\), where the number of bootstrap samples necessary increases as \(\gamma\) decreases, has a slightly larger tolerance and hence is less informative about the location of the risk minimizer. This is a reasonable choice to make when one can take as many bootstrap samples as desired. For hypothesis testing, doing so allows one to fix the type I error bound at \(\alpha\) for the hypothesis test \(H_{0}:\theta_{0}\in A\); the choice of \(\gamma\) can impact the power of this test.
**Example 6**.: Recall from Example 1 the model for Bernoulli distributed data: \(\mathcal{X}=\{\varnothing\}\), \(\mathcal{Y}=\{0,1\}\), \(\mathcal{H}=\{x\mapsto\theta\mid\theta\in\{0,1\}\}\), and \(L(y,y^{\prime})=|y-y^{\prime}|\).
The only sets \(A\) to estimate beliefs and plausibilities for are \(\{0\}\), \(\{1\}\), and \(\{0,1\}\). Note that the last set is the entire parameter space \(\Theta\); since \(\operatorname{pl}(\Theta)=1\) necessarily, we focus on estimating the plausibility of the singleton sets. Recall that our theorem for validity of bootstrapping requires that \(\Theta_{0}^{\delta}\subseteq A\). Now, the definition of \(\Theta_{0}^{\delta}\) indicates that
\[\Theta_{0}^{\delta}=\{\theta\in\{0,1\}\mid(1-p)^{\theta}p^{1-\theta}\leq\min\{ p,1-p\}+\delta\}\]
and it is straightforward to check that \(0\in\Theta_{0}^{\delta}\) if and only if \(p\leq(1+\delta)/2\) and \(1\in\Theta_{0}^{\delta}\) if and only if \(p\geq(1-\delta)/2\). That is, \(\Theta_{0}^{\delta}\subseteq\{0\}\) if and only if \(p<(1-\delta)/2\), and \(\Theta_{0}^{\delta}\subseteq\{1\}\) if and only if \(p>(1+\delta)/2\); such a \(\delta>0\) exists precisely when \(p\neq 1/2\). In particular, bootstrapping the singleton sets is not necessarily valid for \(p=1/2\). This makes intuitive sense, as the risk minimizer \(\theta_{0}\) is the mode of the data-generating distribution, and when \(p=1/2\) neither singleton set can capture both modes.
Now, our theorem indicates that bootstrapping is valid when the sample size is large enough that \(\varepsilon(m,\alpha)\leq\delta\). Thus, we see that the closer \(p\) is to \(1/2\), the larger the sample size must be to ensure the validity of bootstrapping.
To illustrate, we generate data from \(\operatorname{Bernoulli}(0.499)\) at various sample sizes. We set the significance level to \(\alpha=0.05\) and also set \(\gamma=\alpha/2\), setting the number of bootstrap samples as indicated by the theorem. For each generated sample, we checked whether or not \(\widehat{\mathsf{pl}}_{\varepsilon(m,\alpha-\gamma)}^{\text{boot},S}(\{0\}) \geq 1-\alpha\); we repeated this 1000 times for every sample size and report the frequency of this event. Results are shown in Figure 5.
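A sketch of this simulation is given below. The helper name is ours, and we substitute a Hoeffding-style tolerance \(\varepsilon(m,a)=\sqrt{2\log(2/a)/m}\) for the exact binomial bound used in the calculations above, so its output will differ slightly from Figure 5.

```python
import numpy as np

def coverage_frequency(p, m, alpha, gamma, n_rep=1000, rng=None):
    """Fraction of generated samples on which the bootstrapped plausibility of {0}
    is at least 1 - alpha, following the recipe described above."""
    rng = np.random.default_rng() if rng is None else rng
    eps = np.sqrt(2 * np.log(2 / (alpha - gamma)) / m)  # assumed tolerance eps(m, alpha - gamma)
    B = int(np.ceil((4 * gamma + 3) * np.log(1 / gamma) / (6 * gamma**2)))
    hits = 0
    for _ in range(n_rep):
        y_bar = rng.binomial(m, p) / m                   # empirical mean of a generated sample
        boot_means = rng.binomial(m, y_bar, size=B) / m  # means of the B bootstrap resamples
        pl_hat = np.mean(boot_means <= (1 + eps) / 2)    # 0 is an eps-AERM iff the mean <= (1+eps)/2
        hits += pl_hat >= 1 - alpha
    return hits / n_rep

print(coverage_frequency(p=0.499, m=200_000, alpha=0.05, gamma=0.025))
```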
Note that based on the above calculations, we require a sample size of 1,551,107 to guarantee validity at level \(1-\alpha\) with probability at least \(1-\alpha\). Evidently, our sample size requirement provided by the theorem is very conservative, as we appear to attain 95% coverage nearly a full order of magnitude before 1.5 million.
It is also interesting to note that the coverage probability does not monotonically increase with the sample size \(m\). This is because increasing the sample size has two competing effects: An increase in \(m\) will decrease \(\varepsilon(m,\alpha)\) and thus decrease the probability \(\widehat{\Theta}_{S_{i}}^{\varepsilon(m,\alpha)}\) intersects with the set \(A\) of interest (consequently decreasing the plausibility), but it will also allow the sample to be more representative of the population, allowing the estimated plausibilities to be more likely to attain the correct coverage of at least \(1-\alpha\).
**Example 7**.: Consider a regularized neural network with 10 nodes in the only hidden layer using a sigmoid activation function, whose hypothesis class is given by
\[\mathcal{H}=\left\{\boldsymbol{x}\mapsto\sigma\!\left(\sum_{i=1}^{10}w_{i}\,\sigma(\boldsymbol{U}_{i}^{\top}\boldsymbol{x})\right)\,:\,\left\|\boldsymbol{U}_{i}\right\|_{2}\leq M,\left\|\boldsymbol{w}\right\|_{1}\leq\lambda\right\}\]
and is trained under the L1 loss function. A uniform convergence tolerance \(\varepsilon(m,\alpha)\) for this model can be computed via the Rademacher complexity (Mohri, Rostamizadeh and Talwalkar, 2018, see Theorem 3.3 and Exercise 3.11) and is given as
\[\varepsilon(m,\alpha)=4\widehat{\mathfrak{R}}_{S}(\mathcal{H})+6\sqrt{\frac{ \log(4/\alpha)}{2m}},\]
where \(\widehat{\mathfrak{R}}_{S}\) is the empirical Rademacher complexity, upper bounded by
\[\widehat{\mathfrak{R}}_{S}(\mathcal{H})\leq\frac{2M\cdot\lambda}{m}\sqrt{\sum_{i =1}^{m}\|x_{i}\|_{2}^{2}}.\]
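For instance, a small helper (our own naming; it simply evaluates the two displayed expressions) gives the tolerance implied by a particular design matrix and constants \(M\) and \(\lambda\):

```python
import numpy as np

def rademacher_tolerance(X, M, lam, alpha):
    """epsilon(m, alpha) from the bound above: four times the empirical Rademacher
    complexity bound plus the 6 * sqrt(log(4/alpha) / (2m)) deviation term."""
    m = X.shape[0]
    rad_bound = 2 * M * lam / m * np.sqrt(np.sum(np.linalg.norm(X, axis=1) ** 2))
    return 4 * rad_bound + 6 * np.sqrt(np.log(4 / alpha) / (2 * m))
```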
Let us suppose we are interested in the regularization parameter \(\lambda\) for the weight vector \(\boldsymbol{w}\), where we limit \(\lambda\in[0,5]\). We may conduct a hypothesis test for the optimal regularization parameter \(\lambda_{0}\) such as \(H_{0}:\lambda_{0}\leq 0.25\). Suppose, furthermore, that the practitioner limits themselves to at most \(B=50\) bootstrap resamples. Then, according to Theorem 6, the practitioner can reject \(H_{0}\) at level \(\alpha+\exp\bigl{(}-6B\gamma^{2}/(4\gamma+3)\bigr{)}\) by rejecting if \(\widehat{\mathrm{p}}_{\varepsilon(m,\alpha)}^{\mathrm{boot},S}<1-\alpha-\gamma\) for some well-chosen \(\alpha\) and \(\gamma\) (supposing that the sample size is large enough). Optimization shows that the type I error rate of this test cannot be guaranteed to be less than roughly \(0.242\) given the limit on the number of bootstrap iterations.
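A quick numerical check of that figure is sketched below (our own code, assuming the bound \(\alpha+\exp(-6B\gamma^{2}/(4\gamma+3))\) is minimized by taking \(\alpha\) arbitrarily close to \(\gamma\), since Theorem 6 requires \(\gamma<\alpha\)):

```python
import numpy as np

def type1_bound(gamma, B):
    """alpha + exp(-6 B gamma^2 / (4 gamma + 3)) with alpha taken arbitrarily close to gamma."""
    return gamma + np.exp(-6 * B * gamma**2 / (4 * gamma + 3))

B = 50
grid = np.linspace(1e-3, 0.5, 5000)
bounds = type1_bound(grid, B)
print(round(bounds.min(), 3), grid[bounds.argmin()])  # roughly 0.242, attained a little above gamma = 0.2

# Conversely, to keep the level at alpha with margin gamma, Theorem 6 asks for
# B >= (4 gamma + 3) log(1/gamma) / (6 gamma^2) bootstrap resamples:
gamma = 0.025
print(int(np.ceil((4 * gamma + 3) * np.log(1 / gamma) / (6 * gamma**2))))
```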
To illustrate the hypothesis testing process, we can train this neural network to classify handwritten digits from the MNIST data set (LeCun, Cortes and Burges, 1998) as either even or odd. We test the aforementioned hypothesis at the best possible type I error rate, and find that for the MNIST data, \(\widehat{\mathrm{p}\mathrm{l}}_{\varepsilon(m,\alpha)}^{\mathrm{boot},S}=0.400\). The critical value was \(1-\alpha-\gamma\approx 0.585\), so we reject \(H_{0}\) and (correctly) conclude that there is strong evidence (at significance level \(0.242\)) that the optimal regularization parameter exceeds \(0.25\).
## 7 Comparison to Generalized Inferential Models
An alternative approach to exploring notions of validity for ML, also based on imprecise probability, is given by generalized inferential models.
The inferential models (IM) framework (Martin and Liu, 2013) is an approach to statistical inference that gives uncertainty quantification on unknown quantities \(\theta_{0}\). This is done by assuming a data-generating mechanism \(X=a(U,\theta_{0})\), where the function \(a\) and the distribution of the random variable \(U\) are known, then creating a random set \(\widehat{\Theta}_{S,E}\) for \(\theta_{0}\) that depends on the observed sample \(S\) as well as a random set \(E\) intended to predict \(U\). The distribution of the predictive random set is then described by the induced belief and plausibility functions:
\[\mathrm{bel}_{S}(A)=\Pr_{E}[\widehat{\Theta}_{S,E}\subseteq A\mid\widehat{ \Theta}_{S,E}\neq\varnothing]\]
\[\operatorname{pl}_{S}(A)=\Pr_{E}[\widehat{\Theta}_{S,E}\cap A\neq\varnothing\mid \widehat{\Theta}_{S,E}\neq\varnothing].\]
For well-chosen predictive random sets, the induced belief and plausibility functions have a desirable validity property: We say that an IM with belief/plausibility functions \(\operatorname{bel}_{S}\) and \(\operatorname{pl}_{S}\) is valid if for every measurable \(A\subseteq\Theta\) and every \(\alpha\in(0,1)\),
\[\sup_{\theta\in A}\Pr_{S|\theta}[\operatorname{pl}_{S}(A)\leq\alpha]\leq\alpha.\]
Given a valid IM, the plausibility can be treated as a \(p\)-value--to test \(H_{0}:\theta_{0}\in A\) against \(H_{1}:\theta_{0}\not\in A\), we can reject if and only if \(\operatorname{pl}_{S}(A)\leq\alpha\), and this test would have type I error rate at most \(\alpha\).
The IM framework was recently extended to generalized IMs, or GIMs (Cella and Martin, 2022), which do not need to make an assumption on the data generating mechanism and hence no longer rely on a predictive random set for an auxiliary random variable. In this framework, an i.i.d. sample is generated from an unknown distribution that has some feature \(\theta_{0}\) of interest. Then given a function \(T_{S}\) which measures how well a given \(\theta\) aligns with the observed sample \(S\), the GIM gives an _upper probability_ for the assertion \(\theta_{0}\in A\) as
\[\operatorname{pl}_{S}(A)=\sup_{\theta\in A}\{1-G(T_{S}(\theta))\},\]
where \(G\) is the cumulative distribution function for \(T_{S}(\theta_{0})\). Cella and Martin (2022) then shows that this is valid in the sense that if \(\theta_{0}\in A\), then
\[\Pr[\operatorname{pl}_{S}(A)\leq\alpha]\leq\alpha.\]
Because the distribution function \(G\) is unknown, these plausibilities are not calculated directly, but rather through bootstrapping. In particular, Cella and Martin (2022) shows that if the consistency condition
\[\sup_{t\in\mathbb{R}}|G(t)-G^{\text{boot}}(t)|\stackrel{{ p}}{{\longrightarrow}}0\]
holds as the sample size \(m\) goes to infinity (where \(G^{\text{boot}}\) is the empirical cumulative distribution function of the bootstrapped values of \(T_{S}(\widehat{\theta})\)), then
\[\limsup_{m\to\infty}\Pr\bigl{[}\operatorname{pl}_{S}^{\text{boot}}(A)\leq \alpha\bigr{]}\leq\alpha\]
if \(\theta_{0}\in A\). Thus, bootstrapped plausibilities from GIMs yield hypothesis tests for \(H_{0}:\theta_{0}\in A\) with an asymptotically correct type I error rate.
In the context of machine learning, Cella and Martin (2022) suggests that the natural choice for \(T_{S}\) is given by \(T_{S}(\theta)=\widehat{R}_{S}(\theta)-\inf_{\vartheta}\widehat{R}_{S}(\vartheta)\). The bootstrapping approach then allows us to construct asymptotically correct confidence regions for the risk minimizer \(\theta_{0}\) so long as the consistency condition is satisfied.
The GIM framework using the suggested \(T_{S}\) function is clearly an alternative approach to exploring the validity of ML models. However, there are key differences between our approach and that of GIMs. Firstly, we note that \(\varepsilon\)-plausibility in ML models is not random, whereas GIM plausibilities are. This yields different definitions for validity: GIMs require plausibilities of "true" statements to be small with low probability, whereas we simply require that the plausibility be large. Furthermore, although both our approach and GIMs use bootstrapping to calculate plausibilities in practice to achieve prespecified type I error guarantees, GIMs only provide asymptotic validity, whereas we are able to arrive at validity at both finite sample sizes and at finite numbers of bootstrap resamples. Finally, the confidence sets for \(\theta_{0}\) in GIMs are obtained from inverting the bootstrap test, and so are only approximately valid for large sample sizes; our sets of \(\varepsilon\)-AERMs are valid for \(\theta_{0}\) at all sample sizes and can be computed without bootstrapping.
## 8 Concluding Remarks and Future Work
We have seen that for machine learning models with the uniform convergence property, it is possible to construct valid confidence sets for the model's risk minimizer \(\theta_{0}\) via the set of \(\varepsilon\)-AERMs. If the data-generating distribution is sufficiently known so that the distribution of the valid confidence set can be calculated, one can often determine whether or not \(\theta_{0}\in A\) by calculating the plausibility of the set \(A\). When the data-generating distribution is completely unknown, one may still use bootstrapping in order to efficiently test hypotheses \(H_{0}:\theta_{0}\in A\) at a given significance level \(\alpha\) if the strong uniform convergence property holds.
In future work, we plan to study to what extent this theory still applies to non-uniformly-learnable ML models. We also plan to investigate the statistical power of the hypothesis tests that use valid plausibilities so that we may determine how best to choose sample sizes and significance levels to attain a desired level of power. Additionally, our examples illustrate that these confidence sets and hypothesis tests can be overly-conservative; tighter bounds on uniform convergence functions for common machine learning models and data-generating scenarios must be further investigated in order to mitigate this phenomenon.
|
2310.01584 | Wine feature importance and quality prediction: A comparative study of
machine learning algorithms with unbalanced data | Classifying wine as "good" is a challenging task due to the absence of a
clear criterion. Nevertheless, an accurate prediction of wine quality can be
valuable in the certification phase. Previously, wine quality was evaluated
solely by human experts, but with the advent of machine learning this
evaluation process can now be automated, thereby reducing the time and effort
required from experts. The feature selection process can be utilized to examine
the impact of analytical tests on wine quality. If it is established that
specific input variables have a significant effect on predicting wine quality,
this information can be employed to enhance the production process. We studied
the feature importance, which allowed us to explore various factors that affect
the quality of the wine. The feature importance analysis suggests that alcohol
significantly impacts wine quality. Furthermore, several machine learning
models are compared, including Random Forest (RF), Support Vector Machine
(SVM), Gradient Boosting (GB), K-Nearest Neighbors (KNN), and Decision Tree
(DT). The analysis revealed that SVM excelled above all other models with a
96\% accuracy rate. | Siphendulwe Zaza, Marcellin Atemkeng, Sisipho Hamlomo | 2023-10-02T19:26:37Z | http://arxiv.org/abs/2310.01584v1 | # Wine feature importance and quality prediction:
###### Abstract
Classifying wine as "good" is a challenging task due to the absence of a clear criterion. Nevertheless, an accurate prediction of wine quality can be valuable in the certification phase. Previously, wine quality was evaluated solely by human experts, but with the advent of machine learning this evaluation process can now be automated, thereby reducing the time and effort required from experts. The feature selection process can be utilized to examine the impact of analytical tests on wine quality. If it is established that specific input variables have a significant effect on predicting wine quality, this information can be employed to enhance the production process. We studied the feature importance, which allowed us to explore various factors that affect the quality of the wine. The feature importance analysis suggests that alcohol significantly impacts wine quality. Furthermore, several machine learning models are compared, including Random Forest (RF), Support Vector Machine (SVM), Gradient Boosting (GB), K-Nearest Neighbors (KNN), and Decision Tree (DT). The analysis revealed that SVM excelled above all other models with a 96% accuracy rate.
Keywords:Random Forest Support Vector Machine Gradient Boosting K-Nearest Neighbors Decision Tree Feature selection Wine
## 1 Introduction
The quality of wine is very important for both consumers and the wine industry; it is therefore imperative to assess wine quality before manufacturing or consumption. However, relying on human expert tasting to measure wine quality is a time-consuming and subjective process, making accurate predictions difficult. According to [1], wine tasting by human experts can also put them at health risk, as they are exposed to a range of chemicals and other substances that may be harmful to their health. For example, the inhalation of volatile organic compounds (VOCs) such as ethanol, acetaldehyde, and ethyl acetate during wine tasting has been linked to a range of health issues, including headaches, coughs, and
respiratory problems [2, 3]. With the aid of machine learning algorithms, it is now possible to analyze the physiochemical properties of wine, which can be used to predict its quality. The aim of this paper is to use the chemical and physical properties of wine to predict its quality and to determine which features are more important for predicting good wine. We use the following algorithms: Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Gradient Boosting (GB). These models are used due to the nature of the wine data we used to run the experiment. The data is of small samples, and it is also imbalanced. Shallow machine learning models have shown the potential to outperform deep learning models on small datasets. For example, [4] and [5] used some of the above-mentioned shallow machine learning models on small datasets, and these algorithms have shown exceptional performance in addressing the challenges of small sample sizes and imbalanced data.
The contribution of this work is as follows: we train five models and compare their performance on an unbalanced dataset, then apply sampling methods to balance the dataset and retrain the models. Sampling improved the accuracy of all models, with the SVM rising from 78% without sampling to 96% with sampling, thereby outperforming the other models.
## 2 Related Work
[6] employed a range of machine learning techniques, using linear regression to identify important features and SVM and neural networks for prediction. They concluded that not all features are important for predicting wine quality, so one can select only the features most likely to be useful. They analyzed both the white wine and red wine datasets, which differs slightly from our work: we focus only on the red wine dataset. Our findings on the red wine dataset align with the results in [6] for predicting wine quality.
[7] employed four machine learning techniques, namely RF, stochastic gradient descent, SVM, and logistic regression, to forecast wine quality. Of the four, RF outperformed the other methods with an accuracy of 88%. That work used the red wine dataset [5], divided into two classes, good wine and bad wine. Our research is similar, but we extend the problem to three classes. We found that SVM was the best-performing model for predicting wine quality, with an accuracy of 96% compared to the 88% achieved by RF in [7]. In [4], naive Bayes, DT, SVM, and RF are used to predict wine quality. Their analysis shows that quality increases when residual sugar is minimal and does not change significantly beyond that, suggesting that this feature is not
as important as others such as alcohol and citric acid. We also observed that our machine learning models produced acceptable results when residual sugar was excluded, which suggests that residual sugar is not an important feature for predicting wine quality.
## 3 Data description and preprocessing
### Data description
The red wine dataset utilized in this study is sourced from the UCI machine learning repository [8]. The dataset comprises 1599 instances of red wine, whose quality is assessed through 11 input variables: fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulphates, and alcohol. The output variable, quality, is based on these input parameters and is rated on a scale of 0 to 10, with 0 representing poor wine and 10 signifying excellent wine. Table 1 presents the statistical summary of the red wine dataset employed in this paper.
### Data Pre-processing
We use label encoding, a process that converts the labels into a machine-readable form, to categorize the data into bad, normal, or good categories. We label wine with a quality score below 5 as bad, wine with a quality score of 5 or 6 as normal, and wine with a quality score between 7 and 10 as good, as shown in the flowchart in Figure 1. As part of data pre-processing, we also excluded duplicate entries and data points with missing values from the dataset to maintain the integrity of the analysis.
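The following is a minimal sketch of this pre-processing step in Python with pandas, assuming the semicolon-separated UCI red wine CSV; the file name and the helper function `encode_quality` are illustrative rather than part of the original pipeline.

```python
import pandas as pd

# Load the UCI red wine data (assumed to be the semicolon-separated CSV).
df = pd.read_csv("winequality-red.csv", sep=";")
df = df.drop_duplicates().dropna()  # remove duplicates and missing values

def encode_quality(q):
    # bad: quality < 5, normal: 5-6, good: 7-10
    if q < 5:
        return 0   # bad
    elif q <= 6:
        return 1   # normal
    return 2       # good

df["label"] = df["quality"].apply(encode_quality)
print(df["label"].value_counts())
```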
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Variable Name** & **Mean** & **Sd** & **Min** & **Max** & **Median** \\ \hline \hline
Fixed acidity & 8.31 & 1.73 & 4.60 & 15.90 & 7.90 \\
Volatile acidity & 0.52 & 0.18 & 0.12 & 1.58 & 0.52 \\
Citric acid & 0.27 & 0.19 & 0.00 & 1.00 & 0.26 \\
Residual sugar & 2.52 & 1.35 & 0.90 & 15.50 & 2.20 \\
Chlorides & 0.08 & 0.04 & 0.01 & 0.61 & 0.07 \\
Free sulfur dioxide & 15.89 & 10.44 & 1.00 & 72.00 & 14.00 \\
Total sulfur dioxide & 46.82 & 33.40 & 6.00 & 289.00 & 38.00 \\
Density & 0.99 & 0.001 & 0.99 & 1.00 & 0.99 \\
pH & 3.30 & 0.15 & 2.74 & 4.01 & 3.31 \\
Sulphates & 0.65 & 0.17 & 0.33 & 2.00 & 0.62 \\
Alcohol & 10.43 & 1.08 & 8.40 & 14.90 & 10.20 \\
Quality & 5.62 & 0.82 & 3.00 & 8.00 & 8.00 \\ \hline
\end{tabular}
\end{table}
Table 1: Statistics for the red wine dataset
### Data analysis
The correlation matrix provides values within the range of \((-1,1)\), which describe the relationships between variables. A value of 1 indicates a strong positive linear correlation between variables, whereas -1 indicates a strong negative linear correlation. A value of 0 indicates no linear relationship between the variables. This allows us to quickly understand the interconnections between the variables in our analysis. By examining the matrix, we can easily identify which features have a high correlation with quality and are likely to be significant contributors to the machine learning models.
In Figure 2, we can see a correlation matrix showing a visual representation of the relationship between several variables, including "quality vs. alcohol," "volatile acidity vs. alcohol", "density vs. alcohol", and "sulphates vs. alcohol". Although the primary objective of this study is to identify features that are most indicative of good wine quality, it is evident from Figure 2 that certain features such as alcohol, volatile acidity, and chlorides, exhibit the highest correlations with quality. This suggests that these variables have the most significant impact on predicting the quality of the wine.
The feature selection process aims to reduce the number of input variables in a machine learning model by identifying and retaining only the relevant data. This can be achieved by choosing the features that are likely to be useful in finding a solution to the problem, thereby reducing noise in the data and enhancing the performance of the model [9]. One of the objectives of this study is to look into the relationship between various features through the use of Pearson's correlation coefficient to quantify the associations between the different features.
In Table 2, features are ranked according to their correlation values. According to [10], for a pair of random variables \((X,Y)\),
Figure 1: Label encoding
where \(X\) and \(Y\) are features, the Pearson correlation coefficient \(\rho\) is given by
\[\rho_{x,y}=\frac{cov(X,Y)}{\sigma_{X}\sigma_{Y}}, \tag{1}\]
where \(cov\) is the covariance, \(\sigma_{X}\) is the standard deviation of feature \(X\) and \(\sigma_{Y}\) is the standard deviation of feature \(Y\).
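As an illustration, a feature ranking like the one in Table 2 can be produced with a short pandas sketch such as the one below; it assumes the dataframe `df` from the pre-processing sketch above.

```python
# Rank features by their Pearson correlation with the quality score,
# assuming `df` holds the red wine data loaded earlier.
correlations = (
    df.drop(columns=["quality", "label"], errors="ignore")
      .corrwith(df["quality"], method="pearson")
)
ranking = correlations.reindex(correlations.abs().sort_values(ascending=False).index)
print(ranking)  # alcohol and volatile acidity should rank near the top
```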
Table 2 presents the selected features, out of which 10 were chosen for further analysis. Following the principle of selecting essential features for improved model performance, as suggested by [6], we excluded "residual sugar" because our machine learning models consistently performed better without it. This decision was supported by data indicating that residual sugar has a relatively minor impact on wine quality compared to other variables. Figure 3 (shown below) visually illustrates the relationship between quality and residual sugar: quality tends to increase when residual sugar is minimal and remains relatively unchanged beyond a certain point. This finding suggests that residual sugar is not as crucial as variables such as alcohol in determining wine quality. Figure 4 depicts quality against alcohol; we can clearly see that alcohol contributes strongly to wine quality, as alcohol content rises with increasing quality. The analysis revealed that the models performed better with the selected features than with all the features.
Data standardization is a process that involves transforming data into a standardized form that will ensure that its distribution has a standard deviation of 1 and a mean of 0. The process of data standardization is essential as it helps
Figure 2: Red wine correlation matrix
in equalizing the ranges of the features [4], allowing for a fairer comparison between them. For instance, as shown in Table 1, the total sulfur dioxide readings are notably greater than the chlorides. When we train machine learning models, a variable with exceptionally large values can mask all the others, causing bias. Hence we need to standardize our data.
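A minimal standardization sketch with scikit-learn's `StandardScaler` follows; the train/test split, the excluded column, and the random seed are illustrative choices rather than the paper's exact settings.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Illustrative feature selection: drop 'residual sugar' as discussed above.
X = df.drop(columns=["quality", "label", "residual sugar"])
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Fit the scaler on the training data only to avoid information leakage.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```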
## 4 Classification Methods
### Support Vector Machine
SVM is one of the most well-known supervised learning algorithms that maximizes the margin. The goal of a support vector machine is to find a hyper
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Rank** & **Name** & **Correlation** \\ \hline \hline
1 & alcohol & 48\% \\
2 & volatile acidity & -40\% \\
3 & sulphates & 25\% \\
4 & citric acid & 23\% \\
5 & total sulfur dioxide & -18\% \\
6 & density & -18\% \\
7 & chlorides & -13\% \\
8 & fixed acidity & 12\% \\
9 & pH & -6\% \\
10 & free sulfur dioxide & -5\% \\
11 & residual sugar & 1\% \\ \hline \end{tabular}
\end{table}
Table 2: Correlation with Quality
Figure 3: Residual sugar versus quality
plane that can efficiently separate the various classes of data points within a high-dimensional space, enabling us to classify new data points quickly [11]. A hyperplane is the optimal decision boundary. The SVM algorithm takes into account the extreme points (support vectors) that help in creating the hyperplane, and it is used for both linear (separable case) and non-linear (non-separable case) data. Let \(D=\left\{(x_{i},y_{i})\right\}_{i=1}^{N}\), where \((x_{i},y_{i})\) represents an individual data point and its corresponding label, and \(D\in\mathbb{R}^{m\times n}\) be a training set with \(m\) rows and \(n\) columns. Here \(x_{i}\in\mathbb{R}^{n}\) and \(y_{i}\in\left\{0,1,2\right\}\), indicating a multi-class classification with \(0\) as bad quality wine, \(1\) as normal wine, and \(2\) as good quality wine. We construct a function to classify the quality of the wine based on its features \(x_{i}\).
\[f:\mathbb{R}^{n}\rightarrow\mathbb{R}\] \[x_{i}\mapsto f(x_{i})=\begin{cases}0,\text{ if wine is bad quality, or}\\ 1,\text{ if wine is normal quality, or}\\ 2,\text{ if wine is good quality}\end{cases}\]
#### 4.1.1 Linear SVM (separable case)
According to [11], we first assume that the training data are linearly separable and that there is a hyperplane that separates the data without error. In this case, we look for the maximum margin hyperplane:
\[f(x)=\left\langle w,x\right\rangle+b=w^{T}x+b. \tag{2}\]
where \(\left\langle\cdot,\cdot\right\rangle\) and \(w^{T}\) are the inner product and the transpose of the vector
Figure 4: Alcohol versus quality
respectively. If \(x_{s}\) is a support vector and \(H=\big{\{}x|w^{T}x+b=0\big{\}}\), then the margin is given by:
\[\begin{split}\text{Margin}&=2\,d\big(x_{s},H\big)\\ &=\frac{2\,|w^{T}x_{s}+b|}{\|w\|},\end{split} \tag{3}\]
where \(w\) is a normal vector called weight, \(x\) is the input vector and \(b\) is a bias. The parameter \(w\) and \(b\) are not unique, and \(kw\) and \(kb\) give the same area of separation:
\[\begin{split} kw^{T}x+kb&=k\big{(}w^{T}x+b\big{)} \\ &=0.\end{split} \tag{4}\]
We then impose the normalization condition \(\big{|}w^{T}x_{s}+b\big{|}=1\) for the \(x_{s}\) support vectors, which leads to:
\[\text{Margin}=\frac{2}{\|w\|}. \tag{5}\]
In order to maximize the margin, we thus need to minimize \(||w||\). Recall the normalization conditions: \(wx_{i}+b=1\) if \(x_{i}\) is a support vector of class \(+1\) and \(wx_{i}+b=-1\) if \(x_{i}\) is a support vector of class \(-1\):
\[\begin{cases}\text{if }y_{i}=1\text{ then }wx_{i}+b\geq 1\text{ and thus }y_{i}(wx_{i}+b)\geq 1\\ \text{if }y_{i}=-1\text{ then }wx_{i}+b\leq-1\text{ and thus }y_{i}(wx_{i}+b)\geq 1 \end{cases}\]
We now must solve a quadratic programming optimization problem (called the primal problem):

\[\begin{cases}\min_{w,b}\frac{1}{2}||w||^{2}\\ \text{subject to }y_{i}(wx_{i}+b)\geq 1,\ i=1,\cdots,n.\end{cases}\]
To solve this constrained optimization problem, we combine the objective and the constraints into a single Lagrangian function by introducing Lagrange multipliers, denoted \(\alpha_{i}\geq 0\), and requiring the derivatives of the function with respect to \(w\) and \(b\) to be zero. According to [11] the Lagrangian is given by:

\[L(w,b,\alpha)=\frac{1}{2}||w||^{2}-\sum_{i=1}^{n}\alpha_{i}\big[y_{i}\big(w^{T}x_{i}+b\big)-1\big], \tag{6}\]

where \(\alpha_{i}\) represents the Lagrange multiplier associated with the \(i\)-th constraint.
#### 4.1.2 Linear SVM (Non-separable case)
In most real-world data, a hyperplane cannot completely separate the two classes, so we accept some observations in the training data on the incorrect side of the margin or hyperplane. The primal optimization problem of the soft margin is:

\[\begin{cases}\min_{w,b,\xi}\left(\frac{1}{2}||w||^{2}+C\sum_{i=1}^{n}\xi_{i}\right)\\ y_{i}(wx_{i}+b)\geq 1-\xi_{i}\text{ and }\xi_{i}\geq 0,\ i=1,\cdots,n\end{cases}\]
where \(\xi_{i}\) is the slack variable that allows misclassification; the penalty term \(\sum_{i=1}^{n}\xi_{i}\) measures the total amount of constraint violation in the model and \(C\) is a penalty parameter for misclassified points [12]. Using the same Lagrangian approach as in the separable case, we obtain the dual problem:
\[\begin{cases}\max_{\alpha}\left(\sum_{i=1}^{n}\alpha_{i}-\frac{1}{2}\sum_{i,j=1}^{n}\alpha_{i}\alpha_{j}y_{i}y_{j}(x_{i}x_{j})\right)\\ \sum_{i=1}^{n}\alpha_{i}y_{i}=0\\ C\geq\alpha_{i}\geq 0,\,i=1,\cdots,n.\end{cases}\]
The classification of a new observation \(x\) is determined by the decision function:
\[f(x)=\sum_{i=1}^{n}\alpha_{i}y_{i}(x_{i}x)+b. \tag{7}\]
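For concreteness, a soft-margin SVM of this kind can be trained with scikit-learn's `SVC`, as sketched below; the kernel and penalty value are illustrative defaults, not the tuned settings reported later, and the sketch reuses the standardized split from Section 3.

```python
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Soft-margin SVM with an RBF kernel; C controls the misclassification penalty.
svm_clf = SVC(kernel="rbf", C=1.0, gamma="scale")
svm_clf.fit(X_train, y_train)

y_pred = svm_clf.predict(X_test)
print("SVM accuracy:", accuracy_score(y_test, y_pred))
```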
### Decision Tree
A decision tree is a machine learning model that maps the inputs of a training set to their outputs by recursively splitting the data according to a set of decision rules. The two entities that comprise a decision tree are the leaves and the decision nodes [13].
Selecting the right attribute for a tree's root node is a major challenge, which is why it is important to consider the various attribute-selection criteria that are available. Two commonly used criteria are the Gini index and entropy (the basis of information gain). Let \(S\) be a sample and \(S_{1},\cdots,S_{k}\) the partition of \(S\) according to the classes of the target attribute. The Gini index, denoted \(Gini(S)\), and the entropy, denoted \(Ent(S)\), are defined by [14] as
\[Gini(S)=\sum_{i=1}^{k}\frac{|S_{i}|}{|S|}\times\left(1-\frac{|S_{i}|}{|S|}\right)=\sum_{i\neq j}\frac{|S_{i}||S_{j}|}{|S|^{2}}, \tag{8}\]
and the entropy as:
\[Ent(S)=-\sum_{i=1}^{k}\frac{|S_{i}|}{|S|}\times\log\left(\frac{|S_{i}|}{|S|}\right), \tag{9}\]
where \(|S_{i}|\) is the cardinality of the set \(S_{i}\) and \(|S|\) is the cardinality of the sample \(S\). The indices \(i\) and \(j\) run over the classes of the target attribute (with \(j\) denoting a class different from \(i\) in the Gini formula), and \(k\) is the total number of classes.
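The two impurity measures in Eqs. (8) and (9) translate directly into code; the sketch below is a straightforward numpy implementation, with the natural logarithm assumed for the entropy.

```python
import numpy as np

def gini(labels):
    # Gini impurity of a set of class labels, Eq. (8).
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p * (1.0 - p)))

def entropy(labels):
    # Entropy of a set of class labels, Eq. (9), using the natural log.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

print(gini(y_train), entropy(y_train))
```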
### Random Forest
RF is a widely used supervised learning algorithm that can be utilized for both regression and classification problems. It is based on the idea of ensemble learning, in which multiple classifiers are combined to solve a given problem and to enhance the model's performance [13]. Figure 5 demonstrates how a random forest predicts the quality of the wine.
The RF classifier combines the power of numerous decision trees. It creates several decision trees from bootstrapped datasets and randomly chooses a subset of the variables at each split. Figure 5 shows how RF works: it aggregates the predicted outcomes from all the decision trees and selects the mode of their predictions. This approach makes the model more accurate and reliable, minimizing the risk that an error made by a single tree dominates the outcome. By adopting a "majority wins" approach, RF ensures that the ultimate prediction is derived from a collective agreement among the decision trees, instead of relying solely on the outcome of an individual tree.
Figure 5: Random Forest(adapted from [15])
### Gradient Boosting
A Gradient Boosting Machine creates a strong learner by combining weak learners into a single model. It can be used for classification and regression tasks. Although it is mainly used with tree-based models, it can also be applied to other types of weak learners [16].
The fundamental concept behind GB involves incorporating new models into the ensemble, with each new model focusing on the examples that were incorrectly classified by the previous models. In order to focus on these difficult examples, GB fits each new model to the negative gradient of the loss function with respect to the current ensemble model [17]. The GB method can be used in various applications such as regression, ranking problems, and classification.
### K-Nearest Neighbours
The KNN classifier is a machine learning algorithm used for classification and regression tasks that works on the premise that similar objects are usually located near each other [12]. In order for KNN to find the neighbours of a query point, we need to calculate the distance between the query point and the other data points. These distance measures help in the formation of decision boundaries, which divide query points into distinct areas. One of the main drawbacks of the KNN algorithm is that it may be biased towards the majority class in datasets that are imbalanced, meaning that there are significantly more instances in one class than in another [18]. This is because KNN classifies query points by finding the \(k\) nearest neighbours in the training set, and if the majority class dominates the neighbourhood of the test instance, it is likely to be classified as the majority class.
Let's say we have a dataset with \(X\) representing a matrix that contains the observed features and \(Y\) representing the class label. Let's assume we have a point \(x\) with coordinates \((x_{1},x_{2},\cdots,x_{p})\) and a point \(y\) with coordinates \((y_{1},y_{2},\cdots,y_{p})\) [12]. The KNN algorithm is used in this study because it categorizes new cases based on the Euclidean distance between the training data and the test observation. In KNN, the optimal choice is determined by identifying the set of training data points that are closest to the given test observation in terms of Euclidean distance [19]:
\[d(x_{i},x_{t})=\sqrt{\sum_{j=1}^{d}\bigl(x_{ij}-x_{tj}\bigr)^{2}}=\|x_{i}-x_{t}\|, \tag{10}\]
where \(x_{i}\) represent the training data and \(x_{t}\) represent the test observation.
Majority voting is the process of selecting the class that has the highest number of votes among the k-nearest neighbours in the K-nearest neighbours (KNN) algorithm. Majority voting is defined as follows according to [18]:
\[\hat{f}(x_{t})=\operatorname*{argmax}_{c\in\{c_{1},c_{2},c_{3}\}}\,\sum_{(x_{ i},y_{i})\in N_{k}(x_{t})}I(y_{i}=c), \tag{11}\]
where \(x_{t}\) represent the test observation, \(\hat{f}(x_{t})\) represent a forecasted class label, \(N_{k}(x_{t})\) represent a set of training instances and I(\(\cdot\)) represent an indicator function that takes a value as input and returns either 0 or 1 based on whether the input satisfies a certain condition [18].
Figure 6 shows the KNN classifier with k=3 and k=7. We need to predict whether the new observation (red circle) belongs to the class of Bad wine, Normal wine, or Good wine. If we choose k=3 (the small dotted circle), then we have one observation in class Bad wine, one in class Normal wine, and one in class Good wine. From this we have Pr(Bad wine)=\(\frac{1}{3}\), Pr(Normal wine)=\(\frac{1}{3}\), and Pr(Good wine)=\(\frac{1}{3}\), respectively. We clearly have a tie, with one observation in each class. Since the number of neighbours in class Bad wine, class Normal wine, and class Good wine is the same, we cannot determine the class of the new data point based on the number of neighbours alone. According to [20], we can use different tie-breaking techniques to determine the class in case of a tie; one common method is to choose the class that has the shortest average distance to the new data point. If we choose k=7 (the large dotted circle), then we have two observations in class Bad wine, three in class Normal wine, and two in class Good wine. From this we have Pr(Bad wine)=\(\frac{2}{7}\), Pr(Normal wine)=\(\frac{3}{7}\), and Pr(Good wine)=\(\frac{2}{7}\), so the test observation belongs to class Normal wine, since it has the highest probability (majority voting). The value of \(k\) determines the performance of the classifier. However, selecting the correct value of \(k\) can be very challenging, because it can have a huge impact on the accuracy of the predictive
Figure 6: KNN with different k-values (adapted from [12])
model [21].
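Because the choice of \(k\) strongly affects accuracy, a common practice is to compare several candidate values with cross-validation; the sketch below is illustrative and assumes the standardized training data from Section 3.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Compare a few candidate k values with 5-fold cross-validation.
for k in [3, 5, 7, 9, 11]:
    knn = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(knn, X_train, y_train, cv=5)
    print(f"k={k}: mean CV accuracy = {scores.mean():.3f}")
```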
## 5 Experimental Settings
### Unbalanced Data
Figure 7 shows the distribution of the red wine quality classes: the most frequent quality value is 5, and the class values range from 3 to 8.
The dataset shows an unbalanced distribution, with some classes not fairly represented: the number of instances per class ranges from 10 in the minority class to 577 in the majority class. As suggested by [22], sampling techniques such as undersampling, oversampling, and SMOTE are used to handle unbalanced datasets; these are further discussed in Section 5.2.
### Sampling Techniques
#### 5.2.1 Undersampling and Oversampling
The oversampling method is an intuitive technique that increases the size of a minority class by creating duplicates of samples taken from the under-represented group. Undersampling on the other hand ensures that all of the data from the minority segment are kept and reduces the size of the majority segment to be
Figure 7: Distribution of red wine quality
the same as the minority segment. Undersampling is usually considered disadvantageous, as it eliminates potentially useful data; oversampling, on the other hand, is more likely to cause overfitting since it duplicates existing examples [23].
#### 5.2.2 Synthetic Minority Oversampling Technique
According to [22], the SMOTE filter proves to be a valuable approach in addressing imbalanced wine datasets. SMOTE employs a k-nearest-neighbour method to create synthetic data points: for each minority-class sample, it finds the K nearest neighbours within the minority class, randomly selects one of them, and generates a synthetic point between the sample and the selected neighbour, with the number of synthetic points determined by the desired level of oversampling [22]. The selection process is not deterministic, as the neighbour used for each synthetic point is chosen at random. By generating synthetic rather than duplicated data points, SMOTE adds diversity to the minority class, mitigating the overfitting that arises from random oversampling. According to [22], SMOTE also creates a more balanced dataset, which can help improve the performance of machine learning models when dealing with imbalanced data.
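A minimal sketch of applying SMOTE with the imbalanced-learn library is given below; it assumes the training split from Section 3 and, as is standard practice, resamples only the training data, never the test set.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# Oversample the minority classes of the training split only.
smote = SMOTE(k_neighbors=5, random_state=42)
X_train_bal, y_train_bal = smote.fit_resample(X_train, y_train)

print("Before:", np.bincount(y_train))
print("After: ", np.bincount(y_train_bal))
```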
### Hyper parameter tuning
In machine learning, the task of selecting a set of optimal hyperparameters for a learning algorithm is known as hyperparameter tuning. The simplest approach to tuning hyperparameters is undoubtedly grid search: we construct a model for every possible combination of the supplied hyperparameter values, evaluate each model, and choose the one that yields the best results [24]. According to [25], hyperparameter optimization is expressed as:
\[x^{*}=\underset{x\in X}{\arg\min}f(x), \tag{12}\]
where \(f(x)\) represents a score that we aim to minimize, such as the error rate evaluated on the validation set, \(x^{*}\) refers to the set of hyperparameters that produces the lowest score, and \(x\) can take any value within the domain \(X\). With this, we want to determine the model hyperparameters that yield the best value of the validation-set metric.
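A grid search over SVM hyperparameters can be written with scikit-learn's `GridSearchCV` as sketched below; the parameter grid shown is illustrative, not the exact grid used in our experiments, and the sketch assumes the SMOTE-balanced training data from the previous section.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative grid of SVM hyperparameters.
param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": ["scale", 0.1, 0.01],
    "kernel": ["rbf", "poly"],
}
grid = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy")
grid.fit(X_train_bal, y_train_bal)

print("Best parameters:", grid.best_params_)
print("Best CV accuracy:", grid.best_score_)
```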
### Model Evaluation
To understand how well and efficiently the model performs, we measure and evaluate its performance. There are four techniques used to determine the accuracy of predictions:
* True Positive (TP): the number of samples that the model correctly identifies as positive.
* False Positive (FP): the number of samples that the model mistakenly predicts as positive when they are actually negative.
* False Negative (FN): the number of samples that the model wrongly classifies as negative while they are positive in reality.
* True Negative (TN): the number of samples that the model correctly identifies as negative.
We use the following techniques to assess the model.
1. Accuracy: The proportion of all predictions that the model classifies correctly, i.e., the fraction of correct outputs over all samples. Its formula is: \[Accuracy=\frac{TP+TN}{TP+TN+FP+FN}.\] (13)
2. Precision: Precision is the ratio of correctly predicted positive observations to the total number of predicted positive observations. Its formula is: \[Precision=\frac{TP}{TP+FP}.\] (14)
3. Recall: Recall is the proportion of correctly predicted positive observations out of all actual positive observations. Its formula is: \[Recall=\frac{TP}{TP+FN}.\] (15)
4. \(F_{1}\) Score: The \(F_{1}\) score is the harmonic mean of precision and recall, providing a single measure that balances the two. Its formula is [26]: \[F_{1}Score=2\times\frac{Recall\times Precision}{Recall+Precision}.\] (16)
According to [27], accuracy is the primary metric used to evaluate models, but with skewed class distributions and imbalanced datasets it becomes misleading. For instance, the recall for minority classes may drop to zero, indicating that the model cannot classify them properly. The reduction in recall and precision for minority classes occurs because a model optimized for accuracy focuses on the majority class at the expense of the minorities. As a result, the classifier tends to perform poorly on the minority classes.
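In practice, these per-class metrics can be obtained directly from scikit-learn; the sketch below assumes the tuned model and test split from the earlier sketches.

```python
from sklearn.metrics import classification_report, confusion_matrix

best_model = grid.best_estimator_
y_pred = best_model.predict(X_test)

# Per-class precision, recall and F1, plus overall accuracy.
print(classification_report(y_test, y_pred, target_names=["bad", "normal", "good"]))
print(confusion_matrix(y_test, y_pred))
```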
## 6 Results and Discussion
### Results
For the purpose of this study, we use five machine learning algorithms to predict wine quality, namely SVM, DT, KNN, GB, and RF. We first applied our models to the unbalanced dataset with default parameters; as shown in Table 3, the models performed poorly, with the support vector machine and random forest achieving the highest accuracy of 78% each. Table 3 provides a comprehensive overview of the models' performance across metrics such as accuracy, precision, recall, and F1 score.
We also applied our models to a balanced dataset with tuned parameters. The results, shown in Table 4, indicate that the models perform well compared to the unbalanced dataset with default parameters. As shown in Table 4, among the five machine learning algorithms used in this research to predict wine quality, SVM shows the best performance. As mentioned in Section 4.5, the KNN classifier tends to favour the majority class on an unbalanced dataset; this is evident in Table 3, where the precision, recall, and F1-score are high for the majority class (Class 1) compared to the other classes (Class 0 and Class 2). Balancing the data and tuning the models increases their performance, as suggested by [22]; this is evident in Table 4, where the accuracy of all models improves compared to the unbalanced dataset with default parameters.
### Feature importance
We also plotted the feature importance based on our best-performing machine learning model, which in this case is the SVM. As shown in Figure 8, alcohol is the most significant factor impacting wine quality; [28] likewise suggested that alcohol plays a crucial role in determining wine quality. The feature importance plot suggests that adjusting features such as "alcohol", "sulphates", and "volatile acidity" may increase or decrease the wine scores. This information suggests that winemakers may benefit from experimenting with these physico-chemical properties of the wine.
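A model-agnostic way to obtain feature importances for an SVM is permutation importance; the sketch below illustrates this option under that assumption and is not necessarily the exact procedure behind Figure 8. It reuses the fitted model and data from the earlier sketches.

```python
from sklearn.inspection import permutation_importance

# Model-agnostic importances: how much does shuffling each feature hurt accuracy?
result = permutation_importance(
    best_model, X_test, y_test, n_repeats=20, random_state=42)

for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: t[1], reverse=True):
    print(f"{name:>22s}: {score:.4f}")
```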
### Discussion
The objective of this research is to try and predict the quality of wine by analyzing the physico-chemical properties of the wine. It also looks into which features of the wine are most indicative of its quality. To achieve this goal we applied several machine learning algorithms as mentioned above, including Random Forest (RF), Support Vector Machine (SVM), Gradient Boosting (GB), K-Nearest
\begin{table}
\begin{tabular}{|c|ccc|ccc|ccc|ccc|ccc|} \hline
 & \multicolumn{3}{c|}{SVM} & \multicolumn{3}{c|}{RF} & \multicolumn{3}{c|}{KNN} & \multicolumn{3}{c|}{GB} & \multicolumn{3}{c|}{DT} \\
Class & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 & Prec. & Rec. & F1 \\ \hline \hline
0 & 0.98 & 0.99 & 0.98 & 0.95 & 0.99 & 0.97 & 0.87 & 1.00 & 0.93 & 0.85 & 0.88 & 0.87 & 0.88 & 0.93 & 0.90 \\ \hline
1 & 0.95 & 0.93 & 0.94 & 0.95 & 0.82 & 0.88 & 1.00 & 0.60 & 0.75 & 0.78 & 0.71 & 0.74 & 0.76 & 0.69 & 0.72 \\ \hline
2 & 0.95 & 0.97 & 0.96 & 0.88 & 0.97 & 0.92 & 0.80 & 1.00 & 0.89 & 0.84 & 0.90 & 0.87 & 0.79 & 0.81 & 0.80 \\ \hline
Accuracy & \multicolumn{3}{c|}{96\%} & \multicolumn{3}{c|}{92\%} & \multicolumn{3}{c|}{87\%} & \multicolumn{3}{c|}{83\%} & \multicolumn{3}{c|}{81\%} \\ \hline
\end{tabular}
\end{table}
Table 4: Test results on the balanced dataset with tuned model parameters (per-class precision, recall, and F1, plus overall accuracy)
Figure 8: Feature importance for our best performing model
Neighbors (KNN), and Decision Tree (DT). We chose these machine learning algorithms because they are widely used for classification problems and are effective for wine quality prediction. We also dug deeper into the data and found an interesting relationship between our feature variables and the target variable (quality). We used the correlation coefficient matrix shown in Figure 2. The results suggest that features like "alcohol", "volatile acidity", and "sulphates" have a high correlation with quality, while features like "free sulfur dioxide" and "residual sugar" do not. In Table 2, features are ranked according to their correlation values, and the first 10 features are selected for the final implementation of the models.
We assessed the effectiveness of the algorithms by analyzing metrics including precision, recall, accuracy, and \(F_{1}\) score, as presented in Table 3 and Table 4. We evaluated the models both on the imbalanced dataset with default parameters and on the balanced dataset with fine-tuned parameters; the results are presented in Table 3 and Table 4, respectively. The performance results show that the best outcome is achieved with the balanced dataset and fine-tuned parameters, confirming that balancing the data and tuning the model parameters enhances the models' performance.
## 7 Conclusion
This study showed the importance of feature selection in understanding the impact of analytical tests on wine quality. The results of the feature selection process showed that some input variables such as Alcohol had a more significant influence on predicting wine quality than others such as Residual sugar. Applying machine learning algorithms in conjunction with the results of the feature selection process presented a valuable opportunity to improve the wine production process.
We employed five machine learning models, namely Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Gradient Boosting (GB). The Support Vector Machine (SVM) outperformed the other models with an accuracy of 96%. Therefore, we conclude that not all features are equally important for predicting wine quality, and that tuning the models and balancing the dataset improved their performance. The feature importance graph in Figure 8 also suggested that adjusting physico-chemical properties such as "alcohol" and "sulphates" may be beneficial in improving the prediction of wine quality.
Although this study presents promising results in predicting wine quality using machine learning algorithms, some limitations need to be addressed in future work, such as the small size of the dataset and the limited set of algorithms evaluated. Using larger and more diverse datasets could enhance the algorithms' performance, help them generalize better, and reduce the risk of overfitting, thus improving the wine production process. In this study we used only five machine learning algorithms, and there are still many other algorithms that could be explored in future work. We could also evaluate the algorithms using different metrics and explore the impact of different preprocessing techniques, such as alternative feature scaling methods, on their performance.
|
2304.12301 | AssemblyHands: Towards Egocentric Activity Understanding via 3D Hand
Pose Estimation | We present AssemblyHands, a large-scale benchmark dataset with accurate 3D
hand pose annotations, to facilitate the study of egocentric activities with
challenging hand-object interactions. The dataset includes synchronized
egocentric and exocentric images sampled from the recent Assembly101 dataset,
in which participants assemble and disassemble take-apart toys. To obtain
high-quality 3D hand pose annotations for the egocentric images, we develop an
efficient pipeline, where we use an initial set of manual annotations to train
a model to automatically annotate a much larger dataset. Our annotation model
uses multi-view feature fusion and an iterative refinement scheme, and achieves
an average keypoint error of 4.20 mm, which is 85% lower than the error of the
original annotations in Assembly101. AssemblyHands provides 3.0M annotated
images, including 490K egocentric images, making it the largest existing
benchmark dataset for egocentric 3D hand pose estimation. Using this data, we
develop a strong single-view baseline of 3D hand pose estimation from
egocentric images. Furthermore, we design a novel action classification task to
evaluate predicted 3D hand poses. Our study shows that having higher-quality
hand poses directly improves the ability to recognize actions. | Takehiko Ohkawa, Kun He, Fadime Sener, Tomas Hodan, Luan Tran, Cem Keskin | 2023-04-24T17:52:57Z | http://arxiv.org/abs/2304.12301v1 | # AssemblyHands:
###### Abstract
We present AssemblyHands, a large-scale benchmark dataset with accurate 3D hand pose annotations, to facilitate the study of egocentric activities with challenging hand-object interactions. The dataset includes synchronized egocentric and exocentric images sampled from the recent Assembly101 dataset, in which participants assemble and disassemble take-apart toys. To obtain high-quality 3D hand pose annotations for the egocentric images, we develop an efficient pipeline, where we use an initial set of manual annotations to train a model to automatically annotate a much larger dataset. Our annotation model uses multi-view feature fusion and an iterative refinement scheme, and achieves an average keypoint error of 4.20 mm, which is 85% lower than the error of the original annotations in Assembly101. AssemblyHands provides 3.0M annotated images, including 490K egocentric images, making it the largest existing benchmark dataset for egocentric 3D hand pose estimation. Using this data, we develop a strong single-view baseline of 3D hand pose estimation from egocentric images. Furthermore, we design a novel action classification task to evaluate predicted 3D hand poses. Our study shows that having higher-quality hand poses directly improves the ability to recognize actions.
+
Footnote †: * Work done during internship.
## 1 Introduction
Recognizing human activities is a decades-old problem in computer vision [17]. With recent advancements in user-assistive augmented reality and virtual reality (AR/VR) systems, there is an increasing demand for recognizing actions from the _egocentric_ (first-person) viewpoint. Popular AR/VR headsets such as Microsoft HoloLens, Magic Leap, and Meta Quest are typically equipped with egocentric cameras to capture a user's interactions with the real or virtual world. In these scenarios, the user's hands manipulating objects is a very important modality of interaction. In particular, hand poses (_i.e_., 3D joint locations) play a central role in understanding and enabling hand-object interaction [3, 18], pose-based action recognition [7, 20, 28], and interactive interfaces [10, 11].
Recently, several large-scale datasets for understanding egocentric activities have been proposed, such as EPIC-KITCHENS [5], Ego4D [8], and Assembly101 [28]. In particular, Assembly101 highlights the importance of 3D hand poses in recognizing procedural activities such as assembling toys. 3D hand poses are compact representations, and are highly indicative of actions and even the objects that are interacted with; for example, the pose of a hand turning a screwdriver is
Figure 1: **High-quality 3D hand poses as an effective representation for egocentric activity understanding. AssemblyHands provides high-quality 3D hand pose annotations computed from multi-view exocentric images sampled from Assembly101 [28], which originally comes with inaccurate annotations computed from egocentric images (see the incorrect left-hand pose prediction). As we experimentally demonstrate on an action classification task, models trained on high-quality annotations achieve significantly higher accuracy.**
a strong cue for the presence of a screwdriver. Notably, the authors of Assembly101 found that, for classifying assembly actions, learning from 3D hand poses is more effective than solely using video features. However, a drawback of this study is that the 3D hand pose annotations in Assembly101 are not always accurate, as they are computed from an off-the-shelf egocentric hand tracker [11]. We observed that the provided poses are often inaccurate (see Fig. 1), especially when hands are occluded by objects from the egocentric perspective. Thus, the prior work has left us with an unresolved question: _How does the quality of 3D hand poses affect action recognition performance?_
To systematically answer this question, we propose a new benchmark dataset named **AssemblyHands**. It includes a total of 3.0M images sampled from Assembly101, annotated with high-quality 3D hand poses. We not only acquire manual annotations, but also use them to train an accurate automatic annotation model that uses multi-view feature fusion from exocentric (_i.e_., third-person) images; please see Fig. 2 for an illustration. Our model achieves 4.20 mm average keypoint error compared to manual annotations, which is 85% lower than the original annotations provided in Assembly101. This automatic pipeline enables us to efficiently scale annotations to 490K egocentric images from 34 subjects, making AssemblyHands the largest egocentric hand pose dataset to date, both in terms of scale and subject diversity. Compared to recent hand-object interaction datasets, such as DexYCB [3] and H2O [18], our AssemblyHands features significantly more hand-object combinations, as each multi-part toy can be disassembled and assembled at will.
Given the annotated dataset, we first develop a strong baseline for egocentric 3D hand pose estimation, using 2.5D heatmap optimization and hand identity classification. Then, to evaluate the effectiveness of predicted hand poses, we propose a novel evaluation scheme: action classification from hand poses. Unlike prior benchmarks on egocentric hand pose estimation [7, 18, 24], we offer detailed analysis of the quality of 3D hand pose annotation, its influence on the performance of an egocentric pose estimator, and the utility of predicted poses for action classification.
Our contributions are summarized as follows:
* We offer a large-scale benchmark dataset, dubbed AssemblyHands, with 3D hand pose annotations for 3.0M images sampled from the Assembly101 dataset, including 490K egocentric images.
* We propose an automatic annotation pipeline with multi-view feature fusion and iterative refinement, leading to 85% error reduction in the hand pose annotations.
* We define a benchmark task for egocentric 3D hand pose estimation with the evaluation from action classification. We provide a strong single-view baseline that optimizes 2.5D keypoint heatmaps and classifies hand identity. Our results confirm that having high-quality 3D hand poses significantly improves egocentric action recognition performance.
## 2 Related work
**Recognizing actions from pose.** The general framework for recognizing people's actions involves extracting low-level states from sensor observations, such as image features or body/hand motion, and then feeding a temporal sequence of states into a recognition model. There is a long history of using full body pose as the state representation in
Figure 2: **Construction of AssemblyHands dataset and a benchmark task for egocentric 3D hand pose estimation. We first use manual annotations and an automatic annotation network (MVExoNet) to generate accurate 3D hand poses for multi-view images sampled from the Assembly101 dataset [28]. These annotations are used to train a single-view 3D hand pose estimation network (SVEgoNet) from egocentric images. Finally, the predicted hand poses are evaluated by the action classification task.**
recognizing actions [4, 14, 29, 33, 34], since poses are compact representations that contain discriminative information about actions. Also, in the context of AR/VR, pose information carries the benefit that its availability is less affected by privacy concerns, unlike image/video data. On the modeling side, graph convolutional networks, which treat joints as nodes and bones as edges, have been commonly used in skeleton-based action recognition [20, 35].
In the exocentric setting, action recognition from hand poses is less explored compared to using full body pose, and is only studied on rather small datasets [18]. Instead, hand poses are much more relevant in the egocentric setting. Recently, a large-scale dataset, Assembly101 [28], was proposed to investigate action recognition using 3D hand poses. For Assembly101, 3D hand poses were found to be strong predictors of action; in particular, using hand poses was shown to give higher action classification accuracy compared to using video-based features [19].
**Datasets for 3D hand pose estimation.** Table 1 shows statistics on existing RGB-based 3D hand pose datasets and our AssemblyHands. Prior works on egocentric hand pose estimation annotate 2D keypoints on a depth image [24] or use magnetic markers attached to hands [7]. Due to the noise from these sensors, as well as the annotation cost, the accuracy and amount of annotation in these benchmarks are not sufficient. Thus, most 3D hand pose estimation works focus on using inputs from static exocentric cameras [3, 9, 12, 23, 30, 31, 36, 37] or utilize such an exocentric dataset to improve egocentric hand pose prediction [26].
Setups with multiple static cameras have several advantages and have been widely used in the literature [25]. First, the total number of available images proportionately increases with the number of cameras. For instance, InterHand2.6M [23] features numerous camera views (80+), resulting in the largest existing hand pose estimation dataset (non-egocentric) with a moderate amount of distinct frames. Second, 3D keypoint coordinates can be reliably annotated from multiple 2D keypoints by using triangulation [23, 30] or hand template fitting [3, 9, 18, 37] (_e.g_., MANO [27]).
Recently, a few egocentric activity datasets have installed synchronized egocentric cameras along with exocentric cameras, _e.g_., Assembly101 [28] and H2O [18]. The availability of exocentric images can significantly reduce the annotation effort required for egocentric images. Compared to the H2O dataset, AssemblyHands provides more than four times as many egocentric images with accurate ground truth and eight times as many subjects. With our higher sampling rate of 30 Hz, the total number of egocentric and exocentric images (3.0M) surpasses the size of InterHand2.6M. Due to the goal-oriented nature of assembly actions, the hand poses in our benchmark are natural and unscripted, a setting that has received less attention in existing studies.
For automatic annotation, we utilize a volumetric convolution network similar to the one used by Zimmermann _et al_. [37]. We further augment our model with an iterative refinement scheme to improve its accuracy, that does not require additional training.
## 3 AssemblyHands dataset generation
The input data in our proposed benchmark comes from the recently introduced Assembly101 [28], a large-scale multi-view video dataset designed for understanding procedural activities, in particular, the assembly and disassembly of take-apart toys. It is recorded with a static rig of 8 RGB cameras, plus 4 monochrome cameras on a synchronized headset worn by the human subject.
The initial hand pose annotations for Assembly101 are generated using an off-the-shelf hand tracker specifically designed for monochrome egocentric images [11]. While it can estimate 3D hand poses with reasonable accuracy, there
\begin{table}
\begin{tabular}{l|c c c c c c} Dataset & Modality & \#img & \#ego\_img & \#views & \#subj & Annotation approach \\ \hline \hline
EgoDexter [24] & RGB-D & 3K & 3K & 1 (ego) & 4 & Manual \\
Panoptic Studio [30] & RGB & 15K & - & 31 & N/A & 2D + triangulation \\
FPHA [7] & RGB-D & 105K & 105K & 1 (ego) & 6 & Magnetic sensor \\
FreiHAND [37] & RGB & 37K & - & 8 & 32 & Manual + 3D volume + template fitting \\
HO3D [9] & RGB-D & 103K & - & 5 & 10 & 2D + template fitting \\
InterHand2.6M [23] & RGB & 2.59M & - & 80-140 & 27 & Manual + 2D + triangulation \\
DexYCB [3] & RGB-D & 508K & - & 8 & 10 & Manual + template fitting \\
H2O [18] & RGB-D & 571K & 114K & 4 + 1 (ego) & 4 & 2D + template fitting + smoothing \\ \hline
AssemblyHands (M) & \multirow{2}{*}{RGB/Mono} & \multirow{2}{*}{2.81M} & \multirow{2}{*}{468K} & \multirow{2}{*}{8 + 4 (ego)} & \multirow{2}{*}{20} & \multirow{2}{*}{Manual + 3D volume + refinement} \\
AssemblyHands (A) & & & & & & 34 \\
\end{tabular}
\end{table}
Table 1: **Comparison of AssemblyHands with existing 3D hand pose datasets.** "M" and "A" stand for manual and automatic annotation, respectively. AssemblyHands is the largest existing benchmark for egocentric 3D hand pose estimation.
are several limitations. For example, since the stereo area of the egocentric cameras is relatively narrow, depth estimates become inaccurate as hands move further away from the image center. Also, egocentric-only tracking is prone to severe failure modes due to heavy occlusion during hand-object interaction. These motivate us to develop a multi-view annotation method using exocentric RGB cameras.
While several existing datasets use off-the-shelf RGB-based models (_e.g_., OpenPose [2]) to annotate hand poses, we have observed that their accuracy is not satisfactory on Assembly101 (see the supplement for details). Since OpenPose is trained on images with fewer hand-object occlusions [30], its predictions are often noisy on the novel real-world objects (take-apart toys) and higher levels of occlusion present in Assembly101. Thus, it is necessary to develop an annotation method tailored to our novel setup.
### Automatic annotation pipeline
We present our proposed automatic annotation pipeline using multi-view exocentric RGB images. We first prepare manual annotations for frames sampled from a subset of Assembly101 at 1 Hz. Since obtaining manual annotations is laborious, we use them to train an annotation network that can automatically provide reliable 3D hand pose annotations. We then introduce the details of our annotation approach: (1) an annotation network using volumetric feature fusion (MVExoNet), and (2) iterative refinement during inference of the network. Compared to the manual annotation, this automatic annotation scheme allows us to assign 21 times more labels in another subset of Assembly101 sampled at 30 Hz.
**Manual annotation.** First, we obtain manual annotations of the 3D locations of 21 joints on both hands in the world coordinate space. We use a setup similar to that of [6, 23], where 2D keypoints are annotated from multiple views and triangulated into 3D. In total, we annotated 62 video sequences from Assembly101 at a sampling rate of 1 Hz, resulting in an annotated set of 22K frames, each having 8 RGB views. We further split it into 54 sequences for training and 8 sequences for testing.
**Volumetric annotation network.** We next design a neural network model for 3D keypoint annotation. With multi-camera setups, a standard approach is to triangulate 2D keypoint detections; we call this the "2D + Triangulation" baseline. For instance, in InterHand2.6M [23] this approach can achieve an accuracy of 2.78 mm, owing to the high number of cameras (80 to 140). However, for Assembly101, 2D + Triangulation only achieves 7.97 mm given the limited number of 8 RGB cameras (see Table 2). On the other hand, end-to-end "learnable triangulation" methods [1, 16] are known to outperform standard triangulation for human pose estimation in this regime. We thus adopt this principle and design a multi-view hand pose estimation network based on 3D volumetric feature aggregation.
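For reference, the 2D + Triangulation baseline can be implemented with a standard direct linear transform (DLT); the generic single-keypoint sketch below illustrates the idea and is not the exact implementation used for the baseline.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """DLT triangulation of one keypoint.

    proj_mats: list of 3x4 camera projection matrices (one per view).
    points_2d: list of (u, v) pixel detections, same order as proj_mats.
    Returns the 3D point in world coordinates.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)            # (2 * n_views, 4)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # right singular vector of smallest singular value
    return X[:3] / X[3]           # dehomogenize
```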
We name our volumetric network MVExoNet, and show its design in Fig. 3. First, a feature encoder extracts 2D keypoint features for each view. We then project the features to a single 3D volume, using the softmax-based weighted average proposed in [16]. Later, an encoder-decoder network based on 3D convolutions refines the volumetric features and outputs 3D heatmaps. We obtain 3D joint coordinates with soft-argmax operation on the heatmaps.
For the architecture, we use EfficientNet [32] as an encoder to extract compact 2D features before volumetric aggregation, in order to save GPU memory. We use V2V-PoseNet [22] as the 3D convolutional network. During training, we generate 2D hand crops by slightly expanding the region enclosing the manually annotated 2D keypoints. The 3D volume is 300 mm long on each side, centered on the bottom of the middle finger (_i.e_., the third MCP joint). We also augment the volume's root position by adding random noise to each axis, which prevents the model from always predicting the origin of the volume as the third MCP. At test time, we crop hand regions based on the output of a hand detector, and use the predicted third MCP from the 2D + Triangulation baseline as the volume root.
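The soft-argmax that converts 3D heatmaps into joint coordinates can be written compactly; the PyTorch sketch below assumes heatmaps of shape (batch, joints, D, H, W) defined over a cubic metric volume, and only illustrates the operation rather than the exact MVExoNet code.

```python
import torch

def soft_argmax_3d(heatmaps, volume_size_mm=300.0):
    """heatmaps: (B, J, D, H, W) raw scores over a cubic volume."""
    b, j, d, h, w = heatmaps.shape
    probs = torch.softmax(heatmaps.reshape(b, j, -1), dim=-1).reshape(b, j, d, h, w)

    # Coordinate grids in millimetres along each axis of the volume.
    zs = torch.linspace(0, volume_size_mm, d, device=heatmaps.device)
    ys = torch.linspace(0, volume_size_mm, h, device=heatmaps.device)
    xs = torch.linspace(0, volume_size_mm, w, device=heatmaps.device)

    # Expected coordinate = grid positions weighted by the marginal probabilities.
    z = (probs.sum(dim=(3, 4)) * zs).sum(dim=-1)
    y = (probs.sum(dim=(2, 4)) * ys).sum(dim=-1)
    x = (probs.sum(dim=(2, 3)) * xs).sum(dim=-1)
    return torch.stack([x, y, z], dim=-1)   # (B, J, 3)
```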
**Iterative refinement.** During the inference of MVExoNet, we propose a simple iterative refinement heuristic that improves the model's input over several rounds. As mentioned above, MVExoNet requires hand bounding boxes to crop input images and the root position to construct the 3D volume. At test time, the bounding box and volume root come from a hand detector and triangulation of initial 2D keypoint predictions, respectively, which may contain inaccuracies. We found that MVExoNet performs worse than the hypothetical upper bound of having the manually annotated crops and root positions as input.
Our iterative refinement is motivated by this observa
Figure 3: **Architecture of the hand pose annotation model. We use an EfficientNet encoder [32] to extract 2D features from multi-view images, then aggregate them into a 3D feature volume, and apply volumetric convolution with V2V-Posenet [22]. We apply soft-argmax to extract hand joint locations from 3D heatmaps.**
tion: since MVExoNet already generates reasonable predictions, we can use its output to re-initialize the hand crops and volume root position. This gives the network better inputs with each successive round. We call the original model MVExoNet-R1 (the first round of inference), and name the following rounds MVExoNet-R2, etc. In each additional round, we define the input hand crops from the projected 2D keypoints generated by MVExoNet in the previous round, and center the 3D volume on the predicted root position. Note that we freeze MVExoNet during the iterative refinement inference and only update the input (_i.e_., bounding box and volume root) to the model.
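In sketch form, the refinement loop simply re-derives the model's inputs from its own predictions; all callables passed into the function below (detector, cropping, projection, MVExoNet inference) are placeholders rather than the actual API.

```python
def iterative_refinement(images, cameras, detect_hands, triangulate_root,
                         crop, mvexonet, project, bbox_from_keypoints,
                         num_rounds=3, middle_mcp=9):
    """Hypothetical sketch of inference-time refinement: the network weights
    stay frozen, only its inputs (crops and volume root) are updated."""
    boxes = detect_hands(images)            # initial per-view hand boxes
    root = triangulate_root(images)         # initial volume root, shape (3,)
    for _ in range(num_rounds):             # rounds R1, R2, R3, ...
        crops = [crop(img, box) for img, box in zip(images, boxes)]
        joints_3d = mvexonet(crops, volume_root=root)   # (21, 3) world coords
        # Re-initialize the inputs from the current prediction.
        boxes = [bbox_from_keypoints(project(joints_3d, cam)) for cam in cameras]
        root = joints_3d[middle_mcp]
    return joints_3d
```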
### Evaluation of annotated 3D hand poses
We now compare the accuracy of our proposed annotation method to several baselines, including egocentric hand tracker [11] used in the original Assembly101. First, to evaluate in-distribution generalization, we use the manually annotated test set from Assembly101, which contains frames sampled from 8 sequences at 1 Hz. We also consider the generalization to unseen multi-camera setups; for this purpose, we use the _Desktop Activities_ subset from the recently released Aria Pilot Dataset [21]; see our supplementary material for the illustration of the camera setup.
**Comparison to egocentric hand pose annotation.** We compare the accuracy of annotation methods on a manually-annotated evaluation set in Table 2. The original hand annotations in Assembly101 [28] are computed by an egocentric hand pose estimator, UmeTrack [11], using monochrome images from egocentric cameras. The egocentric annotation (Egocentric-only) achieved an error of 27.55 mm, which is significantly higher than that of the methods using exocentric cameras, namely 2D + Triangulation and our proposed method. We found that the annotation from egocentric cameras becomes inaccurate when in-hand objects block the user's perspective. For these cases, the keypoint predictions from multiple exocentric cameras help localize the occluded keypoints. By fusing volumetric features from multi-view exocentric images, our MVExoNet performs much better than the standard 2D + Triangulation baseline.
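For clarity, MPJPE is the mean Euclidean distance between predicted and ground-truth joints, and PCK-AUC averages the fraction of joints falling under a range of error thresholds; the numpy sketch below illustrates both, with the threshold range being an assumption rather than the benchmark's exact setting.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error in the same units as the inputs.

    pred, gt: arrays of shape (num_frames, num_joints, 3).
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def pck_auc(pred, gt, thresholds_mm=np.linspace(0, 50, 101)):
    """Approximate area under the PCK curve over evenly spaced thresholds."""
    errors = np.linalg.norm(pred - gt, axis=-1).ravel()
    pck = np.array([(errors <= t).mean() for t in thresholds_mm])
    return float(pck.mean())
```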
**Ablation study of MVExoNet.** As shown in Table 2, our initial inference result (MVExoNet-R1) achieved reasonable performance with 5.42 mm error. The iterative refinement further reduces the annotation error from 5.42 mm to 4.20 mm (a 22.5% reduction) after two additional rounds.
In Fig. 4, we visualize how the hand crops and MVExoNet's predictions evolve over the rounds on both Assembly101 and Desktop Activities. In the first round, the hand crops are suboptimal for both datasets: the model cannot tell which hand to annotate because both hands are centered in the image in Assembly101 (left), the hand drifts toward the top of the image (top right), or it appears tiny (bottom right). Given these suboptimal crops, the predictions become noisy, with keypoints jumping to the other hand or detaching from the hand entirely. In later rounds, the crops gradually focus on the target hand (_e.g_., the left hand in the top-left figure), which improves keypoint localization.
Table 2: **Evaluation of hand pose annotation on the manually annotated subset of AssemblyHands.** We use MPJPE (mm) and PCK-AUC (%) as the evaluation metrics.

| Annotation method | MPJPE | PCK-AUC |
| --- | --- | --- |
| Egocentric-only [28] | 27.55 | 29.4 |
| 2D + Triangulation | 7.97 | 63.8 |
| MVExoNet-R1 (Ours) | 5.42 | 79.2 |
| MVExoNet-R2 (Ours) | 4.30 | 83.1 |
| MVExoNet-R3 (Ours) | **4.20** | **83.4** |
Figure 4: **Example visualization of iterative refinement on AssemblyHands and Desktop Activities [21].** Over the refinement iterations, the cropped image progressively becomes better centered on the hand, and the predicted hand pose becomes more accurate.
**Generalization to novel camera configurations.** To evaluate the cross-dataset generalization ability of our annotation method, we use the Desktop Activities dataset, which also features hand-object interactions in a multi-camera setup. It is recorded with a multi-view camera rig similar to that of Assembly101, but with 12 exocentric RGB cameras and different camera placements. The objects are from the YCB benchmark [3], which are also unseen in Assembly101. To our knowledge, there are no existing hand pose annotations for Desktop Activities. We use the same manual annotation approach to construct an evaluation set with 1105 annotated frames from three different sequences.
As shown in Table 3, due to the new camera configuration and the presence of novel objects, all methods obtain higher errors than in the Assembly101 setting. In particular, the baseline annotation method 2D + Triangulation degrades significantly when applied to Desktop Activities, to nearly 50 mm MPJPE. In contrast, our MVExoNet is quite robust to the new setting, achieving an initial MPJPE of 21.20 mm, and 13.38 mm after two rounds of iterative refinement (a 36.9% error reduction).
## 4 Egocentric 3D hand pose estimation
To build hand pose estimators for egocentric views, we train models on egocentric images with keypoint annotations generated in Section 3. Training on egocentric images is necessary because existing exocentric datasets do not fully capture egocentric-specific biases in terms of the viewpoint, camera characteristics (egocentric cameras are typically fisheye), and blur from the head motion. Hence, the generalization of exocentric models to egocentric data tends to be limited: for example, in [26], the model trained on DexYCB [3] (exocentric) achieves 14% PCK on FPHA [7] (egocentric), compared to 63% when fine-tuned on FPHA.
**Problem setting.** We evaluate a 3D hand pose estimator trained on egocentric images. Given a single egocentric image, the model aims to predict the 3D coordinates of 21 joints in the wrist-relative space. We split both the manually annotated and the automatically annotated data (M/A) into training and evaluation sets. The manually annotated training and evaluation sets contain 19.2K and 3.0K images, respectively, sampled at 1 Hz from 62 video sequences with 14 subjects. The automatically annotated sets include 405K and 63K images, respectively, sampled at 30 Hz from a disjoint set of 20 sequences with 20 subjects.
**Single-view baseline.** Following standard heatmap-based hand pose estimators [15, 23], we build a single-view network (SVEgoNet) trained on monochrome egocentric images. The model consists of 2.5D heatmap estimation and hand identity classification. The 2.5D heatmaps represent 2D keypoint heatmaps in the x-y plane and the wrist-relative distance from the camera along the z axis. We use the ResNet-50 [13] backbone. The 3D joint coordinates are computed by applying the argmax operation to the 2.5D heatmaps.
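A minimal sketch of decoding the 2.5D heatmaps into 3D joint coordinates is given below; the crop size and wrist-relative depth range are illustrative assumptions.

```python
import numpy as np

def decode_25d_heatmaps(hm_xy, hm_z, crop_size=256, z_range=(-150.0, 150.0)):
    """hm_xy: (J, H, W) 2D keypoint heatmaps; hm_z: (J, D) wrist-relative depth heatmaps."""
    J, H, W = hm_xy.shape
    D = hm_z.shape[1]
    joints = np.zeros((J, 3))
    for j in range(J):
        y, x = np.unravel_index(hm_xy[j].argmax(), (H, W))   # argmax over the 2D heatmap
        z_bin = hm_z[j].argmax()                             # argmax over the depth heatmap
        joints[j, 0] = x / (W - 1) * crop_size
        joints[j, 1] = y / (H - 1) * crop_size
        joints[j, 2] = z_range[0] + z_bin / (D - 1) * (z_range[1] - z_range[0])
    return joints
```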
In addition, we observe that learning the correlations between hand poses and the identity of hand is effective in our task. For instance, during the "screw" motion, right-handed participants in Assembly101 are more likely to hold the toy with their left hand and turn the screwdriver with their right hand. In another example, when handling small parts, both hands tend to be closer and appear in the same hand crop. To capture such correlations, we add a hand identity classification branch to SVEgoNet, inspired by [23]. We let the branch classify whether _left_, _right_, or _both_ hands appear in a given hand crop.
**Evaluation.** We compare the predictions from our model and UmeTrack [11] with the ground truth in wrist-relative coordinates. We use two standard metrics: mean per joint position error (MPJPE) in millimeters, and area under curve of percentage of correct keypoints (PCK-AUC).
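For reference, both metrics can be computed as in the sketch below; the PCK threshold range is an assumption rather than the exact evaluation protocol.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error (mm); pred and gt have shape (N, J, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pck_auc(pred, gt, thresholds=np.linspace(0.0, 50.0, 101)):
    """Normalized area under the PCK curve over error thresholds in mm."""
    err = np.linalg.norm(pred - gt, axis=-1).ravel()
    pck = np.array([(err <= t).mean() for t in thresholds])
    return np.trapz(pck, thresholds) / (thresholds[-1] - thresholds[0])
```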
### Results
**Effect of automatic annotation.** In Table 4, we compare the performance of SVEgoNet trained on datasets with manual (M), automatic (A), and manual + automatic (M+A) annotations, respectively. We regard the Eval-M results as the canonical reference and also report results on all evaluation sets. We observe that using Train-A alone, which is 21 times larger than Train-M, slightly increases the error on Eval-M by 3% relative. On the other hand, the model trained on the combined annotations, Train-M+A, consistently gives the lowest error, which validates our efforts in scaling annotations with automatic methods.
Table 4: **Effect of automatic annotation for the training of SVEgoNet.** We use egocentric image sets with manual (M), automatic (A), and manual + automatic (M+A) annotations for training and evaluation. We report MPJPE (mm) as the evaluation metric (lower is better).

| Subsets | Eval-M | Eval-A | Eval-M+A |
| --- | --- | --- | --- |
| Train-M | 24.38 | 28.58 | 28.35 |
| Train-A | 25.18 | 22.29 | 22.45 |
| Train-M+A | **23.46** | **21.84** | **21.92** |
Table 3: **Evaluation of multi-view annotation on the Desktop Activities dataset [21].** We use MPJPE (mm) and PCK-AUC (%) as the evaluation metrics.

| Annotation method | MPJPE | PCK-AUC |
| --- | --- | --- |
| 2D + Triangulation | 49.21 | 23.9 |
| MVExoNet-R1 (Ours) | 21.20 | 51.3 |
| MVExoNet-R2 (Ours) | 14.57 | 67.2 |
| MVExoNet-R3 (Ours) | **13.38** | **70.4** |
This study also shows that a hybrid of manual and automatic annotations is a pragmatic solution for improving model performance.
**Qualitative results.** Fig. 5 shows qualitative examples of 3D hand poses generated by UmeTrack [11], our automatic annotation pipeline, and our trained egocentric baseline SVEgoNet. We visualize the prediction of each model from different viewpoints. The egocentric baseline UmeTrack estimates hand poses reasonably well when viewed from the egocentric camera; however, visualizing the predictions from exocentric views reveals that it tends to make errors along the z-axis. In particular, its accuracy degrades on hard examples with self-occlusion (left example) or hand-object occlusion (middle and right examples). In contrast, our multi-view automatic annotation overcomes these failures by exploiting cues from multiple exocentric images. As a result, SVEgoNet trained on these annotations is more robust to such occlusion cases.
## 5 Action classification from 3D hand poses
Finally, we revisit our motivating question: _How does the quality of 3D hand poses affect action recognition performance?_ We answer this question with a novel evaluation scheme: verb classification with hand poses as input. In Assembly101 [28], an action is defined at a fine-grained level as the combination of a single verb describing a movement plus an interacting object, _e.g_., _pick up a screwdriver_. We use six verb labels to evaluate predicted hand poses, including _pick up_, _position_, _screw_, _put down_, _remove_, and _unscrew_ (see the left figure in Fig. 6). This is because these verbs frequently appear in the dataset and heavily depend on the user's hand movements, which hand pose estimation aims to encode.
For classifying verbs, we train MS-G3D [20], a graph convolutional network, using the output of egocentric hand pose estimators. We note that verbs like _screw_ and _unscrew_ are cyclic actions that usually take a long time but have relatively fewer instances (see Fig. 6). To address this, we augment the training data for these verb classes. Following the experiments of Assembly101, for each segment, we input the sequence of 42 keypoints (21 for each hand). We use the same train/eval split as our automatic annotation, AssemblyHands-A, sampled at a frequency of 30 Hz (_vs_. the original 60 Hz). The model constructs time-series graphs from 3D hand poses, and classifies each segment of poses into a single verb.
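A minimal sketch of assembling the classifier input from per-frame hand poses is shown below; the exact tensor layout expected by MS-G3D is an assumption.

```python
import numpy as np

def build_pose_sequence(left_kpts, right_kpts, src_hz=60, dst_hz=30):
    """Stack per-frame left/right hand keypoints, each of shape (T, 21, 3), into a
    (T', 42, 3) sequence and subsample from the original 60 Hz to 30 Hz."""
    seq = np.concatenate([np.asarray(left_kpts), np.asarray(right_kpts)], axis=1)  # (T, 42, 3)
    return seq[:: src_hz // dst_hz]
```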
### Results
In Table 5, we report the verb classification accuracy given 3D hand poses estimated from the egocentric cameras. First, we establish an empirical upper bound for verb classification accuracy in AssemblyHands-A using the automatically annotated hand poses. The verb classifier trained on our automatic annotations achieves 60% verb accuracy on average.
Figure 5: **Qualitative examples of 3D hand poses given by our automatic annotation, SVEgoNet, and UmeTrack [11].** We visualize the 2D projection of 3D poses in one egocentric image and another synchronized exocentric image. We use colored borders to indicate the source images from which hand poses are computed: exocentric (red, additional views omitted) or egocentric (green). The egocentric-based UmeTrack exhibits multiple failure modes, such as inaccurate relative depth prediction of keypoints (left) and of the entire hand (middle), and completely losing track during occlusion (right). Our multi-view automatic annotation overcomes these failures, resulting in a more robust SVEgoNet when trained on such annotations.

We compare our single-view SVEgoNet to the off-the-shelf egocentric hand pose estimator UmeTrack [11], which was used to provide the original annotations for Assembly101 and uses a feature fusion module over multiple egocentric images. First, on the pose estimation metric, SVEgoNet achieves 21.92 mm MPJPE, 33% lower than UmeTrack. Next, for verb classification, using hand poses predicted by SVEgoNet also outperforms using UmeTrack by a large margin (54.7 vs. 50.3). Taking the upper-bound performance of 60.0 as a reference, SVEgoNet poses attain 91.1% relative performance, significantly better than the 83.8% achieved with UmeTrack.
Additionally, we present classification confusion matrices for UmeTrack and SVEgoNet in Fig. 6. Using SVEgoNet predictions significantly reduces the off-diagonal confusions. Measured per verb, SVEgoNet improves the accuracy over UmeTrack by 2.1%, 6.2%, 13.1%, 1.8%, and 4.1% for _pick up_, _put down_, _position_, _screw_, and _unscrew_, respectively, while dropping the accuracy for _remove_ by 1.8%.
The fact that we achieve more than 90% relative performance compared to the upper bound is very encouraging, as SVEgoNet only uses a single egocentric image as input, as opposed to performing complex inference with multi-view exocentric images. This again speaks to the large potential in recognizing activities using lightweight egocentric setups, such as head-mounted monochrome cameras.
## 6 Conclusion
We present **AssemblyHands**, a novel benchmark dataset for studying egocentric activities in the presence of strong hand-object interactions. We provide accurate 3D hand pose annotations on a large scale, using an automatic annotation method based on multi-view feature aggregation, which far outperforms the egocentric-based annotation from the original Assembly101. The accurate annotations allow us to carry out in-depth analysis of how hand pose estimates inform action recognition. We provide a baseline for single-view egocentric hand pose estimation, and propose a novel evaluation scheme based on verb classification. Our results have confirmed that the quality of 3D hand poses significantly affects verb recognition performance. We hope that AssemblyHands can inspire new methods and insights for understanding human activities from the egocentric view.
**Limitations and future work.** We have focused on hand pose annotations and action classification from hand poses in this work. While object cues (_e.g_., object pose) would further benefit the task, their annotation poses a bigger challenge due to the many small object parts involved in the assembly task. In future work, we first plan to extend hand pose annotation to the entire Assembly101 at higher sampling rates. We also plan to obtain object-level annotation, _e.g_., object bounding boxes. Finally, we are interested in exploring the interplay between hands, objects, and actions with multi-task learning.
## Acknowledgments
The authors would like to thank Kevin Harris for help with data collection, and Lingni Ma, Svetoslav Kolev, Bugra Tekin, Edoardo Remelli, Shangchen Han, Robert Wang for helpful discussions.
Table 5: **Evaluation of action classification from hand poses.** We train and evaluate an MS-G3D [20] action classification model using hand pose sequences as input, and report Verb Accuracy (%). AssemblyHands-A represents the empirical upper bound where automatically annotated hand poses are used as input. Our SVEgoNet predicts more accurate 3D hand poses, which leads to better classification accuracy.

| Method | MPJPE | pick up | put down | position | remove | screw | unscrew | Avg. Verb Acc. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UmeTrack [11] | 32.91 | 66.0 | 41.5 | 51.2 | **29.2** | 42.5 | 59.6 | 50.3 (83.8%) |
| SVEgoNet (Ours) | **21.92** | **68.1** | **47.7** | **64.3** | 27.4 | **44.3** | **63.7** | **54.7** (91.1%) |
| AssemblyHands-A | - | 70.0 | 57.4 | 67.5 | 36.4 | 49.8 | 64.1 | 60.0 (100%) |
Figure 6: **Verb label distribution and confusion matrices of verb classification.** We show the distribution of the six verb labels (left) used in our experiments and confusion matrices of UmeTrack [11] (middle) and our SVEgoNet (right). |
2305.15747 | Union Subgraph Neural Networks | Graph Neural Networks (GNNs) are widely used for graph representation
learning in many application domains. The expressiveness of vanilla GNNs is
upper-bounded by 1-dimensional Weisfeiler-Leman (1-WL) test as they operate on
rooted subtrees through iterative message passing. In this paper, we empower
GNNs by injecting neighbor-connectivity information extracted from a new type
of substructure. We first investigate different kinds of connectivities
existing in a local neighborhood and identify a substructure called union
subgraph, which is able to capture the complete picture of the 1-hop
neighborhood of an edge. We then design a shortest-path-based substructure
descriptor that possesses three nice properties and can effectively encode the
high-order connectivities in union subgraphs. By infusing the encoded neighbor
connectivities, we propose a novel model, namely Union Subgraph Neural Network
(UnionSNN), which is proven to be strictly more powerful than 1-WL in
distinguishing non-isomorphic graphs. Additionally, the local encoding from
union subgraphs can also be injected into arbitrary message-passing neural
networks (MPNNs) and Transformer-based models as a plugin. Extensive
experiments on 18 benchmarks of both graph-level and node-level tasks
demonstrate that UnionSNN outperforms state-of-the-art baseline models, with
competitive computational efficiency. The injection of our local encoding to
existing models is able to boost the performance by up to 11.09%. Our code is
available at https://github.com/AngusMonroe/UnionSNN. | Jiaxing Xu, Aihu Zhang, Qingtian Bian, Vijay Prakash Dwivedi, Yiping Ke | 2023-05-25T05:52:43Z | http://arxiv.org/abs/2305.15747v3 | # Union Subgraph Neural Networks
###### Abstract
Graph Neural Networks (GNNs) are widely used for graph representation learning in many application domains. The expressiveness of vanilla GNNs is upper-bounded by 1-dimensional Weisfeiler-Leman (1-WL) test as they operate on rooted subtrees through iterative message passing. In this paper, we empower GNNs by injecting neighbor-connectivity information extracted from a new type of substructure. We first investigate different kinds of connectivities existing in a local neighborhood and identify a substructure called union subgraph, which is able to capture the complete picture of the 1-hop neighborhood of an edge. We then design a shortest-path-based substructure descriptor that possesses three nice properties and can effectively encode the high-order connectivities in union subgraphs. By infusing the encoded neighbor connectivities, we propose a novel model, namely Union Subgraph Neural Network (UnionSNN), which is proven to be strictly more powerful than 1-WL in distinguishing non-isomorphic graphs. Additionally, the local encoding from union subgraphs can also be injected into arbitrary message-passing neural networks (MPNNs) and Transformer-based models as a plugin. Extensive experiments on 17 benchmarks of both graph-level and node-level tasks demonstrate that UnionSNN outperforms state-of-the-art baseline models, with competitive computational efficiency. The injection of our local encoding to existing models is able to boost the performance by up to 11.09%.
## 1 Introduction
With the ubiquity of graph-structured data emerging from various modern applications, Graph Neural Networks (GNNs) have gained increasing attention from both researchers and practitioners. GNNs have been applied to many application domains, including quantum chemistry [8; 10; 31], social science [14; 44; 60], transportation [9; 40] and neuroscience [2; 52], and have attained promising results on graph classification [31; 60], node classification [43] and link prediction [45; 63] tasks.
Most GNNs are limited in terms of their expressive power. Xu et al. [57] show that GNNs are at most as powerful as the 1-dimensional Weisfeiler-Leman (1-WL) test [54] in distinguishing non-isomorphic
graph structures. This is because a vanilla GNN essentially operates on a subtree rooted at each node in its message passing, _i.e._, it treats every neighbor of the node equally in its message aggregation. In this regard, it overlooks any discrepancy that may exist in the connectivities between neighbors. To address this limitation, efforts have been devoted to incorporating local substructure information into GNNs. Several studies attempt to encode such local information through induced subgraphs [65], overlap subgraphs [55] and spatial encoding [4], to enhance GNNs' expressiveness. But the local structures they choose are not able to capture the complete picture of the 1-hop neighborhood of an edge. Others attach shortest-path information to edges in message passing via distance encoding [26], adaptive breadth/depth functions [29] and affinity matrices [51] to control the messages from neighbors at different distances. However, the descriptors used to encode the substructure may overlook some connectivities between neighbors. Furthermore, some of the above models also suffer from high computational cost due to the incorporation of certain substructures.
In this paper, we aim to develop a model that overcomes the above drawbacks and yet is able to empower GNNs' expressiveness. (1) We define a new type of substructures named union subgraphs, each capturing the entire closed neighborhood w.r.t. an edge. (2) We design an effective substructure descriptor that encodes high-order connectivities and can be easily incorporated into arbitrary message-passing neural networks (MPNNs) or Transformer-based models. (3) We propose a new model, namely Union Subgraph Neural Network (UnionSNN), which is strictly more expressive than vanilla GNNs (1-WL) in theory and also computationally efficient in practice. Our contributions are summarized as follows:
* We investigate different types of connectivities existing in the local neighborhood and identify the substructure, named "union subgraph", that is able to capture the complete 1-hop neighborhood.
* We abstract three desirable properties for a good substructure descriptor and design a shortest-path-based descriptor that possesses all the properties with high-order connectivities encoded.
* We propose a new model, UnionSNN, which incorporates the information extracted from union subgraphs into the message passing. We also show how our local encoding can be flexibly injected into any arbitrary MPNNs and Transformer-based models. We theoretically prove that UnionSNN is more expressive than 1-WL. We also show that UnionSNN is stronger than 3-WL in some cases.
* We perform extensive experiments on both graph-level and node-level tasks. UnionSNN consistently outperforms baseline models on 17 benchmark datasets, with competitive efficiency. The injection of our local encoding is able to boost the performance of base models by up to 11.09%, which justifies the effectiveness of our proposed union subgraph and substructure descriptor in capturing local information.
## 2 Related Work
### Substructure-Enhanced GNNs
In recent years, several GNN architectures have been designed to enhance their expressiveness by encoding local substructures. GraphSNN [55] brings the information of overlap subgraphs into the message passing scheme as a structural coefficient. However, the overlap subgraph and the substructure descriptor used by GraphSNN are not powerful enough to distinguish all non-isomorphic substructures in the 1-hop neighborhood. Zhao et al. [65] encode the induced subgraph for each node and inject it into node representations. Graph Substructure Network [4] introduces structural biases in the aggregation function to break the symmetry in message passing. For these two methods, the neighborhood under consideration should be pre-defined, and the subgraph matching is extremely expensive (\(O(n^{k})\) for \(k\)-tuple substructure) when the substructure gets large. Similarly, a line of research [3; 17; 47] develops new WL aggregation schemes to take into account substructures like cycles or cliques. Despite these enhancements, performing cycle counting is very time-consuming. Other Transformer-based methods [11; 25; 33; 56] incorporate local structural information via positional encoding [27; 62]. Graphormer [59] combines the node degree and the shortest path information for spatial encoding, while other works [13; 26] employ random walk based encodings that can encode \(k\)-hop neighborhood information of a node. However, these positional encodings
only consider relative distances from the center node and ignore high-order connectivities between the neighbors.
### Path-Related GNNs
A significant amount of work has focused on applying shortest paths and related techniques to GNNs. Li et al. [26] present a distance encoding module to augment node features and control the receptive field of message passing. GeniePath [29] proposes an adaptive breadth function to learn the importance of different-sized neighborhoods and an adaptive depth function to extract and filter signals from neighbors within different distances. PathGNN [46] imitates how the Bellman-Ford algorithm solves the shortest path problem in generating weights when updating node features. SPN [1] designs a scheme in which the representation of a node is propagated to each node in its shortest path neighborhood. Some recent works adapt the concept of curvature from differential geometry to reflect the connectivity between nodes and possible bottleneck effects. CurvGN [58] reflects how easily information flows between two nodes by graph curvature information, and exploits curvature to reweigh different channels of messages. Topping et al. [48] propose Balanced Forman curvature that better reflects the edges having bottleneck effects, and alleviate the over-squashing problem of GNNs by rewiring graphs. SNALS [51] utilizes an affinity matrix based on shortest paths to encode the structural information of hyperedges. Our method differs from these existing methods by introducing a shortest-path-based substructure descriptor for distinguishing non-isomorphic substructures.
## 3 Local Substructures to Empower MPNNs
In this section, we first introduce MPNNs. We then investigate what kind of local substructures are beneficial to improve the expressiveness of MPNNs.
### Message Passing Neural Networks
We represent a graph as \(G=(V,E,X)\), where \(V=\{v_{1},...,v_{n}\}\) is the set of nodes, \(E\in V\times V\) is the set of edges, and \(X=\{\mathbf{x}_{v}\mid v\in V\}\) is the set of node features. The set of neighbors of node \(v\) is denoted by \(\mathcal{N}(v)=\{u\in V\mid(v,u)\in E\}\). The \(l\)-th layer of an MPNN [57] can be written as:
\[\mathbf{h}_{v}^{(l)}=\mathrm{AGG}^{(l-1)}(\mathbf{h}_{v}^{(l-1)},\mathrm{MSG }^{(l-1)}(\{\mathbf{h}_{u}^{(l-1)},u\in\mathcal{N}(v)\})), \tag{1}\]
where \(\mathbf{h}_{v}^{(l)}\) is the representation of node \(v\) at the \(l\)-th layer, \(\mathbf{h}_{v}^{(0)}=\mathbf{x}_{v}\), \(\mathrm{AGG}(\cdot)\) and \(\mathrm{MSG}(\cdot)\) denote the aggregation and message functions, respectively.
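As a generic illustration of Eq. (1) (not the implementation of any particular model), one round of message passing can be sketched as follows, with `msg_fn` and `agg_fn` standing in for the learnable MSG and AGG functions.

```python
import numpy as np

def mpnn_layer(h, adj, msg_fn, agg_fn):
    """h: (N, d) node features; adj[v] lists the neighbors of node v."""
    h_new = np.zeros_like(h)
    for v, neighbors in enumerate(adj):
        messages = [msg_fn(h[u]) for u in neighbors]  # one message per neighbor
        h_new[v] = agg_fn(h[v], messages)             # combine with the node's own state
    return h_new
```

For instance, `msg_fn=lambda x: x` together with `agg_fn=lambda hv, msgs: hv + sum(msgs)` recovers a GIN-style sum aggregation (with \(\epsilon=0\)) before the MLP update.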
### Local Substructures to Improve MPNNs
According to Eq. (1), MPNN updates the representation of a node isotropously at each layer and ignores the structural connections between the neighbors of the node. Essentially, the local substructure utilized in the message passing of MPNN is a subtree rooted at the node. Consequently, if two non-isomorphic graphs have the same set of rooted subtrees, they cannot be distinguished by MPNN (and also 1-WL). Such an example is shown in Figure 1(a). A simple fix to this problem is to encode the local structural information about each neighbor, based on which neighbors are treated unequally in the message passing. One natural question arises: **which substructure shall we choose to characterize the 1-hop local information?**
To answer the above question, we consider two adjacent nodes \(v\) and \(u\), and discuss different types of edges that may exist in their neighbor sets, \(\mathcal{N}(v)\) and \(\mathcal{N}(u)\). We define the closed neighbor set of node \(v\) as \(\tilde{\mathcal{N}}(v)=\mathcal{N}(v)\cup\{v\}\). The induced subgraph of \(\tilde{\mathcal{N}}(v)\) is denoted by \(S_{v}\), which defines the closed neighborhood of \(v\). The common closed neighbor set of \(v\) and \(u\) is \(\mathcal{N}_{vu}=\tilde{\mathcal{N}}(v)\cap\tilde{\mathcal{N}}(u)\) and the exclusive neighbor set of \(v\) w.r.t \(u\) is defined as \(\mathcal{N}_{v}^{-u}=\tilde{\mathcal{N}}(v)-\mathcal{N}_{vu}\). As shown in Figure 1(b), there are four types of edges in the closed neighborhood of \(\{v,u\}\):
* \(E_{1}^{vu}\in\mathcal{N}_{vu}\times\mathcal{N}_{vu}\): edges between the common closed neighbors of \(v\) and \(u\), such as \((a,b)\)
* \(E_{2}^{vu}\in(\mathcal{N}_{vu}\times\mathcal{N}_{v}^{-u})\cup(\mathcal{N}_{vu} \times\mathcal{N}_{u}^{-v})\): edges between a common closed neighbor of \(v\) and \(u\) and an exclusive neighbor of \(v\)/\(u\), such as \((a,d)\);
* \(E_{3}^{vu}\in\mathcal{N}_{v}^{-u}\times\mathcal{N}_{u}^{-v}\): edges between two exclusive neighbors of \(v\) and \(u\) from different sides, such as \((c,d)\);
* \(E_{4}^{vu}\in(\mathcal{N}_{v}^{-u}\times\mathcal{N}_{v}^{-u})\cup(\mathcal{N}_ {u}^{-v}\times\mathcal{N}_{u}^{-v})\): edges between two exclusive neighbors of \(v\) or \(u\) from the same side, such as \((d,f)\).
We now discuss three different local substructures, each capturing a different set of edges.
**Overlap Subgraph [55].** The overlap subgraph of two adjacent nodes \(v\) and \(u\) is defined as \(S_{v\cap u}=S_{v}\cap S_{u}\). The overlap subgraph contains only edges in \(E_{1}^{vu}\).
**Union Minus Subgraph.** The union minus subgraph of two adjacent nodes \(v\) and \(u\) is defined as \(S_{v\cup u}^{-}=S_{v}\cup S_{u}\). The union minus subgraph consists of edges in \(E_{1}^{vu}\), \(E_{2}^{vu}\) and \(E_{4}^{vu}\).
**Union Subgraph**. The union subgraph of two adjacent nodes \(v\) and \(u\), denoted as \(S_{v\cup u}\), is defined as the induced subgraph of \(\tilde{\mathcal{N}}(v)\cup\tilde{\mathcal{N}}(u)\). The union subgraph contains all four types of edges mentioned above.
It is obvious that the union subgraph captures the whole picture of the 1-hop neighborhood of two adjacent nodes. This subgraph captures all types of connectivities within the neighborhood, providing an ideal local substructure for enhancing the expressive power of MPNNs. We illustrate how effective different local substructures are in improving MPNNs through an example in Appendix A. Note that we restrict the discussion to the 1-hop neighborhood because we aim to develop a model based on the MPNN scheme, in which a single layer of aggregation is performed on the 1-hop neighbors.
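For illustration, the union subgraph of an edge can be extracted with NetworkX as in the sketch below; this is a sketch rather than our official implementation.

```python
import networkx as nx

def union_subgraph(G, v, u):
    """Induced subgraph on the union of the closed neighborhoods of adjacent nodes v and u."""
    nodes = set(G.neighbors(v)) | set(G.neighbors(u)) | {v, u}
    return G.subgraph(nodes)
```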
### Union Isomorphism
We now proceed to define the isomorphic relationship between the neighborhoods of two nodes \(i\) and \(j\) based on union subgraphs. The definition follows that of overlap isomorphism in [55].
**Overlap Isomorphism.**\(S_{i}\) and \(S_{j}\) are overlap-isomorphic, denoted as \(S_{i}\simeq_{overlap}S_{j}\), if there exists a bijective mapping \(g\): \(\tilde{\mathcal{N}}(i)\rightarrow\tilde{\mathcal{N}}(j)\) such that \(g(i)=j\), and for any \(v\in\mathcal{N}(i)\) and \(g(v)=u\), \(S_{i\cap v}\) and \(S_{j\cap u}\) are isomorphic (ordinary graph isomorphic).
**Union Isomorphism**. \(S_{i}\) and \(S_{j}\) are union-isomorphic, denoted as \(S_{i}\simeq_{union}S_{j}\), if there exists a bijective mapping \(g\): \(\tilde{\mathcal{N}}(i)\rightarrow\tilde{\mathcal{N}}(j)\) such that \(g(i)=j\), and for any \(v\in\mathcal{N}(i)\) and \(g(v)=u\), \(S_{i\cup v}\) and \(S_{j\cup u}\) are isomorphic (ordinary graph isomorphic).
**Theorem 1**.: _If \(S_{i}\simeq_{union}S_{j}\), then \(S_{i}\simeq_{overlap}S_{j}\); but not vice versa._
Theorem 1 states that union-isomorphism is stronger than overlap-isomorphism. The proofs of all theorems are provided in Appendix B. We provide an example of a pair of non-isomorphic graphs that are distinguishable under union-isomorphism but not overlap-isomorphism or 1-WL (subtree). Please refer to Figure 6 in Appendix C for detailed discussions.
Figure 1: (a) A pair of non-isomorphic graphs not distinguishable by 1-WL; (b) An example of various local substructures for two adjacent nodes \(v\) and \(u\).
## 4 UnionSNN
In this section, we first discuss how to design our substructure descriptor so that it well captures the structural information in union subgraphs with several desirable properties. We then present our model UnionSNN, which effectively incorporates the information encoded by the substructure descriptor to MPNNs and Transformer-based models. Finally, we show that UnionSNN has a stronger expressiveness than 1-WL and is superior to GraphSNN in its design.
### Design of Substructure Descriptor Function
Let \(\mathcal{U}=\{S_{v\cup u}|(v,u)\in E\}\) be the set of union subgraphs in \(G\). In order to fuse the information of union subgraphs in message passing, we need to define a function \(f(\cdot)\) to describe the structural information of each \(S_{v\cup u}\in\mathcal{U}\). Ideally, given two union subgraphs centered at node \(v\), \(S_{v\cup u}=(V_{v\cup u},E_{v\cup u})\) and \(S_{v\cup u^{\prime}}=(V_{v\cup u^{\prime}},E_{v\cup u^{\prime}})\), we want \(f(S_{v\cup u})=f(S_{v\cup u^{\prime}})\) iff \(S_{v\cup u}\) and \(S_{v\cup u^{\prime}}\) are isomorphic. We abstract the following properties of a good substructure descriptor function \(f(\cdot)\):
* **Size Awareness**. \(f(S_{v\cup u})\neq f(S_{v\cup u^{\prime}})\) if \(|V_{v\cup u}|\neq|V_{v\cup u^{\prime}}|\) or \(|E_{v\cup u}|\neq|E_{v\cup u^{\prime}}|\);
* **Connectivity Awareness**. \(f(S_{v\cup u})\neq f(S_{v\cup u^{\prime}})\) if \(|V_{v\cup u}|=|V_{v\cup u^{\prime}}|\) and \(|E_{v\cup u}|=|E_{v\cup u^{\prime}}|\) but \(S_{v\cup u}\) and \(S_{v\cup u^{\prime}}\) are not isomorphic;
* **Isomorphic Invariance**. \(f(S_{v\cup u})=f(S_{v\cup u^{\prime}})\) if \(S_{v\cup u}\) and \(S_{v\cup u^{\prime}}\) are isomorphic.
Figure 2 illustrates the properties. Herein, we design \(f(\cdot)\) as a function that transforms \(S_{v\cup u}\) to a path matrix \(\mathbf{P}^{vu}\in\mathbb{R}^{|V_{v\cup u}|\times|V_{v\cup u}|}\) such that each entry:
\[\mathbf{P}^{vu}_{ij}=\mathrm{PathLen}(i,j,S_{v\cup u}),i,j\in V_{v\cup u}, \tag{2}\]
where \(\mathrm{PathLen}(\cdot)\) denotes the length of the shortest path between \(i\) and \(j\) in \(S_{v\cup u}\). We choose the path matrix over the adjacency matrix or the Laplacian matrix as it explicitly encodes high-order connectivities between the neighbors. In addition, with a fixed order of nodes, we can get a unique \(\mathbf{P}^{vu}\) for a given \(S_{v\cup u}\), and vice versa. We formulate it in Theorem 2.
**Theorem 2**.: _With a fixed order of nodes in the path matrix, we can obtain a unique path matrix \(\mathbf{P}^{vu}\) for a given union subgraph \(S_{v\cup u}\), and vice versa._
It is obvious that our proposed \(f(\cdot)\) satisfies the above-mentioned three properties, with a node permutation applied in the isomorphic case.
Figure 2: Three properties that a good substructure descriptor function \(f(\cdot)\) should exhibit.

**Discussion on other substructure descriptor functions.** In the literature, some other functions have also been proposed to describe graph substructures. (1) Edge Betweenness [5] is defined by the number of shortest paths between any pair of nodes in a (sub)graph \(G\) that pass through an edge. When applying the edge betweenness to \((v,u)\) in \(S_{v\cup u}\), the metric would remain the same on two different union subgraphs, one with an edge in \(E_{4}^{vu}\) and one without. This shows that edge betweenness does not satisfy Size Awareness; (2) Wijesinghe and Wang [55] put forward a substructure descriptor as a function of the number of nodes and edges. This descriptor fails to distinguish non-isomorphic subgraphs of the same size, and thus does not satisfy Connectivity Awareness; (3) Discrete Graph Curvature, e.g., Ollivier-Ricci curvature [28; 37], has been introduced to MPNNs in recent years [58]. Ricci curvature first computes for each node a probability vector of length \(|V|\) that characterizes a uniform propagation distribution in the neighborhood. It then defines the curvature of two adjacent nodes as the Wasserstein distance of their corresponding probability vectors. Similar to edge betweenness, curvature does not take into account the edges in \(E_{4}^{vu}\) in its computation and thus does not satisfy Size Awareness either. We detail the definitions of these substructure descriptor functions in Appendix D.
### Network Design
For the path matrix of an edge \((v,u)\) to be used in message passing, we need to further encode it as a scalar. We choose to perform Singular Value Decomposition (SVD) [18] on the path matrix and extract the singular values:
\[\mathbf{P}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{*}. \tag{3}\]
The sum of the singular values of \(\mathbf{P}^{vu}\), denoted as \(a^{vu}=\mathrm{sum}(\mathbf{\Sigma}^{vu})\), is used as the local structural coefficient of the edge \((v,u)\in E\). Note that since the local structure never changes in message passing, we can compute the structural coefficients in preprocessing before the training starts. A nice property of this structural coefficient is that, it is **permutation invariant** thanks to the use of SVD and the sum operator. With an arbitrary order of nodes, the computed \(a^{vu}\) remains the same, which removes the condition required by Theorem 2.
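Putting Eqs. (2) and (3) together, the coefficient \(a^{vu}\) can be computed as in the following sketch; it is a simplified illustration rather than the exact preprocessing code.

```python
import networkx as nx
import numpy as np

def structural_coefficient(G, v, u):
    """Path matrix of the union subgraph S_{v∪u}, followed by the sum of its singular values."""
    nodes = set(G.neighbors(v)) | set(G.neighbors(u)) | {v, u}
    S = G.subgraph(nodes)                 # connected, since every node is adjacent to v or u
    order = list(S.nodes())
    dist = dict(nx.all_pairs_shortest_path_length(S))
    P = np.array([[dist[i][j] for j in order] for i in order], dtype=float)
    return np.linalg.svd(P, compute_uv=False).sum()   # permutation invariant by construction
```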
**UnionSNN.** We now present our model, namely Union Subgraph Neural Network (UnionSNN), which utilizes union-subgraph-based structural coefficients to incorporate local substructures in message passing. For each vertex \(v\in V\), the node representation at the \(l\)-th layer is generated by:
\[\mathbf{h}_{v}^{(l)}=\mathrm{MLP}_{1}{}^{(l-1)}((1+\epsilon^{(l-1)})\mathbf{h}_{v}^{( l-1)}+\sum_{u\in\mathcal{N}(v)}\mathrm{Trans}^{(l-1)}(\tilde{a}^{vu})\mathbf{h}_{u}^ {(l-1)}), \tag{4}\]
where \(\epsilon^{(l-1)}\) is a learnable scalar parameter and \(\tilde{a}^{vu}=\frac{a^{vu}}{\sum_{u\in\mathcal{N}(v)}a^{vu}}\). \(\mathrm{MLP}_{1}(\cdot)\) denotes a multilayer perceptron (MLP) with a non-linear function ReLU. To transform the weight \(\tilde{a}^{vu}\) to align with the multi-channel representation \(\mathbf{h}_{u}^{(l-1)}\), we follow [58] and apply a transformation function \(\mathrm{Trans}(\cdot)\) for better expressiveness and easier training:
\[\mathrm{Trans}(a)=\mathrm{softmax}(\mathrm{MLP}_{2}(a)), \tag{5}\]
where \(\mathrm{MLP}_{2}\) denotes an MLP with ReLU and a channel-wise softmax function \(\mathrm{softmax}(\cdot)\) normalizes the outputs of MLP separately on each channel. For better understanding, we provide the pseudo-code of UnionSNN in Appendix E.
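A minimal PyTorch sketch of the update in Eqs. (4) and (5) is given below; the two-layer MLPs, hidden sizes, and the way the channel-wise softmax is applied are simplifying assumptions and may differ from the exact UnionSNN configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnionSNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.mlp2 = nn.Sequential(nn.Linear(1, dim), nn.ReLU(), nn.Linear(dim, dim))  # Trans(.)

    def forward(self, h, edge_index, a_norm):
        # h: (N, d); edge_index: (2, E), with both directions present for undirected graphs;
        # a_norm: (E,) precomputed normalized coefficients ã^{vu}.
        v, u = edge_index
        w = F.softmax(self.mlp2(a_norm.unsqueeze(-1)), dim=-1)   # simplified channel-wise softmax
        agg = torch.zeros_like(h).index_add_(0, v, w * h[u])     # sum of weighted neighbor messages
        return self.mlp1((1 + self.eps) * h + agg)
```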
**As a Plugin to Empower Other GNNs.** In addition to a standalone UnionSNN network, our union-subgraph-based structural coefficients could also be incorporated into other GNNs in a flexible and yet effective manner. For arbitrary MPNNs as in Eq. (1), we can plugin our structural coefficients via an element-wise multiplication:
\[\mathbf{h}_{v}^{(l)}=\mathrm{AGG}^{(l-1)}(\mathbf{h}_{v}^{(l-1)},\mathrm{MSG}^{(l-1)}(\{\mathrm{Trans}^{(l-1)}(\tilde{a}^{vu}) \mathbf{h}_{u}^{(l-1)},u\in\mathcal{N}(v)\})). \tag{6}\]
For transformer-based models, inspired by the spatial encoding in Graphormer [59], we can inject our structural coefficients into the attention matrix as a bias term:
\[A_{vu}=\frac{\left(h_{v}W_{Q}\right)\left(h_{u}W_{K}\right)^{T}}{\sqrt{d}}+ \mathrm{Trans}(\tilde{a}^{vu}), \tag{7}\]
where the definition of \(\mathrm{Trans}(\cdot)\) is the same as Eq. (5) and shared across all layers, \(h_{v},h_{u}\in\mathbb{R}^{1\times d}\) are the node representations of \(v\) and \(u\), \(W_{Q},W_{K}\in\mathbb{R}^{d\times d}\) are the parameter matrices, and \(d\) is the hidden dimension of \(h_{v}\) and \(h_{u}\). The detailed interpretation is presented in Appendix F.
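The bias term in Eq. (7) can be illustrated with a small NumPy sketch; here `bias[v, u]` is assumed to hold the precomputed \(\mathrm{Trans}(\tilde{a}^{vu})\) values for adjacent pairs (and zero elsewhere).

```python
import numpy as np

def biased_attention(h, W_q, W_k, bias):
    """h: (N, d) node features; W_q, W_k: (d, d) projections; bias: (N, N) structural bias."""
    d = h.shape[-1]
    scores = (h @ W_q) @ (h @ W_k).T / np.sqrt(d) + bias
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))   # numerically stable softmax
    return attn / attn.sum(axis=-1, keepdims=True)
```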
### Expressive Power of UnionSNN
We formalize the following theorem to show that UnionSNN is more powerful than 1-WL test in terms of expressive power.
**Theorem 3**.: _UnionSNN is more expressive than 1-WL in testing non-isomorphic graphs._
The stronger expressiveness of UnionSNN over 1-WL is credited to its use of union subgraphs, with an effective encoding of local neighborhood connectivities via the shortest-path-based design of structural coefficients. We further provide a special case to show some graphs can be distinguished by UnionSNN but not by 3-WL or GraphSNN in Appendix G.
**Design Comparisons with GraphSNN.** Our UnionSNN is similar to GraphSNN in the sense that both improve the expressiveness of MPNNs (and 1-WL) by injecting the information of local substructures. However, UnionSNN is superior to GraphSNN in the following aspects. (1) Union subgraphs in UnionSNN are stronger than overlap subgraphs in GraphSNN, as ensured by Theorem 1. (2) The shortest-path-based substructure descriptor designed in UnionSNN is more powerful than that in GraphSNN: the latter fails to possess the property of Connectivity Awareness (as elaborated in Section 4.1). An example of two non-isomorphic subgraphs \(S_{v^{\prime}\cap u}\) and \(S_{v^{\prime}\cap u^{\prime}}\) is shown in Figure 3; they have the same structural coefficients in GraphSNN. (3) The aggregation function in UnionSNN works on adjacent nodes in the input graph, while that in GraphSNN utilizes the structural coefficients on all pairs of nodes (regardless of their adjacency). Consequently, GraphSNN requires padding the adjacency matrix and feature matrix of each graph to the maximum graph size, which significantly increases the computational complexity. The advantages of UnionSNN over GraphSNN are also evidenced by the experimental results in Section 5.4.
## 5 Experimental Study
In this section, we evaluate the effectiveness of our proposed model under various settings and aim to answer the following research questions: **RQ1.** Can UnionSNN outperform existing MPNNs and transformer-based models? **RQ2.** Can other GNNs benefit from our structural coefficient? **RQ3.** How do different components affect the performance of UnionSNN? **RQ4.** Is our runtime competitive with other substructure descriptors? We conduct experiments on three tasks: graph classification, graph regression and node classification. When our structural coefficient is plugged into another model, we add the prefix "Union-" to its name, e.g., UnionGCN.
**Datasets.** For graph classification, we use 10 benchmark datasets. Eight of them were selected from the TUDataset [22], including MUTAG, PROTEINS, ENZYMES, DD, FRANKENSTEIN (denoted
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline & MUTAG & PROTEINS & ENZYMES & DD & FRANK & Tox21 & NCI1 & NCI109 \\ \hline GAT & 77.56 \(\pm\) 10.49 & 74.34 \(\pm\) 2.09 & 67.67 \(\pm\) 3.74 & 74.25 \(\pm\) 3.76 & 62.85 \(\pm\) 1.59 & 90.35 \(\pm\) 0.71 & 78.07 \(\pm\) 1.94 & 74.34 \(\pm\) 2.18 \\
3WL-GNN & 84.06 \(\pm\) 6.62 & 60.18 \(\pm\) 6.35 & 54.17 \(\pm\) 6.25 & 74.84 \(\pm\) 2.63 & 58.68 \(\pm\) 1.93 & 90.31 \(\pm\) 1.33 & 78.39 \(\pm\) 1.54 & 77.97 \(\pm\) 2.22 \\ UGformer & 75.66 \(\pm\) 8.67 & 70.17 \(\pm\) 5.42 & 64.57 \(\pm\) 4.53 & 75.51 \(\pm\) 5.52 & 56.13 \(\pm\) 2.51 & 88.06 \(\pm\) 0.50 & 68.84 \(\pm\) 1.54 & 66.37 \(\pm\) 2.74 \\ MEWISPool & 84.73 \(\pm\) 4.73 & 68.10 \(\pm\) 3.97 & 53.66 \(\pm\) 6.07 & 76.03 \(\pm\) 2.59 & 64.63 \(\pm\) 2.83 & 88.13 \(\pm\) 0.05 & 74.21 \(\pm\) 3.26 & 75.30 \(\pm\) 1.45 \\ CurvGN & 87.25 \(\pm\) 6.28 & 75.73 \(\pm\) 2.87 & 56.50 \(\pm\) 7.13 & 72.16 \(\pm\) 1.88 & 61.89 \(\pm\) 2.41 & 90.87 \(\pm\) 0.38 & 79.32 \(\pm\) 1.26 & 77.30 \(\pm\) 1.78 \\ NestedGIN & 86.23 \(\pm\) 8.82 & 86.85 \(\pm\) 3.22 & 54.67 \(\pm\) 9.90 & 70.44 \(\pm\) 3.67 & 67.14 \(\pm\) 19.42 & 11.81 \(\pm\) 82.04 & 22.32 \(\pm\) 79.94 \(\pm\) 1.59 \\ GatedGCN-LSPE & 88.33 \(\pm\) 8.38 & 73.94 \(\pm\) 4.22 & 64.50 \(\pm\) 5.92 & 76.74 \(\pm\) 2.69 & 67.44 \(\pm\) 2.65 & 91.71 \(\pm\) 78.05 & 78.15 \(\pm\) 68.01 & 3.23 \(\pm\) 2.33 \\ GraphSNN & 84.04 \(\pm\) 4.09 & 71.78 \(\pm\) 4.11 & 67.67 \(\pm\) 3.74 & 76.03 \(\pm\) 2.59 & 67.17 \(\pm\) 2.25 & **92.24**\(\pm\) 0.59 & 70.87 \(\pm\) 2.78 & 70.11 \(\pm\) 1.86 \\ \hline GCN & 77.13 \(\pm\) 5.24 & 73.89 \(\pm\) 2.85 & 64.33 \(\pm\) 5.83 & 72.16 \(\pm\) **2.28** 58.80 \(\pm\) 1.06 & 90.10 \(\pm\) 0.77 & 79.73 \(\pm\) 0.95 & 75.91 \(\pm\) 1.53 \\ UnionGCN (ours) & **81.87 \(\pm\) 3.81** & 75.02 \(\pm\) **2.50** & **64.67**\(\pm\)**7.14** & 69.69 \(\pm\) 4.18 & 61.72 \(\pm\) 1.76 & 91.63 \(\pm\) **0.72** & 80.41 \(\pm\) **1.84** & 79.50 \(\pm\) 1.82 \\ \hline GatedGCN & 77.11 \(\pm\) 10.05 & 76.18 \(\pm\) 3.12 & 66.83 \(\pm\) 5.08 & 72.58 \(\pm\) 3.061 & 61.40 \(\pm\) 1.92 & 90.83 \(\pm\) 0.96 & 80.32 \(\pm\) 2.07 & 78.19 \(\pm\) 2.39 \\ UnionGCN(ours) & 77.14 \(\pm\) **8.14** & **76.91** & 8.06 \(\pm\) **6.73** & 8.68 \(\pm\) 6.89 & 72.50 \(\pm\) 2.22 & 61.44 \(\pm\) **9.81** & 91.31 \(\pm\) **0.88** & 80.95 \(\pm\) 2.11 & 61.82 \(\pm\) **0.88** \\ \hline GraphSAGE & 80.38 \(\pm\) 0.98 & 74.87 \(\pm\) 3.35 & 52.50 \(\pm\) 5.65 & 73.10 \(\pm\) 3.44 & 52.95 \(\pm\) 4.01 & 88.36 \(\pm\) 0.15 & 63.94 \(\pm\) 2.65 & 64.6 \(\pm\) 1.12 \\ UnionSAGE(ours) & 83.04 \(\pm\) 8.20 & 74.57 \(\pm\) 3.55 & 88.32 \(\pm\) 26.64 & 73.85 \(\pm\) 4.06 & 88.59 \(\pm\) 0.12 & 69.36 \(\pm\) 1.84 & 69.87 \(\pm\) 0.06 \\ \hline GIN & 86.23 \(\pm\) 8.17 & 72.86 \(\pm\) 4.14 & 65.83 \(\pm\) 5.93 & 70.29 \(\pm\) 2.96 & 66.50 \(\pm\) 2.37 & 91.74 \(\pm\) 0.95 & 82.29 \(\pm\) 1.77 & 80.95 \(\pm\) 1.87 \\ UnionGIN (ours) & **88.86 \(\pm\) 4.33** & 73.22 \(\pm\) 3.90 & 67.83 \(\pm\) **6.10** & 70.47 \(\pm\) **4.98** & **68.02** & **91.74** & **74.04** & **82.29** & **9.185** & **82.24** & **9.158** \\ \hline UnionSNN (ours) & 87.31 \(\pm\) 5.29 & 75.02 \(\pm\) 2.50 & **68.17** \(\pm\) 7.50 & **77.00** & \(\pm\) 3.73 & 67.83 \(\pm\) 1.99 & 91.76 \(\pm\) 0.85 & **82.34** & \(\pm\) 1.93 & 81.61 \(\pm\) 1.78 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Graph classification results (average accuracy \(\pm\) standard deviation) over 10-fold-CV. The first and second best results on each dataset are highlighted in **bold** and underlined.
as FRANK in our tables), Tox21, NCI1 and NCI109. The other two datasets OGBG-MOLHIV and OGBG-MOLBBBP were selected from Open Graph Benchmark [19]. For graph regression, we conduct experiments on ZINC10k and ZINC-full datasets [12]. For node classification, we test on five datasets, including citation networks (Cora, Citeseer, and PubMed [42]) and Amazon co-purchase networks (Computer and Photo [32]). These datasets cover various graph sizes and densities. The statistics of datasets are summarized in Appendix H.
**Baseline Models.** We select various GNN models as baselines, including (1) classical MPNNs such as GCN [24], GIN [57], GraphSAGE [16], GAT [49], GatedGCN [6]; (2) WL-based GNNs such as 3WL-GNN [30]; (3) transformer-based methods such as UGformer [35], Graphormer [59] and GPS [41]; (4) state-of-the-art graph pooling methods such as MEWISPool [36]; (5) methods that introduce structural information by shortest paths or curvature, such as GeniePath [29], CurvGN [58], and NestedGIN [64]; (6) GNNs with positional encoding, such as GatedGCN-LSPE [13]; (7) GraphSNN [55]. Model implementation details are provided in Appendix I.
### Performance on Different Graph Tasks
**Graph-Level Tasks**. For graph classification, we report the results on 8 TUDatasets in Table 1 and the results on 2 OGB datasets in Appendix J. Our UnionSNN outperforms all baselines on 7 out of 10 datasets (comparing UnionSNN with all baselines not marked "ours"). We further apply our structural coefficient as a plugin component to four MPNNs: GCN, GatedGCN, GraphSAGE and GIN. The results show that our structural coefficient is able to boost the performance of the base model in almost all cases, with an improvement of up to 11.09%. For graph regression, we report the mean absolute error (MAE) on ZINC10k and ZINC-full. As shown in Table 2, MPNNs equipped with our structural coefficient (UnionGCN, UnionGIN and UnionSAGE) dramatically outperform their counterparts. Additionally, when injecting our structural coefficient into Transformer-based models, Unionformer and UnionGPS make further improvements over Graphormer and GPS.
**Node-Level Tasks**. We report the results of node classification in Table 3. UnionSNN outperforms all baselines on all 5 datasets. Again, injecting our structural coefficient to GCN, GIN, and GraphSNN achieves performance improvement over base models in almost all cases.
Table 2: Graph regression results (average test MAE \(\pm\) standard deviation) on ZINC10k and ZINC-full datasets. The best result is highlighted in **bold**. The winner between a base model with and without our structural coefficient injected is highlighted in **gray background**.

| | ZINC10k | ZINC-full |
| --- | --- | --- |
| GCN | 0.3800 ± 0.0171 | 0.1152 ± 0.0010 |
| UnionGCN (ours) | 0.2811 ± 0.0050 | 0.0877 ± 0.0003 |
| GIN | 0.5090 ± 0.0365 | 0.1552 ± 0.0079 |
| UnionGIN (ours) | 0.4625 ± 0.0222 | 0.1334 ± 0.0013 |
| GraphSAGE | 0.3953 ± 0.0290 | 0.1205 ± 0.0034 |
| UnionSAGE (ours) | 0.3768 ± 0.0011 | 0.1146 ± 0.0017 |
| Graphormer | 0.1269 ± 0.0033 | 0.039 ± 0.0031 |
| Unionformer (ours) | 0.1241 ± 0.0066 | 0.0252 ± 0.0026 |
| GPS | 0.0740 ± 0.0022 | 0.0262 ± 0.0025 |
| UnionGPS (ours) | 0.0681 ± 0.0013 | 0.0236 ± 0.0017 |
Table 3: Node classification results (average accuracy \(\pm\) standard deviation) over 10 runs. The first and second best results on each dataset are highlighted in **bold** and underlined. The winner between a base model with and without our structural coefficient injected is highlighted in **gray background**.

| | Cora | Citeseer | PubMed | Computer | Photo |
| --- | --- | --- | --- | --- | --- |
| GraphSAGE | 70.60 ± 0.64 | 55.02 ± 3.40 | 70.36 ± 4.29 | 80.30 ± 1.30 | 89.16 ± 1.03 |
| GAT | 74.82 ± 1.95 | 63.82 ± 2.81 | 74.02 ± 1.11 | 85.94 ± 2.35 | 91.86 ± 0.47 |
| GeniePath | 72.16 ± 2.69 | 57.40 ± 2.16 | 70.96 ± 2.06 | 82.68 ± 0.45 | 89.98 ± 1.14 |
| CurvGN | 74.06 ± 1.54 | 62.08 ± 0.85 | 74.54 ± 1.61 | 86.30 ± 0.70 | 92.50 ± 0.50 |
| GCN | 72.56 ± 4.41 | 85.30 ± 3.2 | 74.44 ± 0.71 | 84.58 ± 3.02 | 91.71 ± 0.55 |
| UnionGCN (ours) | 74.48 ± 0.42 | 59.02 ± 3.64 | 74.82 ± 1.10 | 88.84 ± 0.27 | 92.33 ± 0.53 |
| GIN | 75.86 ± 1.09 | 63.10 ± 2.24 | 76.62 ± 0.64 | 86.26 ± 0.56 | 92.11 ± 0.32 |
| UnionGIN (ours) | 75.90 ± 0.80 | 63.66 ± 1.75 | 76.78 ± 1.02 | 86.81 ± 2.12 | 92.28 ± 0.19 |
| GraphSNN | 75.44 ± 0.73 | 64.68 ± 2.72 | 76.76 ± 0.54 | 84.11 ± 0.57 | 90.82 ± 0.30 |
| UnionGraphSNN (ours) | 75.58 ± 0.49 | 65.22 ± 1.12 | 76.99 ± 0.56 | 84.58 ± 0.46 | 90.60 ± 0.58 |
| UnionSNN (ours) | 76.86 ± 1.58 | 65.02 ± 1.02 | 77.06 ± 1.07 | 87.76 ± 0.36 | 92.92 ± 0.38 |
### Ablation Study
In this subsection, we validate empirically the design choices made in different components of our model: (1) the local substructure; (2) the substructure descriptor; (3) the encoding method from a path matrix to a scalar. All experiments were conducted on 6 graph classification datasets.
**Local Substructure**. We test three types of local substructures defined in Section 3.2: overlap subgraphs, union minus subgraphs and union subgraphs. They are denoted as "overlap", "minus", and "union" respectively in Table 4. The best results are consistently achieved by using union subgraphs.
**Substructure Descriptor**. We compare our substructure descriptor with four existing ones discussed in Section 4.1. We replace the substructure descriptor in UnionSNN with edge betweenness, node/edge counting, Ricci curvature, and Laplacian matrix (other components unchanged), and obtain four variants, namely BetSNN, CountSNN, CurvNN, and LapSNN. Table 5 shows our UnionSNN is a clear winner: it achieves the best result on 5 out of 6 datasets. This experiment demonstrates that our path matrix better captures structural information.
**Path Matrix Encoding Method**. We test three methods that transform a path matrix to a scalar: (1) sum of all elements in the path matrix (matrix sum); (2) maximum eigenvalue of the path matrix (eigen max); (3) sum of all singular values of the matrix (svd sum) used by UnionSNN in Section 4.2. Table 6 shows that the encoding method "svd sum" performs the best on 5 out of 6 datasets.
### Case Study
In this subsection, we investigate how the proposed structural coefficient \(a^{vu}\) reflects local connectivities. We work on an example union subgraph \(S_{v\cup u}\) in Figure 4 and modify its nodes/edges to study how the coefficient \(a^{vu}\) varies with the local structural change. We have the following observations: (1) with the set of nodes unchanged, deleting an edge increases \(a^{vu}\); (2) deleting a node (and its incident edges) decreases \(a^{vu}\); (3) the four types of edges in the closed neighborhood (Section 3.2) have different effects on \(a^{vu}\): \(E_{1}^{vu}<E_{2}^{vu}<E_{3}^{vu}<E_{4}^{vu}\) (by comparing -ab, -ad, -de, and +df). These observations indicate that a smaller coefficient will be assigned to an edge with a denser local substructure.
Table 5: Ablation study on substructure descriptor. The best result is highlighted in **bold**.

| | MUTAG | PROTEINS | ENZYMES | DD | NCI1 | NCI109 |
| --- | --- | --- | --- | --- | --- | --- |
| BetSNN | 80.94 ± 6.60 | 69.44 ± 6.15 | 65.00 ± 5.63 | 70.20 ± 5.15 | 74.91 ± 2.48 | 73.70 ± 1.87 |
| CountSNN | 84.65 ± 6.76 | 70.79 ± 5.07 | 66.50 ± 6.77 | 74.36 ± 7.21 | 81.74 ± 2.35 | 79.80 ± 1.67 |
| CurvNN | 85.15 ± 7.35 | 72.77 ± 4.42 | 67.17 ± 6.54 | 75.88 ± 3.24 | 81.34 ± 2.27 | 80.64 ± 1.85 |
| LapSNN | **89.39 ± 5.24** | 68.32 ± 3.49 | 66.17 ± 4.15 | 76.31 ± 2.85 | 81.39 ± 2.08 | 81.34 ± 2.93 |
| UnionSNN | 87.31 ± 5.29 | **75.02 ± 2.50** | **68.17 ± 5.70** | **77.00 ± 2.37** | **82.34 ± 1.93** | **81.61 ± 1.78** |
Table 4: Ablation study on local substructure. The best result is highlighted in **bold**.

| | MUTAG | PROTEINS | ENZYMES | DD | NCI1 | NCI109 |
| --- | --- | --- | --- | --- | --- | --- |
| overlap | 85.70 ± 7.40 | 71.33 ± 5.35 | 65.00 ± 5.63 | 73.43 ± 4.07 | 73.58 ± 1.73 | 72.96 ± 2.01 |
| minus | **87.31 ± 5.29** | 68.70 ± 3.61 | 65.33 ± 4.58 | 74.79 ± 4.63 | 80.66 ± 1.90 | 78.70 ± 2.48 |
| union | **87.31 ± 5.29** | **75.02 ± 2.50** | **68.17 ± 5.70** | **77.00 ± 2.37** | **82.34 ± 1.93** | **81.61 ± 1.78** |
Figure 4: Structural coefficient analysis.
This matches our expectation that the coefficient should be small for an edge in a highly connected neighborhood. The rationale is that such edges are less important in message passing, as the information between their two incident nodes can flow through more paths. By using coefficients that well capture local connectivities, the messages from different neighbors can be properly adjusted when passed to the center node. This also explains the effectiveness of UnionSNN in the performance experiments. A quantitative analysis of cycle detection is provided in Appendix J.2 to show the ability of our proposed structural coefficients to capture local substructure information.
### Efficiency Analysis
In this subsection, we conduct experiments on PROTEINS, DD and FRANKENSTEIN datasets, which cover various number of graphs and graph sizes.
**Preprocessing computational cost.** UnionSNN computes structural coefficients in preprocessing. We compare its preprocessing time with the time needed in baseline models for pre-computing their substructure descriptors, including edge betweenness (Betweenness) in BetSNN, node/edge counting (Count_ne) in GraphSNN, Ricci curvature (Curvature) in CurvGN, and counting cycles (Count_cycle) in [3]. As shown in Table 7, the preprocessing time of UnionSNN is comparable to that of other models. This demonstrates that our proposed structural coefficient is able to improve classification performance without significantly sacrificing efficiency. Theoretical time complexity analysis of the structure descriptors is provided in Appendix K.
**Runtime computational cost.** We conduct an experiment to compare the total runtime cost of UnionSNN with those in other MPNNs. The results are reported in Table 8. Although UnionSNN runs slightly slower than GCN and GIN, it runs over 4.56 times faster than WL-based MPNN (3WL-GNN) and is comparable to MPNN with positional encoding (GatedGCN-LSPE). Compared with GraphSNN, UnionSNN runs significantly faster: the efficiency improvement approaches an order of magnitude on datasets with large graphs, e.g., DD. This is because UnionSNN does not need to pad the adjacency matrix and the feature matrix of each graph to the maximum graph size in the dataset, as what GraphSNN does.
## 6 Conclusions
We present UnionSNN, a model that outperforms 1-WL in distinguishing non-isomorphic graphs. UnionSNN utilizes an effective shortest-path-based substructure descriptor applied to union subgraphs, making it more powerful than previous models. Our experiments demonstrate that UnionSNN surpasses state-of-the-art baselines in both graph-level and node-level classification tasks while maintaining its computational efficiency. The use of union subgraphs enhances the model's ability to capture neighbor connectivities and facilitate message passing. Additionally, when applied to existing MPNNs and Transformer-based models, UnionSNN improves their performance by up to 11.09%.
|
2307.12889 | Estimates on the Neumann and Steklov principal eigenvalues of collapsing
domains | We investigate the relationship between the Neumann and Steklov principal
eigenvalues emerging from the study of collapsing convex domains in
$\mathbb{R}^2$. Such a relationship allows us to give a partial proof of a
conjecture concerning estimates of the ratio of the former to the latter: we
show that thinning triangles maximize the ratio among convex thinning sets,
while thinning rectangles minimize the ratio among convex thinning with some
symmetry property. | Paolo Acampora, Vincenzo Amato, Emanuele Cristoforoni | 2023-07-24T15:34:34Z | http://arxiv.org/abs/2307.12889v1 | # Estimates on the Neumann and Steklov principal eigenvalues of collapsing domains
###### Abstract
We investigate the relationship between the Neumann and Steklov principal eigenvalues emerging from the study of collapsing convex domains in \(\mathbb{R}^{2}\). Such a relationship allows us to give a partial proof of a conjecture concerning estimates of the ratio of the former to the latter: we show that thinning triangles maximize the ratio among convex thinning sets, while thinning rectangles minimize the ratio among convex thinning sets with some symmetry property.
**MSC 2020:** 35P15, 49Q10, 52A40
**Keywords:** Neumann eigenvalue, Steklov eigenvalue, thinning convex sets, Sturm-Liouville
## 1 Introduction
Let \(\Omega\subset\mathbb{R}^{2}\) be a bounded, open, connected and Lipschitz set. We define the Neumann and Steklov eigenvalues as follows: find positive constants \(\mu,\sigma\) such that there exist non-zero solutions to the boundary value problems
\[\begin{cases}-\Delta u=\mu u&\text{in }\Omega,\\ \frac{\partial u}{\partial\nu}=0&\text{on }\partial\Omega,\end{cases} \begin{cases}\Delta v=0&\text{in }\Omega,\\ \frac{\partial v}{\partial\nu}=\sigma v&\text{on }\partial\Omega.\end{cases}\]
The regularity assumption we made on \(\Omega\) ensures that we can find two increasing and divergent sequences of eigenvalues
\[0 =\mu_{0}(\Omega)<\mu_{1}(\Omega)\leq\mu_{2}(\Omega)\leq\cdots\leq \mu_{k}(\Omega)\leq\ldots,\] \[0 =\sigma_{0}(\Omega)<\sigma_{1}(\Omega)\leq\sigma_{2}(\Omega)\leq \cdots\leq\sigma_{k}(\Omega)\leq\ldots,\]
which are the spectrum of the Neumann laplacian and the spectrum of the Dirichlet-to-Neumann map respectively. We recall the variational characterization of the eigenvalues, for \(k\geq 0\):
\[\mu_{k}(\Omega)=\inf_{E\in\mathcal{S}_{k+1}(\Omega)}\;\sup_{w\in E \backslash\{0\}}\frac{\int_{\Omega}\lvert\nabla w\rvert^{2}\,dx}{ \int_{\Omega}w^{2}\,dx}, \sigma_{k}(\Omega)=\inf_{E\in\mathcal{S}_{k+1}(\Omega)}\; \sup_{w\in E\backslash\{0\}}\frac{\int_{\Omega}\lvert\nabla w \rvert^{2}\,dx}{\int_{\partial\Omega}w^{2}\,d\mathcal{H}^{n-1}},\]
where \(\mathcal{S}_{k+1}(\Omega)\) is the family of all linear subspaces of \(H^{1}(\Omega)\) of dimension \(k+1\). In particular, we are interested in the principal eigenvalues, i.e. \(k=1\), namely
\[\mu_{1}(\Omega)=\inf_{\begin{subarray}{c}w\in H^{1}(\Omega) \backslash\{0\}\\ \int_{\Omega}w=0\end{subarray}}\frac{\int_{\Omega}\lvert\nabla w \rvert^{2}\,dx}{\int_{\Omega}w^{2}\,dx}, \sigma_{1}(\Omega)=\inf_{\begin{subarray}{c}w\in H^{1}( \Omega)\backslash\{0\}\\ \int_{\partial\Omega}w=0\end{subarray}}\frac{\int_{\Omega}\lvert \nabla w\rvert^{2}\,dx}{\int_{\partial\Omega}w^{2}\,d\mathcal{H}^{n-1}}.\]
Many authors in the literature identified remarkable similarities between the two families of eigenvalues. Moreover, an underlying relationship holds between the two quantities. For instance, Steklov eigenvalues can be seen as limits of weighted Neumann eigenvalues, while Neumann eigenvalues can be obtained as limits of Steklov eigenvalues by suitably perforating the set \(\Omega\). We refer, for instance, to [8], and [4] for these results. We want to explore the same relationship between the two eigenvalues, from the shape optimization point of view.
Namely, we could be interested in the scale invariant ratio
\[F(\Omega)=\frac{|\Omega|\mu_{1}(\Omega)}{P(\Omega)\sigma_{1}(\Omega)},\]
and, consequently, in the two problems
\[\min_{\Omega\in\mathcal{K}}F(\Omega),\qquad\qquad\max_{\Omega\in\mathcal{K}} F(\Omega), \tag{1.1}\]
where \(\mathcal{K}\) is a suitable class of sets, \(|\cdot|\) denotes the area, and \(P(\cdot)\) denotes the perimeter. Unfortunately, the choice
\[\mathcal{K}=\big{\{}\,\Omega\subset\mathbb{R}^{2}\;\big{|}\;\Omega\text{ bounded, open and Lipschitz}\,\big{\}}\]
causes the problems in (1.1) to be ill-posed, in the sense that
\[\inf_{\mathcal{K}}F(\Omega)=0,\qquad\qquad\sup_{\mathcal{K}}F(\Omega)=+\infty,\]
as shown in [6], [2], and [5].
In order to obtain some comparison between Neumann and Steklov eigenvalues, we address the problems in (1.1) restricting the class of admissible sets to
\[\mathcal{K}_{c}=\big{\{}\,\Omega\subset\mathbb{R}^{2}\;\big{|}\;\Omega\text{ bounded, open and convex}\,\big{\}}. \tag{1.2}\]
This choice of \(\mathcal{K}_{c}\) avoids shapes that could make \(F\) degenerate, and precisely it could be shown, as in [6], that there exist two constants \(c,C>0\) such that
\[c\leq F(\Omega)\leq C\qquad\forall\,\Omega\in\mathcal{K}_{c}.\]
Additionally, numerical simulations lead the authors to state the following
**Conjecture 1.1** (Henrot, Michetti [6]).: _Let \(\mathcal{K}_{c}\) be as in (1.2), then_
\[1<F(\Omega)<2\qquad\forall\,\Omega\in\mathcal{K}_{c}.\]
_Moreover, the inequalities are sharp in the following sense: there exists a sequence \(R_{n}\) of thinning rectangles, and a sequence \(T_{n}\) of thinning triangles such that_
\[\lim_{n}F(R_{n})=1,\qquad\qquad\lim_{n}F(T_{n})=2.\]
The aim of this paper is to take steps towards proving the conjecture; however, we do not provide an exhaustive solution.
The numerical simulations which support Conjecture 1.1 also suggest that the infimum and the supremum of \(F(\Omega)\), in the class \(\mathcal{K}_{c}\), are asymptotically achieved by particular sequences of thinning domains. Therefore we focus on the limits of \(F(\Omega_{\varepsilon})\), where \(\Omega_{\varepsilon}\) is a family of thinning domains of the type (2.1). Indeed, following in the footsteps of [6], for such a family, there exists a non-negative concave function \(h:[0,1]\to\mathbb{R}\) such that
\[\lim_{\varepsilon\to 0}\mu_{1}(\Omega_{\varepsilon})=\mu_{1}(h)\qquad\qquad \lim_{\varepsilon\to 0}\frac{P(\Omega_{\varepsilon})\sigma_{1}(\Omega_{ \varepsilon})}{|\Omega_{\varepsilon}|}=\sigma_{1}(h)\left(\int_{0}^{1}h(t)\, dt\right)^{-1},\]
where \(\mu_{1}(h)\) is the first eigenvalue of the Sturm-Liouville problem
\[\begin{cases}-\frac{d}{dx}\left(h(x)\frac{dv}{dx}(x)\right)=\mu_{1}(h)h(x)v(x)&x \in(0,1),\\ \\ h(0)\frac{dv}{dx}(0)=h(1)\frac{dv}{dx}(1)=0,\end{cases} \tag{1.3}\]
while \(\sigma_{1}(h)\) is the first eigenvalue of the Sturm-Liouville problem
\[\begin{cases}-\frac{d}{dx}\left(h(x)\frac{dv}{dx}(x)\right)=\sigma_{1}(h)v(x)& x\in(0,1),\\ \\ h(0)\frac{dv}{dx}(0)=h(1)\frac{dv}{dx}(1)=0.\end{cases} \tag{1.4}\]
The function \(h\), in some sense, represents the profile of the thinning sets \(\Omega_{\varepsilon}\), and, in particular, we have that \(h\equiv 1\) represents the limit of a family of thinning rectangles. On the other hand, for every \(x_{0}\in(0,1)\), we let
\[T_{x_{0}}(x):=\begin{cases}\frac{2x}{x_{0}}&x\in[0,x_{0}),\\ \\ \frac{2(1-x)}{1-x_{0}}&x\in[x_{0},1],\end{cases}\]
and
\[T_{0}(x)=2(1-x),\qquad\qquad T_{1}(x)=2x,\]
which represent the limit of a family of thinning triangles. Consequently, familiarizing oneself with the properties of \(\mu_{1}(h)\) and \(\sigma_{1}(h)\) can offer advantages when it comes to analyzing the eigenvalues \(\mu_{1}(\Omega)\) and \(\sigma_{1}(\Omega)\). It is worth mentioning that the quantities \(\mu_{1}(h)\) and \(\sigma_{1}(h)\) are in a way related to a weighted Hardy constant (see [7], [10], [11], and Proposition 4.1).
Following this path, we refer to [11], [6] and [12] for the proof of the subsequent properties: let
\[\mathcal{P}=\left\{\;h\in L^{\infty}(0,1)\colon\,h\text{ non negative, concave and not identically zero}\;\right\},\]
and
\[\mathcal{P}_{1}=\left\{\;h\in\mathcal{P}\;\middle|\;\int_{0}^{1}h(t)\,dt=1\; \right\},\]
then for every \(h\in\mathcal{P}_{1}\), we have that
\[\pi^{2}=\mu_{1}(1)\leq\mu_{1}(h)\leq\mu_{1}(T_{1/2}),\qquad\qquad\sigma_{1}(h)\leq\sigma_{1}(p)=12,\]
where \(p\) is the arc of parabola \(p(x)=6x(1-x)\).
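These one-dimensional eigenvalues are also easy to approximate numerically. The following sketch (ours, not part of the original argument; it assumes NumPy and SciPy are available) discretizes the Sturm-Liouville problems (1.3) and (1.4) with linear finite elements and reproduces, for instance, \(\mu_{1}(1)=\pi^{2}\) and \(\sigma_{1}(p)=12\):

```python
import numpy as np
from scipy.linalg import eigh

def principal_eigenvalue(h, weight, n=1000):
    """First nonzero eigenvalue of -(h u')' = lam * weight * u on (0,1) with the
    natural conditions h u'(0) = h u'(1) = 0 (P1 finite elements, lumped mass).
    weight = h gives mu_1(h); weight = 1 gives sigma_1(h)."""
    x = np.linspace(0.0, 1.0, n + 1)
    dx = x[1] - x[0]
    hm = h(0.5 * (x[:-1] + x[1:]))                    # h at element midpoints
    K = np.zeros((n + 1, n + 1))
    for e in range(n):                                # assemble stiffness matrix
        k = hm[e] / dx
        K[e, e] += k; K[e + 1, e + 1] += k
        K[e, e + 1] -= k; K[e + 1, e] -= k
    w = weight(x) if callable(weight) else np.full(n + 1, float(weight))
    M = np.diag(w * dx); M[0, 0] *= 0.5; M[-1, -1] *= 0.5   # lumped mass matrix
    return eigh(K, M, eigvals_only=True)[1]           # index 0 is the constant mode

one = lambda x: np.ones_like(x)
p = lambda x: 6.0 * x * (1.0 - x)                     # parabola, integral 1
T1 = lambda x: 2.0 * x                                # triangle T_1

print(principal_eigenvalue(one, one), np.pi ** 2)     # mu_1(1)      ~ pi^2
print(principal_eigenvalue(p, 1.0))                   # sigma_1(p)   ~ 12
print(principal_eigenvalue(T1, 1.0))                  # sigma_1(T_1) ~ 7.34 (cf. Section 3)
```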
Here we state the main results of this work
**Theorem 1.2**.: _The minimum problem_
\[\min_{h\in\mathcal{P}_{1}}\sigma_{1}(h) \tag{1.5}\]
_admits the functions \(T_{0}\) and \(T_{1}\) as unique solutions._
We prove the theorem following two distinct approaches. Section 3 is devoted to the former, while Section 4 is devoted to the latter, which relies on a rearrangement method that, up to our knowledge, appears to be new. Finally, in Section 5 we establish a relationship between \(\mu_{1}(h)\) and \(\sigma_{1}(h)\).
**Theorem 1.3**.: _There exists an invertible operator_
\[\mathcal{G}:\mathcal{P}\to\mathcal{P}\]
_such that, for every \(h,k\in\mathcal{P}\), we have_
\[\left(\int_{0}^{1}h(t)\,dt\right)^{2}\mu(h)=\sigma\left(\mathcal{G}(h)\right), \tag{1.6}\]
_and_
\[\left(\int_{0}^{1}\frac{1}{\sqrt{k(t)}}\,dt\right)^{2}\sigma(k)=\mu(\mathcal{G }^{-1}(k)). \tag{1.7}\]
It may help to solve problems obtained by studying (1.1) among thinning domains, namely
\[\min_{h\in\mathcal{P}}\frac{\mu_{1}(h)\int_{0}^{1}h(t)\,dt}{\sigma_{1}(h)}, \qquad\qquad\max_{h\in\mathcal{P}}\frac{\mu_{1}(h)\int_{0}^{1}h(t)\,dt}{\sigma _{1}(h)}.\]
In particular, we can fully solve the maximizing problem, and partially solve the minimizing problem. We summarize these results in the following two theorems.
**Theorem 1.4**.: _Let \(h\in\mathcal{P}_{1}\). Then_
\[\frac{\mu_{1}(h)}{\sigma_{1}(h)}\leq 2,\]
_and the equality holds if and only if \(h=T_{x_{0}}\) for some \(x_{0}\in[0,1]\). If, in addition, \(h(x)=h(1-x)\) for every \(x\in[0,1]\), then_
\[\frac{\mu_{1}(h)}{\sigma_{1}(h)}\geq 1.\]
## 2 Notations and tools
Here we define standard quantities for convex sets and the formal definition of thin domain. This definition passes through the ones of support function and minimal width (or thickness).
We refer to [6] for the proof of the lemmas in this section.
**Definition 2.1**.: Let \(\Omega\subset\mathbb{R}^{N}\) be a bounded, open, and convex set. We define the _support function_ of \(\Omega\) as
\[h_{\Omega}(y)=\sup_{x\in\Omega}\left(x\cdot y\right),\qquad y\in\mathbb{R}^{n}.\]
**Definition 2.2**.: Let \(\Omega\subset\mathbb{R}^{N}\) be a bounded, open and convex set, and let \(y\in\mathbb{R}^{n}\). We define the _width_ of \(\Omega\) in the direction \(y\) as
\[\omega_{\Omega}(y)=h_{\Omega}(y)+h_{\Omega}(-y)\]
and we define the _minimal width_ of \(\Omega\) as
\[w_{\Omega}=\min\{\omega_{\Omega}(y)\,|\,\,y\in\mathbb{S}^{n-1}\}.\]
Hence, if \(\operatorname{diam}(\Omega)\) denotes the diameter of \(\Omega\), we can give the following definition.
**Definition 2.3**.: Let \(\Omega_{\delta}\subset\mathbb{R}^{n}\) be a family of non-empty, bounded, open, and convex sets. We say that \(\Omega_{\delta}\) is a family of _thinning domains_ if
\[\lim_{\delta\to 0}\,\frac{w_{\Omega_{\delta}}}{\operatorname{diam}(\Omega_{ \delta})}=0.\]
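For example, for \(\delta\leq 1\) the rectangles \(R_{\delta}=(0,1)\times(0,\delta)\) form a family of thinning domains, since
\[w_{R_{\delta}}=\delta,\qquad\operatorname{diam}(R_{\delta})=\sqrt{1+\delta^{2}},\qquad\frac{w_{R_{\delta}}}{\operatorname{diam}(R_{\delta})}=\frac{\delta}{\sqrt{1+\delta^{2}}}\longrightarrow 0\quad\text{as }\delta\to 0.\]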
Let us now consider a particular family of thinning domains. Let \(h_{+},h_{-}\in\mathcal{P}\) be such that \(h_{+}+h_{-}\in\mathcal{P}_{1}\). We consider the family of thinning domains
\[\Omega_{\varepsilon}=\left\{(x,y)\in\mathbb{R}^{2}\;\middle|\;0\leq x\leq 1,\;-\varepsilon h_{-}(x)\leq y\leq\varepsilon h_{+}(x)\right\}. \tag{2.1}\]
For such a sequence we have that both the principal eigenvalues of the Neumann and Steklov problems converge to the principal eigenvalues of the Sturm-Liouville problems (1.3) and (1.4) respectively. More precisely, if we define
\[\mu_{1}(h)=\inf_{\begin{subarray}{c}u\in H^{1}(0,1)\\ \int_{0}^{1}u\,h\,dx=0\end{subarray}}\frac{\int_{0}^{1}(u^{\prime})^{2}h\,dx}{\int_{0}^{1}u^{2}h\,dx}, \tag{2.2}\]
\[\sigma_{1}(h)=\inf_{\begin{subarray}{c}v\in H^{1}(0,1)\\ \int_{0}^{1}v\,dx=0\end{subarray}}\frac{\int_{0}^{1}(v^{\prime})^{2}h\,dx}{\int_{0}^{1}v^{2}\,dx}, \tag{2.3}\]
we have the following lemmas
**Lemma 2.4**.: _Let \(\{\Omega_{\varepsilon}\}\) be family of thinning domains as in (2.1) and let \(h=h_{-}+h_{+}\). Then_
\[\mu_{1}(\Omega_{\varepsilon}) =\mu_{1}(h)+o(1)\text{ as }\varepsilon\to 0,\] \[\sigma_{1}(\Omega_{\varepsilon}) =\frac{\sigma_{1}(h)}{2}\varepsilon+o(\varepsilon)\text{ as } \varepsilon\to 0.\]
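For instance, for \(h\equiv 1\) (thinning rectangles, e.g. \(h_{-}=h_{+}=1/2\)) one has \(\mu_{1}(1)=\sigma_{1}(1)=\pi^{2}\), while \(|\Omega_{\varepsilon}|=\varepsilon\) and \(P(\Omega_{\varepsilon})\to 2\); hence
\[F(\Omega_{\varepsilon})=\frac{|\Omega_{\varepsilon}|\,\mu_{1}(\Omega_{\varepsilon})}{P(\Omega_{\varepsilon})\,\sigma_{1}(\Omega_{\varepsilon})}=\frac{\varepsilon\left(\pi^{2}+o(1)\right)}{\left(2+o(1)\right)\left(\frac{\pi^{2}}{2}\varepsilon+o(\varepsilon)\right)}\longrightarrow 1\qquad\text{as }\varepsilon\to 0,\]
consistently with the first limit in Conjecture 1.1.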
The following compactness result for \(\mathcal{P}\) holds true
**Lemma 2.5**.: _Let \(h_{n}\in\mathcal{P}_{1}\) be a sequence of functions, then there exists \(h\in\mathcal{P}\) such that, up to a subsequence, we have:_
* \(h_{n}\) _converges to_ \(h\) _in_ \(L^{2}(0,1)\)_;_
* \(h_{n}\) _converges to_ \(h\) _uniformly on every compact subset of_ \((0,1)\)_._
We also recall a continuity property of the eigenvalues \(\mu_{1}(h)\) and \(\sigma_{1}(h)\).
**Lemma 2.6**.: _Let \(h_{n},h\in\mathcal{P}\) be a sequence such that \(h_{n}\) converges in \(L^{2}(0,1)\) to \(h\). Then we have_
\[\lim_{n}\mu_{1}(h_{n}) =\mu_{1}(h),\] \[\lim_{n}\sigma_{1}(h_{n}) =\sigma_{1}(h).\]
### Other tools
Here we recall some other tools that will be useful in the next pages. We refer to [9, 1].
Figure 1: Minimal width and diameter of a convex set.
**Theorem 2.7** (Coarea formula).: _Let \(\Omega\subset\mathbb{R}^{n}\) be an open set with Lipschitz boundary. Let \(f\in W^{1,1}_{\text{loc}}(\Omega)\), and let \(u:\Omega\to\mathbb{R}\) be a measurable function. Then,_
\[\int_{\Omega}u(x)|\nabla f(x)|dx=\int_{\mathbb{R}}dt\int_{\Omega\cap f^{-1}(t)} u(y)\,d\mathcal{H}^{n-1}(y).\]
Here we define the notion of increasing rearrangement
**Definition 2.8**.: Let \(\Omega\subset\mathbb{R}^{n}\) be an open set, and let \(u:\Omega\to\mathbb{R}\) be a measurable function. We define the _distribution function_\(\eta_{u}:[0,+\infty[\,\to[0,+\infty[\) of \(u\) as the function
\[\eta_{u}(t)=|\{\;x\in\Omega\,:\,|u(x)|>t\,\}|\]
**Definition 2.9**.: Let \(u:\Omega\to\mathbb{R}\) be a measurable function. We define the _increasing rearrangement_\(u_{*}\) of \(u\) as
\[u_{*}(s)=\inf\left\{\;t>0\;|\;\eta_{u}(t)\leq|\Omega|-s\,\right\}.\]
**Remark 2.10**.: Let \(\Omega\subset\mathbb{R}^{n}\) be an open set, and let \(u:\Omega\to\mathbb{R}\) be a measurable function. Then \(u\) and its increasing rearrangement \(u_{*}\) are equi-measurable namely
\[\eta_{u}=\eta_{u_{*}},\]
and, in addition, for every \(p\in[1,+\infty)\),
\[\|u\|_{L^{p}(\Omega)}=\|u_{*}\|_{L^{p}(0,|\Omega|)}.\]
Finally, here is an important property of extreme points of convex sets.
**Definition 2.11**.: Let \(V\) be a vector space, let \(C\subset V\) be a convex set, and let \(z\in C\). We say that \(z\) is an _extreme point_ of \(C\) if it cannot be written as a convex combination of distinct elements of \(C\). More precisely, if \(z=(1-t)x+ty\), with \(x,y\in C\) and \(t\in(0,1)\), then \(x=y=z\).
**Proposition 2.12**.: _Let \(h\in\mathcal{P}_{1}\). Then \(h\) is an extreme point for \(\mathcal{P}_{1}\) if and only if there exists \(x_{0}\in[0,1]\) such that \(h=T_{x_{0}}\)._
Proof.: Let us start by proving that for every \(x_{0}\in[0,1]\) the triangle \(T_{x_{0}}\) is an extreme point of \(\mathcal{P}_{1}\). Let \(h\in\mathcal{P}_{1}\) and let \(x_{M}\) be a maximum point for \(h\), then the concavity of \(h\) ensures
\[h\geq\frac{h(x_{M})}{2}T_{x_{M}}.\]
Recalling that \(\int_{0}^{1}h\,dx=1\), we get that
\[h(x_{M})=\max_{[0,1]}h\leq 2, \tag{2.4}\]
and the equality holds if and only if \(h=T_{x_{M}}\).
Let now \(x_{0}\in[0,1]\), and assume that
\[T_{x_{0}}(x)=(1-t)h_{0}(x)+th_{1}(x)\qquad x\in[0,1],\]
with \(h_{0},h_{1}\in\mathcal{P}_{1}\), and \(t\in[0,1]\). Since
\[2=T_{x_{0}}(x_{0})\leq\max\{h_{0}(x_{0}),h_{1}(x_{0})\},\]
and
\[2=(1-t)h_{0}(x_{0})+th_{1}(x_{0}),\]
we get equality in (2.4) for both \(h_{0}\) and \(h_{1}\). Therefore, \(h_{0}=h_{1}=T_{x_{0}}\), and we have proved that \(T_{x_{0}}\) is an extreme point of \(\mathcal{P}_{1}\).
We now prove that the triangles are the only extreme points of \(\mathcal{P}_{1}\). Let \(h\in\mathcal{P}_{1}\) be such that \(h\neq T_{x_{0}}\) for every \(x_{0}\in[0,1]\).
We begin by assuming that \(h(0)>0\). Notice that, in this setting, there exists \(s\in(0,1)\), such that the function
\[h_{s}=\frac{h-sT_{0}}{1-s}\in\mathcal{P}_{1}.\]
In particular, we get
\[h=(1-s)h_{s}+sT_{0},\]
that is, \(h\) is not an extreme point of \(\mathcal{P}_{1}\). An analogous computation can be done when \(h(1)>0\).
Assume now that \(h(0)=h(1)=0\) and let \(\nu\) be the positive Radon measure representing \(-h^{\prime\prime}\). Since \(h\neq T_{x_{0}}\) for every \(x_{0}\), then there exists \(y_{0}\in(0,1)\) such that \(\nu([0,y_{0}])>0\) and \(\nu((y_{0},1])>0\). Let
\[\nu_{1}=\nu|_{[0,y_{0}]}, \nu_{2}=\nu|_{(y_{0},1]},\]
and let \(h_{1},h_{2}\) be the solutions to
\[\begin{cases}-h_{1}^{\prime\prime}=\nu_{1},\\ h_{1}(0)=h_{1}(1)=0,\end{cases}\begin{cases}-h_{2}^{\prime\prime}=\nu_{2},\\ h_{2}(0)=h_{2}(1)=0.\end{cases}\]
We have that \(h_{1},h_{2}\in\mathcal{P}\) and \(h=h_{1}+h_{2}\), so that, letting
\[\tilde{h}_{i}=\frac{h_{i}}{\int_{0}^{1}h_{i}\,dx},\qquad i=1,2,\]
we get \(\tilde{h}_{1},\tilde{h_{2}}\in\mathcal{P}_{1}\), and
\[h=t\tilde{h}_{1}+(1-t)\tilde{h}_{2},\]
with \(t\in(0,1)\). Hence, \(h\) is not an extreme point of \(\mathcal{P}_{1}\).
Finally, we recall the definition of a quasiconcave function.
**Definition 2.13**.: A function \(f:\mathbb{R}\to\mathbb{R}\) is quasiconcave if for all \(x,y\in\mathbb{R}\) and \(\lambda\in[0,1]\) we have
\[f(\lambda x+(1-\lambda)y)\geq\min\big{\{}f(x),f(y)\big{\}}.\]
A function defined on an interval is quasiconcave if and only if it is monotone or 'increasing then decreasing', i.e. if there are two complementary intervals (one of which may be empty) such that it is increasing on the former and decreasing on the latter.
## 3 Minimization of the Steklov eigenvalue
For every \(h\in\mathcal{P}_{1}\) we consider the Sturm-Liouville eigenvalue \(\sigma_{1}(h)\) defined in (2.3).
Lemma 2.5 and Lemma 2.6 prove that the problems
\[\max\,\{\,\sigma_{1}(h)\colon h\in\mathcal{P}_{1}\,\}\]
\[\min\,\{\,\sigma_{1}(h)\colon h\in\mathcal{P}_{1}\,\}\]
admit solutions. In particular, the solution to the maximization problem (see for instance [12]) is given by the parabola \(p(x)=6x(1-x)\), with corresponding eigenvalue \(\sigma_{1}(p)=12\). In this section, we aim to prove Theorem 1.2, namely that the problem
\[\min\left\{\ \sigma_{1}(h)\colon h\in\mathcal{P}_{1}\ \right\},\]
admits as unique solutions the functions \(T_{0}(x)=2(1-x)\) and \(T_{1}(x)=2x\) with corresponding eigenvalue
\[\sigma_{1}(T_{0})=\sigma_{1}(T_{1})=(j_{0,1}^{\prime})^{2}/2,\]
where \(j_{0,1}^{\prime}\) is the first positive zero of the first derivative of the Bessel function \(J_{0}\).
**Remark 3.1**.: The function
\[h\in\mathcal{P}\longmapsto\sigma_{1}(h),\]
satisfies the following properties:
* **monotonicity**: for every \(h_{0},h_{1}\in\mathcal{P}\), if \(h_{0}\leq h_{1}\) then \[\sigma_{1}(h_{0})\leq\sigma_{1}(h_{1});\]
* **homogeneity**: for every \(h\in\mathcal{P}\) and for every \(\alpha>0\), \[\sigma_{1}(\alpha h)=\alpha\sigma_{1}(h);\]
* **concavity**: for every \(h_{0},h_{1}\in\mathcal{P}\) and for every \(t\in[0,1]\), letting \(h_{t}=(1-t)h_{0}+th_{1}\), we have that \[\sigma_{1}(h_{t})\geq(1-t)\sigma_{1}(h_{0})+t\,\sigma_{1}(h_{1});\]
* **symmetry**: let \(h\in\mathcal{P}\), and let \(k(x)=h(1-x)\), then \[\sigma_{1}(k)=\sigma_{1}(h).\] (3.1)
**Proposition 3.2**.: _Let \(h\in\mathcal{P}_{1}\) be a solution to problem (1.5), then \(h\) is an extreme point of \(\mathcal{P}_{1}\)._
Proof.: Let \(h\in\mathcal{P}_{1}\) be a solution to problem (1.5). By contradiction assume that \(h\) is not an extreme point of \(\mathcal{P}_{1}\). Let \(h_{0},h_{1}\in\mathcal{P}_{1}\setminus\left\{\,h\,\right\}\) and \(t\in(0,1)\) such that
\[h=(1-t)h_{0}+th_{1}.\]
Let \(v\in H^{1}(0,1)\) be an eigenfunction for \(\sigma_{1}(h)\) with
\[\int_{0}^{1}v^{2}\,dx=1,\]
then
\[\sigma_{1}(h) =\int_{0}^{1}(v^{\prime})^{2}h\,dx=(1-t)\int_{0}^{1}(v^{\prime})^ {2}h_{0}\,dx+t\int_{0}^{1}(v^{\prime})^{2}h_{1}\,dx\] \[\geq(1-t)\sigma_{1}(h_{0})+t\sigma_{1}(h_{1}).\]
On the other hand, by the minimality of \(\sigma_{1}(h)\), we have
\[\sigma_{1}(h_{0}) =\int_{0}^{1}(v^{\prime})^{2}h_{0}\,dx, \sigma_{1}(h_{1}) =\int_{0}^{1}(v^{\prime})^{2}h_{1}\,dx.\]
Therefore, \(v\) is also an eigenfunction for \(\sigma_{1}(h_{0})\) and \(\sigma_{1}(h_{1})\). Let us now prove that \(h_{0}=h\), thus reaching a contradiction. From the weak form of equation (1.4), we have that for every \(\varphi\in H^{1}(0,1)\)
\[\int_{0}^{1}v^{\prime}\varphi^{\prime}h\,dx =\sigma_{1}(h)\int_{0}^{1}v\varphi\,dx\] \[=\sigma_{1}(h_{0})\int_{0}^{1}v\varphi\,dx =\int_{0}^{1}v^{\prime}\varphi^{\prime}h_{0}\,dx,\]
that is
\[\int_{0}^{1}(h-h_{0})v^{\prime}\varphi^{\prime}\,dx=0\]
for every \(\varphi\in H^{1}(0,1)\), which yields \(h=h_{0}\), since, for every \(\psi\in L^{2}(0,1)\), we can choose
\[\varphi(x)=\int_{0}^{x}\psi(t)\,dt.\]
In order to study the minimum problem (1.5), we need to evaluate \(\sigma_{1}\) on triangles, and we will need the following result, whose proof can be found in [6].
**Lemma 3.3**.: _Let \(x_{0}\in[0,1]\). Then \(\sigma_{1}(T_{x_{0}})\) is the first non-zero root \(\sigma\) of the equation_
\[J_{0}\left(\sqrt{2\sigma}x_{0}\right)J_{0}^{\prime}\left(\sqrt{2\sigma}(1-x_{ 0})\right)+J_{0}\left(\sqrt{2\sigma}(1-x_{0})\right)J_{0}^{\prime}\left(\sqrt{ 2\sigma}x_{0}\right)=0. \tag{3.2}\]
In addition, here we summarize the properties of the Bessel function \(J_{0}\) which we will use.
**Proposition 3.4**.: _Let \(J_{0}\) be the Bessel function of the first kind of order 0, and let \(j_{0,1}\) and \(j_{0,1}^{\prime}\) be the first zero of \(J_{0}\) and \(J_{0}^{\prime}\) respectively. Then_
\[0<j_{0,1}<j_{0,1}^{\prime},\]
_and_
\[J_{0}(x)\geq 0 \forall\,x\in(0,j_{0,1}),\] \[J_{0}^{\prime}(x)\leq 0 \forall\,x\in(0,j_{0,1}^{\prime}),\] \[J_{0}(x)<0 \forall\,x\in(j_{0,1},j_{0,1}^{\prime}).\]
Proof of Theorem 1.2.: By Lemma 2.5 and Lemma 2.6 we have that the minimum problem (1.5) admits a solution. On the other hand, by Proposition 3.2 and Proposition 2.12 we have that a solution to (1.5) has to be a triangle \(T_{x_{0}}\) for some \(x_{0}\in[0,1]\). By the symmetry of \(\sigma_{1}\) stated in (3.1), we notice that to prove the theorem it is sufficient to show that the function
\[x_{0}\in\left[0,\frac{1}{2}\right]\mapsto\sigma_{1}(T_{x_{0}}),\]
attains its minimum for \(x_{0}=0\).
Let \(j_{0,1}\) and \(j_{0,1}^{\prime}\) be the first positive roots of \(J_{0}\) and \(J_{0}^{\prime}\) respectively. For every \(x\in[0,1/2]\), and \(s\in[0,+\infty)\), let
\[F(x,s)=J_{0}(sx)J_{0}^{\prime}(s(1-x))+J_{0}(s(1-x))J_{0}^{\prime}(sx),\]
which is the function defined in Lemma 3.3 that determines the value \(\sigma_{1}(T_{x_{0}})\). Let \(x_{0}\in(0,1/2)\) and let \(s(x_{0})\) be the smallest non-zero root of the equation
\[F(x_{0},s)=0. \tag{3.3}\]
We claim that
\[s(x_{0})\in I_{x_{0}}=\left(\frac{j_{0,1}}{(1-x_{0})},\min\left\{\;\frac{j_{0,1 }}{x_{0}},\frac{j_{0,1}^{\prime}}{1-x_{0}}\;\right\}\right). \tag{3.4}\]
Indeed, since \(J_{0}\) and \(-J_{0}^{\prime}\) are positive in \((0,j_{0,1})\), and \(x_{0}<1-x_{0}\), then
\[F(x_{0},s)<0\qquad\forall\,s\in\left(0,\frac{j_{0,1}}{1-x_{0}}\right].\]
On the other hand, using again the properties in Proposition 3.4, a direct computation gives
\[F\left(x_{0},\min\left\{\;\frac{j_{0,1}}{x_{0}},\frac{j_{0,1}^{\prime}}{1-x_{0 }}\;\right\}\right)>0,\]
thus proving the claim. Notice that (3.4) gives
\[J_{0}(s(x_{0})x_{0})>0, J_{0}(s(x_{0})(1-x_{0}))<0, \tag{3.5}\] \[J_{0}^{\prime}(s(x_{0})x_{0})<0, J_{0}^{\prime}(s(x_{0})(1-x_{0}))<0.\]
Since \(J_{0}\) solves the equation
\[J_{0}^{\prime\prime}(t)+\frac{J_{0}^{\prime}(t)}{t}+J_{0}(t)=0, \tag{3.6}\]
then we have
\[\partial_{s}F(x_{0},s)= J_{0}^{\prime}(sx_{0})J_{0}^{\prime}(s(1-x_{0}))-J_{0}(sx_{0})J_{0}(s (1-x_{0}))\] \[-\frac{1}{s}\left(J_{0}(sx_{0})J_{0}^{\prime}(s(1-x_{0}))+J_{0}(s (1-x_{0}))J_{0}^{\prime}(sx_{0})\right).\]
In particular, (3.3) and (3.5) ensure that
\[\partial_{s}F(x_{0},s(x_{0}))>0. \tag{3.7}\]
By the implicit function theorem, the function \(x_{0}\mapsto s(x_{0})\) is continuous, differentiable and
\[s^{\prime}(x_{0})\partial_{s}F(x_{0},s(x_{0}))+\partial_{x}F(x_{0},s(x_{0}))=0.\]
Using (3.6), direct computations give
\[\partial_{x}F(x_{0},s(x_{0}))=-\frac{J_{0}(s(x_{0})(1-x_{0}))J_{0}^{\prime}(s(x_{0})x_{0})}{x_{0}}+\frac{J_{0}(s(x_{0})x_{0})J_{0}^{\prime}(s(x_{0})(1-x_{0}))}{1-x_{0}}.\]
As before, (3.5) ensure that
\[\partial_{x}F(x_{0},s(x_{0}))<0. \tag{3.8}\]
Joining (3.7) and (3.8), we have that \(s^{\prime}(x_{0})>0\) and \(x_{0}\mapsto s(x_{0})\) is increasing. Finally,
\[\sigma_{1}(T_{x_{0}})=s^{2}(x_{0})/2\]
is increasing for \(x_{0}\in(0,1/2)\), and the minimum is achieved when \(x_{0}=0\).
**Remark 3.5**.: Equation (3.2) for \(x_{0}=0\) reduces to
\[J_{0}^{\prime}\left(\sqrt{2\sigma}\right)=0\]
that is, \(\sigma_{1}(2x)=\sigma_{1}(T_{0})=(j_{0,1}^{\prime})^{2}/2\).
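As a numerical illustration of this monotonicity (our own sketch, assuming SciPy's Bessel routines; it is not part of the proof), one can locate the first root of (3.2) directly:

```python
import numpy as np
from scipy.special import j0, j1, jnp_zeros
from scipy.optimize import brentq

def F(x0, s):                          # the function of Lemma 3.3; J_0'(t) = -J_1(t)
    return j0(s * x0) * (-j1(s * (1 - x0))) + j0(s * (1 - x0)) * (-j1(s * x0))

def sigma1_triangle(x0):
    """sigma_1(T_{x0}) = s^2/2, with s the first nonzero root of (3.2)."""
    if x0 in (0.0, 1.0):
        return jnp_zeros(0, 1)[0] ** 2 / 2            # (j'_{0,1})^2 / 2 ~ 7.34
    s_grid = np.linspace(1e-3, 12.0, 4000)
    vals = F(x0, s_grid)
    i = int(np.argmax(vals[:-1] * vals[1:] < 0))      # bracket the first sign change
    s = brentq(lambda s: F(x0, s), s_grid[i], s_grid[i + 1])
    return s ** 2 / 2

for x0 in (0.0, 0.1, 0.25, 0.5):
    print(x0, sigma1_triangle(x0))                    # increasing in x0 on [0, 1/2]
```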
## 4 An alternative proof for the minimum of \(\sigma(h)\)
In this section, we minimize \(\sigma_{1}(h)\) using an alternative approach that avoids the explicit computation of the eigenvalue. In particular, our aim is to define a particular kind of symmetrization that allows us to prove that solutions to (1.5) have to be monotone. Before defining the aforementioned symmetrization we prove an equivalent formulation for the eigenvalue \(\sigma_{1}(h)\), referring to the ideas for the proof in [10, Lemma 4.2]
**Proposition 4.1**.: _Let \(h\in\mathcal{P}_{1}\), then_
\[\sigma_{1}(h)=\min\left\{\begin{array}{c}\int_{0}^{1}(\varphi ^{\prime})^{2}\,dx\\ \int_{0}^{1}\frac{\varphi^{2}}{h}\,dx\end{array} \right|\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,
Letting \(x_{1}\) go to \(x_{0}\) we have
\[\int_{0}^{x_{0}}\frac{\varphi^{2}(x)}{h(x)}\,dx\leq\frac{1}{\sigma_{1}(h)}\int_{0 }^{x_{0}}(\varphi^{\prime}(x))^{2}\,dx. \tag{4.3}\]
Similar computations can be done in the case \(x>x_{0}\), so that we have
\[\int_{x_{0}}^{1}\frac{\varphi^{2}(x)}{h(x)}\,dx\leq\frac{1}{\sigma_{1}(h)}\int_ {x_{0}}^{1}(\varphi^{\prime}(x))^{2}\,dx. \tag{4.4}\]
Joining (4.3) and (4.4) we have
\[\int_{0}^{1}\frac{\varphi^{2}(x)}{h(x)}\,dx\leq\frac{1}{\sigma_{1}(h)}\int_{0 }^{1}(\varphi^{\prime}(x))^{2}\,dx,\]
that is
\[\frac{\int_{0}^{1}(\varphi^{\prime})^{2}\,dx}{\int_{0}^{1}\frac{ \varphi^{2}}{h}\,dx}\geq\sigma_{1}(h). \tag{4.5}\]
Since \(w\) is an admissible function, the assertion follows from (4.2) and (4.5).
We now define the rearrangement mentioned above. Let
\[w:[0,1]\to\mathbb{R}\]
be a quasi-concave piecewise \(C^{1}\) function such that
\[|\{w^{\prime}=0\}|=0,\qquad w(0)=w(1)=0,\]
and let us denote by
\[w_{M}=\max_{[0,1]}w,\]
and by \(x_{M}\) the maximum point of \(w\). We aim to rearrange \(w\) in such a way that the derivative of the rearranged function \(w^{\sharp}\) concentrates at the left of the new maximum point \(x_{M}^{*}\).
For every \(t\in(0,w_{M})\), we define \((x_{t},y_{t}):=\{w(x)>t\}\), and the distribution functions
\[\begin{split}\eta_{1}(t)&=x_{M}-x_{t}=|\{\,w>t\, \}\cap(0,x_{M})|,\\ \eta_{2}(t)&=y_{t}-x_{M}=|\{\,w>t\,\}\cap(x_{M},1)|. \end{split} \tag{4.6}\]
Notice that
\[\eta_{1}:(0,w_{M})\to(0,x_{M}),\qquad\qquad\eta_{2}:(0,w_{M})\to(0,1-x_{M})\]
are both decreasing, invertible, absolutely continuous functions, and that, for a.e. \(t\in(0,w_{M})\),
\[\eta_{1}^{\prime}(t)=-\frac{1}{|w^{\prime}(x_{t})|},\qquad\qquad\eta_{2}^{ \prime}(t)=-\frac{1}{|w^{\prime}(y_{t})|}.\]
Let us now define the rearranged distribution functions in such a way that, for a.e. \(t\in(0,w_{M})\),
\[\begin{split}\eta_{*,1}^{\prime}(t)&=\max\{\eta_{1 }^{\prime}(t),\eta_{2}^{\prime}(t)\},\\ \eta_{*,2}^{\prime}(t)&=\min\{\eta_{1}^{\prime}(t), \eta_{2}^{\prime}(t)\},\end{split} \tag{4.7}\]
namely,
\[\begin{split}\eta_{*,1}(t)&:=-\int_{t}^{w_{M}}\max\{ \eta_{1}^{\prime}(s),\eta_{2}^{\prime}(s)\}\,ds,\\ \eta_{*,2}(t)&:=-\int_{t}^{w_{M}}\min\{\eta_{1}^{ \prime}(s),\eta_{2}^{\prime}(s)\}\,ds.\end{split} \tag{4.8}\]
**Remark 4.2**.: Here we emphasize some properties of these distribution functions:
* for every \(t\in(0,w_{M})\), we have \[\eta_{1}(t)+\eta_{2}(t)=\eta_{*,1}(t)+\eta_{*,2}(t)=|\{w>t\}|;\]
* by (4.7), we have that, for a.e. \(t\in(0,w_{M})\), \[\frac{1}{|\eta_{*,1}^{\prime}(t)|} =\max\left\{\frac{1}{|\eta_{1}^{\prime}(t)|},\frac{1}{|\eta_{2}^ {\prime}(t)|}\right\}\] \[=\max\{|w^{\prime}(x_{t})|,|w^{\prime}(y_{t})|\},\] and \[\frac{1}{|\eta_{*,2}^{\prime}(t)|} =\min\left\{\frac{1}{|\eta_{1}^{\prime}(t)|},\frac{1}{|\eta_{2}^ {\prime}(t)|}\right\}\] \[=\min\{|w^{\prime}(x_{t})|,|w^{\prime}(y_{t})|\}.\]
* By (4.7), we have \[\frac{1}{|\eta_{*,1}^{\prime}(t)|^{\alpha}}+\frac{1}{|\eta_{*,2}^{\prime}(t) |^{\alpha}}=\frac{1}{|\eta_{1}^{\prime}(t)|^{\alpha}}+\frac{1}{|\eta_{2}^{ \prime}(t)|^{\alpha}}\] (4.9) for every \(\alpha\in\mathbb{R}\).
* for \(t=0\), we denote by \[x_{M}^{*}:=\eta_{*,1}(0)=1-\eta_{*,2}(0),\] this will play the role of the maximum point for the rearranged function.
* the functions \[\eta_{*,1}:(0,w_{M})\to(0,x_{M}^{*}),\qquad\qquad\eta_{*,2}:(0,w_{M})\to(0,1- x_{M}^{*})\] are decreasing, invertible, absolutely continuous functions.
We now define the rearrangement \(w^{\sharp}\) as follows:
**Definition 4.3**.: Let \(w\) be a quasi-concave piecewise \(C^{1}\) function such that
\[|\{w^{\prime}=0\}|=0,\qquad w(0)=w(1)=0,\]
and let \(\eta_{1},\eta_{2},\eta_{*,1}\) and \(\eta_{*,2}\) be the functions defined in (4.6), and (4.8). We define the competitor \(w^{\sharp}\) as
\[w^{\sharp}(x)=\begin{cases}\eta_{*,1}^{-1}(x_{M}^{*}-x)&\text{if }x\leq x_{M}^{*},\\ \eta_{*,2}^{-1}(x-x_{M}^{*})&\text{if }x>x_{M}^{*}.\end{cases}\]
**Remark 4.4**.: From the definition we have that \(w^{\sharp}\) is increasing in \([0,x_{M}^{*})\) and decreasing in \((x_{M}^{*},1]\), so that \(w^{\sharp}\) is quasi-concave. Moreover, we have that \(w^{\sharp}\) and \(w\) are equi-measurable, i.e.
\[\|w^{\sharp}\|_{L^{p}(0,1)}=\|w\|_{L^{p}(0,1)}\]
for every \(p\in[1,+\infty]\).
We now prove some useful properties of this rearrangement.
**Lemma 4.5**.: _Let \(w\) be a quasi-concave piecewise \(C^{1}\) function such that_
\[|\{w^{\prime}=0\}|=0,\qquad w(0)=w(1)=0,\]
_and let \(w^{\sharp}\) be the competitor defined in Definition 4.3. Then_
\[w^{\sharp}(x)=(w(1-x))^{\sharp}.\]
Proof.: Let us set \(v(x)=w(1-x)\) and \(\nu_{1},\nu_{2},\nu_{*,1}\) and \(\nu_{*,2}\) the equivalent quantities defined for \(v\). Then we have
\[\nu^{\prime}_{1}(t)=\eta^{\prime}_{2}(t),\qquad\qquad\nu^{\prime}_{2}(t)=\eta ^{\prime}_{1}(t),\]
and, in particular,
\[\nu^{\prime}_{*,1}(t)=\eta^{\prime}_{*,1}(t),\qquad\qquad\nu^{\prime}_{*,2}(t) =\eta^{\prime}_{*,2}(t).\]
**Lemma 4.6**.: _Let \(w\) be a quasi-concave piecewise \(C^{1}\) function such that_
\[|\{w^{\prime}=0\}|=0,\qquad w(0)=w(1)=0,\]
_and let \(w^{\sharp}\) be its competitor defined in Definition 4.3. Then,_
\[\|(w^{\sharp})^{\prime}\|_{L^{p}(0,1)}=\|w^{\prime}\|_{L^{p}(0,1)}\qquad \forall\,p\geq 1. \tag{4.10}\]
Proof.: Let us compute separately the norms: by the coarea formula (Theorem 2.7), we get
\[\begin{split}\int_{0}^{1}|w^{\prime}(x)|^{p}\,dx&= \int_{0}^{w_{M}}\int_{\{w=t\}}|w^{\prime}(x)|^{p-1}\,d\mathcal{H}^{0}(x)\,dt\\ &=\int_{0}^{w_{M}}\left(|w^{\prime}(x_{t})|^{p-1}+|w^{\prime}(y_{ t})|^{p-1}\right)\,dt\\ &=\int_{0}^{w_{M}}\left(\frac{1}{|\eta^{\prime}_{1}(t)|^{p-1}}+ \frac{1}{|\eta^{\prime}_{2}(t)|^{p-1}}\right)\,dt.\end{split} \tag{4.11}\]
Analogously,
\[\int_{0}^{1}|(w^{\sharp})^{\prime}(x)|^{p}\,dx=\int_{0}^{w_{M}}\frac{1}{|\eta^ {\prime}_{*,1}(t)|^{p-1}}+\frac{1}{|\eta^{\prime}_{*,2}(t)|^{p-1}}\,dt. \tag{4.12}\]
Joining (4.11), (4.12), and (4.9), we get (4.10).
Figure 2: Function \(w^{\#}\) when \(w\) is a quasi-concave affine function
We now state the property of \(w^{\sharp}\) that will be crucial in the proof of Theorem1.2.
**Lemma 4.7**.: _Let \(w\) be a quasi-concave piecewise \(C^{1}\) function such that_
\[|\{w^{\prime}=0\}|=0,\qquad w(0)=w(1)=0,\]
_and let \(w^{\sharp}\) be its competitor defined in Definition4.3. Assume that_
\[h:(0,1)\to[0,+\infty)\]
_is a concave function, and let \(h^{*}\) be its increasing rearrangement. Then_
\[\int_{0}^{1}\frac{w^{2}}{h}\,dx\leq\int_{0}^{1}\frac{(w^{\sharp})^{2}}{h^{*}} \,dx.\]
Proof.: By Fubini's theorem, we can write
\[\int_{0}^{1}\frac{w^{2}(x)}{h(x)}\,dx=\int_{0}^{1}w^{2}(x)\int_{0}^{\frac{1}{h(x)}}\,dt\,dx=\int_{0}^{\infty}\int_{\left\{\frac{1}{h(x)}>t\right\}}w^{2}(x)\,dx\,dt.\]
The same computation leads to
\[\int_{0}^{1}\frac{(w^{\sharp})^{2}(x)}{h^{*}(x)}\,dx=\int_{0}^{\infty}\int_{ \left\{\frac{1}{h^{*}(x)}>t\right\}}(w^{\sharp})^{2}(x)\,dx\,dt.\]
Hence, to prove the lemma it is sufficient to prove that for a.e. \(t>0\)
\[\int_{\left\{\frac{1}{h(x)}>t\right\}}w^{2}(x)\,dx\leq\int_{\left\{\frac{1}{ h^{*}(x)}>t\right\}}(w^{\sharp})^{2}(x)\,dx.\]
For every \(t\in(0,\|1/h\|_{\infty})\), let us define
\[D_{t}:=\left\{\frac{1}{h(x)}>t\right\}=(0,\tilde{x}_{t})\cup(\tilde{y}_{t},1),\]
for some \(\tilde{x}_{t},\tilde{y}_{t}\in(0,1)\). In an analogous way, by the definition of increasing rearrangement (see Definition2.9), we have
\[D_{t}^{*}=\left\{\frac{1}{h^{*}(x)}>t\right\}=(0,\tilde{x}_{t}+1-\tilde{y}_{t }).\]
Let \(m=\min\{w(\tilde{x}_{t}),w(\tilde{y}_{t})\}\), and let us define the following auxiliary functions
\[f(x)=\min\{w(x),m\}^{2},\qquad\qquad g(x)=(w^{2}-m^{2})_{+},\]
so that
\[\int_{D_{t}}w^{2}\,dx=\int_{D_{t}}f\,dx+\int_{D_{t}}g\,dx.\]
Similarly, we define
\[f_{0}(x)=\min\{w^{\sharp}(x),m\}^{2},\qquad\qquad g_{0}(x)=((w^{\sharp})^{2}- m^{2})_{+},\]
so that
\[\int_{D_{t}^{*}}(w^{\sharp})^{2}\,dx=\int_{D_{t}^{*}}f_{0}\,dx+\int_{D_{t}^{* }}g_{0}\,dx.\]
We now evaluate separately the two terms:
1. By the definition of \(m\), we have that \[w(x)>m\qquad\forall\,x\in(0,1)\setminus D_{t}.\] Therefore, since \(f\) and \(f_{0}\) are equi-measurable, we get \[\begin{split}\int_{D_{t}}f(x)\,dx&=\int_{0}^{1}f(x) \,dx-(1-|D_{t}|)m^{2}\\ &=\int_{0}^{1}f_{0}(x)\,dx-\int_{(0,1)\setminus D_{t}^{*}}m^{2}\, dx\\ &\leq\int_{0}^{1}f_{0}(x)\,dx-\int_{(0,1)\setminus D_{t}^{*}}f_{ 0}(x)\,dx\\ &=\int_{D_{t}^{*}}f_{0}(x)\,dx,\end{split}\] (4.13) where we have used that \(|D_{t}|=|D_{t}^{*}|\), and that \(m\geq f_{0}\);
2. Lemma 4.5 allows us to assume without loss of generality that \(w(\tilde{y}_{t})=m\). Therefore, the quasi-concavity of \(w\) ensures that \[w(x)\leq m\qquad\forall\,x\in(\tilde{y}_{t},1),\] and we can write \[\int_{D_{t}}g(x)\,dx=\int_{0}^{\tilde{x}_{t}}(w^{2}(x)-m^{2})_{+}\,dx=\int_{m }^{w_{M}}2r|\{w>r\}\cap(0,\tilde{x}_{t})|\,dr.\] (4.14) On the other hand, \[\begin{split}\int_{D_{t}^{*}}g_{0}(x)\,dx&=\int_ {0}^{\tilde{x}_{t}+1-\tilde{y}_{t}}g_{0}(x)\,dx\\ &\geq\int_{0}^{\tilde{x}_{t}}g_{0}(x)\,dx\\ &=\int_{m}^{w_{M}}2r|\{w^{\sharp}>r\}\cap(0,\tilde{x}_{t})|\,dr. \end{split}\] (4.15) We now claim that \[|\{w^{\sharp}>r\}\cap(0,\tilde{x}_{t})|\geq|\{w>r\}\cap(0,\tilde{x}_{t})|.\] (4.16) Indeed, if we let \[\{w>r\}=(x_{r},y_{r}),\qquad\qquad\{w^{\sharp}>r\}=(x_{r}^{*},y_{r}^{*}),\] then (4.7) gives \[x_{r}^{*}=-\int_{0}^{r}\eta_{*,1}^{\prime}(s)\,ds\leq-\int_{0}^{r}\eta_{1}^{ \prime}(s)\,ds=x_{r},\] while the equi-misurability of \(w\) and \(w^{\sharp}\) gives \[y_{r}^{*}=(y_{r}^{*}-x_{r}^{*})+x_{r}^{*}=(y_{r}-x_{r})+x_{r}^{*}\leq y_{r}.\]
Therefore we get
\[|\{w^{\sharp}>r\}\cap(0,\tilde{x}_{t})| =|\{w>r\}\cap(0,\tilde{x}_{t})| \text{if }y_{r}\leq\tilde{x}_{t},\] \[|\{w^{\sharp}>r\}\cap(0,\tilde{x}_{t})| >|\{w>r\}\cap(0,\tilde{x}_{t})| \text{if }y_{r}>\tilde{x}_{t},\] thus the claim is proved. Finally, joining (4.14), (4.15), and (4.16), we have that \[\int_{D_{t}^{*}}g_{0}(x)\,dx\geq\int_{D_{t}}g(x)\,dx,\] (4.17) and the result follows from (4.17), and (4.13).
We now turn our attention to the eigenvalue problem.
Alternative proof of Theorem 1.2.: Let \(h\in\mathcal{P}_{1}\), by Proposition 4.1 we have that
\[\sigma_{1}(h)=\min\left\{\,\frac{\int_{0}^{1}(\varphi^{\prime})^{2}\,dx}{\int_ {0}^{1}\frac{\varphi^{2}}{h}\,dx}\colon\,\varphi\in H_{0}^{1}(0,1)\,\right\}. \tag{4.18}\]
Let \(w\) be a minimizer in (4.18), then by Lemma 4.6, and Lemma 4.7, we have
\[\sigma_{1}(h)=\frac{\int_{0}^{1}(w^{\prime})^{2}}{\int_{0}^{1} \frac{w^{2}}{h}}\geq\frac{\int_{0}^{1}((w^{\sharp})^{\prime})^{2}}{\int_{0}^ {1}\frac{(w^{\sharp})^{2}}{h^{*}}}\geq\sigma_{1}(h^{*}). \tag{4.19}\]
By Proposition 3.2, and Proposition 2.12, we have that the minimum of \(\sigma_{1}\) is a triangle \(T_{x_{0}}\) for some \(x_{0}\in[0,1]\). Let \(h=T_{x_{0}}\), then \(h^{*}=T_{1}\) and, from (4.19), we have
\[\sigma_{1}(T_{x_{0}})\geq\sigma_{1}(T_{1}),\]
which concludes the proof.
## 5 Ratio \(\mu/\sigma\)
In this section, we prove Theorem 1.3 and Theorem 1.4. We begin by defining an operator \(\mathcal{G}\) on \(\mathcal{P}\) as follows: let \(h\in\mathcal{P}\), and let
\[H(x)=\frac{1}{\int_{0}^{1}h(t)\,dt}\int_{0}^{x}h(t)\,dt; \tag{5.1}\]
we notice that \(H\) is a strictly increasing function such that \(H(0)=0\) and \(H(1)=1\). We then define
\[\mathcal{G}(h)(x)=h^{2}(H^{-1}(x)).\]
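For example, for the triangle \(T_{1}(x)=2x\) one has \(H(x)=x^{2}\) and \(H^{-1}(x)=\sqrt{x}\), so that
\[\mathcal{G}(T_{1})(x)=T_{1}^{2}(\sqrt{x})=4x=2\,T_{1}(x),\]
an identity that, as shown in Proposition 5.3 below, characterizes triangles.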
**Lemma 5.1**.: _Let \(h\in\mathcal{P}\). Then \(\mathcal{G}(h)\in\mathcal{P}\), and the map_
\[\mathcal{G}:\mathcal{P}\to\mathcal{P}\]
_is invertible._
Proof.: Since \(h\in\mathcal{P}\), then \(h^{\prime}\) is defined a.e. in \([0,1]\), and \(h^{\prime}\) is decreasing. We also have that \(H^{-1}\) is a locally Lipschitz function and
\[\frac{d}{dx}H^{-1}(x)=\frac{1}{h(H^{-1}(x))}\int_{0}^{1}h(t)\,dt. \tag{5.2}\]
Therefore, \(\mathcal{G}(h)\) is a.e. differentiable and
\[\frac{d}{dx}\mathcal{G}(h)(x)=2\alpha h^{\prime}(H^{-1}(x)),\]
where
\[\alpha=\int_{0}^{1}h(t)\,dt.\]
Since \(H^{-1}\) is an increasing function and \(h^{\prime}\) is decreasing, then \(\mathcal{G}(h)\) is a concave function, and \(\mathcal{G}(h)\in\mathcal{P}\).
Let \(k\in\mathcal{P}\) and define
\[K(x)=\frac{1}{\int_{0}^{1}\frac{1}{\sqrt{k(t)}}\,dt}\int_{0}^{x}\frac{1}{ \sqrt{k(t)}}\,dt, \tag{5.3}\]
then we want to prove that
\[\sqrt{k(K^{-1}(x))}=\mathcal{G}^{-1}(k)(x). \tag{5.4}\]
First we prove that \(\sqrt{k\circ K^{-1}}\in\mathcal{P}\). By direct computation,
\[\frac{d}{dx}\sqrt{k(K^{-1}(x))}=\frac{\beta k^{\prime}(K^{-1}(x))}{2k(K^{-1}( x))},\]
where
\[\beta=\int_{0}^{1}\frac{1}{\sqrt{k(t)}}\,dt.\]
This proves that \(\sqrt{k\circ K^{-1}}\) is concave, since \(K^{-1}\) is increasing and \(k^{\prime}/k\) is decreasing because of the concavity of \(k\). On the other hand, to prove (5.4), we observe that with a change of variables we get
\[\int_{0}^{x}\sqrt{k(K^{-1}(t))}\,dt=\frac{K(x)}{\int_{0}^{1}\frac{1}{\sqrt{k( t)}}\,dt},\]
and by definition of \(\mathcal{G}\) we get
\[\mathcal{G}\left(\sqrt{k\circ K^{-1}}\right)(x)=k(x).\]
We now prove that \(\mathcal{G}\) is the operator in Theorem 1.3.
Proof of Theorem 1.3.: Let \(v\in H^{1}(0,1)\) be a function such that
\[\int_{0}^{1}v(t)h(t)\,dt=0,\]
and let \(H\) denote the integral function defined in (5.1). The change of variables \(H(t)=s\) yields
\[\int_{0}^{1}v(t)\,h(t)\,dt=\left(\int_{0}^{1}h(t)\,dt\right)\int_{0}^{1}v(H^{-1}(s))\,ds,\]
\[\int_{0}^{1}(v^{\prime})^{2}(t)\,h(t)\,dt=\left(\int_{0}^{1}h(t)\,dt\right)\int_{0}^{1}(v^{\prime})^{2}(H^{-1}(s))\,ds,\]
and
\[\int_{0}^{1}v^{2}(t)\,h(t)\,dt=\left(\int_{0}^{1}h(t)\,dt\right)\int_{0}^{1}v^{2}(H^{-1}(s))\,ds.\]
Let \(w(x)=v(H^{-1}(x))\), then by (5.2),
\[w^{\prime}(x)=\left(\int_{0}^{1}h(t)\,dt\right)\,v^{\prime}(H^{-1}(x))\left( \mathcal{G}(h)(x)\right)^{-\frac{1}{2}}.\]
Hence,
\[\frac{\int_{0}^{1}(v^{\prime})^{2}(t)\,h(t)\,dt}{\int_{0}^{1}v^{2}(t)\,h(t)\, dt}=\left(\int_{0}^{1}h(t)\,dt\right)^{-2}\frac{\int_{0}^{1}(w^{\prime})^{2}(t) \,\mathcal{G}(h)(t)\,dt}{\int_{0}^{1}w^{2}(t)\,dt}.\]
Choosing \(v=v_{\mu}\) to be the eigenfunction of \(\mu_{1}(h)\), then we get
\[\mu_{1}(h)\geq\left(\int_{0}^{1}h(t)\,dt\right)^{-2}\sigma_{1}(\mathcal{G}(h)).\]
On the other hand, choosing \(w=w_{\sigma}\) to be the eigenfunction of \(\sigma_{1}(\mathcal{G}(h))\), we get
\[\mu_{1}(h)\leq\left(\int_{0}^{1}h(t)\,dt\right)^{-2}\sigma(\mathcal{G}(h)),\]
which gives (1.6).
Let \(k\in\mathcal{P}\), let \(h=\mathcal{G}^{-1}(k)\), and let \(K\) be the integral function defined in (5.3). Evaluating \(\int_{0}^{1}h(t)\,dt\) by means of the change of variables \(t=K(s)\), we finally get
\[\int_{0}^{1}h(t)\,dt=\left(\int_{0}^{1}\frac{1}{\sqrt{k(t)}}\,dt\right)^{-1},\]
which gives (1.7).
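As a concrete check of (1.6), take \(h=T_{1}(x)=2x\), for which \(\int_{0}^{1}h=1\): the eigenvalue problem (1.3) reduces to a Bessel equation with eigenfunction \(J_{0}(\sqrt{\mu}\,x)\), so that \(\mu_{1}(T_{1})=(j_{0,1}^{\prime})^{2}\), while \(\mathcal{G}(T_{1})=4x=2T_{1}\) and, by Remark 3.5 together with the homogeneity of \(\sigma_{1}\),
\[\sigma_{1}(\mathcal{G}(T_{1}))=\sigma_{1}(2\,T_{1})=2\,\sigma_{1}(T_{1})=(j_{0,1}^{\prime})^{2}=\left(\int_{0}^{1}T_{1}(t)\,dt\right)^{2}\mu_{1}(T_{1}),\]
as predicted by (1.6); in particular \(\mu_{1}(T_{1})/\sigma_{1}(T_{1})=2\), the equality case of Theorem 1.4.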
The following punctual estimate will be crucial.
**Proposition 5.2**.: _Let \(h\in\mathcal{P}\). Then_
\[\left(\int_{0}^{1}h(t)\,dt\right)^{-1}\mathcal{G}(h)(x)\leq 2h(x).\]
Proof.: Up to rescaling \(h\), we can assume without loss of generality that \(h\in\mathcal{P}_{1}\). Notice, in addition, that if \(h\equiv 1\), then the proof is trivial. Therefore, let \(h\in\mathcal{P}_{1}\) and \(h\neq 1\), and define
\[H(x)=\int_{0}^{x}h(t)\,dt.\]
We claim that there exists a unique \(\bar{x}\in[0,1]\) such that
\[\begin{array}{ll}H(x)\leq x&x\in[0,\bar{x}],\\ H(x)\geq x&x\in[\bar{x},1].\end{array} \tag{5.5}\]
Indeed, if we denote by
\[f(x)=H(x)-x,\]
then, by the concavity of \(h\) and the integral constraint, we have that the equation \(h=1\) admits at most two solutions (\(h\) cannot be equal to \(1\) in an entire interval, otherwise the concavity would give \(\|h\|_{1}<1\)). Therefore, we have that there exist two points \(x_{1}\in[0,1)\), and \(x_{2}\in(0,1]\) such that
\[f^{\prime}(x)<0 x\in[0,x_{1})\cup(x_{2},1],\] \[f^{\prime}(x)>0 x\in(x_{1},x_{2}).\]
Finally, noticing that \(f(0)=f(1)=0\), then we have that there exists a unique zero \(\bar{x}\) of \(f\) in the interval \([x_{1},x_{2}]\), thus the claim is proved. In particular, we have that
\[H^{-1}(x)\geq x x\in[0,\bar{x}],\] \[H^{-1}(x)\leq x x\in[\bar{x},1],\]
and \(h(\bar{x})>1\).
These estimates allow us to compare the derivatives of \(\mathcal{G}(h)\) and \(h\). Denoting by \(g(x)=\mathcal{G}(h)(x)\), we have that
\[g^{\prime}(x)=2h^{\prime}(H^{-1}(x))\leq 2h^{\prime}(x) x\in[0,\bar{x}], \tag{5.6}\] \[g^{\prime}(x)=2h^{\prime}(H^{-1}(x))\geq 2h^{\prime}(x) x\in[\bar{x},1]. \tag{5.7}\]
We recall that, as in (2.4), the concavity of \(h\) ensures that
\[\|h\|_{\infty}\leq 2.\]
Therefore, we get
\[g(0)=h^{2}(0)\leq 2h(0),\]
and by (5.6),
\[g(x)\leq 2h(x)\qquad\qquad x\in[0,\bar{x}].\]
Analogously,
\[g(1)=h^{2}(1)\leq 2h(1),\]
and, by (5.7),
\[g(x)\leq 2h(x)\qquad\qquad x\in[\bar{x},1].\]
**Proposition 5.3**.: _Let \(h\in\mathcal{P}_{1}\)._
\[\mathcal{G}(h)=2h\]
_if and only if \(h=T_{x_{0}}\) for some \(x_{0}\in[0,1]\)._
Proof.: By direct computation, one can prove that if \(h=T_{x_{0}}\) for some \(x_{0}\in[0,1]\), then
\[h^{2}(x)=2h(H(x)).\]
Let us now assume that \(\mathcal{G}(h)=2h\). Notice that, if \(y\in[0,1]\) is a fixed point of the integral function \(H\), then
\[h^{2}(y)=h^{2}(H^{-1}(y))=\mathcal{G}(h)(y)=2h(y), \tag{5.8}\]
so that either \(h(y)=0\) or \(h(y)=2\). In particular, if \(h(y)=2\), then by the concavity of \(h\), we have that
\[h(x)=T_{y}(x)\qquad\forall\,x\in[0,1].\]
Since \(0\) and \(1\) are always fixed points of \(H\), if either \(h(0)=2\) or \(h(1)=2\) the assertion is proved. Therefore, let us assume that
\[h(0)=0=h(1),\]
then the equation \(h=1\) admits at least two distinct solutions \(0<x_{1}<x_{2}<1\) and, arguing as in the proof of Proposition 5.2, we have that there exists a fixed point \(\bar{x}\in[x_{1},x_{2}]\) for the function \(H\) and, by (5.8), necessarily \(h(\bar{x})=2\) and \(h=T_{\bar{x}}\).
**Proposition 5.4**.: _Let \(h\in\mathcal{P}_{1}\), then_
\[\frac{\mu_{1}(h)}{\sigma_{1}(h)}\leq 2 \tag{5.9}\]
_and the equality holds if and only if \(h=T_{x_{0}}\) for some \(x_{0}\in[0,1]\)._
Proof.: Let \(w\) be an eigenfunction for \(\sigma_{1}(h)\). Using Theorem 1.3, Proposition 5.2, and the variational characterization of \(\sigma_{1}(\mathcal{G}(h))\), we obtain
\[\mu_{1}(h)=\sigma_{1}(\mathcal{G}(h))\leq\frac{\int_{0}^{1}(w^{ \prime})^{2}\mathcal{G}(h)\,dx}{\int_{0}^{1}w^{2}\,dx}\leq\frac{ 2\int_{0}^{1}(w^{\prime})^{2}h\,dx}{\int_{0}^{1}w^{2}\,dx}=2 \sigma_{1}(h), \tag{5.10}\]
thus proving (5.9). Assume now that for some \(h\in\mathcal{P}_{1}\) equality holds, then by (5.10) we have
\[\int_{0}^{1}(w^{\prime})^{2}(\mathcal{G}(h)-2h)\,dx=0. \tag{5.11}\]
Since \(\mathcal{G}(h)\leq 2h\), then (5.11) yields \(\mathcal{G}(h)=2h\), and Proposition 5.3 ensures that \(h=T_{x_{0}}\) for some \(x_{0}\in[0,1]\).
**Remark 5.5**.: Since it is not possible in general to have that \(\mathcal{G}(h)\geq h\), then the same argument cannot be used for the lower bound
\[\frac{\mu_{1}(h)}{\sigma_{1}(h)}\geq 1.\]
For instance, let
\[h(x)=\frac{1}{2}+x,\]
then \(\mathcal{G}(h)(0)=h^{2}(0)<h(0)\), while \(\mathcal{G}(h)(1)=h^{2}(1)>h(1)\).
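Explicitly, for this \(h\) one finds \(H(x)=(x+x^{2})/2\) and \(H^{-1}(x)=\frac{-1+\sqrt{1+8x}}{2}\), so that
\[\mathcal{G}(h)(x)=h^{2}\left(H^{-1}(x)\right)=\frac{1+8x}{4}=\frac{1}{4}+2x,\]
which satisfies \(\mathcal{G}(h)\leq 2h\) on \([0,1]\) but crosses \(h\) at \(x=1/4\).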
Here we prove the lower bound in Theorem 1.4 in the symmetric case.
**Proposition 5.6**.: _Let \(h\in\mathcal{P}_{1}\) such that \(h(1-x)=h(x)\) for all \(x\in[0,1]\). Then_
\[\frac{\mu_{1}(h)}{\sigma_{1}(h)}\geq 1. \tag{5.12}\]
Proof.: Let \(g=\mathcal{G}(h)\). By the variational characterization (4.18) of \(\sigma_{1}\), and Theorem 1.3, we can find a function \(w\in H^{2}(0,1)\), symmetric with respect to \(x=1/2\), such that
\[\mu_{1}(h)=\sigma_{1}(g)=\frac{\int_{0}^{1}(w^{\prime})^{2}(x)\, dx}{\int_{0}^{1}\frac{w^{2}(x)}{g(x)}\,dx}, \tag{5.13}\]
and \(w\) solves the problem
\[\begin{cases}-w^{\prime\prime}(x)=\frac{\sigma_{1}(g)}{g(x)}w(x)&x\in(0,1),\\ w(0)=w(1)=0.\end{cases}\]
We can choose \(w\) to be positive and concave, so that
\[\begin{split} w^{\prime}(x)&\geq 0\qquad\qquad\text{in } \left(0,\frac{1}{2}\right),\\ w^{\prime}(x)&\leq 0\qquad\qquad\text{in }\left(\frac{1}{2},1 \right).\end{split} \tag{5.14}\]
Moreover, by the variational characterization (4.18), we get
\[\sigma_{1}(h)\leq\frac{\int_{0}^{1}(w^{\prime})^{2}(x)\,dx}{\int_{0}^{1}\frac{ w^{2}(x)}{h(x)}\,dx}, \tag{5.15}\]
and then, joining (5.13) and (5.15), we get
\[\frac{\mu_{1}(h)}{\sigma_{1}(h)}\geq\frac{\int_{0}^{1}\frac{w^{2}(x)}{h(x)}\, dx}{\int_{0}^{1}\frac{w^{2}(x)}{g(x)}\,dx}.\]
To prove (5.12) it is sufficient to prove that
\[\int_{0}^{1}\frac{w^{2}(x)}{g(x)}\,dx\leq\int_{0}^{1}\frac{w^{2}(x)}{h(x)}\,dx.\]
We now compute the left-hand side by means of the change of variables \(x=H(y)\), where
\[H(y)=\int_{0}^{y}h(t)\,dt,\]
so that
\[\int_{0}^{1}\frac{w^{2}(x)}{g(x)}\,dx=\int_{0}^{1}\frac{w^{2}(H(y))}{h(y)}\,dy.\]
We now notice that the symmetry of \(h\) gives (5.5) with \(\bar{x}=1/2\), namely
\[\begin{split} H(y)&\leq y\qquad\qquad y\in\left[0,\frac{1}{2}\right],\\ H(y)&\geq y\qquad\qquad y\in\left[\frac{1}{2},1\right].\end{split} \tag{5.16}\]
Finally, joining (5.16), and (5.14), we have
\[\int_{0}^{1}\frac{w^{2}(x)}{g(x)}\,dx=\int_{0}^{1}\frac{w^{2}(H(y))}{h(y)}\, dy\leq\int_{0}^{1}\frac{w^{2}(y)}{h(y)}\,dy,\]
which concludes the proof.
Proof of Theorem 1.4.: The result follows from Proposition 5.4 and Proposition 5.6.
### Acknowledgements
We would like to thank Carlo Nitsch and Cristina Trombetti for the valuable advice that helped us to achieve these results.
The three authors were partially supported by Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of Istituto Nazionale di Alta Matematica (INdAM).
### Competing Interests
The authors report there are no competing interests to declare.
|
2310.00842 | Time-averaged Dynamics of Compressible Particles in Oscillatory Gradient
Flows | Acoustic fields effect steady transport of suspended particles by rectifying
the inertia of primary oscillations. We develop a fully analytic theory that
relates this steady particle motion to incident oscillatory (acoustic) flow and
the time-averaged force acting on the particle, systematically spanning the
entire range between inviscid acoustofluidics and viscous particle
hydrodynamics. By applying the Lorentz reciprocal theorem, we obtain a
Fax\'{e}n-like relationship that includes nonlinear inertial forces, which
depend on (i) the thickness of the oscillatory Stokes layer around the
particle, and (ii) the density and compressibility contrast between the
particle and the fluid. The framework recovers secondary radiation forces for
thin Stokes layers, and predicts a reversal of the motion when the thickness of
the Stokes layer is comparable to the particle size. We quantitatively validate
the theory using numerical simulations of the timescale-separated
hydrodynamics. | Xiaokang Zhang, Jake Minten, Bhargav Rallabandi | 2023-10-02T01:33:19Z | http://arxiv.org/abs/2310.00842v1 | # Time-averaged Dynamics of Compressible Particles in Oscillatory Gradient Flows
###### Abstract
Acoustic fields effect steady transport of suspended particles by rectifying the inertia of primary oscillations. We develop a fully analytic theory that relates this steady particle motion to incident oscillatory (acoustic) flow and the time-averaged force acting on the particle, systematically spanning the entire range between inviscid acoustofluidics and viscous particle hydrodynamics. By applying the Lorentz reciprocal theorem, we obtain a Faxen-like relationship that includes nonlinear inertial forces, which depend on (i) the thickness of the oscillatory Stokes layer around the particle, and (ii) the density and compressibility contrast between the particle and the fluid. The framework recovers secondary radiation forces for thin Stokes layers, and predicts a reversal of the motion when the thickness of the Stokes layer is comparable to the particle size. We quantitatively validate the theory using numerical simulations of the timescale-separated hydrodynamics.
The application of oscillatory fields is a powerful means to manipulate suspended particles and has recently been used in a wide range of applications, including microfluidic particle focusing and sorting [1, 2], cell patterning [3], acoustic levitation [4, 5], and the design of swimming microrobots [6]. An incident acoustic or otherwise oscillatory source excites an oscillatory flow around a suspended particle. The advective inertia of the primary oscillations drives a secondary flow that exerts a nonzero time-averaged force on the particle, leading to time-averaged motion of the particle along gradients of the incident field. For example, particles may accumulate at nodes or antinodes of an acoustic standing wave [7], be attracted to boundaries [2, 8], or may assemble into chains or clusters [9, 10, 11].
The flow is controlled by the ratio \(\delta=\sqrt{\frac{2\nu}{\omega a^{2}}}\) of a viscous Stokes layer thickness to the particle radius \(a\) (\(\nu\) is the kinematic viscosity of the fluid and \(\omega\) is the angular frequency of oscillation); see Fig. 1. In the inviscid acoustic limit (\(\delta\ll 1\)), the time-averaged particle dynamics are well understood through the theory of secondary radiation forces [12, 13]. An alternative (better suited for \(\delta\gg 1\)) uses the Gatignol-Maxey-Riley equation [14, 15] (often with modifications [16, 17, 18, 19]) but neglects compressibility effects important in acoustics. Most applications operate at intermediate \(\delta\), where no simple analytic theory exists, and where the above approaches and direct hydrodynamic calculations [20, 21] can yield contradictory predictions for the particle dynamics.
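To fix ideas, with water-like values \(\nu\approx 10^{-6}\,\mathrm{m^{2}/s}\) and a \(1\,\mathrm{MHz}\) drive (representative numbers chosen here for illustration, not taken from a specific experiment), micron-scale particles indeed sit in this intermediate range:

```python
import numpy as np

nu = 1e-6                      # kinematic viscosity of water [m^2/s] (assumed)
omega = 2 * np.pi * 1e6        # 1 MHz drive frequency [rad/s]       (assumed)
for a in (1e-6, 3e-6, 10e-6):  # particle radius [m]
    delta = np.sqrt(2 * nu / (omega * a ** 2))
    print(f"a = {a * 1e6:4.1f} um  ->  delta = {delta:.2f}")
# delta ~ 0.56, 0.19, 0.06: neither the inviscid nor the fully viscous limit
```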
In this Letter, we develop analytic theory and numerical simulations for the time-averaged motion of a spherical particle in an oscillatory flow, systematically accounting for inertial and viscous forces (arbitrary \(\delta\)) and compressibility effects. We start with a known ambient (or incident) fluid flow \(\mathbf{v}^{\infty}(\mathbf{x},t)\) that is defined in the absence of the particle (Fig. 1) and is characterized by a combination of an oscillatory primary component of characteristic speed \(v\) and a slower secondary component with non-zero time-average. Such flows are common in nonlinear acoustics and in streaming flows driven by oscillating boundaries. The flow in the presence of the particle is \(\mathbf{v}(\mathbf{x},t)=\mathbf{v}^{\infty}(\mathbf{x},t)+\mathbf{v}^{d}( \mathbf{x},t)\), where \(\mathbf{v}^{d}\) is the disturbance (or scattered) flow produced by the particle. The fluid (\(f\)) and the particle (\(p\)) have equilibrium density \(\rho_{f,p}\) and compressibility \(\kappa_{f,p}=(\rho_{f,p}c_{f,p}^{2})^{-1}\), where \(c_{f,p}\) is the speed of sound in the medium. Scaling length with \(a\), time with \(\omega^{-1}\) and defining a dimensionless density field \(\varrho(\mathbf{x},t)=\rho(\mathbf{x},t)/\rho_{f}\), the flow is governed by
\[\frac{2}{\delta^{2}}\varrho\left(\frac{\partial\mathbf{v}}{\partial t}+\varepsilon\mathbf{v}\cdot\nabla\mathbf{v}\right)=\nabla\cdot\mathbf{\sigma}, \tag{1a}\] \[\frac{\partial\varrho}{\partial t}+\varepsilon\nabla\cdot(\varrho\mathbf{v})=0\,. \tag{1b}\]
Here, \(\mathbf{\sigma}=-p\mathbf{I}+\left(\nabla\mathbf{v}+\nabla\mathbf{v}^{\mathsf{ T}}\right)\) is the stress tensor (scaled with \(\mu v/a\), where \(\mu=\nu\rho_{f}\)) and \(\varepsilon=v/(a\omega)\) is the dimensionless amplitude of oscillation. The particle translates [velocity \(\mathbf{V}_{p}(t)\)], and undergoes volume oscillations (with a surface velocity \(V_{n}(t)\mathbf{n}\), \(\mathbf{n}\) being the fluid-facing unit normal). On the particle surface \(S_{p}(t)\), the flow thus
Figure 1: An ambient flow of fluid (density \(\rho_{f}\), compressibility \(\kappa_{f}\)) produces oscillations of a suspended particle (density \(\rho_{p}\), compressibility \(\kappa_{p}\)). Advective nonlinearities drive a secondary time-averaged motion of the particle.
satisfies
\[\mathbf{v}(\mathbf{x},t)=\mathbf{V}_{p}(t)+V_{n}(t)\mathbf{n},\quad\mathbf{x}\in S _{p}(t), \tag{2}\]
Rotation of the particle does not contribute to the force due to symmetry [21], so we neglect it here.
We seek to relate the time-averaged motion of the particle to the (known) ambient flow and time-averaged forces acting on the particle for arbitrary \(\delta\) and small oscillation amplitude \(\varepsilon\ll 1\)[22]. We invoke a perturbation solution with \((\mathbf{v},\mathbf{\sigma})\sim(\mathbf{v}_{1},\mathbf{\sigma}_{1})+\varepsilon( \mathbf{v}_{2},\mathbf{\sigma}_{2})\) and \(\varrho\sim 1+\varepsilon\varrho_{1}\). Primary components (subscript 1) are strictly oscillatory, whereas secondary components (subscript 2) additionally involve steady components, which are of interest. Separating orders of \(\varepsilon\) in (1) leads to
\[\frac{2}{\delta^{2}}\frac{\partial\mathbf{v}_{1}}{\partial t}= \nabla\cdot\mathbf{\sigma}_{1},\quad\frac{\partial\varrho_{1}}{\partial t}+\nabla \cdot\mathbf{v}_{1}=0. \tag{3a}\] \[\nabla\cdot\left\langle\mathbf{\sigma}_{2}-\frac{2}{\delta^{2}}\mathbf{v}_{1}\mathbf{v}_{1}\right\rangle=\mathbf{0},\quad\nabla \cdot\left\langle\mathbf{v}_{2}+\varrho_{1}\mathbf{v}_{1}\right\rangle=0, \tag{3b}\]
where angle brackets define a time-average over an oscillation according to \(\left\langle g\right\rangle(\mathbf{x})=(2\pi)^{-1}\int_{t}^{t+2\pi}g(\mathbf{x},t)\,dt\) and isolate steady flow features. As is typical in acoustics, the primary flow is weakly compressible, with pressure and density oscillations being related by \(p_{1}=\varrho_{1}c_{f}^{2}/(\nu\omega)\). The inertia of the secondary flow is typically small [23] and has been neglected in (3b).
Similarly expanding the particle kinematics into primary and secondary contributions, projecting (2) onto the mean particle surface \(\left\langle S_{p}\right\rangle\), and separating powers of \(\varepsilon\) yields effective boundary conditions (details in Supplemental Material [24])
\[\mathbf{v}_{1} =\mathbf{V}_{p1}+\mathbf{V}_{n1}\mathbf{n}\quad\text{for}\quad \mathbf{x}\in\left\langle S_{p}\right\rangle, \tag{4a}\] \[\mathbf{v}_{2} =\mathbf{V}_{p2}-\left\langle\int\mathbf{v}_{1}dt\cdot\nabla \mathbf{v}_{1}\right\rangle\quad\text{for}\quad\mathbf{x}\in\left\langle S_{p }\right\rangle. \tag{4b}\]
We first solve for the primary (oscillatory) flow around the particle [21; 25] by making the ansatz that all primary fields are of the form \(\text{Re}\left[g(\mathbf{x})e^{it}\right]\) for generally complex \(g(\mathbf{x})\). Spatial variations of the ambient flow occur on length scales much larger than the particle (either the wavelength of sound \(L_{c}=2\pi c_{f}/\omega\) or a geometric scale \(L_{g}\)). Defining a time-averaged (i.e. inertial) frame \(\mathbf{r}=\mathbf{x}-\left\langle\mathbf{X}_{p}\right\rangle\) centered at the _time-averaged particle center_ \(\left\langle\mathbf{X}_{p}\right\rangle\), we expand the primary ambient flow as
\[\mathbf{v}_{1}^{\infty}(\mathbf{x},t)\sim\mathbf{V}_{1}^{\infty}(t)+\mathbf{E} _{1}^{\infty}(t)\cdot\mathbf{r}+\frac{1}{3}\Delta_{1}^{\infty}(t)\mathbf{r}+\ldots \tag{5}\]
in terms of the velocity \(\mathbf{V}_{1}^{\infty}\), the deviatoric rate of strain (i.e. extension rate) \(\mathbf{E}_{1}^{\infty}\) and the velocity divergence \(\Delta_{1}^{\infty}\) of the ambient flow, all evaluated at \(\mathbf{x}=\left\langle\mathbf{X}_{p}\right\rangle\). Note that these flow properties are complex phasors, and it will be understood that only the real part of any complex equality is physically meaningful. The vorticity of the ambient flow does not contribute to forces due to symmetry and has been neglected in (5). Solving (3a), (4a) for \(a\ll L_{c}\) yields the primary disturbance flow
\[\mathbf{v}_{1}^{d}=\mathbf{D}\cdot(\mathbf{V}_{p1}-\mathbf{V}_{1}^{\infty})+ \mathbf{\mathcal{Q}}:\mathbf{E}_{1}^{\infty}+\mathbf{m}\left(V_{n1}-\frac{\Delta_ {1}^{\infty}}{3}\right)\!, \tag{6}\]
where \(\mathbf{m}(\mathbf{r})\), \(\mathbf{D}(\mathbf{r},\delta)\) and \(\mathbf{\mathcal{Q}}(\mathbf{r},\delta)\) are well-known monopole (rank-1), dipole (rank-2) and quadrupole (rank-3) tensor solutions; see [24].
The primary oscillatory flow \(\mathbf{v}_{1}=\mathbf{v}_{1}^{\infty}+\mathbf{v}_{1}^{d}\) is now known (as is the primary stress \(\mathbf{\sigma}_{1}\)), up to the oscillatory particle kinematics \(\mathbf{V}_{p1}\) and \(V_{n1}\). To this end, we invoke conservation of the particle momentum (projected on \(e^{it}\) modes), \(\frac{4}{3}\pi a^{3}\rho_{p}\frac{d\mathbf{V}_{p1}}{dt}=\int_{\left\langle S_{p}\right\rangle}\mathbf{n}\cdot\mathbf{\sigma}_{1}\,dS\). This establishes the oscillatory velocity of the particle relative to that of the ambient flow according to [13]
\[\mathbf{V}_{p1}-\mathbf{V}_{1}^{\infty} =\mathcal{R}\mathbf{V}_{1}^{\infty},\quad\text{where} \tag{7a}\] \[\mathcal{R}(\lambda,\tilde{\rho}) =-\frac{2\lambda^{2}(\tilde{\rho}-1)}{\lambda^{2}(2\tilde{\rho}+1) +9\lambda+9} \tag{7b}\]
is a relative particle mobility (see Fig. 2), \(\tilde{\rho}=\rho_{p}/\rho_{f}\) is the density ratio, and \(\lambda=(1+i)/\delta\) is a complex reciprocal Stokes layer thickness. Real and imaginary parts of \(\mathcal{R}\), respectively, quantify in-phase and out-of-phase oscillations of the particle relative to the fluid. Similarly, equilibrium of normal stresses on the particle surface determines \(V_{n1}=\frac{1}{3}\left(\tilde{\kappa}-1\right)\Delta_{1}^{\infty}\), where \(\tilde{\kappa}=\kappa_{p}/\kappa_{f}\) is the compressibility ratio [13].
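As a quick numerical illustration (ours, not part of the original analysis), the mobility (7b) can be evaluated directly; the density ratio used below is an arbitrary example value.

```python
import numpy as np

def relative_mobility(delta, rho_ratio):
    """Relative particle mobility R(lambda, rho~) of Eq. (7b), with lambda = (1+1j)/delta."""
    lam = (1 + 1j) / delta
    return -2 * lam**2 * (rho_ratio - 1) / (lam**2 * (2 * rho_ratio + 1) + 9 * lam + 9)

# In-phase (real) and out-of-phase (imaginary) response of a dense particle, rho~ = 2.5
for delta in (0.1, 1.0, 10.0):
    R = relative_mobility(delta, 2.5)
    print(f"delta = {delta:5.1f}:  Re(R) = {R.real:+.3f},  Im(R) = {R.imag:+.3f}")
```

In the inviscid limit the printed values approach \(-2(\tilde{\rho}-1)/(2\tilde{\rho}+1)\), while they vanish in the viscous limit where the particle simply follows the fluid, consistent with Fig. 2.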
Having fully determined the primary flow \(\mathbf{v}_{1}\), we turn to the time-averaged particle motion. Doinikov [20] showed that the average force exerted by the fluid on the particle (in units of \(\epsilon\mu av\)) is
\[\left\langle\mathbf{F}\right\rangle=\int_{\left\langle S_{p}\right\rangle} \mathbf{n}\cdot\left\langle\mathbf{\sigma}_{2}-\frac{2}{\delta^{2}}\mathbf{v}_{1} \mathbf{v}_{1}\right\rangle\,dS. \tag{8}\]
To make an analytic prediction for \(\left\langle\mathbf{F}\right\rangle\) without calculating \(\mathbf{\sigma}_{2}\) in detail (which requires a solution to the secondary flow), we reformulate (8) using the Lorentz reciprocal theorem [26]. We introduce, as an auxiliary flow, the steady, incompressible, Stokes flow [velocity \(\hat{\mathbf{v}}(\mathbf{x})\), stress \(\hat{\mathbf{\sigma}}(\mathbf{x})\)] produced by a sphere translating with velocity \(\hat{\mathbf{V}}\) through quiescent fluid. The rate of strain of this
Figure 2: Relative particle mobility, showing real (solid; in-phase) and imaginary (dashed; out-of-phase) parts.
auxiliary flow is \(\hat{\mathbf{E}}(\mathbf{x})=\mathbf{\mathcal{E}}(\mathbf{x})\cdot\hat{\mathbf{V}}\) and the auxiliary traction on the particle surface is \(\mathbf{n}\cdot\hat{\mathbf{\sigma}}|_{\langle S_{p}\rangle}=\mathbf{T}(\mathbf{x})\cdot\hat{\mathbf{V}}\), where the tensors \(\mathbf{\mathcal{E}}\) (rank 3) and \(\mathbf{T}\) (rank 2) are well-known (e.g. [24; 25; 27]). Starting with (3b), we construct the symmetry relation \(\nabla\cdot\left\langle\mathbf{\sigma}_{2}-\frac{2}{\delta^{2}}\mathbf{v}_{1}\mathbf{v}_{1}\right\rangle\cdot\hat{\mathbf{v}}=\nabla\cdot\hat{\mathbf{\sigma}}\cdot\mathbf{v}_{2}\) and integrate over the fluid volume to recast (8) as (see SI)
\[\langle\mathbf{F}\rangle= \int_{\langle S_{p}\rangle}\left(\mathbf{V}_{p2}-\mathbf{v}_{2L}^{\infty}- \left\langle\int\mathbf{v}_{1}dt\cdot\nabla\mathbf{v}_{1}\right\rangle^{d} \right)\cdot\mathbf{T}\,dS\] \[+\int_{\langle V\rangle}\frac{2}{\delta^{2}}\left\langle\mathbf{v}_{1} \mathbf{v}_{1}\right\rangle^{d}:\mathbf{\mathcal{E}}\,dV, \tag{9}\]
where \(\langle V\rangle\) represents the volume surrounding the time-averaged particle surface. Above, we have introduced the (known) ambient time-averaged Lagrangian "streaming" velocity \(\mathbf{v}_{2L}^{\infty}(\mathbf{x})=\left\langle\mathbf{v}^{\infty}+\int \mathbf{v}_{1}^{\infty}dt\cdot\nabla\mathbf{v}_{1}^{\infty}\right\rangle\), which represents the average velocity of a _material_ fluid element in the absence of the particle [28]. Using standard averaging rules for products of complex oscillating quantities [29], we find that the time-averaged force on the particle (reverting to dimensional variables) is
\[\langle\mathbf{F}\rangle\stackrel{{\rm Real}}{{=}} -6\pi\mu a\left\{\mathbf{V}_{p2}-\mathbf{v}_{2L}^{\infty}-\frac{a^ {2}}{6}\nabla^{2}\mathbf{v}_{2L}^{\infty}\right\}\bigg{|}_{\mathbf{x}=( \mathbf{X}_{p})}\] \[+m_{f}\left(\mathbf{V}_{1}^{\infty}\right)^{*}\cdot\mathbf{E}_{1} ^{\infty}\ \mathcal{F}_{E}(\lambda,\tilde{\rho})\] \[+m_{f}\left(\mathbf{V}_{1}^{\infty}\right)^{*}\Delta_{1}^{\infty}\ \mathcal{F}_{\Delta}(\lambda,\tilde{\rho},\tilde{\kappa}), \tag{10}\]
where \(m_{f}=\frac{4}{3}\pi a^{3}\rho_{f}\), and \(\mathcal{F}_{E}\) and \(\mathcal{F}_{\Delta}\) are complex coefficients that we discuss in detail later. The asterisk denotes a complex conjugate, and only the real part of (10) is physically relevant [30].
The first term of (10) is a Stokes drag with a Faxen correction for a non-inertial particle moving through the _Lagrangian_ ambient streaming field \(\mathbf{v}_{2L}^{\infty}\). The second and third terms are inertial forces that depend on quadratic combinations of the ambient oscillatory velocity and rate of strain. The associated complex coefficients \(\mathcal{F}_{E,\Delta}\) determine the strengths of these forces and account for both in-phase and out-of-phase oscillations. For example, the real part of \(\mathcal{F}_{E,\Delta}\) quantifies the force resulting from an ambient flow velocity that oscillates in phase with the ambient strain-rate (e.g. a standing acoustic wave), whereas the imaginary part of \(\mathcal{F}_{E,\Delta}\) characterizes forces due to \(90^{\circ}\) out-of-phase oscillations (e.g. a traveling wave). We find that these coefficients admit the exact decomposition
\[\mathcal{F}_{E} =\mathcal{R}^{*}\mathcal{G}_{E}, \tag{11a}\] \[\mathcal{F}_{\Delta} =(\tilde{\kappa}-1)\,\mathcal{G}_{\Delta}^{\kappa}+\mathcal{R}^{* }\mathcal{G}_{\Delta}^{\mathcal{R}}+(\tilde{\kappa}-1)\mathcal{R}^{*}\mathcal{ G}_{\Delta}^{\kappa\mathcal{R}}, \tag{11b}\]
into terms that depend on the density contrast \((\tilde{\rho}-1)\) (through \(\mathcal{R}^{*}\)), the compressibility contrast \((\tilde{\kappa}-1)\) or a product of the two. The associated complex coefficients \(\mathcal{G}_{A}^{B}(\lambda)\) are purely hydrodynamic quantities (independent of particle properties) that arise from the spatial structure of the primary flow. They are obtained by analytic evaluation of the integrals in (9) (using _Mathematica_); see solid curves in Fig. 3. Simple expressions for these coefficients are given by
\[\mathcal{G}_{E}(\lambda) \simeq-\frac{3\lambda^{2}+\frac{4}{5}(1+9i)\lambda+9}{4\lambda^{2}}, \tag{12a}\] \[\mathcal{G}_{\Delta}^{\kappa}(\lambda) =-\frac{1}{2},\] (12b) \[\mathcal{G}_{\Delta}^{\mathcal{R}}(\lambda) =-\frac{\lambda^{2}+3i\lambda+6}{4\lambda^{2}},\] (12c) \[\mathcal{G}_{\Delta}^{\kappa\mathcal{R}}(\lambda) \simeq-\frac{15}{4\lambda^{2}}\frac{(9\lambda+8i)}{(9\lambda+40i)}. \tag{12d}\]
Note that (12b,c) are exact, while the approximations (12a,d) are accurate to within 3% of the exact results [24] and are asymptotic at leading order for both small and large \(\lambda\); see dashed curves in Fig. 3(a,d).
The relations (10)-(12) describe in full the time-averaged motion of a particle suspended in an oscillatory gradient flow and form the main theoretical result of this Letter. In the inviscid limit of \(\delta\to 0\) (\(\lambda\to\infty\)), the present formulation fully recovers the theory of secondary radiation forces [12; 13], while the viscous limit \(\delta\to\infty\) recovers Stokesian hydrodynamics; cf. [31]. We note that time-averaged inertial force contributions due to flow curvature [32] can simply be added to the right hand side of (10). Furthermore, because the secondary flow is quasi-steady and inertialess, the sum of the time-averaged hydrodynamic force \(\langle\mathbf{F}\rangle\) and external non-hydrodynamic forces (e.g., particle's buoyant weight) is zero. This condition thus determines the time-averaged velocity \(\mathbf{V}_{p2}\) of a freely suspended particle.
We now discuss the behavior of the inertial force contributions of (10) in detail. The coefficient \(\mathcal{F}_{E}=\mathcal{R}^{*}\mathcal{G}_{E}\) associated with extensional flow is nonzero only for density-mismatched particles, and approaches real-valued constants in both the inviscid [\(\delta\ll 1\), \(\mathcal{F}_{E}\to\mathcal{F}_{E}^{\rm inv}=\frac{3(\tilde{\rho}-1)}{2(2\tilde{\rho}+1)}\)],
Figure 3: Coefficients (a) \(\mathcal{G}_{E}\), (b) \(\mathcal{G}_{\Delta}^{\kappa}\), (c) \(\mathcal{G}_{\Delta}^{\mathcal{R}}\) and (d) \(\mathcal{G}_{\Delta}^{\kappa\mathcal{R}}\) as obtained from the theory (curves) and from numerical solutions (symbols), showing real and imaginary parts. Approximations (12a,d) are indicated by dashed curves in panels (a,d).
and the viscous [\(\delta\gg 1\), \(\mathcal{F}_{E}\rightarrow\mathcal{F}_{E}^{\text{visc}}=-\frac{(\tilde{\rho}-1)}{2}\)] limits, in agreement with [19]. Thus, only velocities oscillating in phase with the extension rate lead to time-averaged forces in either limit. Notably, the inviscid and viscous limits are of opposite sign, indicating a reversal of the corresponding inertial force contribution with \(\delta\) (Fig. 4a). This reversal occurs when \(\delta\approx 1.5\) and increases weakly with density ratio (Fig. 4a; see also [24]). The imaginary part of \(\mathcal{F}_{E}\) vanishes in both limits, and achieves a maximum at intermediate \(\delta\). Both real (in-phase) and imaginary (out-of-phase) parts of \(\mathcal{F}_{E}\) are comparable for the \(O(1)\) values of \(\delta\) typical of applications and are both likely to be important in practical oscillatory flows.
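A short numerical sketch (again ours, with an illustrative density ratio) combines (7b) with the approximation (12a) to evaluate \(\mathcal{F}_{E}\) and locate the sign reversal of its real part:

```python
import numpy as np

def F_E(delta, rho):
    """F_E = conj(R) * G_E, using the mobility (7b) and the approximation (12a)."""
    lam = (1 + 1j) / delta
    R = -2 * lam**2 * (rho - 1) / (lam**2 * (2 * rho + 1) + 9 * lam + 9)
    G_E = -(3 * lam**2 + 0.8 * (1 + 9j) * lam + 9) / (4 * lam**2)
    return np.conj(R) * G_E

rho = 2.5                                     # illustrative density ratio
deltas = np.linspace(0.05, 10.0, 4000)
re_FE = np.array([F_E(d, rho).real for d in deltas])
idx = np.where(np.diff(np.sign(re_FE)))[0]    # indices where Re(F_E) changes sign
print("Re(F_E) reverses sign near delta =", deltas[idx])
print("inviscid limit 3(rho-1)/(2(2rho+1)) =", 3 * (rho - 1) / (2 * (2 * rho + 1)))
print("viscous  limit        -(rho-1)/2    =", -(rho - 1) / 2)
```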
The contribution of dilatation is somewhat more complicated as it depends on all three physical parameters (\(\delta\), \(\tilde{\rho}\) and \(\tilde{\kappa}\)); see (12). As with \(\mathcal{F}_{E}\), the coefficient \(\mathcal{F}_{\Delta}\) [Fig. 4(b,c)] asymptotes to real-valued constants in both the inviscid [\(\mathcal{F}_{\Delta}^{\text{inv}}=-\frac{\tilde{\kappa}-1}{2}+\frac{(\tilde{\rho}-1)}{2(2\tilde{\rho}+1)}\)] and viscous [\(\mathcal{F}_{\Delta}^{\text{visc}}=-\frac{1}{6}(\tilde{\kappa}-1)(\tilde{\rho}+2)-\frac{1}{3}(\tilde{\rho}-1)\)] limits. We plot the coefficient \(\mathcal{F}_{\Delta}\) (after normalizing by \(\mathcal{F}_{\Delta}^{\text{inv}}\)) for different density and two compressibility ratios in Fig. 4(b,c). While \(\mathcal{F}_{\Delta}\) may change sign with \(\delta\), this feature is not universal and only occurs for a limited range of density and compressibility ratios.
Finally, we verify the predictions of our theory using numerical solutions of the detailed flow in an axisymmetric setting under the small-amplitude perturbation scheme (3)-(4). We use the analytical formulation of the oscillatory flow (described above) and numerically solve for the secondary flow in detail, holding the particle stationary on average, and with no ambient Lagrangian streaming (\(\mathbf{V}_{p2}=\mathbf{v}_{2L}^{\infty}=\mathbf{0}\)). We then use (8) to calculate \(\langle\mathbf{F}\rangle\), which under the above setup isolates the inertial contributions of (10). Furthermore, constructing oscillatory flows with pairs of flow modes lets us identify the computed force with a single term in (10) (e.g., a simulation with \(\Delta_{1}^{\infty}=0\) and nonzero \(\mathbf{V}_{1}^{\infty}\) and \(\mathbf{E}_{1}^{\infty}\) identifies \(\mathcal{G}_{E}\)). The \(\mathcal{G}\) coefficients thus computed are in excellent agreement (typically to within \(4\%\)) with the exact results; see symbols in Fig. 3.
Figure 5 shows example streamlines of the numerically computed secondary flow for a polystyrene bead in water (see also [24]). Though quite complex near the sphere, the secondary flow exhibits the \(r^{-1}\) far-field velocity decay characteristic of Stokes flow driven by a point force. Interestingly, the associated point force is distinct from \(\langle\mathbf{F}\rangle\), as a part of the secondary stress (viz., the radiation pressure) is in _hydrostatic_ balance with a part of the Reynolds stress \(\left\langle\frac{2}{\delta^{2}}\mathbf{v}_{1}\mathbf{v}_{1}\right\rangle\) and does not engender a secondary flow. A detailed exposition of these features is left to future work.
Whether extension or dilatation ultimately dominates the time-averaged particle dynamics depends on the details of the ambient flow as well as the physical properties of the system. When the ambient flow surrounds (or is generated by) a geometric feature of size \(L_{g}\ll L_{c}\) (e.g. in microstreaming flows), the extension rate \(\propto v/L_{g}\) is much greater than the dilatation rate \(\propto v/L_{c}\). In this case, the extensional component of inertial force dominates the dilatational one, provided that the properties of the particle do not contrast too strongly with those of the fluid. In the same geometric situation, however, both contributions may be important for large density or compressibility contrasts (e.g. a surfactant-coated gas bubble in water). By contrast, acoustofluidic and acoustic levitation setups typically
Figure 4: Real (solid) and imaginary (dashed) parts of (a) \(\mathcal{F}_{E}\) and (b,c) \(\mathcal{F}_{\Delta}\) (normalized by their inviscid limits) for different density and compressibility ratios. The change in the signs of the real (imaginary) parts of the coefficients indicates a reversal in the direction of the force due to straining components that are in phase (out of phase) with the fluid velocity.
use \(L_{g}\simeq L_{c}\), so both extensional and dilatational contributions are equally important at the outset. The fully analytic theory developed here encompasses all of these situations over the entire range of \(\delta\), \(\tilde{\rho}\) and \(\tilde{\kappa}\), and is thus a powerful quantitative tool to understand the dynamics of suspended objects in a wide range of acoustic and oscillatory flow systems.
We are grateful to S. Agarwal, M. Gazzola and S. Hilgenfeldt for stimulating discussions, and thank the National Science Foundation for support through grant CBET-2143943.
|
2307.00255 | The "super-active" accretion phase of T CrB has ended | The symbiotic recurrent nova T CrB erupted for the second and last recorded
time in 1946. Following the outburst, the accretion rate onto its WD has
remained rather low with only occasional and minor flaring episodes, until in
late 2014 it entered a "super-active" phase (SAP) that peaked in April 2016:
the flux radiated by Balmer lines increased by two orders of magnitude,
accompanied by the appearance of strong HeI, HeII, and many other emission
lines. Following the sharp maximum, the intensity of the emission lines has
been steadily decreasing, reaching back the pre-SAP levels by mid-2023. The end
of SAP is also confirmed by the drop of $B$-band brightness to pre-SAP
conditions and the simultaneous re-appearance of a large-amplitude flickering.
This suggest that the accretion disk has emptied from the extra material that
has driven the "super active" state and has completed its transfer onto the WD,
setting the stage for a new and probably imminent nova eruption. | U. Munari | 2023-07-01T07:35:01Z | http://arxiv.org/abs/2307.00255v1 | # The "super-active" accretion phase of T CrB has ended
###### Abstract
The symbiotic recurrent nova T CrB erupted for the second and last recorded time in 1946. Following the outburst, the accretion rate onto its WD has remained rather low with only occasional and minor flaring episodes, until in late 2014 it entered a "super-active" phase (SAP) that peaked in April 2016: the flux radiated by Balmer lines increased by two orders of magnitude, accompanied by the appearance of strong HeI, HeII, and many other emission lines. Following the sharp maximum, the intensity of the emission lines has been steadily decreasing, reaching back the pre-SAP levels by mid-2023. The end of SAP is also confirmed by the drop of \(B\)-band brightness to pre-SAP conditions and the simultaneous re-appearance of a large-amplitude flickering. This suggests that the accretion disk has emptied from the extra material that has driven the "super active" state and has completed its transfer onto the WD, setting the stage for a new and probably imminent nova eruption.
Recurrent Novae (1366) -- Symbiotic stars (1674) -- Stellar accretion disks (1579)
## 1 Introduction
T CrB is a very famous recurrent nova (eruptions recorded in 1866 and 1946; Payne-Gaposchkin 1964, and references therein) and is also a symbiotic binary by harboring a red giant (RG) as the donor star to the massive white dwarf (WD) companion.
The life-cycle of a symbiotic binary as outlined by Munari (2019), is characterized by long accretion phases interspersed by shorter periods during which the material accumulated on the surface of the WD is burned nuclearly. If the accreted shell is not electron degenerate, the burning proceeds in thermal equilibrium for decades/centuries until most of the hydrogen fuel in the shell is consumed, the burning finally quenches down, and a new long-lasting phase of accretion initiates the next cycle (examples are V4368 Sgr, HM Sge, and V1016 Cyg). When the accreted shell is instead electron degenerate, the nuclear burning proceeds explosively resulting in a nova outburst, with most of the shell expelled in the process and the residual nuclear burning on the WD extinguishes in a few weeks/months, after which accretion resumes and a new cycle begins. In addition to T CrB, other well known symbiotic recurrent novae are RS Oph, V3890 Sgr, and V745 Sco.
Traditionally, accretion in symbiotic stars has been treated as a smooth process relatively stable over long periods of time (eg. Kenyon, 1986). This approach has progressively changed in favor of a highly-episodic interpretation of the accretion process, characterized by brief periods of (very) high accretion rates in-between longer intervals spent at much lower mass-transfer rates (eg. Luna et al., 2020; Munari et al., 2021).
Munari et al. (2016) has called attention to the fact that starting with 2015, T CrB entered a "super-active" accretion phase (SAP), characterized by a much brighter accretion disk as the result of a greatly enhanced mass-flow through it and then toward the central WD. The accretion level attained during SAP largely exceeded any other experienced by T CrB since the 1946 eruption. By noting that a similar event preceded the 1946 nova outburst, Munari et al. (2016) concluded that SAP is probably announcing a new and imminent eruption of T CrB, a view shared by Schaefer (2023).
## 2 Observations
We have been regularly recording fluxed spectra of T CrB for the last \(\sim\)35 yrs, initially with the Asiago 1.82m + B&C and since 2006 with the Asiago 1.22m + B&C telescope. For all the 1.22m spectra, we adopted a 300 ln/mm grating blazed at 5000 A that paired with a completely UV-transparent optical train and a highly UV-sensitive CCD detector (ANDOR iDus DU440A with a back-illuminated E2V 42-10 chip, 2048\(\times\)512 array, and 13.5 \(\mu\)m pixel size),
allows us to efficiently record spectra down to the \(\sim\)3100 A atmospheric cut-off imposed by the telescope's 1000 m altitude above sea level. Our 1.22m spectra of T CrB extend from 3200 to 7900 A at 2.3 A/pix dispersion. In addition to being fluxed thanks to nightly observations of spectrophotometric standard stars, their flux zero-point is fine-tuned against (nearly-)simultaneous \(BVR\) photometry, so that the flux error anywhere in the spectra rarely exceeds a few percent. This 2006-2023 set of T CrB spectra is therefore characterized by a highly stable instrumental set-up and robust IRAF calibration procedures, and constitutes an ideal sample for variability studies of spectral features over long intervals of time. A few of the spectra of T CrB here considered can be viewed in Munari et al. (2016).
Figure 1: Spectral and photometric changes prior to and during the ”super-active” accretion phase of T CrB. The bottom panel shows the \(B\)-band light-curve of T CrB collected by ANS Collaboration. The panels above plot the integrated flux of selected emission lines measured on low-resolution spectra, all obtained with the Asiago 1.22m + B&C (and 300 ln/mm grating). While H\(\beta\) is discernible in emission at all epochs, higher excitation/ionization lines turned on only during the ”super-active” accretion phase (the arrows point to missing-line epochs).
## 3 The End of the "Super Active" Accretion Phase
To trace the evolution of T CrB along the "super active" accretion phase, we have measured the integrated flux of a sample of emission lines on the 2006-2023 Asiago 1.22m + B&C spectra described in the previous section. The selected lines are H\(\beta\), HeI 5876, and HeII 4686, which are representative of low, medium, and high excitation/ionization conditions, respectively. Their absolute fluxes are plotted in Figure 1 along with the \(B\)-band lightcurve of T CrB as recorded by ANS Collaboration.
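The integrated line fluxes follow from the fluxed spectra by integrating each emission-line profile above the local continuum. The sketch below is a minimal illustration of such a measurement (the wavelength windows are purely illustrative and the actual measurement procedure adopted here is not detailed):

```python
import numpy as np

def integrated_line_flux(wave, flux, line_window, cont_windows):
    """Integrate the emission-line flux above a linear continuum fit.
    wave, flux   : wavelength (A) and absolutely fluxed spectrum (erg/s/cm2/A)
    line_window  : (lo, hi) range containing the line
    cont_windows : list of (lo, hi) line-free ranges used to fit the continuum"""
    mask = np.zeros_like(wave, dtype=bool)
    for lo, hi in cont_windows:
        mask |= (wave >= lo) & (wave <= hi)
    continuum = np.polyval(np.polyfit(wave[mask], flux[mask], 1), wave)
    lo, hi = line_window
    sel = (wave >= lo) & (wave <= hi)
    return np.trapz(flux[sel] - continuum[sel], wave[sel])   # erg/s/cm2

# e.g. H-beta (4861 A), with illustrative windows:
# f_hbeta = integrated_line_flux(wave, flux, (4840, 4885), [(4800, 4830), (4900, 4930)])
```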
Prior to 2014, both HeI 5876 and HeII 4686 were not visible in emission, and H\(\beta\) has been present but always at rather feeble levels. During this period the \(B\)-band lightcurve is dominated by the ellipsoidal distortion of the RG with superimposed the scattering due to the large-amplitude and always present flickering (eg. Zamanov & Bruch, 1998; Dobrotka et al., 2010, and references therein).
The start of the "super active" accretion phase in late 2014 is marked by the sudden appearance in emission of HeI 5876 and HeII 4686, a corresponding rise of H\(\beta\) (compare the spectra for 2014-11-02 and 2012-09-03 in Munari et al., 2016), and a large increase in \(B\)-band brightness caused by the rapidly brightening accretion disk. SAP reached its maximum in April 2016, when the flux of all emission lines sharply peaked, as illustrated by Figure 1. Around this epoch strong satellite UV and thermal radio emission were also recorded (Luna et al., 2018; Linford et al., 2019).
Following the maximum in April 2016, the flux of all emission lines has gone steadily decreasing, at a faster pace for higher excitation/ionization lines, and by mid-2023 they have returned to pre-SAP values, indicating that the "super active" accretion phase is finally over. Also the \(B\)-band photometric brightness has been quickly dropping during the last few months, while the flickering has returned to the usual large amplitude (Minev et al., 2023) compared to the much reduced impact it had on photometry collected around SAP maximum (Zamanov et al., 2016).
The disappearance of emission lines, the drop in \(B\)-band brightness, and the return to large amplitude flickering suggest that the accretion disk has emptied from the extra material that drove the "super active" state and has completed its transfer onto the WD. The shell around the latter may possibly still take a little time to cool and shrink down to favorable conditions, but the stage for a new nova outburst appears now inevitably set.
|
2310.14837 | Harnessing Attention Mechanisms: Efficient Sequence Reduction using
Attention-based Autoencoders | Many machine learning models use the manipulation of dimensions as a driving
force to enable models to identify and learn important features in data. In the
case of sequential data this manipulation usually happens on the token
dimension level. Despite the fact that many tasks require a change in sequence
length itself, the step of sequence length reduction usually happens out of
necessity and in a single step. As far as we are aware, no model uses the
sequence length reduction step as an additional opportunity to tune the models
performance. In fact, sequence length manipulation as a whole seems to be an
overlooked direction. In this study we introduce a novel attention-based method
that allows for the direct manipulation of sequence lengths. To explore the
method's capabilities, we employ it in an autoencoder model. The autoencoder
reduces the input sequence to a smaller sequence in latent space. It then aims
to reproduce the original sequence from this reduced form. In this setting, we
explore the methods reduction performance for different input and latent
sequence lengths. We are able to show that the autoencoder retains all the
significant information when reducing the original sequence to half its
original size. When reducing down to as low as a quarter of its original size,
the autoencoder is still able to reproduce the original sequence with an
accuracy of around 90%. | Daniel Biermann, Fabrizio Palumbo, Morten Goodwin, Ole-Christoffer Granmo | 2023-10-23T11:57:44Z | http://arxiv.org/abs/2310.14837v1 | # Harnessing Attention Mechanisms: Efficient Sequence Reduction using Attention-based Autoencoders
###### Abstract
Many machine learning models use the manipulation of dimensions as a driving force to enable models to identify and learn important features in data. In the case of sequential data this manipulation usually happens on the token dimension level. Despite the fact that many tasks require a change in sequence length itself, the step of sequence length reduction usually happens out of necessity and in a single step. As far as we are aware, no model uses the sequence length reduction step as an additional opportunity to tune the model's performance. In fact, sequence length manipulation as a whole seems to be an overlooked direction. In this study we introduce a novel attention-based method that allows for the direct manipulation of sequence lengths. To explore the method's capabilities, we employ it in an autoencoder model. The autoencoder reduces the input sequence to a smaller sequence in latent space. It then aims to reproduce the original sequence from this reduced form. In this setting, we explore the method's reduction performance for different input and latent sequence lengths. We are able to show that the autoencoder retains all the significant information when reducing the original sequence to half its original size. When reducing down to as low as a quarter of its original size, the autoencoder is still able to reproduce the original sequence with an accuracy of around 90%.
Neural networks, Natural language processing, Machine Learning
## I Introduction
Over the recent years, a lot of progress has been made in the field of natural language processing (NLP). This progress has been largely driven by the Transformer, introduced by Vaswani et al. in 2017 [1]. The power of Transformer models is based on their ability to avoid recurrence in favour of an easy parallelizable attention mechanism while retaining the ability to capture contextual information in sequential data. Since then, many Transformer-based models have been developed, studied, and used to reach ever-better-performing NLP models. Among these are OpenAI's GPT models [2, 3, 4], XLNet [5] and, BERT [6] and its popular derivatives [7, 8]. OpenAI's most recent GPT iterations, ChatGPT [9] and GPT4 [10] have generated a lot of media attention and demonstrated the power and fast progress of such models in NLP.
Nearly all NLP tasks require the model to change the shape of the input sequence at some level in the workflow. The desired output shape rarely corresponds to a sequence's input shape. A core problem is the reduction of sequences with many tokens down to a single token, as required in many classification tasks or in sentence embedding. Despite this, few Transformer models deviate from the standard practice of capturing data features solely on the word token level. The sequence reduction is usually performed in a way that does not allow for a lot of exploration and tuning on the sequence level (see Section II).
The ability to capture contextual information and data features on the word token level stems from the attention mechanism used in Transformer models. The scaled dot-product attention [1] generates a new contextual representation for each token in a sequence by calculating a weighted sum over the tokens in the entire sequence. The weights, which correspond to an attention map between all tokens, result from query, key and value vector representations generated for each token. These vectors are created from the tokens in the input sequence. Thus, the scaled dot-product naturally captures contextual information on a word token level.
Interestingly, by design, the scaled dot-product attention does allow for direct manipulation of the number of tokens in the sequence, as it was first introduced in a machine translation task [1]. The number of tokens output by scaled dot-product attention is dictated by the number of query vectors. While it is clear how to generate differing query vector numbers in a machine translation setting, in general it is not as straightforward how to initialize query vectors so that the number of query vectors differs from the input sequence. Due to this, it seems that in Transformer models, the sequence reduction step itself is done as a necessity and seldom seen as an opportunity for additional modelling and tuning.
In general, the investigation and use of more nuanced techniques to manipulate the sequence length is of interest. Manipulating the sequence length could offer an additional axis and tool to tune and create better and more potent latent spaces that capture the patterns in sequential data. Additionally, manipulating sequence length could allow for avoiding or alleviating the problem of empty calculations due to padding sequences or putting more context into input-restrictive models such as BERT and other Transformer models.
In this work, we investigate the ability to use scaled-dot
product attention to capture features by directly manipulating the number of tokens in a sequence instead of the dimensions of its tokens. To this end, we introduce an additional scaling matrix into the scaled dot-product to enable it to manipulate the number of tokens in a sequence more freely. We then employ this reducing scaled dot-product attention in an autoencoder setting. The autoencoder encodes sequences in a latent space with fewer tokens and recreates the entire original sentence from the token-reduced latent space. While our method introduced strong restrictions regarding the uniformity of input sentences, we argue that these restrictions overlap and synergize well with existing input shape limitations of existing popular attention models such as BERT-based models. In particular tasks with very long input sequences will be less affected due to already existing limitations. Overall, in this paper:
* We introduce reducing scaled dot-product attention. By adding a simple scaling matrix to the query vector generation process in the scaled dot-product attention process, we enable it to directly manipulate sequence length.
* We investigate the reducing scaled dot-product's ability to retain information when reducing sequence lengths for different reduction sizes and input sequence lengths.
* We build and train a novel attention-based autoencoder that creates a latent space by manipulating sequence dimensions instead of token dimensions.
To the best of our knowledge, there is currently no other detailed work investigating the direct manipulation of sequence lengths with Transformer-like attention mechanisms. Further, we are unaware of other investigations using nuanced sequence length manipulation as a primary tool to encode contextual information in latent spaces.
Thus, we hope that the explorations in this work spark inspiration in other researchers to explore the manipulation of sequence lengths in addition to token dimension. We are further convinced that, in time, more investigations in this direction will reveal more natural and less restrictive methods to directly manipulate sequence length with attention.
## II Related Work
When handling sequential data, most machine learning tasks require the model to change the shape of the input to the shape of the desired output. The desired output can range from a static number of classes in classification tasks to a new sequence of different shape in text generation tasks to a single token in sentence embedding tasks. This sequence reduction step is performed in different ways depending on the model.
In recurrent models, the sequence length reduction is achieved in tandem with the context capturing mechanism. Gated-recurrent Unit (GRU) [11] models or long short-term memory (LSTM) [12] models process the information sequentially and the last hidden state captures the contextual information of the entire sequence. While naturally reducing a sequence down in length, difficulties in parallelization and optimization make these models challenging in their own regard [13].
The majority of current state-of-the-art models in NLP are based upon the attention mechanism introduced with the Transformer model [1]. BERT and the GPT models make use of the basic Transformer encoder/decoder blocks, while other models keep the overall structure of a Transformer model and replace the scaled dot-product and multi-head attention mechanisms with more efficient attention models. For example, the Longformer [14] replaces the scaled-dot product with an attention mechanism that scales better with the length of an input sequence. Unlike in recurrent models, the attention mechanism does not naturally yield a reduction down to one token for classification tasks. While the mechanism itself theoretically allows to change the length of the sequence during the attention step, in practice, this has been rarely used due to the difficult task of initializing the query vector with a new length (see III-A).
Transformer models solve the necessary reduction mainly in two different schemes. In the first scheme, reminiscent of recurrent models, a single token of the last Transformer block is designated to capture and embed the entire sequence in a single token. BERT approached this problem by introducing a CLS-token in the tokenization process and prepending it to the input sequence. Similarly, Gao et al. [15], Hou et al. [16], and Wang et al. [17] use a CLS-token to create single token sentence representations and further improve them with contrastive learning schemes. Feng and Yang et al. [18] create language agnostic BERT sentence embeddings by using the \(\ell_{2}\) normalized CLS-token of the last encoder block.
The second scheme employs standard pooling algorithms, usually averaging, to combine all hidden word tokens into a single token. Sentence-BERT [19] uses BERT as a base model in a siamese model setup and creates sentence embeddings by averaging over the tokens of the last Transformer block. Refined SBERT [20] extends this approach to a manifold space. Correspondingly, Li et al. [21] find improved performance averaging over the hidden tokens of the last two Transformer blocks instead of using the CLS-token. Kim et al. [22] follow Li et al. and combine it with a contrastive learning scheme to create sentence embeddings. Barkan et al. [23] average over the last 4 Transformer blocks and Park et al. [24] pool over all hidden encoder embeddings to generate a pooled sentence embedding for the purpose of building a variational autoencoder. Sentence T5 [25] employs both approaches in different settings. In an encoder-only setting the first token of the last Transformer block is used as a sentence embedding, and in an encoder-decoder setting the sequence is reduced by averaging over all encoder output tokens. Overall, these two approaches are the current state-of-the-art in reducing sequences down in Transformer models.
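For concreteness, the two reduction schemes can be sketched in a few lines of PyTorch; the checkpoint name below is only an illustrative choice and does not correspond to any specific model above.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

batch = tokenizer(["An example sentence.", "Another one."],
                  padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state              # (batch, seq_len, 768)

# Scheme 1: designated token (here the first, CLS position) as sentence embedding
cls_embedding = hidden[:, 0]                                  # (batch, 768)

# Scheme 2: mean pooling over the non-padding tokens
mask = batch["attention_mask"].unsqueeze(-1).float()          # (batch, seq_len, 1)
mean_embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1) # (batch, 768)
```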
Another relevant research area focuses on token pruning. Token pruning aims to avoid unnecessary computation by removing either parameters or word tokens from Transformer. In particular, strategies focused on eliminating word tokens in successive attention layers are of considerable interest. PowerBERT [26] computes the importance of each token and only passes on a number of tokens. The importance is taken as the sum of the attention paid to a token by all other tokens in the sequence. In a similar way, length adaptive Transformers [27] prune a random number of tokens, according to importance, after each attention layer. Kim et al.
[28] avoid the fixed number of tokens pruned by introducing an importance threshold, pruning all tokens falling below this threshold. TR-BERT [29] devised a reinforcement learning scheme in which a module is trained to decide which tokens to prune. Meanwhile, Zero-TPrune [30] prunes tokens according to importance and similarity to other tokens. While the goal of reducing the sequence length matches with token pruning models, the motivation for token pruning solely lies in saving computational resources. The attention process itself is not changed and the question of using the sequence reduction as a training tool in attention models remains open.
Our proposed approach attempts to make use of the attention mechanism's ability to change the sequence length directly in the attention step. This is done by scaling the query vector generation process with additional parameters. Similar to our approach, Fang et al. create a conditional variational autoencoder by using attention average blocks [31]. These attention average blocks use a single learnable q-vector token in a Multi-head attention step to reduce the sentence down to a single-token representation. We extend this approach by allowing the attention process to reduce an arbitrary number of tokens. While introducing strong restrictions, in specific tasks, this new approach will offer a more nuanced way to manipulate the shape of sequences with attention mechanisms.
## III Model
Our model follows a basic encoder-decoder structure (see Fig. 1). An input length sequence \(N\) is subjected to the attention-based encoder that reduces the sequence by \(k\) tokens. The sequence length in latent space is thus reduced to \(N-k\) tokens. This reduced latent space is then given to a decoder to reproduce the original input sequence. Thus, the model learns to compress the information of a full-length sequence into a reduced form and then reconstruct the complete sequence from its reduced form.
The reducing attention encoder/decoder blocks are, in essence, applications of the scaled dot-product attention introduced by Vaswani et al. [1] combined with additional scaling weights to allow for sequence shape manipulation. We chose this form of attention as it is well established in existing Transformer models, and it fundamentally already allows for sequence manipulation.
### _Attention can manipulate sequence length directly_
Looking at how the scaled dot-product is calculated immediately makes apparent how it allows for direct manipulation of the sequence shape. By generating query, key and value vectors \(q,k,v\) from original input \(x\), the attention \(\mathcal{A}\) is calculated via the formula:
\[\mathcal{A}(Q,K,V)=\mathrm{softmax}(\frac{QK^{T}}{\sqrt{d_{k}}})V \tag{1}\]
where \(Q,K,V\) are the respective matrices consisting of the query, key, and value vectors for each token in the sequence. Looking at the dimensions of the variables in the formula clarifies how the sequence shape is directly manipulated. Considering an input sequence with \(n\) tokens and word embedding dimension of \(d_{m}\), the input sequence matrix \(X\) has the dimension of \(\mathds{R}^{n\times d_{m}}\). Multiplying \(X\) with the respective weight matrix \(W^{Q,K,V}\) generates the respective query, key and value matrices:
\[Q,K,V=X\times W^{Q,K,V} \tag{2}\]
The dimension of the \(Q,K,V\) matrices are then given by
\[\mathds{R}^{n\times d_{m}}\times\mathds{R}^{d_{m}\times d_{q,k,v}}=\mathds{R}^ {n\times d_{q,k,v}}\]
with the respective vector dimension \(d_{q},d_{k},d_{v}\) that can be arbitrarily chosen, whereas the dimensions of the query and key vectors need to match (\(d_{q}=d_{k}\)). It is a common convention in Transformer models to set the query, key, and value dimensions to the same value, \(d_{q}=d_{k}=d_{v}\), as was done in the original Transformer paper. We can see that the sequence length \(n\) is carried over from the original input and is equal for all matrices. When artificially labeling the sequence length of the query, key and value matrices, we get the respective dimension \(\mathds{R}^{n_{q,k,v}\times d_{q,k,v}}\). Plugging the Q, K and V matrices into equation 1 and looking at the dimensions, it becomes apparent that the dimension of the scaled dot-product has the form
\[\mathds{R}^{n_{q}\times d_{v}}\]
This shows that the output sequence shape is only dictated by the number of tokens in the query matrix. Thus, the sequence shape can be directly manipulated if the number of tokens in the query matrix differs from the original sequence length, while the key and value matrices still carry the full information of the complete input sequence. Consequentially, the difficulty of manipulating the sequence shape lies in initializing the query vector matrix with a different shape than its key and value counterparts (\(n\neq n_{q}\)).
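This shape argument is easy to verify with a toy example (the sizes are arbitrary, and the query tokens below are simply drawn at random to stand in for some other source of queries):

```python
import torch
import torch.nn.functional as F

n, n_q, d_m, d_k = 8, 3, 16, 16              # toy sizes
X = torch.randn(n, d_m)                       # input sequence with n tokens
W_Q, W_K, W_V = (torch.randn(d_m, d_k) for _ in range(3))

Q = torch.randn(n_q, d_m) @ W_Q               # n_q query tokens from another source
K, V = X @ W_K, X @ W_V                       # keys/values from the full input sequence

out = F.softmax(Q @ K.T / d_k**0.5, dim=-1) @ V
print(out.shape)                              # torch.Size([3, 16]) -> only n_q tokens remain
```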
Fig. 1: Model architecture. The encoder reduces the input sequence from N tokens down to N-k tokens. This reduced latent space is then given to a decoder to reproduce the original input sequence.
### _Initializing query vector with additional scaling matrix_
The simplest method to initialize the query vectors is by introducing an additional scaling matrix of trainable parameters \(W^{S}\), with the dimensions \(\mathds{R}^{n_{q}\times n}\), where \(n_{q}\) and \(n\) denote the new and old sequence length, respectively. Inserting \(W^{S}\) into equation 2 yields a query matrix of the desired shape.
\[Q=W^{S}\times(X\times W^{Q})\]
with
\[\mathds{R}^{n_{q}\times n}\times\mathds{R}^{n\times d_{m}}\times\mathds{R}^{d _{m}\times d_{q}}=\mathds{R}^{n_{q}\times d_{q}}\]
Thus, the original sequence length \(n\) given by the input \(X\) has been changed to the new sequence length \(n_{q}\), where \(n_{q}\) is an arbitrarily chosen value.
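A minimal sketch of this reducing attention step is given below; the module and parameter names are ours, and any implementation detail beyond the equations above is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReducingAttention(nn.Module):
    """Scaled dot-product attention whose query path is mapped from n to n_q tokens
    by a trainable scaling matrix W_S, so that the output carries n_q tokens."""
    def __init__(self, n, n_q, d_model, d_qkv):
        super().__init__()
        self.W_Q = nn.Linear(d_model, d_qkv, bias=False)
        self.W_K = nn.Linear(d_model, d_qkv, bias=False)
        self.W_V = nn.Linear(d_model, d_qkv, bias=False)
        self.W_S = nn.Parameter(torch.randn(n_q, n) / n ** 0.5)  # fixes the input length to n
        self.scale = d_qkv ** 0.5

    def forward(self, x):                                         # x: (batch, n, d_model)
        q = torch.einsum("qn,bnd->bqd", self.W_S, self.W_Q(x))    # (batch, n_q, d_qkv)
        k, v = self.W_K(x), self.W_V(x)                            # (batch, n, d_qkv)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return attn @ v                                            # (batch, n_q, d_qkv)

encoder = ReducingAttention(n=512, n_q=256, d_model=768, d_qkv=768)
print(encoder(torch.randn(2, 512, 768)).shape)                     # torch.Size([2, 256, 768])
```

A decoder that expands the latent sequence back to the original length can use the same construction with \(n\) and \(n_{q}\) swapped.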
### _Method limitations_
This scaling matrix method immediately exposes some of the constraints and challenges of this sequence shape-changing approach.
While neural networks need the input to have the same dimension on the token level, they are more flexible regarding sequence length. Adding the scaling matrix \(W^{S}\) negates this flexibility and fixes the allowed input and output sequence lengths to specific values. This additional strong constraint is undesirable in many ways. Sequences, especially in NLP, seldom have the same length across the entire dataset. Thus, all input sequences presented to the model will need to be padded or constrained to the same length, which introduces additional preprocessing and computational costs due to the increased padding. Additionally, if we have to pad sequences with more tokens than we effectively reduce, then in the ideal case the model would merely be removing the padding that we had to add in order to use it.
For instance, consider a scenario where we pad all sequences in a dataset to match the length of the longest one and then train the model to decrease this sequence length by 50%. If the dataset's average sequence length is less than half of the longest sequence's length, we will end up incorporating more padding tokens than the number of tokens we're reducing, on average. This will likely impact the model's desired ability to identify and compress information-carrying features in the sequence, as it will mainly reduce padding tokens void of information.
This strong additional constraint makes this model undesirable for tasks that handle short sequences and sequences of varying lengths. In cases where the length of sequences is already fixed or the sequences' lengths are similar, the input length constraint this model introduces has less of a negative impact. We argue that this constraint will effectively have no negative consequence on tasks considering long sequences. Many state-of-the-art Transformer models already have an input limit of 512 to 728 tokens due to the associated computational cost of longer sequences. This already establishes a constraint for the input length, and our model's fixed input length has no additional effect.
Consequentially, NLP tasks where the input sequence length exceeds the transformer-imposed input length limit of 512 tokens can benefit from this model, even with the strong constraints. Part of future research will be to investigate methods to make the initialization of the query matrix more flexible to enable the model to handle shorter sequences better. Nevertheless, many tasks and use-cases could benefit from this model approach, such as long document classification.
## IV Experiments
In our experiments, we employ the proposed method in an autoencoder setting. Our goal is to investigate the model's ability, and its limits, in retaining most of the important information of a sequence while reducing the length of the sequence. Thus, we define the performance of the model as the model's ability to reconstruct the original sequence from its latent space representation. If the model can reconstruct the original sequence from its reduced form, we argue that the reduced form has retained all the information of the original full-size sequence. If the model cannot reproduce the original sequence fully, it can be surmised that some critical information has been lost in the reduction process.
The goal of our investigation is to study and explore the limits of the reduction process. How far can we reduce a sequence without it losing critical information? How does the model's performance depend on the input sequence length and the desired length of the latent space?
### _Hyperparameters and Setup_
To answer these questions, we train our model in several different input and latent sequence length settings. In the first experiment, we fix the input sequence length and systematically reduce the latent sequence length. This allows us to investigate the model's learning behaviour for different reduction sizes and potential limits in how much a sequence can be reduced in this setting. For the second experiment we repeat the first experiment for different input sequence lengths. This explores how the model is affected by different input sequence lengths and whether the input sequence length has an influence on how much we can reduce it without losing information.
Table IV-A lists the hyperparameters and settings used in this study. The model was trained for a maximum of 20 epochs with a patience of 5 epochs. The performance of the model is recorded as the token-wise accuracy between the original sequence and the reconstructed sequence. Thus, an accuracy of \(1.0\) indicates that every token in the entire sequence was correctly reconstructed, while an accuracy of \(0.5\) would indicate that only half of the tokens in the sequence were correctly decoded. The model was trained using cross entropy loss and the learning rate was linearly reduced from its start value of \(0.001\) to \(0.0001\) over the first five epochs. For input sequences above 256 tokens, the learning rate was kept static at \(0.0001\). The chosen hyperparameter settings are the result of a hyperparameter search for the case of reducing a 512 input token sequence down to a 256 latent token sequence. Initially, a dropout of 30% was considered, but during the hyperparameter search it became apparent that not adding any dropout yielded the best results for longer sequences.
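A sketch of the reconstruction metric and the learning-rate schedule described above (the optimizer choice is an assumption; only the schedule and the cross-entropy loss are specified):

```python
import torch
import torch.nn as nn

def tokenwise_accuracy(logits, targets):
    """Fraction of positions where the reconstructed token equals the original one.
    logits: (batch, seq_len, vocab_size), targets: (batch, seq_len) of token ids."""
    return (logits.argmax(dim=-1) == targets).float().mean().item()

dummy_model = nn.Linear(10, 10)                     # stand-in for the autoencoder parameters
optimizer = torch.optim.Adam(dummy_model.parameters(), lr=1e-3)   # optimizer type assumed
# linear decrease from 1e-3 to 1e-4 over the first five epochs, constant afterwards
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: max(0.1, 1.0 - 0.18 * epoch))
criterion = nn.CrossEntropyLoss()
```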
### _Dataset_
The model was trained with data from the wikipedia dataset [32]. Throughout this study we used data from the English '20220301.en' data dump. As the model has to be trained many times throughout our investigations, as a time-saving measure we only used the first 30% of the dataset in our simulations. This leaves us with a dataset consisting of roughly \(1.9\) million samples which were further divided into an 80/20 train/test split. Each sample in the dataset corresponds to the contents of a single wikipedia page. To generate samples of the desired input sequence length, we only used the first \(N\) tokens of the respective wikipedia page. Thus, we create a single sample per page and we observe the same number of data samples for different input sequence lengths. The data was tokenized via the Transformers library, using the pretrained BertTokenizer [33].
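The dataset preparation can be sketched as follows; the tokenizer checkpoint and the exact slicing calls are assumptions based on the description above.

```python
from datasets import load_dataset
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")     # checkpoint is assumed
wiki = load_dataset("wikipedia", "20220301.en", split="train[:30%]")

N = 512  # desired input sequence length for a given experiment

def first_n_tokens(example):
    # keep only the first N tokens of each page, padding shorter pages to length N
    ids = tokenizer(example["text"], truncation=True, padding="max_length",
                    max_length=N)["input_ids"]
    return {"input_ids": ids}

wiki = wiki.map(first_n_tokens)
split = wiki.train_test_split(test_size=0.2)      # 80/20 train/test split
train_set, test_set = split["train"], split["test"]
```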
## V Results and Discussion
### _Impact of Sequence Reduction on Model Accuracy_
To start testing our hypothesis, we fix the input sequence length to 512 tokens and we train the model to reduce it down to different latent sequence lengths. The results of these simulations are shown in Figure 2. The graph depicts the recorded validation accuracy over training epochs.
As expected, when not reducing the input sequence at all _(512to512)_ the model is able to reproduce the original sequence completely. More interesting is the fact that the model continues to show this near-perfect performance when the model reduces the original sequence by half _(512to256)_. Even when reducing the original sequence down to \(25\%\) of its original length _(512to128)_, the model still reaches a reconstruction accuracy of over \(90\%\). The performance of the model starts to fall more significantly when approaching reduction ratios of \(20\%\) or lower (see Fig. 3).
These results prove that in the case of an input sequence length of 512 tokens, we can reduce the sequence length by half without losing any critical information. Moreover, most of the critical information are retained even when the original sequence is reduced down to \(25\%\).
### _Impact of Input Sequence Length on Model Accuracy_
To further understand the behaviour of our model we then focus on the impact of different Input Sequence Lengths on model accuracy. Figure 3 illustrates how the model performs with varying degrees of input sequence reduction. To allow a more direct and easy comparison between simulations with different input sequence lengths, the x-axis expresses the latent sequence length as a percentage of the corresponding input.
In Figure 3, focusing on the case of 512 input tokens, we can observe the previously described behaviour. Up to a \(50\%\) reduction, the model achieves near-perfect accuracy. However, greater reductions result in a noticeable decline in performance. The model behaves differently with shorter input sequences: performance degrades more quickly for shorter inputs compared to longer ones. Simulations with inputs longer than 256 tokens maintain performance above a \(0.9\) accuracy level even at a \(25\%\) reduction, whereas those with shorter inputs fall below this threshold. In particular, when looking at the simulations below a \(64\) input sequence length, we observe stronger fluctuations in performance.
The cause of this increased drop in performance for smaller input sequence lengths is not immediately evident.
Fig. 3: Reconstruction accuracy over latent sequence lengths expressed as a percentage of respective number of tokens in the input sequence length.
Fig. 2: Reconstruction accuracy over training epochs for different latent sequence lengths. The input sequence length was fixed to 512 tokens.
One possible explanation could be the nature of the sequences as natural text, which has inherent structural rules. Longer sequences may have more of these rules and repetitive elements, potentially making them easier to reduce efficiently. For instance, a paragraph made up of multiple sentences will exhibit repeated grammatical structures and commonly used words. Such redundancy in longer sequences might be more effectively compressed by the model in the latent space compared to shorter sequences.
Another potential reason could be the quantized nature of our sequences. A single token carries more weight in a 64-token sequence than in a 512-token sequence. Despite examining proportional reductions in our study, the effect of removing a single token could be more impactful in models with shorter input lengths.
Finally, the decline in performance might also be attributed to the fewer latent tokens available in models with shorter input sequences. For instance, a \(25\%\) reduction from 64 tokens leaves only 16 latent tokens, while the same reduction from 512 tokens yields 128 latent tokens. Although larger sequences require encoding more information, the greater number of latent tokens could allow the model to learn and capture more nuanced relationships.
Overall, we can summarize that for all observed input sequence lengths, we can at least halve the sequence in latent space without loss of significant information. In a simplified view, we could say that each latent space token is able to carry the information of two input tokens without losing information. However, attempting to condense the information of three tokens into one introduces the first small but significant loss in information. When reducing a sequence down to \(30\%\) of its original length or more, we retain an accuracy above \(0.9\). However, reductions below this level lead to a substantial loss of information, the extent of which varies depending on the initial number of input tokens.
### _Performance variance due to initialization_
In initial simulation runs, we used a fixed learning rate of \(0.0001\) for all simulations. While performing well for large input sequence lengths, when scaling down to smaller input sequence lengths we could observe a large variance in performance depending on the initialization of the model itself. To further examine this behaviour we simulated three different input/latent sequence length combinations \(10\) times with different initialization seeds. Figure 4 shows the results of these simulations reporting the accuracy variance over the training epochs. The error-bands in the graph represent the standard error.
Looking at the curve for the larger input sequence length, we see a very small variance in accuracy after training. Thus, in the case of the large input sequence, the model was able to consistently reach a high performance. In contrast, the simulation with an input sequence length of \(128\) shows a significantly larger variance in accuracy. A more detailed quantification of model behavior is presented in Figure 5, which shows boxplots of the accuracy variation across each epoch. These boxplots capture the range of accuracies achieved over \(10\) runs, with the upper and lower bounds representing the maximum and minimum accuracies attained. Looking at the accuracy after 20 epochs of training, the accuracy averages around \(0.9\) for the \(128\)-token input sequence length. The arms of the boxplot further indicate that the minimum and maximum reached accuracies range from close to \(1\) down to \(0.7\). This large discrepancy in performance between different initializations suggests that in some cases the model got stuck in a local minimum and did not reach the possible global minimum. This problem was alleviated by increasing the initial learning rate by a factor of \(10\) and linearly decreasing it over the next 5 epochs down to its original value of \(0.0001\). The simulation denoted _128to72_* depicts the results of this learning rate approach. We can see that the model still shows higher variability after the first epoch but quickly finds the correct global minimum in subsequent epochs. Overall, the learning rate was the only parameter that had an effect on this problem. Making the model more complex by increasing the embedding or attention dimension did not affect the variance for smaller input sequence lengths.
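For reference, the warm-up schedule just described could be expressed with a standard learning-rate scheduler, as in the following sketch; the model and optimizer choice are placeholders and not taken from the actual training setup.

```python
import torch

base_lr = 0.0001                                   # default learning rate from the text
model = torch.nn.Linear(16, 16)                    # stand-in for the actual autoencoder
optimizer = torch.optim.Adam(model.parameters(), lr=10 * base_lr)  # start 10x higher

def lr_factor(epoch: int) -> float:
    # Linearly decay from 10x the base rate (epoch 0) down to the base rate (epoch 5+).
    return 0.1 if epoch >= 5 else 1.0 - 0.9 * (epoch / 5)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(20):
    # ... one training epoch over all batches would run here ...
    scheduler.step()
```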
We hypothesize that altering the input sequence length affects the smoothness of the error function, leading to more distinct local minima that are challenging to circumvent.
## VI Conclusion
In this study, we introduced a new way to use the well-established scaled dot-product attention of the Transformer to directly manipulate the shape of sequences. The scaled dot-product directly allows changing the length of a sequence by adding a scaling matrix to the query-vector generation. We have developed a new autoencoder that compresses the original input sequence into a shorter latent sequence and then reconstructs the original input from this compressed latent space. In the current literature, no previous work has looked at using sequence-length manipulation as an encoding tool in attention-based deep learning networks.
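As a concrete illustration of this mechanism, the sketch below shows one way a length-changing scaled dot-product attention step can be written in PyTorch. Class and variable names are illustrative assumptions rather than the authors' implementation, and the multi-head structure is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LengthChangingAttention(nn.Module):
    """Scaled dot-product attention whose output length differs from its input
    length: a learned scaling matrix on the sequence axis produces `out_len`
    queries from `in_len` input tokens (a sketch, not the reference code)."""

    def __init__(self, in_len: int, out_len: int, d_model: int):
        super().__init__()
        self.len_scale = nn.Parameter(torch.randn(out_len, in_len) / in_len ** 0.5)
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_len, d_model)
        q = self.q_proj(self.len_scale @ x)        # (batch, out_len, d_model)
        k = self.k_proj(x)                         # (batch, in_len, d_model)
        v = self.v_proj(x)                         # (batch, in_len, d_model)
        attn = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v                            # (batch, out_len, d_model)

encoder = LengthChangingAttention(in_len=512, out_len=256, d_model=128)
latent = encoder(torch.randn(2, 512, 128))         # latent shape: (2, 256, 128)
```

Because the number of query positions is a fixed learned parameter, the output length is decoupled from the input length; a mirrored module with in_len and out_len swapped would serve as the decoding half of such an autoencoder.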
Fig. 4: Illustration of accuracy variance for different input sequence lengths. Depicted are error bands for three different simulation settings. The bold line depicts the mean value. The simulation denoted with \(*\) ran with an increased initial learning rate of \(0.001\) that was linearly decreased over 5 epochs to the default learning rate of \(0.0001\).
Our subsequent exploration of the model's capabilities reveals interesting insights into its reduction behaviour. We observed that, in latent space, the original sequence length can be reduced by half without information being lost. Reducing the original sequence down to \(30\%\) of its original length still allowed for a reconstruction accuracy of over \(90\%\). We further observed that the accuracy starts to drop faster for smaller input sequence lengths. We hypothesised that this is most likely due to a combination of two factors: long natural-text sequences carry a lot of recurring structural and lexical information, and each individual token carries more weight and has more impact in smaller sequences.
We further observed that the model is more prone to getting stuck in local minima for smaller input sequence lengths. Increasing the learning rate proved to alleviate this effect. Thus, choosing the appropriate learning rate for the individual use case is of great importance. We observed that models with larger input sequence lengths were able to perform well with lower learning rates than their smaller counterparts.
One of the larger drawbacks of this model is the introduction of an additional strong restriction on the possible shape of the model's input. In its current form, the model requires the input shape to be static. This means the model cannot process sequences of different lengths, as most NLP models are able to do. This might prove problematic in NLP settings, where sequences are seldom of the same length.
Nevertheless, we argue that in the field of NLP there are use cases and tasks where this additional restriction has little to no effect. This is further supported by the already existing and common upper limit of \(512\) and \(768\) token sequences in Transformer models.
This study opens up exciting further investigations. Applying this model to sequences outside of the NLP setting would allow studying whether the faster drop-off in performance for smaller input sequence lengths is due to the specific structure of natural text sequences or to other effects. A further study into making the model more flexible regarding the input sequence length would make it more interesting for tasks with highly varying sequence lengths. Furthermore, we investigated the ability to reduce sequences in a single attention step; a promising next step would be to investigate the reduction limits when considering multiple attention steps. Ultimately, we believe that the direct and more nuanced manipulation of sequence lengths is an overlooked avenue in tuning and designing machine learning models, especially in cases where sequential data needs to be reduced down to one vector.
|
2305.01732 | High-Resolution Synthetic RGB-D Datasets for Monocular Depth Estimation | Accurate depth maps are essential in various applications, such as autonomous
driving, scene reconstruction, point-cloud creation, etc. However,
monocular-depth estimation (MDE) algorithms often fail to provide enough
texture & sharpness, and also are inconsistent for homogeneous scenes. These
algorithms mostly use CNN or vision transformer-based architectures requiring
large datasets for supervised training. But, MDE algorithms trained on
available depth datasets do not generalize well and hence fail to perform
accurately in diverse real-world scenes. Moreover, the ground-truth depth maps
are either lower resolution or sparse leading to relatively inconsistent depth
maps. In general, acquiring a high-resolution ground truth dataset with
pixel-level precision for accurate depth prediction is an expensive, and
time-consuming challenge.
In this paper, we generate a high-resolution synthetic depth dataset (HRSD)
of dimension 1920 X 1080 from Grand Theft Auto (GTA-V), which contains 100,000
color images and corresponding dense ground truth depth maps. The generated
datasets are diverse and have scenes from indoors to outdoors, from homogeneous
surfaces to textures. For experiments and analysis, we train the DPT algorithm,
a state-of-the-art transformer-based MDE algorithm on the proposed synthetic
dataset, which significantly increases the accuracy of depth maps on different
scenes by 9 %. Since the synthetic datasets are of higher resolution, we
propose adding a feature extraction module in the transformer encoder and
incorporating an attention-based loss, further improving the accuracy by 15 %. | Aakash Rajpal, Noshaba Cheema, Klaus Illgner-Fehns, Philipp Slusallek, Sunil Jaiswal | 2023-05-02T19:03:08Z | http://arxiv.org/abs/2305.01732v1 | # High-Resolution Synthetic RGB-D Datasets for Monocular Depth Estimation
###### Abstract
Accurate depth maps are essential in various applications, such as autonomous driving, scene reconstruction, point-cloud creation, etc. However, monocular-depth estimation (MDE) algorithms often fail to provide enough texture & sharpness, and also are inconsistent for homogeneous scenes. These algorithms mostly use CNN or vision transformer-based architectures requiring large datasets for supervised training. But, MDE algorithms trained on available depth datasets do not generalize well and hence fail to perform accurately in diverse real-world scenes. Moreover, the ground-truth depth maps are either lower resolution or sparse leading to relatively inconsistent depth maps. In general, acquiring a high-resolution ground truth dataset with pixel-level precision for accurate depth prediction is an expensive, and time-consuming challenge.
In this paper, we generate a high-resolution synthetic depth dataset (HRSD) of dimension \(1920\times 1080\) from Grand Theft Auto (GTA-V), which contains 100,000 color images and corresponding dense ground truth depth maps. The generated datasets are diverse and have scenes from indoors to outdoors, from homogeneous surfaces to textures. For experiments and analysis, we train the DPT algorithm, a state-of-the-art transformer-based MDE algorithm on the proposed synthetic dataset, which significantly increases the accuracy of depth maps on different scenes by 9%. Since the synthetic datasets are of higher resolution, we propose adding a feature extraction module in the transformer's encoder and incorporating an attention-based loss, further improving the accuracy by 15 %.
+
Footnote †: This work was partially funded by the German Ministry for Education and Research (BMBF) under the grant PLIMASC.
## 1 Introduction
The success of artificial intelligence in several computer vision applications, together with the low cost, small size, and wide availability of monocular cameras, has recently driven interest in monocular depth estimation. Monocular depth estimation (MDE) algorithms are mainly based on neural networks and have shown great ability in estimating depth from a single image [23, 54, 20, 6, 20, 32]. These algorithms must leverage high-level scene priors [44], so training a deep neural network with supervised data becomes the de facto solution.
MDE algorithms using convolutional neural networks (CNNs) deployed in an encoder-decoder structure learn a depth map with a spatial resolution similar to that of the RGB image. The encoder learns feature representations from the input and provides a low-level output to the decoder. The decoder then aggregates these features and learns the final predictions. While CNNs have been the preferred architecture in computer vision, transformers have also recently gained traction, motivated by their success in natural language processing [51]. Transformer-based encoder structures have significantly contributed to many vision-related problems, such as image segmentation [56], optical
Figure 1: Improvement over the state-of-the-art DPT [41]. Our DPT-B and DPT-B+R+AL are two variants of DPT [41] trained on the proposed HRSD datasets. The green rectangle marks the area of the image where DPT fails to give precise and consistent depth compared to our algorithm.
flow estimation [30], and image restoration [34]. In contrast with CNNs, which progressively down-sample the input image and lose feature resolution across the layers, the vision transformer (ViT) [19] processes feature maps at a constant resolution with a global receptive field at every stage. Feature resolution and granularity are important for dense depth estimation, and an ideal architecture should resolve features at or close to the resolution of the input RGB image.
Generally, an MDE algorithm based on CNN or transformer requires large RGB-D datasets consisting of diverse scenes with precise ground truth depth maps. However, the publicly available depth datasets mostly consist of ground-truth depth maps, either lower resolution (e.g., NYU V2) [46] or sparse (e.g., KITTI RGB-D) [24], or have been estimated from multiview depth estimation algorithms MiDaS [42]. Figure 2 highlights some ground-truth depth maps from these publicly available datasets. This is one of the reasons why the existing MDE algorithms lack fine-grained details, as shown in Figure 1, and fail to perform accurately in diverse scenes. In this paper, we generate a high-resolution synthetic dataset using a commercial video game, Grand Theft Auto (GTA-V) [3]. The dataset contains around 100,000 pairs of color images with precise dense depth maps of resolution \(1920\times 1080\), and we refer to this dataset as a High-Resolution Synthetic Depth (HRSD) dataset for monocular depth estimation. To show the effectiveness of the proposed HRSD datasets, we re-train the state-of-the-art transformer-based MDE algorithm: DPT [41] with different experiments explained in section 4.1. More specifically, we fine-tune DPT on the proposed HRSD datasets for dense depth prediction on high-resolution images, demonstrate the performance on other public datasets, and compare them with the SOTA algorithms. Further, to exploit the HRSD dataset, we propose adding a feature-extraction module with an attention-loss that further helps improve the results.
In summary, we introduce the following concepts:
* We have generated a high-resolution synthetic dataset (HRSD) from the game GTA-V [3] with precise ground truth depth values. The HRSD dataset consists of diversified images enabling MDE networks to train on varied scenes, leading to a good generalization of real-world data.
* We propose to add a feature extraction module that processes the color image and converts them into feature maps to use them as patches for both the ViT [19] and DPT algorithms [41]. In addition, we optimize the training procedure by using an attention-based loss instead of shift-invariant loss [41, 25], improving performance in terms of efficiency and accuracy, resulting in smooth, consistent depth maps for high-resolution images.
We conduct experiments on standard public datasets with different input image sizes, such as NYU V2 [46] (\(640\times 480\)), KITTI [24] (\(1216\times 352\)), and our HRSD (\(1920\times 1080\)). We compare the performance with DPT [41], the state-of-the-art MDE algorithm, and Multi-res, [39] a depth estimation network built specifically for high-resolution images. We observe that the depth maps produced after training on the HRSD dataset outperformed existing algorithms on different public datasets.
## 2 Related Work
**RGB-D Dataset** Various datasets have been proposed that are suitable for monocular depth estimation, i.e., they consist of RGB images with corresponding depth annotation of some form. These datasets differ in captured environments and objects, type of depth annotation (sparse/dense, absolute/relative depth), accuracy (laser, stereo, synthetic data), image resolution, camera settings, and dataset size. Earlier RGB-D datasets have relied on either Kinect [47, 29, 17, 46] or LIDAR [24, 8] or stereo vision [45] for depth annotation. Existing Kinect-based datasets are limited to indoor scenes; existing LIDAR-based datasets are biased towards scenes of man-made structures and have a low spatial resolution. Every dataset comes with its characteristics and has its own biases and problems [48].
Figure 2: Example frames from publicly available datasets used for MDE. We zoom-in the problematic areas in the depth map.
Ranftl et al. [42] introduced a vast data source from 3D films, capturing high-quality movie frames to obtain a diversity of dynamic environments for depth estimation. They captured 80,000 frames at \(1920\times 1080\) resolution and used stereo matching to extract relative depth. However, stereo matching has its downsides, as the resulting depth maps are not precise and fail on homogeneous scenes [55]. DIODE [50] is another high-resolution dataset, generated using a laser sensor for dense depth annotation. It contains 25,000 indoor & outdoor RGB images at a resolution of \(768\times 1024\), but it also leads to inconsistent depth maps with artifacts in background objects and textured scenes, as shown in Figure 2. Synthetic dataset generation from computer games [22, 27, 43] has been used extensively for training computer vision algorithms for semantic segmentation [11, 40], object detection [31], and depth estimation [25, 52]. Mohammad et al. [25] have also used the GTA-V game to generate a synthetic RGB-D dataset, but with relative depth as the focus. The resolution of their dataset used for training is \(256\times 256\), which is much smaller than the publicly available datasets. Furthermore, they need a preprocessing phase, such as histogram equalization, before feeding the datasets to training. For training, they use a ResNet architecture and process the RGB image and GT depth at a resolution of \(256\times 256\).
**Depth from Single Image** Convolutional networks within an encoder-decoder paradigm [6] are the standard prototype architecture for dense depth prediction from a single image. The building blocks of such a network consist of convolutional and sub-sampling layers as their core elements. However, a CNN encoder suffers from a local receptive field problem [4], leading to weaker global representation learning at higher resolutions. To address this issue, several algorithms adopt techniques to learn features at different resolutions, such as dilated convolutions [16, 36] or parallel multi-scale feature aggregation [28, 33].
Recently, transformer architectures such as the vision transformer (ViT) [19] or data-efficient image transformers (DeiT) [49] have outperformed CNN architectures in image recognition [19, 49], object detection [12], and semantic segmentation [56]. Inspired by the success of transformers in various areas, Ranftl et al. [41] use a vision transformer for dense depth prediction and outperform all existing MDE algorithms. Nonetheless, transformer architectures are data-hungry networks [10, 19] and thus require huge datasets.
## 3 Proposed Method
In this section, we introduce our high-resolution synthetic dataset (HRSD) and then discuss the proposed architecture changes to ViT [19] and DPT [41] algorithms to provide consistent and accurate dense depth maps on high-resolution images.
### RGB-D Dataset
Acquiring an accurate ground truth depth dataset for high-resolution images is challenging and expensive. Most RGB-D datasets have low image quality and sparse or relatively inaccurate depth maps. Inspired by the success of synthetic data in different computer vision applications [11, 25, 31, 52, 30], we propose to generate a synthetic high-resolution RGB-D dataset for monocular depth estimation. The advantage of a synthetic RGB-D dataset is having precise ground truth depth maps for diverse color images. Also, one can control the lighting, environments, objects, and background, and the dataset can be made as large as the application requires.
We use the game Grand Theft Auto V (GTA-V) [3] to generate a high-resolution synthetic RGB-D dataset. At a high level, we use the GTA-V game's built-in models and mechanics to calculate the ground truth (GT) depth and store it along with the high-resolution RGB image. Note that similar ideas of GTA-V-based synthetic data generation have been used for semantic segmentation [11, 43] and depth estimation [25, 40, 52]. Grand Theft Auto (GTA) [3], one of the most prominent interactive games, contains many diversified environments, including people, vehicles, recreational areas, and architecture. The precision of the graphics and 3D rendered models in this game is exceptional, making it a favorable alternative to acquiring demanding large-scale real-world datasets such as RGB-D.
**Real-time rendering** To explain the data collection process, we must first review deferred shading, an important aspect of modern video games' real-time rendering pipelines. Deferred shading utilizes geometric resources to produce depth and normal image buffers by communicating with the GPU [5]. The intermediate outputs of the rendering pipeline are collected in buffers called G-buffers, which are then illuminated rather than the original scene. Rendering is accelerated significantly because geometry processing, reflectance properties, and illumination are decoupled. To utilize the rendering pipeline, we identify how the game communicates with the graphics hardware to extract the different types of resources, i.e., texture maps and 3D meshes. These resources are combined for the final scene composition. APIs such as OpenGL, Direct3D, or DirectX, provided via dynamically loaded libraries, are used for this communication. Video games load these libraries into their application memory and use a wrapper to the specific library to initiate the communication and record all commands.
**G2D and Scripthook V** Using a DirectX driver, we communicate with GTA-V and redirect all rendering commands to the driver for extracting depth maps. To obtain the necessary image datasets under varying conditions from GTA-V, we use the image simulator software G2D: from GTA to Data [18]. G2D allows manipulating the environment on the fly by injecting a customized mod into the game that controls global lighting and weather conditions by timing sunrises and sunsets. Using an accelerated time frame, we can create multiple high-resolution images from a single scene during sunrise, midday, sunset, and midnight. At the same time, automatic screen capture is enforced to record each frame displayed within the game, and the depth information is extracted. This enables us to accumulate a massive dataset, as shown in Figure 3, comprising diverse environmental conditions (time of day, weather, season, etc.), which helps train computer vision algorithms that are robust and reliable in the real world.
Using the above strategy, we can generate as many images as needed; in this paper, we synthetically generate \(100,000\) images with resolution \(1920\times 1080\) along with depth maps. The minimum and maximum depths are set to \(0.1\) m and \(10\) m, respectively, for indoor scenes, and the maximum depth for outdoor scenes is \(50\) m. We choose a small depth range to enable efficient training of our modified transformer network using the \(L_{1}\) loss instead of the scale-invariant loss [21] used in most MDE networks [9, 25, 41, 42]. We then use this dataset to train the ViT [19] and DPT [41] algorithms for dense depth-map prediction.
### Architecture
The state-of-the-art DPT [41] algorithm uses ViT [19] as a backbone encoder and then adds a decoder to get a depth map of the same resolution as a color image. Here, we discuss our modification to the DPT [41] architecture. More specifically, we add a feature extractor module (pretrained Resnet [26]) in the DPT [41] architecture and a loss function consisting of attention supervision and \(L_{1}\) loss to efficiently train the algorithm. The important blocks of the architecture are described below, and an overview of the network is depicted in Figure 4.
**Feature Extractor Module** Previous works [15, 19, 56] split the input RGB image \(I\in\mathbb{R}^{H\times W\times 3}\) into equal-size 2D patches and use a linear projection to convert them into tokens for the ViT. These tokens have a one-to-one correspondence with image patches and thus grow in number as the size of the input image increases. For high-resolution images, this creates a bottleneck for the ViT: as the number of tokens increases, the ViT's computational cost increases, leading to higher inference time. As shown in Figure 4, we replace the color-image patches in ViT [19] (or DPT's encoder) with a feature extraction module similar to DPT-Hybrid [41]. In this paper, we use a pretrained ResNet [26] for our feature extraction module; a detailed analysis is given in Table 4.
Figure 3: Examples from the HRSD dataset consisting of indoor & outdoor scenes with diversified objects and environments.
Figure 4: Overview of our proposed changes to the DPT [41] architecture. We introduce a feature extraction module.
The input sequence to the ViT now comes from a ResNet backbone [26]. This feature module (ResNet-50 [26]) converts the input image into patches of feature maps. We use the final-layer feature map, which gives a fixed-dimensional representation for any image size. To match the input dimension of the transformer with the output of the final ResNet-50 layer, we flatten the spatial dimensions of these feature maps and project them to the transformer's dimension. The vectorized patches from the output of the ResNet are thus projected into a latent embedded sequence, which is the input to the first transformer layer, as shown in Figure 4. The input tokens are then processed by \(L\) transformer layers consisting of multi-headed self-attention (MHSA) and multi-layer perceptron (MLP) blocks.
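A minimal sketch of this tokenization step is shown below; the truncation point of the backbone, the module names, and the example input size are assumptions for illustration, not the exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ResNetTokenizer(nn.Module):
    """Flatten the final ResNet-50 feature map into a token sequence and project
    it to the transformer width (an illustrative sketch)."""

    def __init__(self, d_model: int = 768):
        super().__init__()
        backbone = resnet50()                         # pretrained weights could be loaded here
        # Drop the average-pooling and classification layers, keep the conv stages.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.proj = nn.Linear(2048, d_model)          # ResNet-50's last stage has 2048 channels

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        fmap = self.features(img)                     # (batch, 2048, H/32, W/32)
        tokens = fmap.flatten(2).transpose(1, 2)      # (batch, H/32 * W/32, 2048)
        return self.proj(tokens)                      # (batch, num_patches, d_model)

tokenizer = ResNetTokenizer()
# In practice the high-resolution input is first cropped to a multiple of 32.
patches = tokenizer(torch.randn(1, 3, 384, 640))      # -> (1, 12 * 20, 768)
```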
**Vision Transformer Encoder** The spatial resolution of the initial embedding is maintained throughout the ViT encoder, enabling a global receptive field at every transformer layer. This, along with MHSA being an inherently global operation, helps to achieve higher performance on high-resolution images. We have experimented with different values of \(L\), with \(12\) transformer layers providing consistent dense depth maps, similar to DPT-Base [41]. At each layer, the input of the MHSA is a triplet of \(Q\) (query), \(K\) (key), and \(V\) (value) computed as follows [19, 51]:
\[Q=z_{\ell-1}\times W_{Q},K=z_{\ell-1}\times W_{K},V=z_{\ell-1}\times W_{V} \tag{1}\]
Here, \(z_{l-1}\) is the output from the previous transformer layer, with \(z_{0}\) being the input to the first layer. Then, the attention head (\(AH\)) is calculated using the triplet \(Q\), \(K\), and \(V\) as given in [19, 51]. Later, all the \(AH\)'s are combined with a weight matrix calculated during training for multi-head attention. The MHSA output is then fed to the MLP block, which calculates the final output.
**Convolutional Decoder**. Our decoder resembles the one used in DPT [41], as it assembles the set of tokens from different transformer layers at various resolutions. An image-like representation is recovered from the output of the encoder using a three-stage calculation called the reassemble operation [41]. We use four reassemble blocks, which extract tokens from the \(1^{\text{st}}\), \(4^{\text{th}}\), \(8^{\text{th}}\), and \(12^{\text{th}}\) (final) transformer layers. Transformer networks need more channels than their convolutional counterparts, so we double the number of channels in the last three reassemble modules. Each stage in the decoder, known as a fusion block, is based on refinement [35]; these blocks progressively combine the feature maps from consecutive stages into the final dense prediction. Unlike in the DPT decoder, we find batch normalization helpful for dense depth prediction. We also reduce the number of channels in the fusion blocks from 256 in DPT [41] to 96 to enable faster computations. The final block is the head block, which outputs a relative depth for each pixel.
### Attention-Based Loss
The depth range of our HRSD dataset allows us to utilize the standard loss function for depth regression problems, the \(L_{1}\) or Mean Absolute Error (MAE) loss. Compared with the scale-invariant loss [21], whose variants are used in many MDE networks [9, 25, 41, 42], the \(L_{1}\) loss is more efficient and performs better [13]. However, training with only the \(L_{1}\) loss leads to discontinuities and noisy artifacts in depth maps [7]. Other loss functions considered to aid the \(L_{1}\) loss include the Structural Similarity (SSIM) [53] between real and predicted depth maps, which measures edge accuracy and was used for the MDE network in [6]. Inspired by [14], in our method we use an attention-based supervision loss to smooth the overall prediction and control the number of depth discontinuities and noisy artifacts in the final output.
To estimate the true values of the attention map at each pixel \(p\), we calculate \(A_{p}\) from the ground-truth depth map as
\[A_{p}=\operatorname{SOFTMAX}\left(-\lambda\left|y_{p}-\hat{y}_{p}\right|\right) \tag{2}\]
where \(y_{p}\), \(\hat{y}_{p}\) are the ground-truth and predicted depth map, respectively, and \(\lambda\) is the hyper-parameter. We use the ground truth attention map (\(A_{p}\)) and the predicted attention values (\(\hat{A}_{p}\)) to calculate the attention-based loss term [14].
\[\mathcal{L}_{\text{as}}=\frac{1}{n}\sum_{p=0}^{n}\left|A_{p}-\hat{A}_{p}\right| \tag{3}\]
For training our final network, we define the final loss \(L\) between \(y\) and \(\hat{y}\) as the weighted sum of two loss terms:
\[L(y,\hat{y})=\lambda L_{\text{depth}}(y,\hat{y})+L_{as}(y,\hat{y}) \tag{4}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Algorithms**} & \multicolumn{3}{c|}{**Error \& Accuracy**} \\ \cline{3-5} & & **AbsRel**\(\downarrow\) & **RMSE**\(\downarrow\) & \(\boldsymbol{\delta<1.25\uparrow}\) \\ \hline \multirow{5}{*}{**Dataset**} & ViT-B & 0.115 & 0.509 & 0.828 \\ & ViT-B + R & 0.108 & 0.416 & 0.875 \\ & ViT-B + R + AL & **0.104** & **0.362** & **0.916** \\ \cline{2-5} & DPT-B & 0.101 & 0.375 & 0.895 \\ & DPT-B + R & 0.103 & 0.364 & 0.903 \\ & DPT-B + R + AL & **0.094** & **0.310** & **0.945** \\ \hline \multirow{5}{*}{**Dataset**} & ViT-B & 0.106 & 4.699 & 0.861 \\ & ViT-B + R & 0.101 & 4.321 & 0.889 \\ & ViT-B + R + AL & **0.078** & **2.933** & **0.915** \\ \cline{2-5} & DPT-B & 0.098 & 3.821 & 0.894 \\ & DPT-B + R & 0.069 & 2.781 & 0.939 \\ & DPT-B + R + AL & **0.056** & **2.453** & **0.962** \\ \hline \multirow{5}{*}{**Dataset**} & ViT-B & 0.125 & 0.471 & 0.828 \\ & ViT-B + R & 0.107 & 0.342 & 0.882 \\ & ViT-B + R + AL & **0.099** & **0.322** & **0.912** \\ \cline{2-5} & DPT-B & 0.118 & 0.421 & 0.835 \\ & DPT-B + R & 0.101 & 0.330 & 0.894 \\ & DPT-B + R + AL & **0.074** & **0.288** & **0.921** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison on three RGB-D datasets. The three variants of ViT [19] and three variants of DPT [41] are trained on the proposed HRSD datasets.
where \(L_{\mathrm{depth}}\) is the point-wise \(L_{1}\) loss defined on depth values and is given as
\[L_{\mathrm{depth}}(y,\hat{y})=\frac{1}{n}\sum_{p=0}^{n}|y_{p}-\hat{y}_{p}| \tag{5}\]
Note that only one weight parameter \(\lambda\) is required for calculating the loss function, and empirically \(\lambda\) = 0.1 is set in equation (4) [14].
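A direct transcription of Eqs. (2)-(5) might look as follows; the tensor shapes, the flattening over pixels, and the detachment of the target attention map are implementation assumptions not specified in the text.

```python
import torch

def attention_supervised_loss(y_pred, y_true, a_pred, lam=0.1):
    """Combined loss of Eqs. (2)-(5); `a_pred` is the network's predicted attention
    map over pixels. Shapes are (batch, H, W); this is an illustrative sketch."""
    err = torch.abs(y_true - y_pred)                        # per-pixel depth error
    # Eq. (2): target attention map via a softmax over pixels; it is detached here
    # because it serves as a supervision target (an assumption, not stated in the text).
    a_true = torch.softmax(-lam * err.detach().flatten(1), dim=1)
    # Eq. (3): attention supervision term.
    l_as = torch.mean(torch.abs(a_true - a_pred.flatten(1)))
    # Eq. (5): point-wise L1 depth loss.
    l_depth = torch.mean(err)
    # Eq. (4): weighted sum using the single weight parameter lambda (0.1 here).
    return lam * l_depth + l_as
```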
## 4 Experiments
In this section, we make an extensive study that demonstrates the advantage of the proposed HRSD dataset. We evaluate public datasets, such as KITTI [24] and NYU [46], and the HRSD dataset. We show quantitative and qualitative comparisons in our experiments for analysis and discussion.
### Training
**Architecture details** DPT [41] uses an encoder-decoder architecture. It uses the ViT [19] architecture for its encoder and designs a decoder to obtain a depth map of the same size as the color image. DPT [41] uses pretrained ViT [19] weights as its initial encoder weights to train the entire encoder-decoder architecture on datasets such as KITTI (outdoor) [24], NYU V2 (indoor) [46], and the MiDaS 3D dataset [42] for dense depth prediction. We adopt the same transformer-based encoder-decoder architecture used in DPT for our experiments and improve it further, as described in Section 3.2. We initialize the encoder weights with DPT's [41] and train the entire encoder-decoder architecture on the proposed HRSD dataset (the decoder is initialized from scratch). To show the effect of the proposed architecture changes, namely the feature module and attention loss, we run six training experiments on the proposed HRSD dataset. Three trainings are done using DPT encoder weights [41]:
_1)_ Training with DPT [41] encoder weights* on the proposed HRSD dataset with no changes in architecture or loss function, referred to as DPT-B in this paper. _2)_ Training DPT [41] weights with the feature module (DPT-B + R). _3)_ Training DPT [41] weights with the feature module and attention loss (DPT-B + R + AL).
Footnote *: In this paper, for training and evaluation, DPT weights refers to DPT-Hybrid weights as provided in the original DPT [41] paper.
Later, we follow a similar process using ViT weights [19]: we initialize the encoder weights with ViT's [19], train the entire encoder-decoder architecture on the proposed HRSD dataset, and perform three separate trainings.
**Training details** Our proposed algorithm is implemented in PyTorch, and we use \(4\) NVIDIA RTX A6000 48GB GPUs for training. The proposed HRSD dataset contains \(100,000\) images of resolution \(1920\times 1080\); we use \(75,000\) for training, \(15,000\) for validation, and \(10,000\) for the test set. We crop the images to the nearest multiple of 32, and the network outputs the depth at the same resolution as the color image. We train the model for \(80\) epochs with a batch size of \(4\) and use the AdamW [38] optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and a learning rate of \(1\times 10^{-4}\) for the encoder and \(1\times 10^{-5}\) for the decoder. The learning rate is decayed by a factor of \(10\) after \(15\) epochs. Finally, we test the algorithm's performance on different datasets, such as KITTI, NYU, and HRSD, covering indoor and outdoor scenes, to show the generalization and robustness of the proposed algorithm in real-world scenes.
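The optimizer configuration described above could be set up as in the sketch below; the encoder/decoder stand-ins and the use of a step schedule are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Stand-ins; in practice these are the transformer encoder and convolutional decoder.
encoder, decoder = nn.Linear(8, 8), nn.Linear(8, 8)

optimizer = torch.optim.AdamW(
    [
        {"params": encoder.parameters(), "lr": 1e-4},   # encoder learning rate
        {"params": decoder.parameters(), "lr": 1e-5},   # decoder learning rate
    ],
    betas=(0.9, 0.999),
)
# Decay both learning rates by a factor of 10 after 15 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)
```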
### Evaluation
Like [21, 41, 42], we use three error metrics, such as RMSE, AbsRel, and percentage of correct pixels, to evaluate the performance of depth maps.
**Datasets for evaluation** To demonstrate the competitiveness of our approach, we evaluate the methods against KITTI [24] and NYU V2 [46] RGB-D datasets. These are the standard datasets for outdoor and indoor scenes, respectively. KITTI dataset contains 697 images (\(1216\times 352\)) with re-projected Lidar points as sparse depth maps and NYU V2 contains 694 images (\(640\times 480\)). In addition, we evaluate depth maps on 10,000 images (\(1920\times 1080\)) of our HRSD dataset as high-resolution datasets like [42, 50] are unavailable to the public.
### Results
**Quantitative Results** To show the effect of the proposed HRSD dataset on the different training variants, we make a quantitative comparison on the evaluation datasets and present the results in Table 1 and Table 2.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Dataset**} & \multicolumn{3}{c|}{**Error \& Accuracy**} \\ \cline{3-5} & \multicolumn{1}{c|}{**Algorithms**} & AbsRel \(\downarrow\) & RMSE \(\downarrow\) & \(\mathbf{\delta<1.25\uparrow}\) \\ \hline \multirow{5}{*}{**Parameter**} & DPT [41] & 0.110 & 0.392 & 0.864 \\ & MultiRes [39] & 0.102 & 0.347 & 0.921 \\ & ViT-B + R + AL & 0.104 & 0.362 & 0.916 \\ & DPT-B + R + AL & **0.094** & **0.310** & **0.945** \\ \hline \multirow{5}{*}{**Parameter**} & DPT [41] & 0.114 & 4.773 & 0.849 \\ & MultiRes [39] & 0.059 & 2.756 & 0.956 \\ & ViT-B + R + AL & 0.078 & 2.933 & 0.915 \\ & DPT-B + R + AL & **0.056** & **2.453** & **0.962** \\ \hline \multirow{5}{*}{**Parameter**} & DPT [41] & 0.127 & 0.494 & 0.822 \\ & MultiRes [39] & 0.096 & 0.339 & 0.920 \\ \cline{1-1} & ViT-B + R + AL & 0.099 & 0.322 & 0.912 \\ \cline{1-1} & DPT-B + R + AL & **0.074** & **0.288** & **0.921** \\ \hline \end{tabular}
\end{table}
Table 2: Quantitative comparison on three RGB-D datasets. Here the DPT [41] and MultiRes [39] results are obtained using the authors' weights. ViT-B + R + AL and DPT-B + R + AL are the variants of ViT [19] and DPT [41] trained on the proposed HRSD datasets.
In Table 1, we compare the six training variants performed using DPT [41] encoder weights and ViT [19] weights, as described earlier. The addition of the feature module and attention loss performs better on all three datasets in all metrics. In Table 2, we compare the original DPT [41] and MultiRes [39] with the proposed variants of ViT (ViT-B + R + AL) and DPT (DPT-B + R + AL). From Table 2, we can conclude that the proposed variants are better on almost all the datasets. This indicates the effectiveness of the proposed HRSD dataset, which results in a lower absolute error and higher accuracy.
**Qualitative Results** The quantitative results are supported by visual comparisons in Figures 5 & 6. We compare indoor (Figure 5) and outdoor scenes (Figure 6) for analysis and discussion. We also include real-world scenes in both figures to test the performance of various algorithms. In Figures 5 & 6, DPT [41] fails to give precise depth edges and lacks details of the structure of background objects. Also, DPT [41] fails to get a clear object boundary in real-world scenes. MultiRes [39] can get a sharper depth map and details but provides inconsistent depth within an object, and these artifacts are highlighted using a rectangular box in Figures 5 & 6. Although the proposed variant ViT-B + R+ AL lacks details in the background, the DPT-B + R + AL gives a more structured and consistent depth with precise object shapes and clearer edges, even for objects further away in the scene. This is reasonable because DPT [41] is trained on RGB-D datasets, whereas ViT [19] is trained for image recognition tasks. Compared to both DPT [41] and MultiRes [39], our method results in a smoother depth map closest to ground truth maps.
**Running time analysis** We further compare the inference times of the different algorithms and present them in Table 3. It shows the inference speed in milliseconds per frame, averaged over 400 images at the three different resolutions. Timings were conducted on a 10th-generation Intel i5 @ 2.90 GHz with eight physical cores and a single Nvidia RTX A6000 GPU. MultiRes [39] takes the longest running time, as it merges different resolutions to compute a high-quality depth map. DPT [41] and the proposed network take similar inference time at smaller resolutions. However, our proposed network is more efficient for high-resolution images due to the fewer patches produced by the feature-extraction module, which are later processed by the transformer layers.
Figure 5: Indoor scenes. \(1^{\text{st}}\) row: NYU [46]. \(2^{\text{nd}}\) row: HRSD indoor. \(3^{\text{rd}}\) row: real-world. Our DPT-B + R + AL gives a consistent depth map across all regions and displays sharp structure for all objects, e.g. the items on the table in the real-world images. The original DPT [41] fails to identify objects in the background, as shown by the green rectangles, e.g. no structure of the human in the HRSD indoor image. MultiRes [39] leads to an inconsistent depth map, highlighted by green rectangles, e.g. the toilet seat in the NYU image.
**Feature extraction module analysis** Here, we compare the effect of the feature extraction module that provides the image embedding to our transformer layers, as shown in Figure 4. ViT [19] uses a simple image flattening technique to transform the image into patches and comes in two variants, ViT-32 and ViT-16. Since we use the ResNet [26] feature module to process color RGB images, we compare different pre-trained ResNet encoders, namely ResNet-50 and ResNet-101, with ViT [19]. In Table 4, we first make a comparison between ViT-16 and ViT-32. We use these weights as initial encoder weights and train the entire encoder-decoder on the proposed HRSD dataset. We observe that ViT-16 achieves lower error and higher accuracy than ViT-32. We then fix ViT-16 as the backbone architecture and experiment with two ResNet encoders, ResNet-50 and ResNet-101. We again perform two separate trainings, one with ResNet-50 and ViT-16 and the other with ResNet-101 and ViT-16 as our initial encoder weights, and train the entire encoder-decoder on the proposed HRSD dataset. As shown in Table 4, introducing the ResNet encoders significantly improves the performance. ResNet-50 performs best and thus becomes the final choice for our MDE network.
## 5 Conclusion
In this paper, we proposed to generate a high-quality synthetic RGB-D dataset from the game GTA-V [3] with precise dense depth maps. Since we can control all aspects of the GTA game, we can capture scenes with varied lighting, different environments, and diverse objects. We trained the DPT [41] architecture with DPT [41] and ViT [19] weights on the proposed HRSD dataset. We observed a significant improvement in the quality of the depth maps, both objectively and subjectively. The performance is further improved by modifying the DPT [41] architecture and loss function.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Method & AbsRel \(\downarrow\) & RMSE \(\downarrow\) & \(\delta<1.25\) \(\uparrow\) \\ \hline \hline ViT-32 [19] & 0.137 & 0.494 & 0.803 \\ ViT-16 [19] & 0.125 & 0.471 & 0.828 \\ ViT-16 + R-101 & 0.112 & 0.387 & 0.852 \\ ViT-16 + R-50 & **0.107** & **0.342** & **0.882** \\ \hline \end{tabular}
\end{table}
Table 4: Feature module analysis. Different variants of ViT [19] and Resnet [26] are experimented with to choose the best feature extraction module.
Figure 6: Outdoor Scenes. \(1^{\text{st}}\) Row-: KITTI [24]. \(2^{\text{nd}}\)Row-: HRSD outdoor. \(3^{\text{rd}}\) Row-: RealWorld. Similar to indoor scenes, our DPT-B + R + AL gives the best performance outputting a consistent depth map with precise overall structure i.e. the motorbike in the real-world image. Original DPT [41] again fails to identify objects in the background as shown by the green rectangle i.e. no structure of background buildings in the KITTI image. Multires [39] leads to the inconsistent depth map, highlighted by green rectangles i.e. depth around the biker’s body is fluctuating in the real-world image. |
2304.09047 | Neural Lumped Parameter Differential Equations with Application in
Friction-Stir Processing | Lumped parameter methods aim to simplify the evolution of spatially-extended
or continuous physical systems to that of a "lumped" element representative of
the physical scales of the modeled system. For systems where the definition of
a lumped element or its associated physics may be unknown, modeling tasks may
be restricted to full-fidelity simulations of the physics of a system. In this
work, we consider data-driven modeling tasks with limited point-wise
measurements of otherwise continuous systems. We build upon the notion of the
Universal Differential Equation (UDE) to construct data-driven models for
reducing dynamics to that of a lumped parameter and inferring its properties.
The flexibility of UDEs allow for composing various known physical priors
suitable for application-specific modeling tasks, including lumped parameter
methods. The motivating example for this work is the plunge and dwell stages
for friction-stir welding; specifically, (i) mapping power input into the tool
to a point-measurement of temperature and (ii) using this learned mapping for
process control. | James Koch, WoongJo Choi, Ethan King, David Garcia, Hrishikesh Das, Tianhao Wang, Ken Ross, Keerti Kappagantula | 2023-04-18T15:11:27Z | http://arxiv.org/abs/2304.09047v1 | # Neural Lumped Parameter Differential Equations with Application in Friction-Stir Processing
###### Abstract
Lumped parameter methods aim to simplify the evolution of spatially-extended or continuous physical systems to that of a "lumped" element representative of the physical scales of the modeled system. For systems where the definition of a lumped element or its associated physics may be unknown, modeling tasks may be restricted to full-fidelity simulations of the physics of a system. In this work, we consider data-driven modeling tasks with limited point-wise measurements of otherwise continuous systems. We build upon the notion of the _Universal Differential Equation_ (UDE) to construct data-driven models for reducing dynamics to that of a lumped parameter and inferring its properties. The flexibility of UDEs allow for composing various known physical priors suitable for application-specific modeling tasks, including lumped parameter methods. The motivating example for this work is the plunge and dwell stages for friction-stir welding; specifically, (i) mapping power input into the tool to a point-measurement of temperature and (ii) using this learned mapping for process control.
## 1 Introduction
Lumped-parameter methods aim to simplify the evolution of spatially-extended physical systems to that of a single "lumped" element. Such methods are common in engineering and science where modeling spatially-extended physical systems may be unnecessary or intractable. Heat transfer is one such field that is amenable to such techniques: under certain assumptions and conditions, the flow of heat can be approximated by scaled temperature differences between bodies of uniform temperature. Thus, the modeling task is reduced to the evolution of a single unit under the action of energy fluxes across boundaries.
Many situations exist where lumped parameter methods may provide meaningful physical insights, including low-cost predictive capabilities, but the definition of the lumped element and its evolution may be unknown or ill-characterized. Typical concessions for proceeding with lumped-parameter modeling include neglecting property gradients within the modeled element and restricting input / output physics to occur along certain pathways (e.g. at a specific interface).
In this work, we are motivated by _Friction-Stir Processing_ (FSP), whereby a non-consumable rotating tool is plunged into a workpiece, generating heat to process the surface of the workpiece without melting the material [1]. Final properties of the processed region are highly sensitive to the process conditions imposed, such as temperature, force, plunge depth, and tool geometry. Typically, the required conditions are met by modifying process parameters such as the tool rotation rate and tool traverse rate during FSP. Therefore, knowledge of the mapping between process parameters and process conditions such as temperature behavior is critical for process control [2]. However, modeling complex nonlinear systems like FSP is challenging. Both empirical and computational work has been done to understand the relationship between power and temperature, including computational fluid dynamics (CFD) approaches [3; 4; 5; 6; 7]. While many insights can be gained, detailed numerical models and empirical studies can incur significant costs, and high-fidelity modeling may be unnecessary for applications. Here we consider a data-driven approach and demonstrate its performance using only a small set of experimental temperature measurements for FSP.
In practice, temperature during an FSP experiment is measured by a thermocouple embedded in the non-consumable rotating tool. We aim to learn the response of these measured temperatures to operator inputs using time series from past trial runs for processing a stainless steel alloy. The physics of this process is complex: the fast physics of internal heat generation drives the flow of energy to cooler regions, which is dominated by the slower physics of gradient-dependent heat transfer. We construct a data-driven lumped-parameter model corresponding to thermocouple measurements of FSP tool temperature by leveraging assumed energy flow according to the First Law of Thermodynamics - that is, the conserved flow of energy - for a lumped element representative of the tool. With these inductive priors (a lumped element obeying the first law of thermodynamics), we choose _Universal Differential Equations_ [8] as the modeling paradigm upon which we build our models.
This paper is organized as follows: in Section 2.1 we motivate the problem and the desired modeling outcomes, including the FSP machinery and instrumentation. In Section 2.2, we contextualize our work in the broader ML for Materials Science community. Our methods are described in Section 3, with numerical experiments following in Section 5.
## 2 Background
### Friction Stir Welding
A relatively new joining process, called _Friction Stir Welding_ (FSW), was invented at The Welding Institute (TWI) in the United Kingdom in 1991 [9]. FSW utilizes the heat generated by the frictional surface contact between a rotating tool and the workpiece, which results in material softening and subsequent severe plastic deformation along with material mixing to bond materials together. The entire process takes place in the solid state, where the processing temperature remains below the melting temperature. Due to the absence of excessive heat and melting, FSW processes are known as energy-efficient methods in comparison to arc welding, where materials are melted for bonding. In addition, FSW welds do not experience problems with re-solidification, such as cracking, porosity, and embrittlement. For these reasons, friction stir welding has been an attractive joining process in many industries. FSP is a derivative approach to FSW in which a single workpiece is modified through the plunging and traverse of the non-consumable tool.
The FSP process involves four stages: plunging, dwelling, traversing, and extracting. The initial plunge stage refers to the period where the rotating tool is in contact with the workpiece under a downward directional force, referred to as the plunge force. The dwelling stage refers to the period where rotational friction continues, but there is no additional downward motion. The material plastically deforms and heats up during the plunge and dwell stages. Then, during the traverse stage, the rotating tool moves along a defined path. The plasticized material is mixed and extruded past the rotating and traversing tool. Lastly, at the end of the traverse line, the rotating tool is extracted, leaving an exit hole.
The plunge and dwelling stages in the FSP process are extremely important, since most of the initial thermo-mechanical conditions are generated and the workpiece undergoes significant material transformation due to the rapid temperature increase and forging pressure [6]. During these stages, the material transforms from a solid state to a plastically deformed condition in a short period of time. Due to this rapid change in conditions, most of the tool wear occurs during these stages [9; 10; 11]. Unsuccessful temperature control at the end of the dwell stage often leads to overshoot and undershoot of the temperature during the weld traverse. This may cause stability issues even under well-tuned temperature control algorithms [12; 13]. A thorough understanding of the temperature profile prior to the traverse stage is critical to preserving tool life and securing optimal process conditions during the traverse stage.
Figure 1: _Friction-stir processing_ (FSP) utilizes internal heat generation from friction-based power transfer to plastically deform the material for welding (a). In (b), notional time histories for temperature at the tool face and power input during the plunge and dwell stages of the welding are depicted. Once the temperature reaches a certain set point, the tool begins to travel across the workpiece. This work is concerned with modeling and control of the temperature profile during the plunge and dwelling stages. Our method is predicated upon the energy balance prescribed by the First Law of Thermodynamics (c). We construct an expected dynamical system, coined a _Neural Lumped Parameter Differential Equation_ (d), to impart this particular energy balance suitable for this problem.
### ML for Temperature Prediction
Much work has been done to construct high-fidelity models of FSW and FSP processes, often leveraging computational fluid dynamics approaches [14; 15; 16; 17; 18; 19]. In general, these approaches require significant computational resources, which can be prohibitive for their use in process design and control. To lessen this burden, data-driven and machine learning methods have been utilized to learn relationships between process parameters and weld properties from limited numerical or computational experiments, which can then be used in process design; examples include the impacts of tool geometry [20] and chemical composition [21]. However, less work has been done to construct simplified models of FSW dynamics for the purposes of process control. At the level of process control, the detail of high-fidelity models may be unnecessary, but it is challenging to construct simplified dynamics that can accurately predict measured outputs such as temperature. In a related solid-phase process called Shear Assisted Processing and Extrusion (ShAPE), Wight et al. developed DeepTemp, a recurrent neural network that can accurately predict measured temperature during processing given the operator inputs [22]. A simplified thermodynamic model augmented with a neural network has also been shown to produce accurate predictions of process temperature dynamics for ShAPE [23]. The addition of physics can improve the interpretability of the learned dynamics and increase confidence in their use for process control.
## 3 Methodology
### Lumped-parameter methods
Lumped parameter models reduce complex systems with infinite degrees of freedom (e.g. spatially-varying fields) to a single degree of freedom that is representative of the whole unit, thereby reducing space-time evolution of a governing Partial Differential Equation (PDE) to the time evolution of a system of Ordinary Differential Equations (ODE). The prototypical example is that of a lumped thermal unit; for example, a room in a building whose state can be represented by a single temperature that evolves according to its capacitance (i.e. thermal capacitance associated with the volume of the room) and resistive coupling to other rooms. In heat transfer problems, one expects temperatures to behave as a first-order system; i.e., temperatures typically asymptotically approach a steady state value where the energy input to the system is eventually matched by the sum of the dissipative physics of the problem.
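As a toy illustration of this idea (all values and the two-room setup below are invented for illustration), a forward-Euler integration of two resistively coupled lumped thermal elements can be written as:

```python
import numpy as np

# Two rooms, each represented by a single temperature, coupled to one another and
# to the outside through thermal resistances (illustrative numbers only).
C = np.array([1.0e5, 2.0e5])        # J/K, thermal capacitance of each room
R_between, R_out = 0.01, 0.02       # K/W, coupling resistances
T_out = 280.0                       # K, outside temperature
T = np.array([295.0, 290.0])        # K, initial room temperatures

dt, steps = 1.0, 3600               # 1 s steps for one hour
for _ in range(steps):
    q_between = (T[1] - T[0]) / R_between            # W, heat flow from room 1 to room 0
    q_loss = (T_out - T) / R_out                     # W, exchange with the outside
    dTdt = (np.array([q_between, -q_between]) + q_loss) / C
    T = T + dt * dTdt                                # forward-Euler update
print(T)                            # both temperatures relax toward the sink
```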
In FSP, the workpiece is heated through power input via a spinning tool. On the time scales relevant to FSP, heat dissipation is primarily through conduction away from the stir region. With the notional dominant balance physics in mind, one can construct a framework around these energy pathways consistent with the time-dependent First Law of Thermodynamics:
\[\frac{dE}{dt}=\dot{Q}-\dot{W}, \tag{1}\]
where \(E\) is the energy contained in the system, \(\dot{Q}\) is the rate of heat input to the system, and \(\dot{W}\) is the rate of work done by the system on its surroundings. To simplify the problem further, we define the boundaries of the assumed lumped-volume to be placed such that there is no work exchange between the modeled system and its surroundings. Thus, the work term
can be neglected and the energy balance for the system becomes the competition between energy gain and loss:
\[\dot{Q}=\dot{Q}_{in}-\dot{Q}_{out}, \tag{2}\]
Substituting Eq. 2 into Eq. 1 and recasting the energy dynamics as thermal dynamics, we obtain:
\[\frac{dT}{dt}=\frac{1}{C}\left(\dot{Q}_{in}-\dot{Q}_{out}\right), \tag{3}\]
with \(C\) representing the thermal capacitance of the lumped volume. Thus, for a well-defined lumped volume, the thermal dynamics are reduced to equilibrating heating and cooling physics.
### Neural Ordinary Differential Equations
_Neural Ordinary Differential Equations_ (NODEs [24]) are a machine learning modeling paradigm where the transition dynamics of a system are approximated by a tunable function approximator; e.g. a trainable neural network. NODEs differ from other ML time series modeling methods (e.g. RNNs, LSTMs, etc.) in that (i) the treatment of time is continuous as opposed to discrete timesteps, (ii) the dynamics can be made interpretable by the inclusion of various physical priors, and (iii) NODEs can leverage standard ODE integration techniques (e.g. adaptive step Runge-Kutta methods or integrators for stiff dynamics).
Neural ODEs evolve a system's state \(x\) through an independent variable (typically time \(t\)) through the solution to the initial value problem (IVP):
\[x_{t_{end}}=x_{t_{0}}+\int_{t_{0}}^{t_{end}}f\left(x,t;\theta\right)dt, \tag{4}\]
where \(f(\cdot)\) is the learnable Right-Hand-Side (RHS, or the transition dynamics) of the ODE parameterized by the parameter vector \(\theta\). We denote the solution to this IVP as:
\[x_{t_{end}}=\mathrm{ODESolve}\left(f\left(x,t;\theta\right),x_{0},t_{0},t_{end }\right), \tag{5}\]
that is, the solution to the ODE from the initial condition \(x_{0}\) over the temporal span from \(t_{0}\) to \(t_{end}\). Note that, as written, this is a non-autonomous differential equation because it depends on the independent variable \(t\). In general, the model can be autonomous or non-autonomous depending on the specific application. The IVP solution can be made differentiable in one of several ways: (i) backpropagation through the elementary operations of the solver (reverse-mode auto-differentiation), (ii) the adjoint sensitivity method, or (iii) forward sensitivity analysis (forward-mode auto-differentiation).
The transition dynamics of \(f\) can be made of any differentiable mapping allowing for the inclusion of known physical priors or universal function approximators, such as feed-forward neural networks. In this manner, Neural ODEs can exist as 'black-box' models, where a learnable dynamical system is expressed as:
\[\frac{dx}{dt}=\mathrm{NN}(x,t;\theta), \tag{6}\]
with \(\mathrm{NN}(\cdot)\) representing the tunable Neural Network parameterized by \(\theta\). The inclusion of various physical priors allows for gray- and white-box modeling tasks, such as surrogate and/or closure modeling and parameter tuning problems. This class of domain-informed neural ODEs is called _Universal Differential Equations_[8].
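
As an illustration of this modeling paradigm (independent of the implementation described later), the sketch below fits the pattern of Eq. 6 using PyTorch together with the torchdiffeq package, assuming that package is available; the state dimension and network sizes are arbitrary.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # differentiable ODE solvers

class BlackBoxRHS(nn.Module):
    """Learnable right-hand side f(x, t; theta) of a non-autonomous neural ODE."""
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, t, x):
        t_col = t.expand(x.shape[:-1] + (1,))          # append time to the state
        return self.net(torch.cat([x, t_col], dim=-1))

f = BlackBoxRHS()
x0 = torch.zeros(2)                                    # initial condition x_{t0}
t = torch.linspace(0.0, 1.0, 50)                       # evaluation times in [t0, t_end]
x_traj = odeint(f, x0, t, method='dopri5')             # ODESolve(f, x0, t0, t_end)
loss = x_traj.pow(2).mean()                            # any differentiable loss on the trajectory
loss.backward()                                        # gradients flow into theta through the solver
```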
### Neural Lumped-Parameter Differential Equations
In this work, we focus on modeling the input and output energy balance present in the friction-stir welding process through fitting a differential equation of the form of Eq. 3 to the available experimental data. We propose a sub-class of neural ODEs called _Neural Lumped-Parameter Differential Equations_ constructed to exploit knowledge of input and output energy pathways and capacitive first-order physics. At this level of abstraction, we expect to fit models (for temperature of a lumped volume) of the form:
\[\frac{dT}{dt}=\frac{1}{C}\left(f\left(T,t;\theta\right)-g\left(T,t;\phi\right) \right), \tag{7}\]
with \(f(\cdot)\) and \(g(\cdot)\) representing (potentially time-dependent) learnable input and output heat transfer pathways, respectively.
Imparting such domain knowledge acts to (i) regularize the regression to promote training stability and generalizability of the model and (ii) increase the interpretability of the model for downstream tasks, such as control and/or analysis. In the context of FSP, we can further inform the design of the input and output energy pathway functions from domain knowledge. First, we assume that the loss of energy from the domain is linear in the temperature difference between the volume and its assumed-constant surroundings; i.e.:
\[g\left(T\right)=h\left(T-T_{sink}\right), \tag{8}\]
where \(h\) is the heat transfer coefficient and \(T_{sink}\) is the pseudo-infinite sink temperature. Both \(h\) and \(T_{sink}\) can be user-specified or learned. With this specified energy loss term, we aim to "close" the model by finding an appropriate approximation of the energy input by tuning a feed-forward neural network \(f=\text{NN}\left(T,P\left(t\right);\theta\right):\mathbb{R}^{2}\rightarrow \mathbb{R}^{1}\), where \(P(t)\) is the recorded input power over time. Thus, the final model form that we aim to fit is:
\[\frac{dT}{dt}=\frac{1}{C}\left(\text{NN}\left(T,P\left(t\right);\theta\right) -h\left(T-T_{sink}\right)\right)=M(T,P(t);\Theta), \tag{9}\]
where \(\Theta\) denotes the inclusive set of learnable parameters in the model. Note that these modeling choices were made to simplify the modeling procedure by reducing the number of tunable parameters. Should more complex physics be present, the energy input and output functional forms can be recast for increased flexibility (e.g. deep neural networks).
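
A minimal sketch of this model form in Python (the reference implementation described below uses the Julia ecosystem) could look as follows; the scaling constants, the interpolated power signal, and the specific layer sizes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class NeuralLumpedRHS(nn.Module):
    """Sketch of Eq. 9: dT/dt = (NN(T, P(t); theta) - h (T - T_sink)) / C."""

    def __init__(self, power_signal, Q0=5000.0, T0=1000.0, P0=5000.0,
                 h=1.0, T_sink=23.0):
        super().__init__()
        self.power_signal = power_signal               # callable t -> recorded power P(t)
        self.Q0, self.T0, self.P0 = Q0, T0, P0         # illustrative scaling constants
        self.h, self.T_sink = h, T_sink
        self.log_C = nn.Parameter(torch.zeros(1))      # learnable capacitance, kept positive
        self.net = nn.Sequential(nn.Linear(2, 10), nn.SiLU(),   # 'swish' hidden layer
                                 nn.Linear(10, 1), nn.Sigmoid())

    def forward(self, t, T):                           # T has shape (1,)
        P = torch.as_tensor(self.power_signal(t), dtype=T.dtype).reshape(1)
        inp = torch.cat([T / self.T0, P / self.P0]).unsqueeze(0)
        Q_in = self.Q0 * self.net(inp).reshape(1)      # bounded internal heat generation
        Q_out = self.h * (T - self.T_sink)             # linear loss pathway, Eq. 8
        return (Q_in - Q_out) / self.log_C.exp()
```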
It is known, for example, that the time constants of the thermocouple response can vary from stage to stage. In the initial plunge and dwell stages, the time constant may be larger than that of traversal: the input and output energy balance is not in equilibrium, which leads to a first-order rise in temperature as measured at the embedded thermocouple. Once a quasi-steady-state condition has been reached and tool traversal has begun, the input and output energy balance is altered by the movement of the tool. We explicitly restrict our study to the plunge and dwelling stages of the presented experiments. Generalizing to other stages would require (i) further parameterization of the model or (ii) implementing piece-wise continuous models trained on separate processing stages.
## 4 Friction-Stir Processing Experiments
The data used to train the model was sampled from 34 inch bead-on-plate FSP experiments on 316L stainless steel plate. Specifically, time series measurements of the mechanical
input power and temperature during the plunge and dwelling stages were utilized to train the model. The mechanical power input is estimated as the product of spindle speed and torque from the spindle motor. An encoder measures the spindle speed of the motor, and torque is calibrated from the motor current. The 316L plates were commercially available, hot rolled, solution annealed at 1080 \({}^{\circ}\)C, and water cooled. The chemical composition of the as-received plates was: C: 0.029 wt. %, Cr: 18.180 wt. %, Ni: 8.029 wt. %, Mn: 1.871 wt. %, Si: 0.281 wt. %, S: 0.001 wt. %. The plates were machined to have a pilot hole to minimize excessive flash at the start of the weld. Mazak MegaStir tools made of polycrystalline cubic boron nitride (PCBN) with 30 wt. % W-Re were used. An embedded K-type thermocouple is placed at the back of the PCBN tool, and the temperature data is transferred wirelessly through a temperature transmitter to the data acquisition system at a 10 Hz rate.
## 5 Temperature Modeling
### Model Setup
We seek to minimize the mean-squared error between model time series data \(\hat{y}_{i,j}\) and experimentally-obtained temperature time series data \(y_{i,j}\) over a collection of experimental runs; i.e. minimize the loss function:
\[\mathcal{L}=\frac{1}{N_{i}}\sum_{i}^{N_{i}}\sum_{j}^{N_{j}}\left(\hat{y}_{i,j }-y_{i,j}\right)^{2}, \tag{10}\]
where \(N_{i}\) is the number of unique experimental runs in the training data set and \(N_{j}\) is the number of equally-spaced data points for each experimental run. The model data \(\hat{y}_{i,j}\) is defined by the forward-pass of the model over each of the experimental runs:
\[\hat{y}_{i,j}=\text{ODESolve}(M(T,P(t);\Theta),T_{0},t_{0},t_{end})|_{t=j\cdot \Delta t}, \tag{11}\]
Figure 2: In (a), shown is an example thermocouple-recorded temperature profile during the dwelling phase of a friction-stir welding run. The corresponding Neural Lumped Parameter Differential Equation model solution is shown in red. In (b), the experimental control input that produced the time series in (a) is shown. This control input is used during training to construct a surrogate mapping from the control signal and current temperature to internal heat generation.
that is, the solution to the \(i\)-th IVP recorded at the co-location points \(j\) specified by the fixed timestep \(\Delta t\).
For modeling temperature specifically, we use a scaled feed-forward neural network \(Q_{0}\cdot\text{NN}(T/T_{0},P/P_{0}):\mathbb{R}^{2}\rightarrow(0,Q_{0})\) mapping input temperature and power to internal heat generation with one hidden layer of 10 nodes ('swish'-activated) and a sigmoidal output layer. Note that although we do not non-dimensionalize the model, we do scale the associated physical quantities such that they are all approximately the same order of magnitude. Likewise, the neural network is bounded to the interval \((0,Q_{0})\) as a heuristic to enforce physicality for the learned heat generation surrogate. Similarly, we select \(T_{sink}\) to be fixed at room temperature, 23 Celsius, and set the heat transfer coefficient to \(h=1\). Thus, in total, the parameter vector \(\Theta\) is comprised of the neural network weights and biases \(\theta\) and the capacitance of the lumped volume \(C\).
We perform the regression in the Julia computing ecosystem, especially leveraging the packages DifferentialEquations.jl [25] and DiffEqFlux.jl [8]. The ODE solver is an adaptive fourth-order Runge-Kutta integrator. For each optimization task, we use Adam for 200 epochs with a learning rate of 0.001 followed by the BFGS optimizer until converged (marked by small relative change in the loss between training epochs).
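
For illustration only, the two-stage optimization loop could be sketched in Python roughly as follows (the reference implementation uses DifferentialEquations.jl and DiffEqFlux.jl); synthetic traces stand in for the experimental runs, L-BFGS stands in for BFGS, and all sizes are placeholders.

```python
import torch
from torchdiffeq import odeint

class RHS(torch.nn.Module):
    """Stand-in lumped-parameter right-hand side with a learnable heat-input surrogate."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Linear(1, 10), torch.nn.SiLU(),
                                       torch.nn.Linear(10, 1))
        self.log_C = torch.nn.Parameter(torch.zeros(1))

    def forward(self, t, T):
        q_in = self.net(T.unsqueeze(0)).reshape(1)
        return (q_in - (T - 23.0)) / self.log_C.exp()

# Synthetic first-order-rise traces standing in for the N_i experimental runs.
t = torch.linspace(0.0, 60.0, 121)
runs = [(t, 23.0 + 600.0 * (1.0 - torch.exp(-t / 10.0))) for _ in range(4)]

model = RHS()

def total_loss():
    loss = torch.zeros(())
    for t_i, y_i in runs:                     # Eq. 11: solve the i-th IVP at the co-location points
        y_hat = odeint(model, y_i[:1], t_i, method='rk4').reshape(-1)
        loss = loss + ((y_hat - y_i) ** 2).mean()
    return loss / len(runs)                   # Eq. 10 (mean over runs)

adam = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                          # Adam warm-up phase
    adam.zero_grad()
    total_loss().backward()
    adam.step()

lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=100)
def closure():
    lbfgs.zero_grad()
    loss = total_loss()
    loss.backward()
    return loss
lbfgs.step(closure)                           # quasi-Newton refinement phase
```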
### Low-data Training and Validation
Seven FSP experiments were recorded (i.e. having full temperature time histories) with consistent material selections and similar dwell times. Given this limited amount of data, and with a goal of training an interpretable and generalizable model, we employ a data augmentation strategy and a low-data model validation strategy.
A risk in modeling a low number of unique time histories is that the model can 'memorize' the shape of the traces and/or their locations in time. In addition to the lumped parameter prior built into the model by construction, we augment the data with additional copies of the training data that have been randomly shifted in time. In this manner, the model is
Figure 3: Upon successful training of the neural lumped parameter model (Eq. 7), the learned surrogate for internal heat generation can be queried to give an estimate of the mapping between input power, temperature, and resultant heat generation.
encouraged to learn the appropriate input and output energy physics during the tool plunge and dwell stages.
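
A minimal sketch of this augmentation (with an illustrative shift range and copy count) is:

```python
import numpy as np

def augment_with_time_shifts(t, y, n_copies=3, max_shift=30.0, seed=0):
    """Create randomly time-shifted copies of a recorded trace so the model cannot
    memorize absolute positions in time; the control signal should be shifted by the
    same offset when it is used as a model input."""
    rng = np.random.default_rng(seed)
    augmented = [(t, y)]
    for _ in range(n_copies):
        shift = rng.uniform(0.0, max_shift)
        augmented.append((t + shift, y))
    return augmented
```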
For model reporting, we performed a 10-fold shuffle-split training strategy. In each iteration, four training trajectories were sampled from the set of seven and the remaining three were withheld as a test set.
### Results
Table 1 summarizes the results of the model training and reporting tasks. An example reproduction of time series data for a particular run is shown in Fig. 2 with weights corresponding to Trial #1. A comprehensive set of results showing the qualitative differences between the 'best' model and the 'worst' model (as judged by the magnitude of the test loss) is listed in the Appendix.
In all training scenarios, the model was able to reproduce the characteristic first-order rise of temperature seen in the experiments. Furthermore, as evidenced by the results in Table 1, the learned capacitance values are all of the same order, with many converging to a similar value. Figure 3 depicts the response surface of the learned surrogate model (trained neural network) mapping temperature (horizontal axis) and input power (vertical axis) to internal heat generation (color in the heatmap).
\begin{table}
\begin{tabular}{l c c c c c c c c c c} Trial No. & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline Train & 820. & 351. & 227. & 704. & 286. & **115.** & 199. & 321. & 655. & 160. \\ Test & **974** & 3110 & 4730 & 1020 & 2000 & 1520 & 1320 & 1260 & 1160 & 1260 \\ \(C\) & 3.97 & 3.93 & 3.44 & 2.93 & 3.94 & 17.4 & 18.1 & 3.75 & 3.87 & 6.10 \\ \end{tabular}
\end{table}
Table 1: Summary of training task. _Train_ and _Test_ magnitudes correspond to the loss function (Eq. 10). \(C\) is the identified capacitance for the assumed lumped volume for each trial.
Figure 4: The system identification task shown in Fig. 2 allows for investigation of different model-based control strategies, e.g. Model Predictive Control (MPC). In (a), the goal is to reach and hold a set temperature of 700 Celsius. In (b), an optimization-obtained control input (achieved through power input at the friction-stir welding tool) is shown for a system with a capped maximum power input of 4 kW.
## 6 Control
A trained model of the true FSP temperature response to power input is immediately useful for informing operation of the tool during processing. The learned model parameters can be held fixed and control inputs can then be learned to achieve target outcomes. We demonstrate these techniques by constructing a power profile for the initial plunge stage such that temperature rises smoothly to a target set point. The trained lumped-parameter model is only assumed to be valid in the plunge and dwell stages. A smooth transition between an imposed power profile and temperature feedback control (e.g. PID) can occur for subsequent stages. Construction of a data-driven control in this manner has the potential to reduce overshoot and settling times to target set temperatures.
### Model Setup
Beginning from the model of Eq. 7, we replace the power signal \(P(t)\) with a second feed-forward neural network mapping time to power; i.e. \(P_{0}\cdot NN(t;\phi):\mathbb{R}^{1}\rightarrow(0,P_{0})\), with \(P_{0}\) representing the maximum power. The neural network has two hidden layers with 5 nodes each, 'swish'-activated, with a sigmoidal output layer. With all other model parameters fixed, we train this second neural network based on the loss function:
\[\mathcal{L}=\left\|T_{set}-T\right\|_{2}^{2}, \tag{12}\]
that is, minimizing the difference between the model trajectory and the reference trajectory. The training procedure uses the Adam optimizer for 200 epochs and then the BFGS optimizer for fine tuning until converged.
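
A rough Python sketch of this optimization (with an illustrative stand-in for the frozen lumped-parameter model and arbitrary constants) is given below.

```python
import torch
from torchdiffeq import odeint

P0, T_set, C, h, T_sink = 4000.0, 700.0, 5.0, 1.0, 23.0   # illustrative constants

power_net = torch.nn.Sequential(             # time -> power, bounded to (0, P0)
    torch.nn.Linear(1, 5), torch.nn.SiLU(),
    torch.nn.Linear(5, 5), torch.nn.SiLU(),
    torch.nn.Linear(5, 1), torch.nn.Sigmoid())

def rhs(t, T):
    P = P0 * power_net(t.reshape(1, 1)).reshape(1)        # learned control input
    q_in = 0.9 * P                                        # stand-in for the frozen heat-input surrogate
    return (q_in - h * (T - T_sink)) / C

t = torch.linspace(0.0, 300.0, 151)
opt = torch.optim.Adam(power_net.parameters(), lr=1e-2)
for _ in range(200):                                      # Adam phase (BFGS refinement omitted)
    opt.zero_grad()
    T_traj = odeint(rhs, torch.tensor([T_sink]), t, method='rk4').reshape(-1)
    loss = ((T_set - T_traj) ** 2).mean()                 # Eq. 12
    loss.backward()
    opt.step()
```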
### Results
Figure 4 depicts an optimized solution to Eq. 12 for a set temperature of 700 Celsius and a maximum power input of 4 kW. After an initial steep rise in power over an approximately 60-second period, the power level steadies to the constant value required for the steady-state input and output energy balance.
## 7 Discussion and Conclusions
This paper presented a methodology for modeling characteristic first-order-rise temperature time series from FSP experiments performed on 316L steels. Consistent with engineering heat transfer modeling techniques, we extended the notion of the lumped-parameter method to include tunable physics to be learned from data directly. The regression task was built within the _Universal Differential Equation_ paradigm, where one fits a dynamical system to data. In this work, we construct the model dynamical system to contain the same input and output physics present in the friction-stir welding experiment, namely internal heat generation and heat transfer away from the weld region. The lumped-capacitance abstraction is sufficient to capture essential physics for reproducing behavior seen in time series data (Fig. 2). Because the model is constructed with constituent energy pathways, the model is interpretable in that one can use the learned closure (mapping from tool power and temperature to heat input) for inference tasks; i.e. predictive control as shown in Fig. 4.
As presented, the learned models do not generalize beyond this specific experimental setup, including material choices. A natural extension of our work is to include parametric dependence of the experiments during training of the models. For example, including material density, specific heat capacity, and heat transfer coefficients can help craft a higher-fidelity response surface (Fig. 3 ) tailored to a specific experimental campaign. These higher-dimensional parameterizations can help dramatically reduce the need for costly high-resolution grid-search strategies in experiments.
## Acknowledgements
The research described in this paper is part of the Materials Characterization, Prediction, and Control agile investment at Pacific Northwest National Laboratory. It was conducted under the Laboratory Directed Research and Development Program at PNNL, a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy under contract DE-AC05-76RL01830.
## Author Contributions
The authors confirm contribution to this manuscript as follows: study conception and design: J. Koch, W. Choi, E. King; analysis: J. Koch; data creation: D. Garcia, H. Das, T. Wang, K. Ross, K. Kappagantula; data curation: W. Choi; draft preparation: J. Koch. All authors reviewed the manuscript and approved its final version.
|
2305.07343 | On the consistency of relative facts | Lawrence et al. have presented an argument purporting to show that "relative
facts do not exist" and, consequently, "Relational Quantum Mechanics is
incompatible with quantum mechanics". The argument is based on a GHZ-like
contradiction between constraints satisfied by measurement outcomes in an
extended Wigner's friend scenario. Here we present a strengthened version of
the argument, and show why, contrary to the claim by Lawrence et al., these
arguments do not contradict the consistency of a theory of relative facts.
Rather, considering this argument helps clarify how one should not think about
a theory of relative facts, like RQM. | Eric G. Cavalcanti, Andrea Di Biagio, Carlo Rovelli | 2023-05-12T09:41:40Z | http://arxiv.org/abs/2305.07343v2 | # On the consistency of relative facts
###### Abstract
Lawrence _et al._ have presented an argument purporting to show that "relative facts do not exist" and, consequently, "Relational Quantum Mechanics is incompatible with quantum mechanics". The argument is based on a GHZ-like contradiction between constraints satisfied by measurement outcomes in an extended Wigner's friend scenario. Here we present a strengthened version of the argument, and show why, contrary to the claim by Lawrence _et al._, these arguments do not contradict the consistency of a theory of relative facts. Rather, considering this argument helps clarify how one should _not_ think about a theory of relative facts, like RQM.
In [1], Lawrence, Markiewicz, and Zukowski present an argument meant to show that "relative facts do not exist" and "Relational Quantum Mechanics (RQM) [2; 3; 4] is incompatible with Quantum Mechanics". See also [5; 6]. Here we show why their conclusion is not warranted. We also present a strengthened version of the argument and argue that, although these arguments do not establish the inconsistency of relative facts, they nonetheless help clarify how one should _not_ think about a theory of relative facts, like RQM.
The authors consider an extended Wigner's friend version of a GHZ-type scenario. A system \(S\) formed by three qubits \((S_{1},S_{2},S_{3})\) is prepared in a GHZ state [7]:
\[|\mathrm{GHZ}\rangle=\frac{1}{\sqrt{2}}\left(|000\rangle+|111\rangle\right). \tag{1}\]
A triple of systems \((A_{1},A_{2},A_{3})\), considered as observers, respectively measure a fixed observable for each of the three qubits and obtain outcomes \((\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3})\). Subsequently, a second triple of observer systems \((B_{1},B_{2},B_{3})\) respectively measures one observable for each pair of systems \((S_{i},A_{i})\), \(i=1,2,3\), and obtains outcomes \((\mathcal{B}_{1},\mathcal{B}_{2},\mathcal{B}_{3})\).
Let us emphasise that, although the notation of [1] does not make this clear, in RQM these outcomes have a value relative to each observer being considered, but not necessarily relative to every observer. In other words, they are relative facts.
The observables are chosen in [1] to parallel the proof of the GHZ theorem [7] against the existence of local hidden variables. The authors of [1] claim that the resulting set of six quantities \(\{\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3},\mathcal{B}_{1},\mathcal{B }_{2},\mathcal{B}_{3}\}\) must satisfy the four incompatible GHZ constraints
\[(\mathrm{i}):\quad\mathcal{B}_{1}\mathcal{B}_{2}\mathcal{B}_{3}=1,\] \[(\mathrm{ii}):\quad\mathcal{B}_{1}\mathcal{A}_{2}\mathcal{A}_{3}=-1,\] \[(\mathrm{iii}):\quad\mathcal{A}_{1}\mathcal{B}_{2}\mathcal{A}_{3}=-1,\] \[(\mathrm{iv}):\quad\mathcal{A}_{1}\mathcal{A}_{2}\mathcal{B}_{3}=-1.\]
They conclude that this is an argument against the existence of the relative facts that RQM takes as its main ingredient.
Let us analyse this argument in detail. The authors of [1] argue that each of the above measurements can be described as a unitary interaction between the system being measured and the corresponding observer system (what they call an "RQM-measurement"). This is in agreement with RQM. However, in RQM, quantum states are only interpreted as _relative_ states in the sense of [8], and therefore any such unitary evolution is relative to a specific observer. In [1], the observer from whose perspective the unitary description is given is not made explicit. However, in RQM a state of a system is always a state relative to another system. Here we consider for concreteness the unitary description to be relative to an observer \(W\) ("Wigner") external to all of the systems considered above.
Using a notation only slightly different from that of [1], the measurement of the Pauli \(Y\) observable of \(S_{m}\) by \(A_{m}\) can be described as a unitary \(\hat{U}_{SA_{m}}\) such that, when \(S_{m}\) is prepared in a \(Y\)-eigenstate \(|l^{y}\rangle_{S_{m}}\) (\(l\in\{\pm 1\}\)) and \(A_{m}\) is initially in a "ready" state \(|R\rangle_{A_{m}}\), we have
\[\hat{U}_{SA_{m}}\left(|l^{y}\rangle_{S_{m}}|R\rangle_{A_{m}}\right)=|l^{y} \rangle_{S_{m}}|l^{y}\rangle_{A_{m}}\,. \tag{2}\]
Ref. [1] considers then an entangling measurement by \(B_{m}\) on the joint system \(S_{m}\otimes A_{m}\). For simplicity of exposition, here (following [9]) we consider instead a measurement that consists of first applying the inverse unitary \(\hat{U}_{SA_{m}}^{\dagger}\), and then \(B_{m}\) proceeding to measure the system \(S_{m}\) directly on the Pauli \(X\) basis. This procedure leads to the same statistics.
Using this, Wigner describes the measurement by \(B_{m}\) as a unitary interaction \(\hat{U}_{SAB_{m}}=\hat{U}_{SB_{m}}\hat{U}_{SA_{m}}^{\dagger}\), where
\[\hat{U}_{SB_{m}}\left(|l^{x}\rangle_{S_{m}}|R\rangle_{B_{m}}\right)=|l^{x} \rangle_{S_{m}}|l^{x}\rangle_{B_{m}}\,. \tag{3}\]
The sequence of all measurements then can be represented from Wigner's perspective as follows,
(4)
where we have used
(5)
Now let us consider the constraints (i)-(iv) above, starting with (i). The authors of [1] describe the composite system \(B=B_{1}\otimes B_{2}\otimes B_{3}\) as a single observer "\(B\)", and similarly for \(A=A_{1}\otimes A_{2}\otimes A_{3}\). But this is not necessarily coherent with RQM.
Let us first consider the three systems \(B_{m}\) as separate observers. The outcome \(\mathcal{B}_{m}\) has a value relative to observer \(B_{m}\), but can we say that the product of these outcomes should satisfy (i)? In RQM, as quoted in [1], "it is meaningless to compare events relative to different systems, unless this is done relative to a (possibly third) system" and "comparisons can only be made by a (quantum-mechanical) interaction". Thus, before the observers \(B_{m}\) interact among themselves, or with a further observer, the constraint (i) has no meaning in RQM.
Let us then consider an interaction with Wigner, who measures each system \(B_{m}\) on its "pointer basis"--that is, the basis \(|l^{x}\rangle_{B_{m}}\) above, obtaining outcome \(\mathcal{B}_{m}^{W}\). It is easy to show that the statistics for the product of these three
measurements should correspond to the statistics for the product of three Pauli \(X\) measurements on the initial GHZ state. We can represent this process diagrammatically as follows
(6)
This satisfies
\[(\mathrm{i}^{\prime}):\quad\mathcal{B}_{1}^{W}\mathcal{B}_{2}^{W}\mathcal{B}_{3} ^{W}=1.\]
But this is not constraint (i), and in RQM we cannot infer constraint (i) from this constraint.
Similarly, constraint (ii) is not meaningful in RQM except relative to an observer that evaluates it. Let us then consider the situation where Wigner measures \(B_{1}\) as above, but this time measures systems \(A_{2}\) and \(A_{3}\) on their pointer bases before \(B_{2}\) and \(B_{3}\) do their measurements, obtaining outcomes \((\mathcal{B}_{1}^{W},\mathcal{A}_{2}^{W},\mathcal{A}_{3}^{W})\):
(7)
The statistics for the product of these three measurements correspond to the statistics for the product of Pauli measurements \(X_{1}Y_{2}Y_{3}\) on the initial GHZ state, thus satisfying
\[(\mathrm{ii}^{\prime}):\quad\mathcal{B}_{1}^{W}\mathcal{A}_{2}^{W}\mathcal{A} _{3}^{W}=-1.\]
But as before, this is not constraint (ii), and in RQM we cannot infer constraint (ii) from this constraint. A similar analysis holds for constraints (iii) and (iv).
We therefore conclude that _none_ of the constraints (i)-(iv) hold a priori in RQM, contrary to the claim by Ref. [1]. Only one of the four constraints (i')-(iv') can hold relative to Wigner, with the rest being meaningless. Each constraint corresponds to a different context, where Wigner makes a different triple of measurements. To be clear, Wigner can meaningfully predict, before choosing his measurements, that the following constraints hold as expectation values, if those measurements are performed by him:
\[(\mathrm{i}^{\prime\prime\prime}):\quad\langle\hat{X}_{1}\hat{X} _{2}\hat{X}_{3}\rangle_{W}=1,\] \[(\mathrm{ii}^{\prime\prime\prime}):\quad\langle\hat{X}_{1}\hat{Y} _{2}\hat{Y}_{3}\rangle_{W}=-1,\] \[(\mathrm{iii}^{\prime\prime\prime}):\quad\langle\hat{Y}_{1}\hat{X} _{2}\hat{Y}_{3}\rangle_{W}=-1,\] \[(\mathrm{iv}^{\prime\prime\prime}):\quad\langle\hat{Y}_{1}\hat{Y} _{2}\hat{X}_{3}\rangle_{W}=-1.\]
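
These four expectation values are straightforward to verify numerically for the GHZ state of Eq. (1); a short check (purely illustrative, not part of the argument) is:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)            # (|000> + |111>) / sqrt(2)

def expectation(ops):
    O = np.kron(np.kron(ops[0], ops[1]), ops[2])
    return np.real(ghz.conj() @ O @ ghz)

print(expectation([X, X, X]))   # +1, matching (i''')
print(expectation([X, Y, Y]))   # -1, matching (ii''')
print(expectation([Y, X, Y]))   # -1, matching (iii''')
print(expectation([Y, Y, X]))   # -1, matching (iv''')
```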
But when we write \(\mathcal{B}_{1}^{W}\), we are referring to a measurement outcome actually obtained by Wigner in a particular run. Of course, this value only exists (relative to Wigner) in the runs where Wigner performs that measurement. Following Peres [10], "unperformed measurements have no outcomes".
On the other hand, one may ask: isn't it the case that all the six quantities \(\{\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3},\mathcal{B}_{1},\mathcal{B }_{2},\mathcal{B}_{3}\}\) refer to _performed_ measurements? Don't they all have a value in each run of the experiment, then? And if so, shouldn't those values obey the constraints (i)-(iv)?
This is a subtle point. The key is that although those measurements are all performed _by some observer_ in each run of the experiment, there is no observer relative to whom they all take co-existing values. One may invoke the "cross-perspective link" [11] to conclude that _if_ Wigner performs one of the six measurements above (say if he observes outcome \(\mathcal{B}_{1}^{W}\)), _then_ he can conclude that \(\mathcal{B}_{1}^{W}=\mathcal{B}_{1}\). If he observes the triple \(\mathcal{B}_{1}^{W}\mathcal{B}_{2}^{W}\mathcal{B}_{3}^{W}\), he should obtain values compatible with (i'), and therefore in that case he could conclude that constraint (i) holds for the values observed by \(B_{1}\), \(B_{2}\) and \(B_{3}\). One cannot however simply _define_ an observer \(B=B_{1}\otimes B_{2}\otimes B_{3}\) relative to which constraint (i) holds, if there is no interaction involving those three systems after their measurements take place. A similar argument can be made to conclude that _if_ Wigner performs the measurements corresponding to one of the other constraints (ii')-(iv'), _then_ the corresponding constraint (ii)-(iv) holds. But this does not allow us to infer that all four constraints must be a priori satisfied.
We close with a general philosophical consideration. As repeatedly stated in the original papers, RQM does not necessarily require a commitment to a specific philosophy. However, it does highlight the cost that quantum mechanics puts upon different metaphysical options. The scenario analysed here is a good example. Different philosophical attitudes can be considered, with respect to the metaphysical status of the list \(\{\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3},\mathcal{B}_{1},\mathcal{ B}_{2},\mathcal{B}_{3}\}\). One possibility is the choice of declaring it part of reality, even if no observer has simultaneous access to all of those values (see [11]). The "cost" of this option is that reality, so defined, violates a number of features that we commonly expect it to respect [9; 12; 13; 14] - an assignment of values to all of those quantities amounts to an assumption of "Absoluteness of Observed Events", implying the rejection of at least one of the other premises of various no-go theorems [9; 13; 14]. Alternatively, one may choose a more radical relationalism, and assume that only assertions relative to a physical system are to be taken as meaningful statements about reality. In this case, the elements of the list are part of reality relative to each observer making those measurements, but the complete list is not part of reality, because there is no observer relative to which all of those observables take co-existing values.
***
## Acknowledgments
EGC acknowledges support from grant number FQXi-RFP-CPW-2019 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation (EGC), and an Australian Research Council (ARC) Future Fellowship FT180100317 (EGC). ADB and CR acknowledge support of the ID# 61466 grant from the John Templeton Foundation, as part of the "Quantum Information Structure of Spacetime (QISS)" project (qiss.fr). EGC acknowledges the traditional owners of the land at Griffith University on which this work was undertaken, the Yuggera and Yugambeh peoples.
|
2304.10516 | Distributed Neural Representation for Reactive in situ Visualization | Implicit neural representations (INRs) have emerged as a powerful tool for
compressing large-scale volume data. This opens up new possibilities for in
situ visualization. However, the efficient application of INRs to distributed
data remains an underexplored area. In this work, we develop a distributed
volumetric neural representation and optimize it for in situ visualization. Our
technique eliminates data exchanges between processes, achieving
state-of-the-art compression speed, quality and ratios. Our technique also
enables the implementation of an efficient strategy for caching large-scale
simulation data in high temporal frequencies, further facilitating the use of
reactive in situ visualization in a wider range of scientific problems. We
integrate this system with the Ascent infrastructure and evaluate its
performance and usability using real-world simulations. | Qi Wu, Joseph A. Insley, Victor A. Mateevitsi, Silvio Rizzi, Michael E. Papka, Kwan-Liu Ma | 2023-03-28T03:55:47Z | http://arxiv.org/abs/2304.10516v2 | # Distributed Neural Representation for Reactive in situ Visualization
###### Abstract
_In situ visualization and steering of computational modeling can be effectively achieved using reactive programming, which leverages temporal abstraction and data caching mechanisms to create dynamic workflows. However, implementing a temporal cache for large-scale simulations can be challenging. Implicit neural networks have proven effective in compressing large volume data. However, their application to distributed data has yet to be fully explored. In this work, we develop an implicit neural representation for distributed volume data and incorporate it into the DIVA reactive programming system. This implementation enables us to build an in situ temporal caching system with a capacity 100 times larger than previously achieved. We integrate our implementation into the Ascent infrastructure and evaluate its performance using real-world simulations._
## 1 Introduction
Recent advances in volume compression have highlighted the effectiveness of using neural networks for the implicit representation of large-scale 3D data [1]. Such a neural representation offers several advantages. Firstly, it allows for significant reductions in data size by several orders of magnitude while preserving high-frequency details. Secondly, it permits direct access to values without the need for decompression. Thirdly, it enables access to spatial locations at arbitrary resolutions. The latest developments in this field have also enabled lightning-fast training [16] and high-fidelity interactive volume rendering [20] of neural representations. These advantages make implicit neural representation a promising technique for handling large-scale volume data.
State-of-the-art scientific simulations running on massively parallel supercomputers generate data at rates and volumes that often cannot be fully transferred, stored, and processed. In situ data reduction and visualization, executing in tandem with the simulation, is an increasingly employed approach to this data problem. The in situ processing tasks are often scheduled based on predefined reactive triggers [17] and programmed through a reactive programming system such as DIVA [20].
For studies such as causality analysis, key insights often lie inside the data precipitating an event. Thus, it is necessary to temporarily cache data in memory for possible later visualization and analysis. Nonetheless, in practice, implementing an efficient temporal caching mechanism can be challenging due to the large size of the simulation data. A neural representation of volume data is a great candidate for addressing this challenge, but its application to distributed data generated in parallel environments has yet to be fully explored, which is crucial for large-scale parallel distributed simulations.
In this work, we address this gap by developing an efficient neural representation for large-scale distributed volume data. We use this distributed neural representation to implement an efficient in situ temporal caching algorithm for the DIVA reactive programming system. We also integrate DIVA into the Ascent infrastructure [1], enabling efficient temporal data caching in any Ascent-compatible simulations. This effectively facilitates the use of declarative and reactive programming languages for complex adaptive workflows in these simulations. The results of our implementation are demonstrated and evaluated using two real-world in situ simulations. We plan to open-source our implementation at _omitted for review_.
## 2 Related Work
In this related work section, we delve into the relevant research of three crucial areas related to our presented work. We begin by providing an overview of in situ visualization and in situ triggers, followed by a review of reactive programming for adaptive in situ workflows. Lastly, we focus on the latest advancements in deep learning for volume compression. Our aim is to present a comprehensive overview of each area and inform the contributions of our presented work.
### In Situ Visualization and In Situ Triggers
The field of in situ visualization has seen a growing interest in automating the identification of crucial regions for analysis, visualization, and storage. To achieve this, researchers use "in situ triggers", defined as Boolean indicator functions, to characterize data features [1]. These triggers can be either domain-specific [1] or domain-agnostic [11]. Larsen et al. [12] made a significant contribution by introducing the first general-purpose interface for creating in situ triggers in the Ascent infrastructure [1], simplifying the development process. Additionally, Larsen et al. [11] proposed methods to evaluate the viability of trigger-based analysis in dynamic environments. The DIVA framework [13] enhances the usability of in situ triggers by enabling reactive programming. It can automatically generate fine-grained in situ triggers and optimize workflow performance based on user-specified data dependencies and high-level constraints.
### Reactive Programming Models for in situ Visualization
Visualization frameworks commonly use dataflow programming to construct visualization workflows [15]. These frameworks represent workflows as pipelines or directed graphs, where nodes stand for low-level visualization components and data is processed hierarchically. Although the dataflow model provides good flexibility in visualization systems, it can have a complex syntax and limited support for time-varying data. Domain-Specific Languages (DSLs) offer a specialized grammar tailored to specific domains [1, 10, 11, 12]. They enhance usability by providing a large number of domain-specific built-in functions and abstractions that hide the underlying execution models. However, they often lack critical features for highly adaptive workflows.
Reactive programming is a paradigm that models time-varying values, known as signals, as first-class objects, providing a more intuitive and flexible way of managing dynamic data. Classical reactive programming languages, such as those described in [1], offer elegant semantics for directly manipulating continuous time-varying values. However, they can have a high memory footprint and long computation times due to the potential unlimited length of signal streams. To address this issue, event-driven reactive programming makes the assumption that signals are all discrete, which reduces memory footprints and makes it more suitable for intensive event-driven applications like interactive visualization [1, 13]. The DIVA system, presented in Wu et al. [13], first introduced reactive programming to the field of in situ visualization. The system's declarative programming interface, comparable to a DSL, is user-friendly and expressive, allowing users to directly specify their visualization designs while focusing on scientific relationships, instead of technical implementation details. Despite its many benefits, the DIVA system does not resolve the limitations of data caching in large-scale scientific simulations where volume fields are often too large to cache in memory for future processing.
Outside the domain of reactive programming, there have been various studies on efficiently caching temporal data. Hardware-based solutions, such as non-volatile RAMs and burst buffers, have been utilized by Demarle et al. [1] in ParaView [1] through the implementation of a sliding buffer. However, the performance of hardware-based methods is contingent upon the hardware itself. As an alternative, software-defined approaches, such as volume compression, are also available. However, most volume compression algorithms necessitate decompression prior to accessing volume values. In this work, we focus on deep learning-based methods to address these limitations.
### Deep Learning for Volume Compression
Several deep learning techniques have been explored to compress volume data. Jain et al. [12] proposed an encoder-decoder network, while Wurster et al. [14] utilized a hierarchy of generative adversarial networks. The use of super-resolution neural networks can help conserve storage space or improve simulation efficiency [1, 13, 14, 15, 16]. Lu et al. [11] suggested using implicit neural representations, although this method demands a time-consuming training process. Weiss et al. [13] and Wu et al. [13] improved the efficiency of the technique by incorporating grid-based encoding, speeding up both training and inference. Wu et al.'s [13] wavefront rendering algorithm enabled real-time visualization of implicit neural representations for the first time. Their work serves as the basis for this project. Additionally, for handling complex time-varying volumes, Han et al. [15] proposed an implicit neural network design. For high-resolution sparse volumes, Doyub et al. [1] combined implicit neural representation with the OpenVDB framework.
## 3 Distributed Neural Representation (DNR)
Implicit neural representation (INR) is a neural network that is capable of directly approximating a volume field. It takes a spatial coordinate \((x,y,z)\) as input and outputs a value \(\mathbf{v}\) corresponding to the volume field value at that coordinate, as represented by the equation:
\[\Phi:\mathbb{R}^{3}\rightarrow\mathbb{R}^{D},\ (x,y,z)\mapsto\Phi(x,y,z)= \mathbf{v}. \tag{1}\]
Here, \(\mathbf{v}\) can be a scalar (\(D=1\)), 3-dimensional vector (\(D=3\)), or high-dimensional vector (\(D>3\)), depending on the specific problem. Our base INR architecture comprises a multi-resolution hash encoding layer [14] and a small multilayer perceptron (MLP) network. The Rectified Linear Unit (ReLU) activation function is used in the hidden layers of the MLP.
Figure 1: The architecture of our distributed neural representation.
The neural network is trained by sampling input coordinates uniformly within the volume bounding box and computing the corresponding target values through appropriate interpolation methods utilizing reference data. To enhance network stability and accuracy, normalization of both input coordinates and output values to the range \([0,1]\) is performed.
During inference, the neural network can output \(\mathbf{v}\) on-demand for any arbitrary coordinate within the volume domain, with low memory footprint and low computational cost. In certain cases, it may be necessary to decode the neural representation back to its original grid-based representation, making the technique compatible with existing visualization and analysis toolkits.
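
A simple way to perform such decoding is to evaluate the network on the vertices of a regular grid; a sketch (assuming a PyTorch model that maps normalized \([0,1]^3\) coordinates to scalar values; names and sizes are illustrative) is shown below.

```python
import torch

@torch.no_grad()
def decode_to_grid(model, resolution=(128, 128, 128), batch=2**18):
    """Evaluate an implicit neural representation on a regular grid so the result
    can be handed to conventional grid-based visualization and analysis tools."""
    axes = [torch.linspace(0.0, 1.0, r) for r in resolution]
    zs, ys, xs = torch.meshgrid(axes[2], axes[1], axes[0], indexing="ij")
    coords = torch.stack([xs, ys, zs], dim=-1).reshape(-1, 3)      # (x, y, z) rows, z-major order
    values = torch.cat([model(coords[i:i + batch])
                        for i in range(0, coords.shape[0], batch)])
    return values.reshape(resolution[2], resolution[1], resolution[0])
```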
In the subsequent sections, we will demonstrate how this technique is adapted for use in distributed scenarios.
### Design of Distributed Neural Representation
We have devised a decentralized methodology for constructing INRs for distributed volume data. Our methodology involves creating a standard INR network on each MPI rank and training it using local data partitions. This results in the generation of numerous networks distributed across multiple ranks. To perform data analysis and visualization, we leverage standard parallel computing techniques. Our approach significantly reduces the need for data communication between ranks during the network training process, thus leading to a reduction in latency and improved performance. An overview of our technique is provided in Figure 1. In Figure 2, we highlight volume rendering results of neural representations with different optimization levels.
To maintain the continuity of the neural representation across partition boundaries during training, two techniques are utilized. Firstly, ghost regions are integrated into the training process and all networks are optimized to a consistent level of accuracy, aligning with the requirements of most existing distributed visualization algorithms. This is expected to result in highly scalable behavior as the training process can be executed independently on each rank, without the need for data communication. Secondly, a weighted loss term that prioritizes the continuity of the neural representation at partition boundaries is incorporated. The loss function takes the form of:
\[L_{\text{Total}}=(1-\lambda)\,L_{1}(X_{\text{Uniform}},Y_{\text{Uniform}})+ \lambda\,L_{1}(X_{\text{Bound}},Y_{\text{Bound}}), \tag{2}\]
Figure 3: This plot depicts the impact of the weighting factor \(\lambda\) on boundary connectivity and overall reconstruction quality. The blue curve represents the average image PSNR of two boundary slices relative to the ground truth slice, while the orange curve illustrates the average volume PSNR of two partitions relative to the ground truth volume. This plot is created using the S3D flow field data.
Figure 2: We compared the rendering of our distributed neural representations using varying numbers of training steps. The data was distributed to two MPI ranks and trained using two NVIDIA A100-40G GPUs on the ALCF Polaris supercomputer. Partition boundaries were highlighted using white lines in A) and B). C) are zoomed views of A) near partition boundaries. In 1C), an obvious discontinuity is visible at the partition boundary. With more training steps in 2), the discontinuity becomes less obvious, but high frequency noises are still visible. However, in 3), with sufficient training steps, these artifacts are no longer visible. We used flow field data generated from the S3D simulation [1] for this experiment.
where \(X_{\text{Uniform}}\) and \(Y_{\text{Uniform}}\) are the reference and predicted volume values obtained from uniformly sampling the data, and \(X_{\text{Bound}}\) and \(Y_{\text{Bound}}\) are the ground truth and predicted volume values at the partition boundaries. \(\lambda\) is a weighting factor that controls the influence of the boundary term on the overall loss.
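
A self-contained sketch of one rank's training step with this combined loss (using a synthetic analytic field and a plain MLP as stand-ins for the local data partition and the actual encoder-MLP network) is given below.

```python
import torch

def reference_field(x):                       # synthetic stand-in for the local data partition
    return torch.sin(6.28 * x[:, :1]) * torch.cos(6.28 * x[:, 1:2]) * x[:, 2:3]

def sample_uniform(n):                        # uniform samples over the normalized local domain
    x = torch.rand(n, 3)
    return x, reference_field(x)

def sample_boundary(n):                       # samples pinned to a partition-boundary face
    x = torch.rand(n, 3)
    x[:, 0] = (torch.rand(n) < 0.5).float()
    return x, reference_field(x)

model = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.5

for step in range(1000):
    xu, yu = sample_uniform(4096)
    xb, yb = sample_boundary(512)
    loss = (1.0 - lam) * torch.nn.functional.l1_loss(model(xu), yu) \
         + lam * torch.nn.functional.l1_loss(model(xb), yb)          # Eq. 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```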
To maximize the use of the input range of each local neural network, a global coordinate is first converted into the relative coordinate of a data partition and then normalized to the range of \([0,1]^{3}\). To ensure consistent neural network optimizations, different data partitions are also normalized using the same maximum and minimum values.
In Figure 3 we perform a parameter search to study the effectiveness of the weighting factor \(\lambda\). We found that the existence of the boundary connectivity loss can significantly increase the data accuracy across the partition boundary. However, as the weight increases, we see a diminishing effect and a negative impact on the overall reconstruction quality. We believe that the sweet spot for \(\lambda\) is 0.5 and we use this value in this paper. In Figure 4 we show a more detailed comparison between \(\lambda=0.5\) and \(\lambda=0.0\) (i.e., no boundary connectivity loss).
### Implementation
We implemented the training system in PyTorch [19] and leveraged GPUs to accelerate the training process. To further optimize the implementation, we utilized the PyTorch API of TinyCUDA-NN [18] for the multi-resolution hash encoding and multi-layer perceptrons.
Our system employs a 16-level hash-grid encoding layer with 4 features per level. Each level uses a hash table of \(2^{19}\) entries, and the base level resolution is 4 with a scaling factor of 2. The multi-layer perceptron consists of 4 hidden layers with 64 neurons each. As mentioned before, ReLU activation functions are applied to all hidden layers. The output layer does not use an activation function.
We also utilized the Adam optimizer to train the neural network, with hyperparameters \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\), to optimize for stability and convergence. To combat overfitting and improve convergence, we employed a learning rate schedule that starts at 1e-2 and decays by a factor of 0.8 every 500 steps. Additionally, we used the loss function in Eq. 2 with the weighting factor \(\lambda=0.5\).
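
Assuming the tiny-cuda-nn PyTorch bindings and a CUDA-capable GPU are available, the configuration described above could be expressed roughly as follows.

```python
import torch
import tinycudann as tcnn   # PyTorch bindings of tiny-cuda-nn (assumed available)

encoding_config = {
    "otype": "HashGrid", "n_levels": 16, "n_features_per_level": 4,
    "log2_hashmap_size": 19, "base_resolution": 4, "per_level_scale": 2.0,
}
network_config = {
    "otype": "FullyFusedMLP", "activation": "ReLU", "output_activation": "None",
    "n_neurons": 64, "n_hidden_layers": 4,
}
model = tcnn.NetworkWithInputEncoding(3, 1, encoding_config, network_config)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.8)
# scheduler.step() is called once per training step so the learning rate decays
# by a factor of 0.8 every 500 steps.
```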
## 4 DIVA and Ascent Integration
In this work, we first integrate the distributed neural representation technique into the DIVA reactive programming system and employ it to cache large volume fields. Then, we incorporate DIVA into Ascent, facilitating reactive programming in real-world in situ applications. This section presents details about our implementations.
### Neural Representation Integration in DIVA
Classical reactive programming implementations guarantee unrestricted access to the history of time-varying values, which poses a significant challenge as it may lead to impractical data storage for some applications [1]. In order to address this challenge, DIVA [18] mandates explicit specification of the maximum number of time-steps to cache when defining related variables. Exceptions will be thrown if the system is unable to cache all the required time-steps, thereby limiting the expressiveness of reactive programming when dealing with volume fields. To overcome this limitation, we have incorporated DNR for efficient data caching. In the subsequent section, we present our implementation approach, specifically focusing on the methodology of constructing and inferring neural representations.
#### 4.1.1 Constructing Neural Representation
We have developed a specialized construct called the _neural volume_ object in DIVA, which encapsulates volume fields. Creating a _neural volume_ object initiates the training of a DNR in PyTorch, which continues until the user-defined accuracy criterion (such as a PSNR target) is reached. Once the training concludes, access to the original volume field is revoked, and the learned neural network parameters are cached in system RAM for efficient storage and retrieval.
Figure 4: This figure compares the boundary slices of two neighboring partitions with two weighting factors, \(\lambda\). A) displays histograms of the pixel values and their differences between the slices. B) shows pseudo-color plots of both slices and their differences. Both neural representations are trained for 10,000 steps. The S3D flow field data was used in this experiment.
Notably, the _neural volume_ object is implemented as a _pure global_ entity in DIVA, which means that it is referentially transparent. However, the instantiation of the object may necessitate data synchronization across multiple parallel ranks. This attribute enables DIVA to streamline the workflow by avoiding the construction of a DNR if it is not ultimately accessed by any trigger actions on any ranks. If a DNR is generated, DIVA automatically synchronizes the metadata and training process to ensure accuracy.
DIVA is implemented in C++. To facilitate seamless integration with PyTorch, we leverage the pybind11 library [16] to create bindings between C++ and Python. Specifically, we maintain an embedded Python interpreter inside C++, which is initialized at the start of the simulation. Using the Python interpreter, we construct DNR models and execute DNR training scripts. It is worth noting that the Python-side models and scripts are unaware of distributed data parallelism. Instead, all synchronization and data communication operations are handled on the C++ side through DIVA. This approach allows us to leverage PyTorch's powerful machine learning capabilities while still maintaining the efficiency and scalability of our DIVA codebase.
To prevent data duplication, we utilize zero-copy numpy array views for passing volume data to Python scripts. To optimize memory usage during training, we generate training data samples on-demand using a custom volume sampler that supports backends for scalar and vector fields, as well as all volume mesh types used in Ascent. For uniform and rectilinear meshes, we provide a native data sampler that generates samples directly on the GPUs. In the case of more complex unstructured meshes, we implement data sampling using VTKm [16], which currently necessitates multiple data transfers between the CPU and the GPU for each training step due to the absence of a direct GPU API. To solve this inefficiency, we offer the option of resampling unstructured meshes into a uniform mesh. The optimization of our VTKm-based data samplers is left as future work.
#### 4.1.2 Accessing Neural Representation
To access the _neural volume_ in subsequent sessions, we reconstruct the DNR network using PyTorch and calculate the required data values through inference. We provide two inference strategies in our implementation to ensure performance and compatibility. The neural network can be directly used as input for operations compatible with DNR, and the most appropriate inference method can be selected. For instance, our volume renderer utilizes the sample-streaming algorithm and the macro-cell acceleration structure proposed by Wu et al. [17] for optimization. Legacy operations like streamline tracers can decode the neural volume to a grid representation. We plan to offer DNR compatible routines for these operations in the future gradually.
#### 4.1.3 Window Operator
DIVA enables access to variables' histories through temporal _pure_ operators. Among them, the _window_ operator is one of the key operators. As shown in Listing 1, the _window_ operator accepts an input variable and an integer size. It allows users to transform a time-varying variable into a temporal array. The constructed temporal array (i.e., win) can be used just like a normal array (e.g., lst) and can be used with any array operators for visualization and analysis. Users can also define trigger-like filters to exclude values from unwanted timesteps, but this feature is not demonstrated in this example for the sake of simplicity.
The implementation of the _window_ operator uses DIVA's internal temporal data caching mechanism to store the input variable's values as the simulation progresses. As the number of cached values reaches the user-defined size, the earliest value gets removed
Figure 5: Visualization results generated by our DIVA-Ascent integration. A) Volume rendering of distributed neural representation generated using our neural rendering operator. B) Volume rendering of the ground truth data directly using the corresponding DIVA operator associated with Ascent’s renderer. C) Isosurfaces generated by the DIVA operator associated with Ascent’s contour filter.
Figure 6: The overview of our DIVA-Ascent integration.
to save space for new values. When applied to a volume field, the temporal data caching can quickly consume a lot of memory, causing scalability issues. Therefore, in this study, we have modified the _window_ operator to support _neural volumes_. This modification enables users to create windows that are about 100\(\times\) longer, thus overcoming the memory bottleneck.
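
The caching behavior itself can be illustrated with a few lines of Python (a simplified stand-in for the DIVA runtime's internal cache, not its actual API):

```python
from collections import deque

class TemporalWindow:
    """Sliding temporal cache: once full, the oldest entry is evicted for the newest.
    In the DNR-backed variant, each entry holds compact network weights instead of
    the raw volume field."""
    def __init__(self, size):
        self.entries = deque(maxlen=size)

    def push(self, timestep, value):
        self.entries.append((timestep, value))   # evicts the oldest item when full

    def values(self):
        return [v for _, v in self.entries]

window = TemporalWindow(size=40)
for step in range(100):
    window.push(step, f"weights@{step}")         # e.g. cached DNR parameters
print(len(window.values()))                      # 40: only the most recent steps remain
```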
#### 4.1.4 Distributed Volume Rendering
Volume rendering is one of the most commonly used operations in visualization, thus an optimized routine to render our distributed neural representation is provided. Our neural network architecture is independent of volume mesh structures, and as a result, only one rendering algorithm is needed. We leverage previous work done by Wu et al. [20] and construct a base renderer using their sample-streaming algorithm. We also use their macro-cell acceleration structure for adaptive sampling. Then we add a sort-last parallel rendering system to support distributed neural representations. The renderer does not require decoding the neural representation back to a grid representation, consequently producing a very small memory footprint. However, as inferring an implicit neural network is inevitably more expensive than directly sampling a 3D GPU texture, our renderer produces a higher rendering latency.
### DIVA Integration in Ascent
Ascent is a many-core capable flyweight in situ visualization and analysis infrastructure that many real-world multi-physics HPC simulations adopt [1]. To enable our method in real-world production environments directly, we also integrated DIVA into it. Figure 6 gives an overview of our implementation design. Major benefits include the ability to program event-driven in situ workflows using DIVA's declarative interface and reactive programming abstractions. This not only reduces the complexity of writing an event-driven in situ workflow, but also improves the workflow performance as the DIVA runtime can perform lazy-evaluation and avoid needless re-computations.
The integration maps all the important Ascent concepts to DIVA. For example, it exposes _fields_ as regular variables and enables them to be implicitly associated with a coordinate system and a topology. It also blurs the difference between _filters_, _pipelines_ and _expressions_, and implements all of them as DIVA operators. Internally, the runtime can map different operators back to the corresponding Ascent concept, construct a valid volume mesh using zero-copy operations, and create an Ascent _action_ to execute each individual operation. Using a customized zero-copy _extractor_, calculated results can be directly returned to the DIVA runtime and continue interacting with other components defined in the visualization workflow. The code snippet in Listing 2 illustrates how to use a contour filter with five uniform levels. Listing 3 illustrates how the same thing can be done in Ascent. Figure 5 displays the resulting images. As a result, this design choice makes DIVA more accessible.
Finally, our integration is very lightweight because it does not need additional adaptors in the simulation code. It can directly accept Conduit Blueprint mesh descriptions [10] and parse them to generate DIVA-specific data definitions. The only thing users have to do is call divaInitialize and divaFinalize at the beginning and end of the simulation, and then execute the DIVA workflow instead of Ascent actions in each visualization step. This makes it easy for any simulation that already uses Ascent to enable DIVA as an enhancing feature with minimal changes.
## 5 Case Studies
We conducted two case studies to effectively estimate the performance of our technique and implementation. Both cases were implemented in two production simulations, each with different mesh structures.
The first simulation, **CloverLeaf3D**, is a widely used open-source application for simulating multi-physics problems, particularly in computational fluid dynamics (CFD). It provides a straightforward and effective solution for simulating complex geometries and enables researchers to explore various physical phenomena, such as combustion, turbulence, and heat transfer. The simulation employs a rectilinear mesh, and we designed the first case study to evaluate the performance and scalability of our technique for scalar field visualizations.
The second simulation, **NekRS**, is a high-performance, scalable,
and parallel spectral element code for solving partial differential equations. It is being used in a broad range of scientific and engineering applications, including fluid mechanics, acoustics, and electromagnetics. The Taylor-Green Vortex (**TGV**) example is a specific benchmark utilized to evaluate NekRS' performance and scalability. The Taylor-Green vortex is a well-known analytical solution for the Navier-Stokes equations that describe fluid flow, and NekRS employs an unstructured hexahedral mesh for the simulation. We designed the second case study to evaluate our technique's effectiveness on flow field visualizations.
To ensure a robust evaluation of the performance of our technique and implementation, we benchmarked all cases on the Polaris supercomputer from the Argonne Leadership Computing Facility. Each Polaris compute node is equipped with 4 NVIDIA A100 GPUs and 1 AMD EPYC Milan CPU. We scaled all cases up to 128 compute nodes, each with 4 MPI ranks per node. Each MPI rank was bound to a single GPU following round-robin assignment. We evaluated our technique from multiple perspectives, including quality, correctness, and scalability.
### Direct Volume Rendering
The first case study examined the utilization of direct volume rendering as a visualization technique for time-varying scalar fields. Specifically, an in situ approach was presented, which initiated the rendering process based on a pre-defined condition. The trigger condition was established as the first simulation step after time T, denoted by C=first(time>T). To generate a DNR of the volume field F, our neural volume operator was employed, which produced a representation R with a target peak signal-to-noise ratio (PSNR) Q, as denoted by R=encode(F,target_psnr=Q). Subsequently, a temporal array W of R was constructed, encompassing the N most recent simulation steps, by utilizing a window operator, expressed as W=window(R,size=N). Upon satisfying the trigger condition, the entire temporal array was rendered using a fixed visualization setting, as expressed by trigger(render(volumes=W,...),condition=C). It is noteworthy that the volume rendering algorithm employed here is described in Section 5.1.
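Assembled end to end, the workflow amounts to only a few declarative statements. The DIVA-style pseudocode below simply restates the expressions above with comments; it is a sketch rather than the actual listing, and the render settings are elided.

```python
# DIVA-style pseudocode for the triggered volume rendering workflow.
# T (trigger time), Q (target PSNR) and N (window length) are user parameters.
C = first(time > T)                      # fire on the first step after time T
R = encode(F, target_psnr=Q)             # distributed neural representation of F
W = window(R, size=N)                    # temporal array of the N most recent steps
trigger(render(volumes=W), condition=C)  # render the whole window once C fires
```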
The target reconstruction quality of \(Q=45dB\) and a window size of \(N=40\) were used. Volume rendering for CloverLeaf3D was initiated at approximately the \(70^{th}\) timestep (\(T=0.35s\)), while for NekRS-TGV, it was triggered at the \(500^{th}\) timestep (\(T=5.75s\)). At each step, we recorded the time spent on simulation computations, visualizations, and specifically the time spent on neural compression during the visualization period. In addition, we monitored the peak memory usage of each rank at each timestep by accessing the proc filesystem. Our results are presented in Figure 7. Given the limitations on space and the similarity of their behaviors, we only report the average peak RAM consumption of CloverLeaf3D runs (A), the weak scaling result of CloverLeaf3D (B), and the strong scaling result of NekRS-TGV (C).
The peak memory footprint generated by our implementation corresponded with our expectations. As shown in Figure 7A, a consistent rise in memory consumption was noticed during the initial 40 timesteps, which can be attributed to the caching of newly constructed neural representations. Post 40 steps, the memory consumption ceased to escalate since the temporal array generated by the window operator had reached its maximum capacity. As a result, the oldest neural representation was displaced to create room for new data. At the 70th timestep, an increase in memory consumption was observed, which was attributed to the allocation of memory by the volume rendering engine.
The case study also presented promising findings on weak scaling, as evidenced in Figure 7B. It was observed that the average time required to compute the neural compression decreased as the simulation scaled up. This outcome was attributed to the fact that, while the simulation resolution grew proportionally with the total number of MPI ranks, the physical event under investigation remained constant. As a result, the data complexity represented by each local grid declined, enabling the neural network to compress a grid with less "information" to the target PSNR using fewer training iterations. This outcome implies that the cost of running the neural compression algorithms correlates with the information complexity. Additionally, a slight increase in the overall visualization cost was observed, stemming from the MPI communications necessary for evaluating the DIVA workflow. It is worth noting that optimizing the DIVA workflow is beyond the scope of this study and is left as future work.
We also evaluated the strong scaling behavior of the case study. Specifically, we conducted an experiment to assess the scaling of the neural compression time with respect to the number of MPI ranks. As shown in Figure 7C, we plotted the scaling behavior of the algorithm alongside the ideal strong scaling curve. Our findings revealed that the neural compression algorithm exhibited poor scaling behavior under these conditions. This result is not unexpected, given that even for a small domain, the neural network would still
Figure 7: Results from the direct volume rendering case study. A) The average peak RAM consumption of all the scaling runs with CloverLeaf3D. B) The weak scaling result with CloverLeaf3D. C) The strong scaling result with NekRS-TGV.
require multiple iterations to approximate the complex information presented in the volume field. In other words, the computational cost of compressing volume data using neural networks does not scale linearly with the data resolution.
### Backward Particle Advection with VTKm
Pathline tracing is a well-established visualization technique for examining flow patterns in diverse fields, such as combustion, aerodynamics, and cosmology. Pathlines, which are integral curves of a time-varying vector field \(V(\vec{x})\) beginning from a seed spatial coordinate \(\vec{x_{0}}\) at time \(t_{0}\), serve as the basis for this technique. Numerical integration algorithms, such as the Euler or Runge-Kutta methods, are frequently employed to compute pathlines. Pathlines can be integrated forward or backward in time, with the latter being particularly intriguing as it can aid in the identification of event-related regions and the comprehension of their causal relationships. However, backward integration poses significant challenges for in situ visualization, such as the impracticality of saving all generated data and the difficulty of establishing trigger conditions for time periods preceding a detectable event. Nevertheless, we demonstrate that our distributed neural representation makes backward integration a feasible task.
For the sake of simplicity and consistency, the same synthetic trigger condition as in the previous case study, C=first(time>T), was utilized. At the time when the trigger condition was activated, a set of \(M\) seeds were randomly selected from a pre-defined bounding box. Subsequently, utilizing our encoding function, we produced a DNR \(R\) of the velocity field \(V\) with a target PSNR of \(Q\). Following this, we constructed a long temporal window \(W\) and applied array operators to reverse and negate the temporal window. Pathlines were then generated over this window, denoted as P=pathline(negate(reverse(W)),...). Upon triggering the condition, pathlines were computed and the resulting data was saved for further visualization and analysis.
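For reference, the corresponding DIVA-style pseudocode is sketched below. The operators first, encode, window, reverse, negate, pathline, and trigger are the ones named above, while the seeding and output helpers (and the exact argument placement) are illustrative assumptions.

```python
# DIVA-style pseudocode for the triggered backward pathline workflow.
C = first(time > T)                      # same synthetic trigger as the first case
seeds = random_seeds(M, bounding_box)    # hypothetical: M seeds from a bounding box
R = encode(V, target_psnr=Q)             # DNR of the velocity field V
W = window(R, size=N)                    # long temporal window of DNRs
P = pathline(negate(reverse(W)), seeds)  # integrate backward through the window
trigger(save(P), condition=C)            # write the line data once C fires
```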
VTKm was used to implement the pathline tracing operator even though it did not inherently support neural representations. To overcome this limitation, the velocity field was decoded back to the original mesh grid before being passed to the operator. This decoding was performed on demand, so that only two additional copies of the mesh grid were retained at any given time. The negate operation was executed in place, multiplying all field values in all array entries by -1. The reverse operation simply maps the \(i\)-th element of the window to index \(N-i-1\).
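A minimal sketch of the two array operators, assuming each window entry is a NumPy velocity array, is given below; the container layout is an assumption, but the semantics follow the description above.

```python
import numpy as np

def negate(window):
    # In place: multiply every field value in every array entry by -1.
    for grid in window:
        grid *= -1.0
    return window

def reverse(window):
    # The i-th element of the reversed window is element N - i - 1 of the original.
    n = len(window)
    return [window[n - 1 - i] for i in range(n)]

# Example: a temporal window of three tiny velocity grids.
w = [np.full((2, 2, 2, 3), t) for t in (1.0, 2.0, 3.0)]
w_rev = negate(reverse(w))  # reversed in time, velocities sign-flipped
```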
We utilized the common settings inherited from the first case study to compute the forward and backward pathlines of both the neural representation and the ground truth data. It should be noted that the backward pathlines of the ground truth data were computed post hoc, requiring explicit caching of volume data to disk. This method proved to be inefficient, and as such, we were only able to achieve good scalability through the use of DNRs. The primary focus of this case study was to investigate the feasibility of utilizing DNR for pathline tracing, as well as analyzing the quality of the traced pathlines. To this end, we present two sets of comparisons in Figure 8.
We first compared the backward and forward tracing of a given velocity field (Figure 8A). Initially, we conducted a backward tracing process using a set of predetermined initial seeds. Subsequently, we identified the end point of all traced pathlines and utilized them as new seeds for the forward tracing process. Through this comparison, we aimed to verify the accuracy of our pathline tracing algorithm and validate the appropriateness of selected parameters for the tracing algorithm. Our findings revealed that the accuracy of backward tracing could be influenced by the integration step size. Nonetheless, provided that the step size remains reasonably small, both forward and backward tracing methods can yield nearly identical results. In Figure 8A, we depict the trajectories of particles traced backward in time using transparent tubes, while the forward pathlines are represented using solid tubes. The line color corresponds to the velocity of the particles. It is important to note that some solid forward pathlines are absent from the plot, as highlighted in the orange square. This does not signify inaccuracy in our approach; rather, it is due to the end points of some backward tracing pathlines falling outside the data domain. Consequently, these
Figure 8: Pathline tracing results generated by our particle advection case. A) Comparing the backward (transparent tubes) and forward (solid tubes) pathline tracing of the same distributed neural representation. B) Comparing the backward tracing result of the ground truth data (transparent tubes) and distributed neural representation (solid tubes). Note that for B, the ground truth tracing was done post hoc. Our particle advection case directly outputs line data. All the renderings were generated using ParaView [AGL05].
seeds were automatically ignored by the algorithm during the forward tracing pass.
Then, we employed backward tracing to examine both neural representations and ground truth fields. The resulting pathlines were subsequently compared, as illustrated in Figure 8B. Notably, the pathlines generated from neural representations were generally precise, with noticeable deviations from the ground truth observed only after many integration steps. Our analysis revealed that these deviations occurred primarily in regions where the velocity magnitudes were small. This finding is reasonable, as neural compression noise in regions with small velocity magnitudes could exert greater influence, leading to incorrect particle movements. Once such deviations occurred, they could not be rectified.
## 6 Discussion and Future Work
We conducted a comprehensive evaluation of our technique from two distinct perspectives. Firstly, we investigated the correctness and quality of our distributed neural representation construction. Secondly, we focused on our distributed neural representation implementation in DIVA and our integration with the Ascent infrastructure, studying their usability and performance.
In terms of quality, our technique offers a simple yet effective approach to applying implicit neural representations to distributed data. We found that our method is able to achieve fairly high reconstruction quality in a matter of seconds due to the efficiency of the base neural network. By using ghost region information and introducing a weighted boundary loss, we were able to ensure boundary connectivity and reduce artifacts. Our case study results demonstrate that our neural representation has great potential and can be applied to a range of in situ visualization and analysis tasks. As it is capable of preserving many details present in the data, it is suitable for many scientific visualization tasks such as volume rendering of scalar fields, as well as working with vector fields and integral lines, with reasonable accuracy. However, it should be used with care, as some of these tasks are more sensitive to errors.
In terms of usability, our distributed neural representation technique significantly reduces data size and can be used to achieve aggressive temporal data caching. This is essential for enabling the full potential of reactive programming for writing adaptive in situ workflows. Our integration in Ascent is easy to use, preserving most of its conventions and using the same simulation interface. We demonstrate that complicated tasks such as backward data analysis can now be easily achieved using DIVA.
In terms of performance, by taking advantage of well-established parallel rendering pipelines and the recently proposed INR rendering algorithm, our distributed neural representation technique can enable efficient and scalable visualization and analysis. The cost of neural compression remains constant as the simulation scales, making it particularly suitable for modern large-scale simulations. Our DIVA integration also scales well, as demonstrated in Figure 7. We only observe a slight increase in visualization costs due to communication overheads after scaling the application to many nodes.
## 7 Conclusion
In this paper, we propose a novel technique called distributed neural representation for constructing implicit neural representations of distributed data. We then present the implementation of this technique within the DIVA reactive programming system, which enables efficient temporal data caching. Finally, to enhance the applicability of our implementation, we integrate it into the Ascent infrastructure and showcase its ability to solve real-world problems. Our results indicate promising scalability and performance.
Our approach enables memory-efficient reactive programming, which takes an essential step towards realizing the full potential of reactive programming for adaptive in situ visualization and analysis. We anticipate that this work will inspire further research to advance the exascale evolution of data processing.
## Acknowledgments
This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. This research was also supported in part by the Department of Energy through grant DE-SC0019486. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. The authors also express sincere gratitude to Saumil Patel (Argonne National Laboratory) and Abhishek Yenpure (Kitware) for their invaluable assistance and insightful discussions.
Gradient estimates for a nonlinear parabolic equation on smooth metric measure spaces with evolving metrics and potentials

Ali Taheri, Vahideh Vahidifar
###### Abstract.
This article presents new parabolic and elliptic type gradient estimates for positive smooth solutions to a nonlinear parabolic equation involving the Witten Laplacian in the context of smooth metric measure spaces. The metric and potential here are time dependent and evolve under a super Perelman-Ricci flow. The estimates are derived under natural lower bounds on the associated generalised Bakry-Emery Ricci curvature tensors and are utilised in establishing fairly general local and global bounds, Harnack-type inequalities and Liouville-type global constancy theorems to mention a few. Other implications and consequences of the results are also discussed.
Key words and phrases: Smooth metric measure spaces, Witten Laplacian, super Perelman-Ricci flow, Li-Yau estimates, Liouville-type results, Harnack inequalities

2020 Mathematics Subject Classification: 53C44, 58J60, 58J35, 60J60
###### Contents
* 1 Introduction
* 2 Statement of the main results
* 2.1 A local and a global Souplet-Zhang type gradient estimate for (1.1)
* 2.2 A local and a global Hamilton-type gradient estimate for (1.1)
* 2.3 A local and a global parabolic differential Harnack inequality for (1.1)
* 2.4 Liouville-type results, global bounds and elliptic Harnack inequalities
* 3 Proof of the Souplet-Zhang estimate in Theorem 2.1
* 3.1 Some intermediate parabolic lemmas (**I**)
* 3.2 Localising in space-time and cut-offs
* 3.3 Proof of the local estimate in Theorem 2.1
* 3.4 Proof of the elliptic Harnack inequality in Theorem 2.3
* 4 Proof of the Hamilton-type estimate in Theorem 2.4
* 4.1 Some intermediate parabolic lemmas (**II**)
* 4.2 Proof of the local estimate in Theorem 2.4
* 5 Proof of the parabolic Li-Yau type estimate in Theorem 2.6
## 1. Introduction
In this paper we study gradient estimates for positive smooth solutions to a class of nonlinear parabolic equations on smooth metric measure spaces \((M,g,d\mu)\) with metrics and potentials evolving under a \((\mathsf{k},m)\)-super Perelman-Ricci flow. More specifically we consider positive smooth solutions \(u=u(x,t)\) to the coupled system (for \(x\in M\), \(t>0\))
\[\begin{cases}\Box_{q}u(x,t):=\left[\frac{\partial}{\partial t}-q(x,t)-\Delta_{ f}\right]u(x,t)=\Sigma(t,x,u(x,t)),\\ \frac{\partial g}{\partial t}(x,t)+2\mathscr{R}ic_{f}^{m}(g)(x,t) \geq-2\mathsf{k}g(x,t).\end{cases} \tag{1.1}\]
Here \(M\) is a complete Riemannian manifold of dimension \(n\geq 2\), \(d\mu=e^{-f}dv_{g}\) is a weighted measure associated with the smooth potential \(f=f(x,t)\) and \(dv_{g}\) is the usual Riemannain volume measure. Note that both the metric \(g\) and the potential \(f\) are time dependent and evolve under a \((\mathsf{k},m)\)-super Perelman-Ricci flow as is formulated by the symmetric \((0,2)\)-tensor flow inequality on the second line in the system (1.1). Referring to the equation on the first line in (1.1) the operator \(\Delta_{f}\) is the Witten Laplacian (also known as the weighted or drifting Laplacian, or at times to emphasise the choice of \(f\), the \(f\)-Laplacian) acting on functions \(v\in\mathscr{C}^{2}(M)\) through the identity
\[\Delta_{f}v=e^{f}\mathrm{div}(e^{-f}\nabla v)=\Delta v-\langle\nabla f,\nabla v\rangle, \tag{1.2}\]
where \(\Delta,\mathrm{div}\) and \(\nabla\) are the usual Laplace-Beltrami, divergence and gradient operators associated with the metric \(g\) respectively. Moreover, the evolution operators
\[\Box_{q}=\partial_{t}-q-\Delta_{f},\qquad\Box=\Box_{0}=\partial_{t}-\Delta_{ f}, \tag{1.3}\]
are the \(q\)-weighted (and weighted) heat operators respectively with \(q=q(x,t)\) a smooth function of the space-time variables.
As indicated above the evolution of the metric-potential pair \((g,f)\) is governed by the flow inequality on the second line in (1.1) with \(\mathsf{k}\) in \(\mathbb{R}\) and \(m\geq n\) (note in particular that \(m\) is not necessarily an integer). Now referring to the left-hand side of this inequality, the symmetric second order space-time dependent tensor field
\[\mathscr{R}ic_{f}^{m}(g)(x,t)=\mathscr{R}ic(g)(x,t)+\mathrm{Hess}(f)(x,t)- \frac{\nabla f\otimes\nabla f}{m-n}(x,t), \tag{1.4}\]
is the generalised Bakry-Emery Ricci curvature tensor of the triple \((M,g,d\mu)\) with \(\mathscr{R}ic(g)\) the usual Riemannain Ricci curvature tensor of \(g\) and \(\mathrm{Hess}(f)\) the Hessian of \(f\) (_see_[3, 4, 5]). For the sake of clarity we point out that when \(m=n\), by convention, \(f\) is only allowed to be a constant, resulting in \(\mathscr{R}ic_{f}^{m}(g)=\mathscr{R}ic(g)\), whilst, we also allow for \(m=\infty\), in which case by formally passing to the limit in (1.4) we set
\[\mathscr{R}ic_{f}^{\infty}(g)(x,t)=\mathscr{R}ic(g)(x,t)+\mathrm{Hess}(f)(x,t) :=\mathscr{R}ic_{f}(g)(x,t). \tag{1.5}\]
The nonlinearity \(\Sigma=\Sigma(t,x,u)\) in (1.1) is a sufficiently smooth function depending on both the space-time variables \((x,t)\) with \(x\in M\), \(t\geq 0\) and the independent variable \(u\). We give several examples of such nonlinearities arising from different contexts, e.g., conformal geometry and mathematical physics, each presenting a different phenomenon whilst depicting a corresponding singular or regular behaviour in certain variable ranges and regimes.
The \(f\)-Laplacian (1.2) is a symmetric diffusion operator with respect to the invariant weighted measure \(d\mu=e^{-f}dv_{g}\). It arises in a variety of contexts including probability theory and stochastic analysis, geometry, quantum field theory, statistical mechanics and kinetic theory [3, 5, 21, 57]. It is a natural generalisation of the Laplace-Beltrami operator to the smooth metric measure space context and it coincides with the latter precisely when the potential \(f\) is a constant. By an application of the integration by parts formula it can be seen that for \(u,w\in\mathscr{C}_{0}^{\infty}(M)\) it holds
\[\int_{M}e^{-f}w\Delta_{f}u\,dv_{g}=-\int_{M}e^{-f}\langle\nabla u,\nabla w \rangle\,dv_{g}=\int_{M}e^{-f}u\Delta_{f}w\,dv_{g} \tag{1.6}\]
Our main objective in this paper is to establish new and fairly general local and global elliptic as well as parabolic type gradient estimates for positive smooth solutions to (1.1) and present some of their implications in a context where the metric and potential are time dependent and evolve under a \((\mathsf{k},m)\)-super Perelman-Ricci flow.
Gradient estimates occupy a central place in geometric analysis with a huge scope of applications (see [2, 8, 15, 19, 25, 32, 33, 34], [1, 6, 14, 18, 43, 47, 52, 53, 55, 56, 58, 61, 62] as well as [5, 21, 23, 31, 51, 64] and the references therein). The estimates of interest in this paper fall under the category of Souplet-Zhang, Hamilton and Li-Yau types that were first formulated and proved for the linear heat equation on static manifolds in [25, 47] and later extended by many authors to contexts including evolving manifolds and nonlinear equations. To the best of our knowledge the results presented here are the first under such generalities on the nonlinearity in (1.1) and/or subject to the evolution of the metric-potential pair. In particular they encompass, unify and extend many existing results in the literature for very specific types of nonlinearities.
According to the weighted Bochner-Weitzenbock formula, the Witten Laplacian \(\Delta_{f}\) and the generalised curvature tensor \(\mathscr{R}ic_{f}(g)\) are related, for every function \(u\in\mathscr{C}^{3}(M)\), via the identity
\[\frac{1}{2}\Delta_{f}|\nabla u|^{2}=|\mathrm{Hess}(u)|^{2}+\langle\nabla u, \nabla\Delta_{f}u\rangle+\mathscr{R}ic_{f}(\nabla u,\nabla u). \tag{1.7}\]
Since by an application of the Cauchy-Schwarz inequality we have \(\Delta u\leq\sqrt{n}|\mathrm{Hess}(u)|\) (with the norm of the Hessian on the right being the usual Hilbert-Schmidt norm on 2-tensors), upon recalling \(\Delta_{f}u=\Delta u-\langle\nabla f,\nabla u\rangle\) in (1.2) and by using Young's inequality, it is seen that
\[|\mathrm{Hess}(u)|^{2}+\frac{\langle\nabla f,\nabla u\rangle^{2}}{m-n}\geq \frac{(\Delta u)^{2}}{n}+\frac{\langle\nabla f,\nabla u\rangle^{2}}{m-n}\geq \frac{(\Delta u-\langle\nabla f,\nabla u\rangle)^{2}}{m}=\frac{(\Delta_{f}u)^ {2}}{m}. \tag{1.8}\]
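For the reader's convenience we record the elementary step behind the last inequality in (1.8): by Young's inequality \(2|a||b|\leq\frac{m-n}{n}a^{2}+\frac{n}{m-n}b^{2}\), for any real numbers \(a,b\) and \(m>n\) we have

\[(a-b)^{2}\leq a^{2}+2|a||b|+b^{2}\leq\left(1+\frac{m-n}{n}\right)a^{2}+\left(1+\frac{n}{m-n}\right)b^{2}=m\left(\frac{a^{2}}{n}+\frac{b^{2}}{m-n}\right),\]

which, applied with \(a=\Delta u\) and \(b=\langle\nabla f,\nabla u\rangle\), gives the final bound in (1.8).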
Hence (1.8) in conjunction with (1.7) gives the inequality
\[\frac{1}{2}\Delta_{f}|\nabla u|^{2}-\langle\nabla u,\nabla\Delta_{f}u\rangle \geq\frac{1}{m}(\Delta_{f}u)^{2}+\mathscr{R}ic_{f}^{m}(\nabla u,\nabla u). \tag{1.9}\]
As a result, subject to a curvature lower bound of the form \(\mathscr{R}ic_{f}^{m}(g)\geq-\mathsf{k}g\) the operator \(L=\Delta_{f}\) is seen to satisfy the curvature dimension condition \(\mathrm{CD}(-\mathsf{k},m)\) for symmetric diffusion operators, specifically, (_cf._[4, 5])
\[\frac{1}{2}L|\nabla u|^{2}-\langle\nabla u,\nabla Lu\rangle\geq\frac{1}{m}(Lu )^{2}-\mathsf{k}|\nabla u|^{2}. \tag{1.10}\]
The latter condition and inequality play a fundamental role in the analysis of diffusion operators and their geometric properties.
Let us recall that in the static case, i.e., non-evolving metric-potential pairs, gradient estimates for positive solutions to linear and nonlinear heat type equations have been studied extensively starting from the seminal paper of Li and Yau [32] (_see_ also [31]). In the nonlinear setting the first equation to be considered is the one with a logarithmic type nonlinearity (_see_, e.g., [38])
\[\square_{q}u=[\partial_{t}-q(x,t)-\Delta_{f}]u=p(x,t)u\log u. \tag{1.11}\]
The interest in such problems originates partly from their natural links with gradient Ricci solitons and partly from links with geometric and functional inequalities on manifolds, notably, the logarithmic Sobolev and energy-entropy inequalities [4, 5, 22, 57]. Recall that a Riemannian manifold \((M,g)\) is said to be a gradient Ricci soliton _iff_ there exists a smooth function \(f\) on \(M\) and a constant \(\lambda\in\mathbb{R}\) such that (_cf._[12, 17, 37])
\[\mathscr{R}ic_{f}(g)=\mathscr{R}ic(g)+\operatorname{Hess}(f)=\lambda g. \tag{1.12}\]
A gradient Ricci soliton can be shrinking \((\lambda>0)\), steady \((\lambda=0)\) or expanding \((\lambda<0)\). The notion is a generalisation of an Einstein manifold and has a fundamental role in the analysis of singularities of the Ricci flow [26, 64]. Other classes of equations closely relating to (1.11) include
\[\square_{q}u=[\partial_{t}-q(x,t)-\Delta_{f}]u=a(x,t)\Gamma(\log u)u^{ \mathsf{p}}+b(x,t)u^{\mathsf{q}}, \tag{1.13}\]
where \(\mathsf{p},\mathsf{q}\) are real exponents, \(a,b\) are sufficiently smooth functions and \(\Gamma\in\mathscr{C}^{2}(\mathbb{R},\mathbb{R})\) (_see_[9, 20, 52, 53, 60, 61] and the references therein for gradient estimates and related results in this direction).
Another class of equations that have been extensively studied and whose nonlinearity is in the form of superposition of power-like nonlinearities are the Yamabe equations (_see_, e.g., [6, 13, 24, 30]). In the context of smooth metric measure spaces a far-reaching generalisation of these equations take the form (_see_, e.g., [62])
\[\square_{q}u=[\partial_{t}-q(x,t)-\Delta_{f}]u=a(x,t)u^{\mathsf{p}}+b(x,t)u^ {\mathsf{q}}. \tag{1.14}\]
A closely related and yet more general form of Yamabe type equations is the Einstein-scalar field Lichnerowicz equation (see, e.g., Choquet-Bruhat [16], Chow [17] and Zhang [64]). When the underlying manifold has dimension \(n\geq 3\) the evolutionary form of this equation takes the form \(\partial_{t}u=\Delta u+a(x)u^{\mathsf{p}}+b(x)u^{\mathsf{q}}+c(x)u\) with \(\mathsf{p}=(n+2)/(n-2)\) and \(\mathsf{q}=(3n-2)/(n-2)\) while when \(n=2\) the same evolutionary equation takes the form \(\partial_{t}u=\Delta u+a(x)e^{2u}+b(x)e^{-2u}+c(x)\). In the setting of smooth metric measure spaces a generalisation of the Einstein-scalar field Lichnerowicz equation with space-time dependent coefficients can be described as:
\[\square_{q}u=[\partial_{t}-q(x,t)-\Delta_{f}]u=a(x,t)u^{\mathsf{p}}+b(x,t)u^ {\mathsf{q}}+c(x,t)u\log u, \tag{1.15}\]
and
\[\square u=(\partial_{t}-\Delta_{f})u=a(x,t)e^{2u}+b(x,t)e^{-2u}+c(x,t). \tag{1.16}\]
For gradient estimates, Harnack inequalities, Liouville type theorems and other related results in this direction see [19, 46, 52, 53, 62] and the references therein.
Moving on to the evolving case, the time dependence of the metric-potential pair adds further complications and technical details as far as gradient estimates are concerned.
Here the case of the weighted heat equation under the Perelman-Ricci flow, generalising, in turn, the heat equation under the Ricci flow to the setting of smooth metric measure spaces, given by the system
\[\begin{cases}\square u(x,t)=\left[\frac{\partial}{\partial t}-\Delta_{f}\right]u (x,t)=0,\\ \frac{\partial g}{\partial t}(x,t)=-2\mathscr{R}ic_{f}(g)(x,t),\end{cases} \tag{1.17}\]
has been considered by many authors (see in particular [2, 11, 17, 25, 26, 42, 44, 49, 64] and the references therein).
The system (1.1) can be seen as a generalisation of (1.17) in two substantial ways. Firstly, the weighted linear heat equation is replaced by its nonlinear counterpart where the nonlinearity takes a considerably general formulation. Secondly, the Perelman-Ricci flow gives way to the \((\mathsf{k},m)\)-_super_ Perelman-Ricci flow which is again a substantial and far-reaching generalisation. For example, in the static case (with \(\partial_{t}g\equiv 0\), \(\partial_{t}f\equiv 0\)), the latter includes spaces with curvature lower bound \(\mathscr{R}ic_{f}(g)\geq-\mathsf{k}g\) (see the discussion in the next section) whereas the counterpart of the former [_cf._ (1.17)] in the static case includes only \(\mathscr{R}ic_{f}\)-flat spaces which is a much narrower subset. In passing we point out that \(\mathscr{R}ic\)-flat spaces are _particular_ classes of Einstein manifolds, verifying \(\mathscr{R}ic(g)=\lambda g\) with \(\lambda=0\), and \(\mathscr{R}ic_{f}^{m}\)-flat spaces, naturally extending the notion to the smooth metric measure space context, are particular classes of \(m\)-quasi Einstein manifolds, verifying \(\mathscr{R}ic_{f}^{m}(g)=\lambda g\), or gradient Ricci solitons, verifying \(\mathscr{R}ic_{f}(g)=\lambda g\), as indicated earlier in (1.12), each in the special case with \(\lambda=0\) (_see_[34, 35, 64]).
For the sake of future reference a \((\mathsf{k},m)\)-_super_ Perelman-Ricci flow specifically refers to a complete smooth solution \((g,f)\) to the flow inequality (with \(n\leq m<\infty\)):
\[\begin{cases}\frac{\partial g}{\partial t}(x,t)+2\mathscr{R}ic_{f}^{m}(g)(x,t )\geq-2\mathsf{k}g(x,t),\\ \mathscr{R}ic_{f}^{m}(g)(x,t)=\mathscr{R}ic(g)(x,t)+\operatorname{Hess}(f)(x,t)-\frac{\nabla f\otimes\nabla f}{m-n}(x,t).\end{cases} \tag{1.18}\]
In the event \(m=\infty\) and with \(\mathscr{R}ic_{f}(g)\) as described by (1.5) the above system should be interpreted explicitly as
\[\begin{cases}\frac{\partial g}{\partial t}(x,t)+2\mathscr{R}ic_{f}(g)(x,t) \geq-2\mathsf{k}g(x,t),\\ \mathscr{R}ic_{f}(g)(x,t)=\mathscr{R}ic(g)(x,t)+\operatorname{Hess}(f)(x,t). \end{cases} \tag{1.19}\]
Thus hereafter when referring to the flow inequality in (1.1) (the symmetric tensor inequality on the second line) one of the above is intended depending on whether \(m\) is finite or not.
### Plan of the paper
Let us finish off this long introduction by briefly describing the layout and plan of the paper. In Section 2 we present the statements of the main results. Section 3 is devoted to the proof of the local Souplet-Zhang estimate in Theorem 2.1 followed by the proof of the elliptic Harnack inequality in Theorem 2.3. In Section 4 we present the proof of the local Hamilton-type estimates in Theorem 2.4 and in Section 5, which is the core and most involved part of the paper, we establish the local parabolic Li-Yau type estimate in Theorem 2.6. In Section 6 we present the proof of
the parabolic Harnack inequality in Theorem 2.8 and in Section 7 we give the proof of the two Liouville results in Theorem 2.9 and Theorem 2.10. Finally in Section 8 we present the proof of the global bound in Theorem 2.11 and discuss its consequences.
**Notation.** For \(X\in\mathbb{R}\) we write \(X_{+}=\max(X,0)\) and \(X_{-}=\min(X,0)\). Therefore \(X=X_{+}+X_{-}\) with \(X_{+}\geq 0\) and \(X_{-}\leq 0\). Fixing a reference point \(x_{0}\in M\) we denote by \(d=d(x,x_{0},t)\) the Riemannian distance between \(x\) and \(x_{0}\) on \(M\) with respect to the metric \(g=g(t)\). We write \(\varrho=\varrho(x,x_{0},t)\) for the geodesic radial variable measuring the distance between \(x\) and \(x_{0}\) at time \(t>0\). For \(R>0\), \(T>0\) we define the space-time cylinder
\[Q_{R,T}(x_{0})\equiv\{(x,t):d(x,x_{0},t)\leq R\text{ and }0\leq t\leq T\}\subset M \times[0,T], \tag{1.20}\]
and for fixed \(0<t\leq T\) we denote by \(B_{r}(x_{0})\subset M\) the geodesic ball of radius \(r>0\) centred at \(x_{0}\) with respect to \(g=g(t)\). When the choice of the reference point \(x_{0}\) is clear from the context we often abbreviate and simply write \(d(x,t)\), \(\varrho(x,t)\) or \(B_{r}\), \(Q_{R,T}\) respectively.
We typically denote partial derivatives by subscripts unless otherwise specified. In particular, for the nonlinear function \(\Sigma=\Sigma(t,x,u)\) in (1.1) we frequently use \(\Sigma_{x}\), \(\Sigma_{u}\), \(\Sigma_{xx}\), \(\Sigma_{xu}\) and \(\Sigma_{uu}\) for the respective partial derivatives of first and second orders in the spatial variables \(x\) and \(u\). [Note that in local coordinates we have \(\Sigma_{x}=(\Sigma_{x_{1}},\dots,\Sigma_{x_{n}})\).] We also write \(\Sigma^{x}:x\mapsto\Sigma(t,x,u)\) for the function resulting from freezing the variables \((t,u)\) and viewing \(\Sigma\) as a function of \(x\) only. Thus we speak of \(\nabla\Sigma^{x}\), \(\Delta\Sigma^{x}\), \(\Delta_{f}\Sigma^{x}\) and so on.
For a bounded function \(u=u(x,t)\) on \(Q_{R,T}\) we write \(\overline{u}=\overline{u}(R,T)\) and \(\underline{u}=\underline{u}(R,T)\) for the supremum and infimum of \(u\) on \(Q_{R,T}\) respectively. We also introduce the set
\[\Theta_{R,T}\equiv\{(t,x,u):(x,t)\in Q_{R,T}\text{ and }\underline{u}\leq u \leq\overline{u}\}\subset[0,T]\times M\times\mathbb{R}. \tag{1.21}\]
It is evident that for any \(F=F(t,x,u)\) we have the inequalities
\[\sup_{Q_{R,T}}F(t,x,u(x,t))\leq\sup_{\Theta_{R,T}}F(t,x,u), \tag{1.22}\]
\[\inf_{Q_{R,T}}F(t,x,u(x,t))\geq\inf_{\Theta_{R,T}}F(t,x,u). \tag{1.23}\]
Note that whilst the quantities on the left depend explicitly on the function \(u\), the quantities on the right depend only on the upper and lower bounds \(\overline{u}\), \(\underline{u}\) of \(u\). Depending on context and need both these bounds will be utilised as appropriate in future.
## 2. Statement of the main results
In this section we present the main results of the paper along with some accompanying discussion. The complete proofs and further details are delegated to the subsequent sections. For the convenience of the reader and ease of reference we have grouped this section into four subsections based on the nature of the estimates and results involved.
### A local and a global Souplet-Zhang type gradient estimate for (1.1)
The first result is a local elliptic gradient estimate of Souplet-Zhang type for positive smooth bounded solutions to (1.1). We emphasise that here the metric-potential pair \((g,f)\) is assumed to be time dependent and a complete smooth solution to the flow inequality (1.19) for a suitable \(\mathsf{k}\geq 0\). Note that as a result of this evolution the usual differential operators \(\nabla\), \(\mathrm{div}\), \(\Delta\) and \(\Delta_{f}=\Delta-\langle\nabla f,\nabla\rangle\) are all time dependent.
For the sake of this local estimate we pick a reference point \(x_{0}\in M\) and restrict to the compact set \(Q_{R,T}=Q_{R,T}(x_{0})\) where \(R\geq 2\) and \(T>0\) are fixed. The estimate makes use of the upper bound on the solution \(u\), the lower bound on the generalised Bakry-Emery Ricci curvature tensor \(\mathscr{R}ic_{f}(g)\geq-(n-1)k_{1}g\) and the lower bound \(\partial_{t}g\geq-2k_{2}g\), with \(k_{1},k_{2}\geq 0\), all within the set \(Q_{R,T}\). Two important quantities appearing in the formulation of the estimate that directly link to the nonlinearity \(\Sigma\) and the solution \(u\) are respectively \(\mathsf{R}_{\Sigma}=\mathsf{R}_{\Sigma}(u)\), \(\mathsf{P}_{\Sigma}=\mathsf{P}_{\Sigma}(u)\) [see (2.2)-(2.3)]. These play a key role in the subsequent bounds, Harnack inequalities and Liouville-type results on solutions. Note also that as \(u>0\) and \(Q_{R,T}\) is compact, \(u\) is bounded away from zero and from above hence these quantities are finite.
**Theorem 2.1**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and suppose that the metric-potential pair \((g,f)\) is time dependent and of class \(\mathscr{C}^{2}\). Assume the bounds_
\[\mathscr{R}ic_{f}(g)\geq-(n-1)k_{1}g,\qquad\partial_{t}g\geq-2k_{2}g, \tag{2.1}\]
_in the compact set \(Q_{R,T}\) for some \(k_{1},k_{2}\geq 0\). Let \(u\) be a positive solution to (1.1) with \(0<u\leq D\) in \(Q_{R,T}\). Then for all \((x,t)\) in \(Q_{R/2,T}\) with \(t>0\) we have:_
\[\frac{|\nabla u|}{u}\leq C\left\{\frac{1}{R}+\sqrt{\frac{[\gamma_{\Delta_{f}} ]_{+}}{R}}+\frac{1}{\sqrt{t}}+\sqrt{k}+\sup_{Q_{R,T}}\left(\mathsf{N}_{q}+ \mathsf{R}_{\Sigma}^{1/2}(u)+\mathsf{P}_{\Sigma}^{1/3}(u)\right)\right\} \left(1-\log\frac{u}{D}\right), \tag{2.2}\]
_where \(C>0\) depends only on \(n\), \(\mathsf{N}_{q}=q_{+}^{1/2}+|\nabla q|^{1/3}\) with \(q_{+}=q_{+}(x,t)=[q(x,t)]_{+}\) and_
\[\mathsf{R}_{\Sigma}(u)=\left[\frac{\Sigma_{u}(t,x,u)}{1-\log(u/D)}+\frac{\log (u/D)\Sigma(t,x,u)}{u[1-\log(u/D)]^{2}}\right]_{+},\quad\mathsf{P}_{\Sigma}(u) =\frac{|\Sigma_{x}(t,x,u)|}{u[1-\log(u/D)]^{2}}. \tag{2.3}\]
_Additionally, \(k=\sqrt{k_{1}^{2}+k_{2}^{2}}\) and \([\gamma_{\Delta_{f}}]_{+}=\max(\gamma_{\Delta_{f}},0)\) where_
\[\gamma_{\Delta_{f}}=\max_{(x,t)}\{\Delta_{f}\varrho(x,t):d(x,x_{0},t)=1\text{ and }0\leq t\leq T\}. \tag{2.4}\]
Let us make a few useful comments on the theorem, its assumptions and proof. First, \(\mathscr{R}ic_{f}(g)\geq-(n-1)k_{1}g\) and \(\partial_{t}g\geq-2k_{2}g\) in \(Q_{R,T}\) give (1.19) with \(\mathsf{k}=(n-1)k_{1}+k_{2}\geq 0\). In the proof, the lower bound on \(\mathscr{R}ic_{f}(g)\) is utilised in the application of the generalised Laplacian comparison theorem and the bound \(\partial_{t}g\geq-2k_{2}g\) is used for controlling the time derivative of the geodesic distance function: \(\partial_{t}d=\partial[d(x,x_{0},t)]/\partial t\). Both these estimates arise in the localisation stage in the later part of the proof.
Second, in the static case we have \(\partial_{t}g\equiv 0\), \(\partial_{t}f\equiv 0\) and \(\mathscr{R}ic_{f}(g)\geq-(n-1)k_{1}g\) and so we can set \(k_{2}=0\) and \(k=k_{1}\). [This case is certainly a solution to the flow inequality in (1.1) with \(m=\infty\) and \(\mathsf{k}=(n-1)k_{1}\).] Thus here \(\gamma_{\Delta_{f}}=\max\{\Delta_{f}\varrho(x):d(x,x_{0})=1\}\) and evidently \(\partial_{t}d=\partial[d(x,x_{0},t)]/\partial t\equiv 0\). This way Theorem 2.1 can also be seen as giving
local gradient estimates for positive bounded solutions to (1.1) on non-evolving (static) smooth metric measure spaces which is of course of independent interest. Finally, note that by virtue of the bound \(0<u\leq D\) we have the inequality \(0<1/[1-\log(u/D)]\leq 1\) and so in particular
\[0\leq\mathsf{R}_{\Sigma}(u)\leq\left[\frac{u\Sigma_{u}(t,x,u)+\log(u/D)[\Sigma( t,x,u)-u\Sigma_{u}(t,x,u)]}{u[1-\log(u/D)]}\right]_{+}, \tag{2.5}\]
and \(0\leq\mathsf{P}_{\Sigma}(u)\leq|\Sigma_{x}(t,x,u)|/(u[1-\log(u/D)])\leq|\Sigma_ {x}(t,x,u)|/u\). (It is instructive to compare the bound (2.5) with those in [52, 53].)
The local estimate above has a global in space counterpart subject to the prescribed bounds in the theorem being global in space. The proof follows by passing to the limit \(R\to\infty\) in (2.2) and taking into account the vanishing of the terms involving \(R\) by virtue of the bounds being global and the relevant constants being independent of \(R\). The precise formulation of this is given in the following theorem.
**Theorem 2.2**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and suppose that the metric-potential pair \((g,f)\) is time dependent and of class \(\mathscr{C}^{2}\). Assume \(\mathscr{R}ic_{f}(g)\geq-(n-1)k_{1}g\) and \(\partial_{t}g\geq-2k_{2}g\) in \(M\times[0,T]\) for \(k_{1},k_{2}\geq 0\). If \(u\) is a positive solution to (1.1) with \(0<u\leq D\), then for all \(x\in M\) and \(0<t\leq T\) we have:_
\[\frac{|\nabla u|}{u}\leq C\left\{\frac{1}{\sqrt{t}}+\sqrt{k}+\sup_{M\times[0,T ]}\left(\mathsf{N}_{q}+\mathsf{R}_{\Sigma}^{1/2}(u)+\mathsf{P}_{\Sigma}^{1/3} (u)\right)\right\}\left(1-\log\frac{u}{D}\right). \tag{2.6}\]
_The quantities \(\mathsf{N}_{q}\), \(\mathsf{R}_{\Sigma}(u)\) and \(\mathsf{P}_{\Sigma}(u)\) in (2.6) are the same as those in Theorem 2.1._
One of the useful consequences of the estimates above is the following elliptic Harnack inequality for bounded positive solutions to equation (1.1). Later on we will also prove a parabolic counterpart for this inequality from another type of estimate on solutions (_see_ Theorem 2.8 below and Section 6 for the proof). Note that here the solution \(u\) is compared at two different spatial points \(x_{1}\), \(x_{2}\) but the same time \(t>0\).
**Theorem 2.3**.: _Under the assumptions of Theorem 2.1 for all \((x_{1},t)\), \((x_{2},t)\) in \(Q_{R/2,T}\) with \(t>0\) we have:_
\[u(x_{1},t)\leq(eD)^{1-\gamma}[u(x_{2},t)]^{\gamma}, \tag{2.7}\]
_where the exponent \(\gamma\) in (2.7) is explicitly given \((\)with \(d=d(x_{1},x_{2},t)\) below\()\) by_
\[\gamma=\exp\left[-Cd\left(\frac{1}{R}+\sqrt{k}+\frac{1}{\sqrt{t}}+\sup_{Q_{R, T}}\left[\mathsf{N}_{q}+\mathsf{R}_{\Sigma}^{1/2}(u)+\mathsf{P}_{\Sigma}^{1/3} (u)\right]+\sqrt{\frac{[\gamma_{\Delta_{f}}]_{+}}{R}}\right)\right]. \tag{2.8}\]
_Moreover, subject to the global bounds in Theorem 2.2, for all \(x_{1}\), \(x_{2}\) in \(M\) and \(0<t\leq T\) we have the same (2.7)-(2.8) with \(M\times[0,T]\) replacing \(Q_{R,T}\) in (2.8)._
### A local and a global Hamilton-type gradient estimate for (1.1)
We now present the second gradient estimate of elliptic type for the positive smooth solutions to (1.1). Here again the metric-potential pair \((g,f)\) is time dependent and a complete smooth solution to the flow inequality (1.19) and the estimate makes use of the bounds on the solution \(u\), the lower bound on the generalised Bakry-Emery Ricci curvature tensor \(\mathscr{R}ic_{f}(g)\geq-(n-1)k_{1}g\) and the lower bound \(\partial_{t}g\geq-2k_{2}g\), with \(k_{1},k_{2}\geq 0\), all
within the compact set \(Q_{R,T}\). The setting and notation here is mostly similar to those in the previous theorem however the proof is based on the use of different objects and tools.
**Theorem 2.4**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and suppose that the metric-potential pair \((g,f)\) is time dependent and of class \(\mathscr{C}^{2}\). Assume the bounds_
\[\mathscr{R}ic_{f}(g)\geq-(n-1)k_{1}g,\qquad\partial_{t}g\geq-2k_{2}g, \tag{2.9}\]
_in the compact set \(Q_{R,T}\) for some \(k_{1},k_{2}\geq 0\). Let \(u\) be a positive solution to (1.1) in \(Q_{R,T}\). Then for all \((x,t)\) in \(Q_{R/2,T}\) with \(t>0\) we have:_
\[\frac{|\nabla u|}{\sqrt{u}}\leq C\left\{\frac{1}{R}+\sqrt{\frac{[\gamma_{ \Delta_{f}}]_{+}}{R}}+\frac{1}{\sqrt{t}}+\sqrt{k}+\sup_{Q_{R,T}}\Big{(}\mathsf{ N}_{q}+\mathsf{T}_{\Sigma}^{1/2}(u)+\mathsf{S}_{\Sigma}^{1/3}(u)\Big{)} \right\}\Big{(}\sup_{Q_{R,T}}\sqrt{u}\Big{)}, \tag{2.10}\]
_where \(C>0\) depends only on \(n\), \(\mathsf{N}_{q}=q_{+}^{1/2}+|\nabla q|^{1/3}\), \(\gamma_{\Delta_{f}}\) is as in (2.4), \(k=\sqrt{k_{1}^{2}+k_{2}^{2}}\) and_
\[\mathsf{T}_{\Sigma}(u)=\left[\frac{2u\Sigma_{u}(t,x,u)-\Sigma(t,x,u)}{u} \right]_{+},\qquad\mathsf{S}_{\Sigma}(u)=\frac{|\Sigma_{x}(t,x,u)|}{u}. \tag{2.11}\]
It is instructive to compare the nonlinear quantities \(\mathsf{R}_{\Sigma}(u)\), \(\mathsf{P}_{\Sigma}(u)\) in (2.3) of Theorem 2.1 with the corresponding ones \(\mathsf{T}_{\Sigma}(u)\), \(\mathsf{S}_{\Sigma}(u)\) in (2.11) of Theorem 2.4 as appearing on the right-hand sides of the formulations of the estimates for the gradient of the solution \(u\). More on this and its implications will be said later when discussing Liouville-type results, Harnack inequalities and other applications of the estimates.
Again, the local estimate above has a global counterpart, when the asserted bounds in the theorem are global. The proof follows by passing to the limit \(R\to\infty\) in (2.10) taking into account the vanishing of the terms involving \(R\) resulting from the bounds being global and the relevant constants being independent of \(R\). This is the content of the following theorem.
**Theorem 2.5**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and suppose that the metric-potential pair \((g,f)\) is time dependent and of class \(\mathscr{C}^{2}\). Assume \(\mathscr{R}ic_{f}(g)\geq-(n-1)k_{1}g\) and \(\partial_{t}g\geq-2k_{2}g\) in \(M\times[0,T]\) for \(k_{1},k_{2}\geq 0\). If \(u\) is a positive solution to (1.1), then for all \(x\in M\) and \(0<t\leq T\) we have:_
\[\frac{|\nabla u|}{\sqrt{u}}\leq C\left\{\frac{1}{\sqrt{t}}+\sqrt{k}+\sup_{M \times[0,T]}\Big{(}\mathsf{N}_{q}+\mathsf{T}_{\Sigma}^{1/2}(u)+\mathsf{S}_{ \Sigma}^{1/3}(u)\Big{)}\right\}\Big{(}\sup_{M\times[0,T]}\sqrt{u}\Big{)}. \tag{2.12}\]
_The quantities \(\mathsf{N}_{q}\), \(\mathsf{T}_{\Sigma}(u)\) and \(\mathsf{S}_{\Sigma}(u)\) in (2.12) are the same as those in Theorem 2.4._
### A local and a global parabolic differential Harnack inequality for (1.1)
We now move on to a different type of estimate to the elliptic ones established above, namely, a Li-Yau type estimate (also known as a differential Harnack inequality). To this end we pick \(x_{0}\in M\) and set \(Q_{2R,T}=Q_{2R,T}(x_{0})\) where \(R>0\), \(T>0\). The estimate makes use of the lower bound on \(\mathscr{R}ic_{f}^{m}(g)\) (with \(n\leq m<\infty\)), along with the bounds on \(u\), \(\partial_{t}g\), \(\nabla f\), \(\nabla\partial_{t}g\) and \(\nabla\partial_{t}f\) all in \(Q_{2R,T}\). Applications to Harnack inequalities and Liouville-type results for solutions of (1.1) are given later on.
**Theorem 2.6**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and suppose that the metric-potential pair \((g,f)\) is time dependent and of class \(\mathscr{C}^{2}\). Assume that \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)k_{1}g\) and that the following bounds_
\[-2\underline{k}_{2}g\leq\partial_{t}g\leq 2\overline{k}_{2}g,\qquad| \nabla\partial_{t}g|\leq 2k_{3}, \tag{2.13}\]
\[|\nabla f|\leq\ell_{1},\qquad|\nabla\partial_{t}f|\leq\ell_{2}, \tag{2.14}\]
_hold in \(Q_{2R,T}\) for suitable constants \(k_{1},\underline{k}_{2},\overline{k}_{2},k_{3}\geq 0\) and \(\ell_{1},\ell_{2}\geq 0\). Let \(u=u(x,t)\) be a positive solution to (1.1) in \(Q_{2R,T}\). Then for every \(\lambda>1\), \(\varepsilon\in(0,1)\) and all \((x,t)\) in \(Q_{R,T}\) with \(t>0\) we have_
\[\frac{|\nabla u|^{2}}{\lambda u^{2}}-\frac{\partial_{t}u}{u}+q+ \frac{\Sigma(t,x,u)}{u}\leq m\lambda\left[\frac{1}{t}+c_{1}\underline{k}_{2}+\gamma_{1}^{ \Sigma}\right]\\ +\frac{m\lambda}{R^{2}}\left[\frac{mc_{1}^{2}\lambda^{2}}{2( \lambda-1)}+c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})+2c_{1}^{2}\right]\\ +\sqrt{m}\biggl{\{}\frac{m\lambda^{2}\mathsf{A}^{2}}{4(1- \varepsilon)(\lambda-1)^{2}}+\frac{3}{4}\left[\frac{m\lambda^{2}\mathsf{B}^{4 }}{4\varepsilon(\lambda-1)^{2}}\right]^{1/3}\\ +\lambda^{2}n(\underline{k}_{2}+\overline{k}_{2})^{2}+2\lambda^ {2}nk_{3}+\lambda(\gamma_{2}^{\Sigma}+\gamma_{2}^{qu})\biggr{\}}^{1/2}. \tag{2.15}\]
_The quantities and constants appearing on the right-hand side of the inequality (2.15) are given respectively by_
\[\mathsf{A}= \ 2[(m-1)k_{1}+(\lambda-1)\overline{k}_{2}+k_{3}]\] \[-\inf_{\Theta_{2R,T}}\left\{\frac{1}{u}[\Sigma(t,x,u)-u\Sigma_{u }(t,x,u)+\lambda u^{2}\Sigma_{uu}(t,x,u)]_{-}\right\}, \tag{2.16}\]
_and_
\[\mathsf{B}=\lambda\ell_{2}+2\lambda\underline{k}_{2}\ell_{1}+\sup_{\Theta_{2R,T}}\left\{\frac{2}{u}|\Sigma_{x}(t,x,u)-\lambda u\Sigma_{xu}(t,x,u)|+2( \lambda-1)|\nabla q|\right\}. \tag{2.17}\]
_Moreover we have_
\[\gamma_{1}^{\Sigma}=\sup_{\Theta_{2R,T}}\left\{\frac{1}{u}[u\Sigma_{u}(t,x,u) -\Sigma(t,x,u)]_{+}\right\}, \tag{2.18}\]
_and similarly_
\[\gamma_{2}^{\Sigma}=-\inf_{\Theta_{2R,T}}\left[\frac{1}{u}\Delta_{f}\Sigma^{x }(t,x,u)\right]_{-},\qquad\gamma_{2}^{qu}=-\inf_{\Theta_{2R,T}}\left[\Delta_{ f}q\right]_{-}. \tag{2.19}\]
_Finally \(\Theta_{2R,T}=\{(t,x,u):(x,t)\in Q_{2R,T},\,\underline{u}\leq u\leq\overline{u}\} \subset[0,T]\times M\times(0,\infty)\), where \(\overline{u}\), \(\underline{u}\) denote the maximum and minimum of \(u\) on the compact space-time cylinder \(Q_{2R,T}\)._
Note that unlike in Theorem 2.1 and Theorem 2.4, here, the curvature lower bound is imposed on the generalised Bakry-Emery Ricci tensor \(\mathscr{R}ic_{f}^{m}(g)\) with \(n\leq m<\infty\) and not on \(\mathscr{R}ic_{f}(g)\). Evidently, a lower bound on \(\mathscr{R}ic_{f}^{m}(g)\) is a stronger condition than one on \(\mathscr{R}ic_{f}(g)\) as the former implies the latter but not _vice versa_ [see (1.4)-(1.5)]. Also as is seen from (1.7), (1.9) the bound \(\mathscr{R}ic_{f}^{m}(g)\geq-\mathsf{k}g\) leads to the curvature-dimension condition \(\operatorname{CD}(-\mathsf{k},m)\) [whilst \(\mathscr{R}ic_{f}(g)\geq-\mathsf{k}g\) leads to \(\operatorname{CD}(-\mathsf{k},\infty)\)] (_cf._[4, 3, 5], [62, 63]).
From a purely technical view point these two conditions are of fundamentally different strengths and nature (_see_[5] for more on their implications, in particular, on functional and other types of geometric inequalities associated with their respective diffusion operators and semigroups).
The global counterpart of the above local estimate can be obtained by imposing appropriate global bounds in the assumptions as is formulated in the following theorem.
**Theorem 2.7**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and assume that the metric-potential pair \((g,f)\) is time dependent and of class \(\mathscr{C}^{2}\). Assume \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)k_{1}g\), (2.13) and (2.14) hold globally in \(M\times[0,T]\). Let \(u=u(x,t)\) be a positive solution to (1.1). Then for every \(\lambda>1\), \(\varepsilon\in(0,1)\) and all \(x\in M\), \(0<t\leq T\) we have_
\[\frac{|\nabla u|^{2}}{\lambda u^{2}}-\frac{\partial_{t}u}{u}+q+ \frac{\Sigma(t,x,u)}{u}\leq m\lambda\left[\frac{1}{t}+c_{1}\underline{k}_{2}+\gamma_{1}^{ \Sigma}\right]\\ +\sqrt{m}\biggl{\{}\frac{m\lambda^{2}\mathsf{A}^{2}}{4(1- \varepsilon)(\lambda-1)^{2}}+\frac{3}{4}\left[\frac{m\lambda^{2}\mathsf{B}^{ 4}}{4\varepsilon(\lambda-1)^{2}}\right]^{1/3}\\ +\lambda^{2}n(\underline{k}_{2}+\overline{k}_{2})^{2}+2\lambda^ {2}nk_{3}+\lambda(\gamma_{2}^{\Sigma}+\gamma_{2}^{qu})\biggr{\}}^{1/2}. \tag{2.20}\]
_Here \(\mathsf{A}\) is as in (2.16), \(\mathsf{B}\) is as in (2.17) and the quantities \(\gamma_{1}^{\Sigma}\), \(\gamma_{2}^{\Sigma}\) and \(\gamma_{2}^{qu}\) are as in (2.18) and (2.19) with the supremum and infimums taken over \(M\times[0,T]\) respectively._
**Theorem 2.8**.: _Under the assumptions of Theorem 2.6, if \(u\) is a positive solution to (1.1), then for every \((x_{1},t_{1})\), \((x_{2},t_{2})\) in \(Q_{R,T}\) with \(t_{2}>t_{1}\) and \(\lambda>1\) we have_
\[u(x_{2},t_{2})\geq u(x_{1},t_{1})\left(\frac{t_{2}}{t_{1}}\right)^{-m\lambda}e ^{-\lambda L(x_{1},x_{2},t_{2}-t_{1})}e^{(t_{2}-t_{1})S}. \tag{2.21}\]
_Here \(S\) is a constant depending only on the bounds given in Theorem 2.6 and is explicitly given by (6.2). Furthermore \(L\) is given by_
\[L(x_{1},x_{2},t_{2}-t_{1})=\inf_{\gamma\in\Gamma}\left[\frac{1}{4(t_{2}-t_{1} )}\int_{0}^{1}|\dot{\gamma}(t)|_{g(t)}^{2}\,dt\right], \tag{2.22}\]
_where \(\Gamma\) is the set of all curves \(\gamma\in\mathscr{C}^{1}([t_{1},t_{2}];M)\) lying entirely in \(Q_{R,T}\) with \(\gamma(t_{1})=x_{1}\) and \(\gamma(t_{2})=x_{2}\). If the bounds are global as in Theorem 2.7 then the above estimate is global._
### Liouville-type results, global bounds and elliptic Harnack inequalities
We now move on to presenting some more applications of the results formulated above. Towards this end we begin with two independent global constancy and Liouville-type results for the elliptic counterpart of equation (1.1). The first result follows from the local elliptic gradient estimate in Theorem 2.4 and the second result puts to use the local parabolic gradient estimate in Theorem 2.6. Some closely related applications, in particular, to parabolic Liouville-type results and ancient solutions of (1.1) that are equally of great interest are discussed in our forthcoming paper.
**Theorem 2.9**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and time independent metric and potential of class \(\mathscr{C}^{2}\). Assume \(\mathscr{R}ic_{f}(g)\geq 0\) in \(M\). Let \(u=u(x)\) be a positive bounded solution to the equation_
\[\Delta_{f}u+\Sigma(u)=0, \tag{2.23}\]
_such that \(\Sigma(u)-2u\Sigma_{u}(u)\geq 0\) everywhere in \(M\). Then \(u\) must be a constant._
**Theorem 2.10**.: _Let \((M,g,d\mu)\) be a complete smooth metric measure space with \(d\mu=e^{-f}dv_{g}\) and time independent metric and potential of class \(\mathscr{C}^{2}\). Assume \(\mathscr{R}ic_{f}^{m}(g)\geq 0\) in \(M\). Let \(u=u(x)\) be a positive solution to_
\[\Delta_{f}u+\Sigma(u)=0. \tag{2.24}\]
_Then for every \(\lambda>1\), \(\varepsilon\in(0,1)\) and all \(x\in M\) we have_
\[\frac{|\nabla u|^{2}}{\lambda u^{2}}+\frac{\Sigma(u)}{u}\leq \;m\lambda\sup_{\Theta}\left\{\left[\frac{-[\Sigma(u)-u\Sigma_{u}( u)]}{u}\right]_{+}\right\}\] \[+\frac{m\lambda/\sqrt{1-\varepsilon}}{2(\lambda-1)}\sup_{\Theta} \left\{\left[\frac{-[\Sigma(u)-u\Sigma_{u}(u)+\lambda u^{2}\Sigma_{uu}(u)]}{u }\right]_{+}\right\}. \tag{2.25}\]
_In particular if along the solution \(u\) we have \(\Sigma(u)\geq 0\), \(\Sigma(u)-u\Sigma_{u}(u)\geq 0\) and for some \(\lambda>1\)_
\[\Sigma(u)-u\Sigma_{u}(u)+\lambda u^{2}\Sigma_{uu}(u)\geq 0, \tag{2.26}\]
_everywhere on \(M\) then \(u\) must be a constant._
The final result we present here is a Hamilton-type global bound and a corresponding global Harnack interpolation inequality for positive solutions to (1.1) under the flow inequality (1.19). Here we assume \(M\) is closed.
**Theorem 2.11**.: _Let \(u\) be a positive solution to (1.1) with \(q=0\) satisfying \(0<u\leq D\) in \(M\times[0,T]\) and assume that the metric-potential pair \((g,f)\) evolves under the \(\mathsf{k}\)-super Perelman-Ricci flow (1.19) with \(\mathsf{k}\geq 0\). Assume that \(\Sigma(u)\geq 0\) and \(\Sigma(u)-2u\Sigma_{u}(u)\geq 0\) along the solution \(u\). Then_
\[t|\nabla\log u|^{2}\leq(1+2\mathsf{k}t)[1+\log(D/u)],\qquad 0<t<T. \tag{2.27}\]
_As a result there holds the following Harnack-interpolation inequality: For any \(s>0\), \(x_{1},x_{2}\in M\) and \(0<t\leq T\), we have_
\[u(x_{1},t)\leq(eD)^{s/(1+s)}\exp\left(d^{2}(x_{1},x_{2},t)\frac{1+2\mathsf{k}t}{4st}\right)[u(x_{2},t)]^{1/(1+s)}. \tag{2.28}\]
## 3. Proof of the Souplet-Zhang estimate in Theorem 2.1
### Some intermediate parabolic lemmas (I)
Before proceeding onto presenting the proof of Theorem 2.1 we need some intermediate results and lemmas. In this subsection we prove a parabolic differential inequality utilised in the proof of the theorem. This is inequality (3.10) in Lemma 3.3 where \(u\) is a positive solution to (1.1) with the metric-potential pair \((g,f)\) being a complete solution to the \(\mathsf{k}\)-super Perelman-Ricci flow (1.19). We begin with the following parabolic differential _identity_ first.
**Lemma 3.1**.: _Let \(u\) be a positive bounded solution to the equation \((\partial_{t}-\Delta_{f})u=\Sigma(t,x,u)\) with \(0<u\leq D\). Then the function \(h=\log(u/D)\) satisfies the equation_
\[\square h=(\partial_{t}-\Delta_{f})h=|\nabla h|^{2}+D^{-1}e^{-h}\Sigma(t,x,De^{ h}). \tag{3.1}\]
Proof.: This is an easy calculation and the proof is left to the reader.
**Lemma 3.2**.: _Suppose \(g=g(t)\), \(f=f(t)\) are of class \(\mathscr{C}^{2}\) and let \(u\) be a positive bounded solution to the equation \((\partial_{t}-\Delta_{f})u=\Sigma(t,x,u)\) verifying \(0<u\leq D\). Put \(h=\log(u/D)\) and let \(w=|\nabla h|^{2}/(1-h)^{2}\). Then \(w\) satisfies the equation_
\[\square w=(\partial_{t}-\Delta_{f})w= -\frac{[\partial_{t}g+2\mathscr{R}ic_{f}]}{(1-h)^{2}}(\nabla h, \nabla h)-\frac{2h\langle\nabla h,\nabla w\rangle}{1-h}-2(1-h)w^{2}\] \[-2\left|\frac{\nabla^{2}h}{1-h}+\frac{\nabla h\otimes\nabla h}{( 1-h)^{2}}\right|^{2}+\frac{2\langle\nabla h,\Sigma_{x}(t,x,De^{h})\rangle}{ De^{h}(1-h)^{2}}\] \[+2w\left[\Sigma_{u}(t,x,De^{h})+\frac{h\Sigma(t,x,De^{h})}{De^{ h}(1-h)}\right]. \tag{3.2}\]
Proof.: Referring to (3.1) in Lemma 3.1, a straightforward calculation gives
\[\langle\nabla h,\nabla\partial_{t}h\rangle =\langle\nabla h,\nabla[\Delta_{f}h+|\nabla h|^{2}+D^{-1}e^{-h} \Sigma(t,x,De^{h})]\rangle \tag{3.3}\] \[=\langle\nabla h,\nabla\Delta_{f}h\rangle+\langle\nabla h,\nabla |\nabla h|^{2}\rangle+|\nabla h|^{2}\Sigma_{u}\] \[\quad+D^{-1}e^{-h}(-|\nabla h|^{2}\Sigma+\langle\nabla h,\Sigma_ {x}\rangle),\]
where we have abbreviated the arguments of \(\Sigma\) and its partial derivatives \(\Sigma_{x},\Sigma_{u}\). Now by virtue of the identity \(\partial_{t}|\nabla h|^{2}=-[\partial_{t}g](\nabla h,\nabla h)+2\langle\nabla h,\nabla\partial_{t}h\rangle\) (_see_ also (5.2) in Lemma 5.1) it follows that
\[\partial_{t}|\nabla h|^{2}= -[\partial_{t}g](\nabla h,\nabla h)+2\langle\nabla h,\nabla \Delta_{f}h\rangle+2\langle\nabla h,\nabla|\nabla h|^{2}\rangle\] \[+\frac{2\langle\nabla h,\Sigma_{x}\rangle}{De^{h}}+2|\nabla h|^{ 2}\left(\Sigma_{u}-\frac{\Sigma}{De^{h}}\right). \tag{3.4}\]
Moving next to the function \(w\), by using \(\partial_{t}w=[\partial_{t}|\nabla h|^{2}]/(1-h)^{2}+[2|\nabla h|^{2}\partial_ {t}h]/(1-h)^{3}\), we have
\[\partial_{t}w= -\frac{[\partial_{t}g](\nabla h,\nabla h)}{(1-h)^{2}}+\frac{2 \langle\nabla h,\nabla\Delta_{f}h\rangle}{(1-h)^{2}}+\frac{2\langle\nabla h, \nabla|\nabla h|^{2}\rangle}{(1-h)^{2}}+\frac{2\langle\nabla h,\Sigma_{x} \rangle}{De^{h}(1-h)^{2}}\] \[+\frac{2|\nabla h|^{2}}{(1-h)^{2}}\left(\Sigma_{u}-\frac{\Sigma} {De^{h}}\right)+\frac{2|\nabla h|^{2}\Delta_{f}h}{(1-h)^{3}}+\frac{2|\nabla h |^{4}}{(1-h)^{3}}+\frac{2|\nabla h|^{2}\Sigma}{De^{h}(1-h)^{3}}. \tag{3.5}\]
Likewise we have \(\nabla w=[\nabla|\nabla h|^{2}]/(1-h)^{2}+[2|\nabla h|^{2}\nabla h]/(1-h)^{3}\) and so by recalling the relation \(\Delta_{f}w=\Delta w-\langle\nabla f,\nabla w\rangle\) it follows that
\[\Delta_{f}w=\frac{\Delta_{f}|\nabla h|^{2}}{(1-h)^{2}}+\frac{4\langle\nabla h, \nabla|\nabla h|^{2}\rangle}{(1-h)^{3}}+\frac{2|\nabla h|^{2}\Delta_{f}h}{(1-h )^{3}}+\frac{6|\nabla h|^{4}}{(1-h)^{4}}. \tag{3.6}\]
Putting (3.5)-(3.6) together and taking into account the necessary cancellations gives
\[\Delta_{f}w= \ \partial_{t}w+\frac{[\partial_{t}g](\nabla h,\nabla h)}{(1-h)^{2}}+ \frac{\Delta_{f}|\nabla h|^{2}}{(1-h)^{2}}-\frac{2\langle\nabla h,\nabla\Delta_ {f}h\rangle}{(1-h)^{2}}\] \[-\frac{2\langle\nabla h,\nabla|\nabla h|^{2}\rangle}{(1-h)^{2}}+ \frac{2|\nabla h|^{4}}{(1-h)^{3}}+\frac{4\langle\nabla h,\nabla|\nabla h|^{2} \rangle}{(1-h)^{3}}+\frac{6|\nabla h|^{4}}{(1-h)^{4}}\] \[-\frac{2\langle\nabla h,\Sigma_{x}\rangle}{De^{h}(1-h)^{2}}- \frac{2|\nabla h|^{2}}{(1-h)^{2}}\left(\Sigma_{u}-\frac{\Sigma}{De^{h}}\right) -\frac{2|\nabla h|^{2}\Sigma}{De^{h}(1-h)^{3}}. \tag{3.7}\]
Now by making use of the weighted Bochner-Weitzenbock formula (1.7) this gives
\[\Delta_{f}w= \ \partial_{t}w+\frac{[\partial_{t}g](\nabla h,\nabla h)}{(1-h)^{ 2}}+\frac{2|\nabla^{2}h|^{2}}{(1-h)^{2}}+\frac{2\mathscr{R}ic_{f}(\nabla h, \nabla h)}{(1-h)^{2}}\] \[-\frac{2\langle\nabla h,\nabla|\nabla h|^{2}\rangle}{(1-h)^{2}} +\frac{4\langle\nabla h,\nabla|\nabla h|^{2}\rangle}{(1-h)^{3}}-\frac{2|\nabla h |^{4}}{(1-h)^{3}}+\frac{6|\nabla h|^{4}}{(1-h)^{4}}\] \[-\frac{2\langle\nabla h,\Sigma_{x}\rangle}{De^{h}(1-h)^{2}}- \frac{2|\nabla h|^{2}}{(1-h)^{2}}\left(\Sigma_{u}-\frac{\Sigma}{De^{h}}\right) -\frac{2|\nabla h|^{2}\Sigma}{De^{h}(1-h)^{3}}, \tag{3.8}\]
and therefore a rearrangement of terms and basic considerations leads to the formulation
\[\Delta_{f}w= \ \partial_{t}w+\frac{[\partial_{t}g+2\mathscr{R}ic_{f}]}{(1-h)^{ 2}}(\nabla h,\nabla h)\] \[+2\left|\frac{\nabla^{2}h}{1-h}+\frac{\nabla h\otimes\nabla h}{(1 -h)^{2}}\right|^{2}+\frac{2|\nabla h|^{4}}{(1-h)^{3}}\] \[-\frac{2\langle\nabla h,\nabla|\nabla h|^{2}\rangle}{(1-h)^{2}}- \frac{4|\nabla h|^{4}}{(1-h)^{3}}+\frac{2\langle\nabla h,\nabla|\nabla h|^{2} \rangle}{(1-h)^{3}}+\frac{4|\nabla h|^{4}}{(1-h)^{4}}\] \[-\frac{2\langle\nabla h,\Sigma_{x}\rangle}{De^{h}(1-h)^{2}}- \frac{2|\nabla h|^{2}}{(1-h)^{2}}\left(\Sigma_{u}-\frac{\Sigma}{De^{h}}\right) -\frac{2|\nabla h|^{2}\Sigma}{De^{h}(1-h)^{3}}. \tag{3.9}\]
Finally making note of the identity \((1-h)^{3}\langle\nabla h,\nabla w\rangle=2(1-h)\nabla^{2}h(\nabla h,\nabla h)+2|\nabla h|^{4}\) results in the desired identity (3.2). This therefore completes the proof.
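For completeness we also record the short verification of the identity invoked in the last step: since \(\nabla w=\nabla|\nabla h|^{2}/(1-h)^{2}+2|\nabla h|^{2}\nabla h/(1-h)^{3}\) and \(\langle\nabla h,\nabla|\nabla h|^{2}\rangle=2\nabla^{2}h(\nabla h,\nabla h)\), we have
\[(1-h)^{3}\langle\nabla h,\nabla w\rangle=(1-h)\langle\nabla h,\nabla|\nabla h|^{2}\rangle+2|\nabla h|^{4}=2(1-h)\nabla^{2}h(\nabla h,\nabla h)+2|\nabla h|^{4}.\]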
**Lemma 3.3**.: _Under the assumption of Lemma 3.2, if the metric-potential pair \((g,f)\) evolves under the \(\mathsf{k}\)-super Perelman-Ricci flow inequality \(\partial_{t}g+2\mathscr{R}ic_{f}(g)\geq-2\mathsf{k}g\), then the function \(w=|\nabla h|^{2}/(1-h)^{2}\) satisfies the parabolic differential inequality_
\[\Box w=(\partial_{t}-\Delta_{f})w\leq -2(1-h)w^{2}-\frac{2h\langle\nabla h,\nabla w\rangle}{1-h}\] \[+2\mathsf{k}w+\frac{2\langle\nabla h,\Sigma_{x}(t,x,De^{h}) \rangle}{De^{h}(1-h)^{2}}\] \[+2w\left[\Sigma_{u}(t,x,De^{h})+\frac{h\Sigma(t,x,De^{h})}{De^{h }(1-h)}\right]. \tag{3.10}\]
Proof.: This is a straightforward consequence of the flow inequality (1.19) and the identity (3.2) established in Lemma 3.2 with the substitution \(w=|\nabla h|^{2}/(1-h)^{2}\).
### Localising in space-time and cut-offs
In order to prove the local estimate in Theorem 2.1, we make use of the parabolic inequality in Lemma 3.3 in conjunction with a localisation argument. Towards this end fix \(R,T>0\) and pick \(\tau\in(0,T]\). Let \(\varrho(x,t)\) denote the geodesic radial variable with respect to a fixed reference point \(x_{0}\) at time \(t\) and for \(x\in M\) and \(0\leq t\leq T\) set
\[\phi(x,t)=\bar{\phi}(\varrho(x,t),t). \tag{3.11}\]
Here \(\bar{\phi}\) is a suitable function of the real non-negative variables \(\varrho\) and \(t\). The resulting space-time function \(\phi\) will then serve as a smooth cut-off function supported in the compact set \(Q_{R,T}\subset M\times[0,T]\). The existence of \(\bar{\phi}=\bar{\phi}(\varrho,t)\) as used in (3.11) and its properties are granted by the following result (see [2, 8, 47, 61]).
**Lemma 3.4**.: _There exists a smooth function \(\bar{\phi}:[0,\infty)\times[0,T]\to\mathbb{R}\) such that:_
1. \(\operatorname{supp}\bar{\phi}(\varrho,t)\subset[0,R]\times[0,T]\) _and_ \(0\leq\bar{\phi}(\varrho,t)\leq 1\) _in_ \([0,R]\times[0,T]\)_._
2. \(\bar{\phi}=1\) _in_ \([0,R/2]\times[\tau,T]\) _and_ \(\partial\bar{\phi}/\partial\varrho=0\) _in_ \([0,R/2]\times[0,T]\)_._
3. \(\bar{\phi}(\varrho,0)=0\) _for all_ \(\varrho\in[0,\infty)\) _and there exists_ \(c>0\) _such that the bound_ \[\left|\frac{\partial\bar{\phi}}{\partial t}\right|\leq c\frac{\bar{\phi}^{1/2 }}{\tau},\] (3.12) _holds on_ \([0,\infty)\times[0,T]\)_._
4. _For every_ \(0<a<1\) _there exists_ \(c_{a}>0\) _such that the bounds_ \[-c_{a}\frac{\bar{\phi}^{a}}{R}\leq\frac{\partial\bar{\phi}}{\partial\varrho} \leq 0,\qquad\left|\frac{\partial^{2}\bar{\phi}}{\partial\varrho^{2}} \right|\leq c_{a}\frac{\bar{\phi}^{a}}{R^{2}},\] (3.13) _hold on_ \([0,\infty)\times[0,T]\)_._
### Proof of the local estimate in Theorem 2.1
This will be carried out in two stages. First we establish the estimate in the case \(q\equiv 0\) and then we pass to the general case. So for now suppose \(q\equiv 0\). Fix \(\tau\in(0,T]\) and \(\phi(x,t)=\bar{\phi}(\varrho(x,t),t)\) with \(\bar{\phi}\) as in Lemma 3.4. We show the respective estimate to hold at \((x,\tau)\) with \(d(x,x_{0},\tau)\leq R/2\). The arbitrariness of \(\tau>0\) will then give the estimate for all \((x,t)\) in \(Q_{R/2,T}\) satisfying \(t>0\). Now starting with the localised function \(\phi w\) it is clear that
\[\square(\phi w)=\phi\square w-2[\langle\nabla\phi,\nabla(\phi w)\rangle-| \nabla\phi|^{2}w]/\phi+w\square\phi. \tag{3.14}\]
Substituting from (3.10) in Lemma 3.3 (whilst recalling the relation \(\mathsf{k}=(n-1)k_{1}+k_{2}\)) then gives
\[\square(\phi w)=(\partial_{t}-\Delta_{f})(\phi w)\leq -\left\langle\frac{2h\nabla h}{1-h}+\frac{2\nabla\phi}{\phi}, \nabla(\phi w)\right\rangle\] \[+w\left\langle\frac{2h\nabla h}{1-h}+\frac{2\nabla\phi}{\phi}, \nabla\phi\right\rangle-2(1-h)\phi w^{2}\] \[+w(\partial_{t}-\Delta_{f}+2\mathsf{k})\phi+\frac{2\phi\langle \nabla h,\Sigma_{x}(t,x,De^{h})\rangle}{De^{h}(1-h)^{2}}\] \[+2\phi w\left[\Sigma_{u}(t,x,De^{h})+\frac{h\Sigma(t,x,De^{h})}{ De^{h}(1-h)}\right]. \tag{3.15}\]
Suppose now that the localised function \(\phi w\) is maximised at the point \((x_{1},t_{1})\) in the compact set \(\{(x,t):d(x,x_{0},t)\leq R\) and \(0\leq t\leq\tau\}\subset M\times[0,T]\). By virtue of Calabi's
standard argument (_cf._[10] or [45] p. 21) we can assume that \(x_{1}\) is not in the cut locus of \(x_{0}\) and so \(\phi\) is smooth at \((x_{1},t_{1})\) for the application of the maximum principle. Additionally, we can assume that \((\phi w)(x_{1},t_{1})>0\) as otherwise the conclusion is true with \(w(x,\tau)\leq 0\) for all \(d(x,x_{0},\tau)\leq R/2\). In particular \(t_{1}>0\) and at the point \((x_{1},t_{1})\) we have \(\partial_{t}(\phi w)\geq 0\), \(\nabla(\phi w)=0\) and \(\Delta_{f}(\phi w)\leq 0\). From (3.15) it thus follows that
\[2(1-h)\phi w^{2}\leq w\left\langle\frac{2h\nabla h}{1-h}+\frac{2\nabla\phi}{\phi},\nabla\phi\right\rangle-w\Delta_{f}\phi+w\partial_{t}\phi+2\mathsf{k}w\phi\] \[+\frac{2\phi\langle\nabla h,\Sigma_{x}(t,x,De^{h})\rangle}{De^{h}(1-h)^{2}}+2\phi w\left[\Sigma_{u}(t,x,De^{h})+\frac{h\Sigma(t,x,De^{h})}{De^{h}(1-h)}\right]\]
at \((x_{1},t_{1})\), or, upon dividing through by \(2(1-h)\geq 2\), that
\[\phi w^{2}\leq w\left\langle\frac{h\nabla h}{1-h}+\frac{\nabla\phi}{\phi},\frac{\nabla\phi}{1-h}\right\rangle+\frac{w[-\Delta_{f}\phi+\partial_{t}\phi+2\mathsf{k}\phi]}{2(1-h)}\] \[+\frac{\phi\langle\nabla h,\Sigma_{x}(t,x,De^{h})\rangle}{De^{h}(1-h)^{3}}+\phi w\left[\frac{\Sigma_{u}(t,x,De^{h})}{1-h}+\frac{h\Sigma(t,x,De^{h})}{De^{h}(1-h)^{2}}\right]. \tag{3.16}\]
The goal is now to use (3.16) to establish the required estimate at \((x,\tau)\). To this end we consider two cases. Firstly, we consider \(d(x_{1},x_{0},t_{1})\leq 1\) and next \(d(x_{1},x_{0},t_{1})\geq 1\).
**Case 1.** Since here \(\phi\) is a constant function in the space direction [for all \(x\) satisfying \(d(x,x_{0},t)\leq R/2\), where \(t\in[0,T]\) and \(R\geq 2\) by property \((ii)\)] all the terms involving space derivatives of \(\phi\) at \((x_{1},t_{1})\) vanish (in particular \(\nabla\phi=0\), \(\Delta_{f}\phi=0\) and \(\phi_{t}=\bar{\phi}_{t}\)). So as a result it follows from (3.16) that at the point \((x_{1},t_{1})\), we have the bound
\[\phi w^{2}\leq\frac{w[|\phi_{t}|+2\mathsf{k}\phi]}{2(1-h)}+\frac{\phi|\langle\nabla h,\Sigma_{x}(t,x,De^{h})\rangle|}{De^{h}(1-h)^{3}}+\phi w\left[\frac{\Sigma_{u}(t,x,De^{h})}{1-h}+\frac{h\Sigma(t,x,De^{h})}{De^{h}(1-h)^{2}}\right]_{+}.\]
Now upon writing \(w[|\phi_{t}|+2\mathsf{k}\phi]=\sqrt{\phi}w[|\phi_{t}|/\sqrt{\phi}+2\mathsf{k}\sqrt{\phi}]\leq\sqrt{\phi}w[c/\tau+2\mathsf{k}\sqrt{\phi}]\), hence giving \(w[|\phi_{t}|+2\mathsf{k}\phi]/[2(1-h)]\leq(1/4)\phi w^{2}+C[1/\tau^{2}+\mathsf{k}^{2}]/(1-h)^{2}\) (by an application of the Cauchy-Schwarz inequality and \((iii)\) in Lemma 3.4) and using similar bounds on the other two terms on the right-hand side by utilising Young's inequality, we obtain after rearranging terms,
\[\phi w^{2}\leq C\left\{\frac{1/\tau^{2}+\mathsf{k}^{2}}{(1-h)^{2}}+\left[\frac{\Sigma_{u}(t,x,De^{h})}{1-h}+\frac{h\Sigma(t,x,De^{h})}{De^{h}(1-h)^{2}}\right]_{+}^{2}+\left[\frac{|\Sigma_{x}(t,x,De^{h})|}{De^{h}(1-h)^{2}}\right]^{4/3}\right\}.\]
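For the reader's convenience we note that the first of these auxiliary bounds can be checked directly: by \((iii)\) in Lemma 3.4 and the elementary inequality \(ab\leq a^{2}/4+b^{2}\) applied with \(a=\sqrt{\phi}w\) and \(b=[c/\tau+2\mathsf{k}\sqrt{\phi}]/[2(1-h)]\), and recalling \(0\leq\phi\leq 1\), we have
\[\frac{w[|\phi_{t}|+2\mathsf{k}\phi]}{2(1-h)}\leq\frac{\sqrt{\phi}\,w\,[c/\tau+2\mathsf{k}\sqrt{\phi}]}{2(1-h)}\leq\frac{\phi w^{2}}{4}+\frac{[c/\tau+2\mathsf{k}]^{2}}{4(1-h)^{2}}\leq\frac{\phi w^{2}}{4}+\frac{C[1/\tau^{2}+\mathsf{k}^{2}]}{(1-h)^{2}},\]
the remaining two terms being handled analogously by means of Young's inequality.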
As \(\phi\equiv 1\) when \(d(x,x_{0},t)\leq R/2\) and \(\tau\leq t\leq T\) (hence in particular for \(t=\tau\)) by \((ii)\) in Lemma 3.4, we have \(w(x,\tau)=(\phi w)(x,\tau)\leq(\phi w)(x_{1},t_{1})\leq(\sqrt{\phi}w)(x_{1},t_{1})\). Thus recalling the definitions of \(\mathsf{k}\), \(k\), \(w=|\nabla h|^{2}/(1-h)^{2}\), \(h=\log(u/D)\), noting \(1/(1-h)\leq 1\), and adjusting the constant if necessary, we arrive at the bound at \((x,\tau)\)
\[\frac{|\nabla h|}{1-h}\leq C\left\{\frac{1}{\sqrt{\tau}}+\sqrt{k}+\sup_{Q_{R,T }}\left[\frac{De^{h}(1-h)\Sigma_{u}+h\Sigma}{De^{h}(1-h)^{2}}\right]_{+}^{1/2} +\sup_{Q_{R,T}}\left[\frac{|\Sigma_{x}|}{De^{h}(1-h)^{2}}\right]^{1/3}\right\}.\]
This together with the arbitrariness of \(\tau>0\) is now immediately seen to be a special case of the estimate (2.2).
**Case 2.** Upon referring to the right-hand side of (3.16), and noting the properties of \(\bar{\phi}\) as listed in Lemma 3.4 we proceed onto bounding the full expression on the right-hand
side of (3.16) in the case \(d(x_{1},x_{0},t_{1})\geq 1\). Towards this end dealing with the first term first, we have
\[\left\langle w\left[\frac{h\nabla h}{1-h}+\frac{\nabla\phi}{\phi}\right],\frac{\nabla\phi}{1-h}\right\rangle \leq w\left[\frac{|h||\nabla h|}{1-h}+\frac{|\nabla\phi|}{\phi}\right]\frac{|\nabla\phi|}{1-h}\] \[\leq w\left[|h|\sqrt{w}+\frac{|\nabla\phi|}{\phi}\right]\frac{|\nabla\phi|}{1-h}\] \[\leq w\sqrt{\phi}\left[\phi^{1/4}\frac{\sqrt{w}|h|}{1-h}\frac{|\nabla\phi|}{\phi^{3/4}}+\frac{|\nabla\phi|^{2}}{\phi^{3/2}}\right]\] \[\leq\frac{\phi w^{2}}{4}+\frac{C}{R^{4}}. \tag{3.17}\]
In much the same way regarding the terms involving \(\Sigma\) we have firstly
\[\frac{\phi\langle\nabla h,\Sigma_{x}\rangle}{De^{h}(1-h)^{3}} \leq\frac{\phi|\nabla h||\Sigma_{x}|}{De^{h}(1-h)^{3}}=\frac{ \phi\sqrt{w}|\Sigma_{x}|}{De^{h}(1-h)^{2}}\] \[\leq\frac{\phi w^{2}}{8}+C\left(\frac{|\Sigma_{x}|}{De^{h}(1-h)^ {2}}\right)^{4/3}=\frac{\phi w^{2}}{8}+C\mathsf{P}_{\Sigma}^{4/3}, \tag{3.18}\]
with \(\mathsf{P}_{\Sigma}=|\Sigma_{x}|/[(1-h)^{2}De^{h}]\) and likewise for the subsequent terms, upon noting \(-1\leq h/(1-h)\leq 0\), \(h\leq 0\) and \(0\leq\phi\leq 1\), we have
\[\phi w\left(\frac{\Sigma_{u}}{1-h}+\frac{h\Sigma}{De^{h}(1-h)^{2 }}\right) =\phi w\left(\frac{De^{h}(1-h)\Sigma_{u}+h\Sigma}{De^{h}(1-h)^{2}}\right)\] \[\leq\frac{\phi w^{2}}{8}+C\left[\frac{De^{h}(1-h)\Sigma_{u}+h \Sigma}{(1-h)^{2}De^{h}}\right]_{+}^{2}\] \[=\frac{\phi w^{2}}{8}+C\mathsf{R}_{\Sigma}^{2}, \tag{3.19}\]
where in the last equation we have set \(\mathsf{R}_{\Sigma}=\{[De^{h}(1-h)\Sigma_{u}+h\Sigma]/[(1-h)^{2}De^{h}]\}_{+}\).
Now for the \(\Delta_{f}\phi\) term we use the Wei-Wylie weighted Laplacian comparison theorem taking advantage of the fact that it only depends on the lower bound on \(\mathscr{R}ic_{f}(g)\) ([59]). Indeed recalling \(\varrho(x,t)=d(x,x_{0},t)\), \(1\leq d(x_{1},x_{0},t_{1})\leq R\), it follows from Theorem 3.1 in [59] and \(\mathscr{R}ic_{f}(g)\geq-(n-1)k_{1}g\) with \(k_{1}\geq 0\), that
\[\Delta_{f}\varrho\leq\gamma_{\Delta_{f}}+(n-1)(R-1)k_{1}, \tag{3.20}\]
whenever \(1\leq\varrho\leq R\), \(0\leq t\leq T\) [therefore in particular at the space-time point \((x_{1},t_{1})\)]. Here as indicated earlier we have set
\[\gamma_{\Delta_{f}}=\max_{(x,t)}\{\Delta_{f}\varrho(x,t):d(x,x_{0},t)=1,\,0 \leq t\leq T\}. \tag{3.21}\]
Thus proceeding on to bounding \(-\Delta_{f}\phi\), upon referring to (3.11) and using \((ii)\) [\(\bar{\phi}_{\varrho}=0\) when \(0\leq\varrho\leq R/2\)], \((iv)\) [\(\bar{\phi}_{\varrho}\leq 0\) when \(0\leq\varrho<\infty\) together with the bounds] we have
\(-\Delta_{f}\phi=-(\bar{\phi}_{\varrho\varrho}|\nabla\varrho|^{2}+\bar{\phi}_{ \varrho}\Delta_{f}\varrho)\) and so
\[-\Delta_{f}\phi \leq\left(\frac{|\bar{\phi}_{\varrho\varrho}|}{\sqrt{\bar{\phi}}} +\frac{|\bar{\phi}_{\varrho}|}{\sqrt{\bar{\phi}}}\left([\gamma_{\Delta_{f}}]_{+ }+(n-1)(R-1)k_{1}\right)\right)\sqrt{\bar{\phi}}\] \[\leq C\left(\frac{1}{R^{2}}+\frac{[\gamma_{\Delta_{f}}]_{+}}{R}+k _{1}\right)\sqrt{\phi}. \tag{3.22}\]
Next to bound the term \(\partial_{t}\phi\) pick \(x\) such that \(d(x,x_{0};t)\leq R\) and let \(\gamma:[0,1]\to M\) be a minimal geodesic connecting \(x_{0}\) and \(x\) at the fixed time \(t\) where we write \(\gamma=\gamma(s)\) with \(\gamma(0)=x_{0}\) and \(\gamma(1)=x\). Then, recalling the bound \(\partial_{t}g\geq-2k_{2}g\) in \(\mathcal{B}_{R,T}\), we have
\[\partial_{t}\varrho(x,t) =\partial_{t}\int_{0}^{1}|\gamma^{\prime}(s)|_{g(t)}\,ds=\int_{0 }^{1}\partial_{t}[|\gamma^{\prime}(s)|_{g(t)}]\,ds\] \[=\int_{0}^{1}\frac{[\partial_{t}g](\gamma^{\prime},\gamma^{ \prime})}{2|\gamma^{\prime}|_{g(t)}}\,ds\] \[\geq\int_{0}^{1}-k_{2}|\gamma^{\prime}|_{g(t)}\,ds=-k_{2}\varrho \geq-k_{2}R. \tag{3.23}\]
Hence a straightforward differentiation followed by an application of the properties of \(\bar{\phi}\) in Lemma 3.4 gives \(\partial_{t}\phi=\bar{\phi}_{t}+\bar{\phi}_{\varrho}\partial_{t}\varrho\leq|\bar{\phi}_{t}|-k_{2}R\bar{\phi}_{\varrho}=|\bar{\phi}_{t}|+k_{2}R|\bar{\phi}_{\varrho}|\leq C[1/\tau+k_{2}]\sqrt{\phi}\). Therefore, summarising, the above estimates on the derivatives of \(\phi\), along with the term \(2\mathsf{k}\phi w\) in (3.16), give, after an application of Young's inequality, the bounds
\[\frac{w(-\Delta_{f})\phi}{2(1-h)}\leq\frac{\phi w^{2}}{8}+C \left(\frac{1}{R^{4}}+\frac{[\gamma_{\Delta_{f}}]_{+}^{2}}{R^{2}}+k_{1}^{2} \right), \tag{3.24}\] \[\frac{w(2\mathsf{k}+\partial_{t})\phi}{2(1-h)}\leq\frac{\phi w^{ 2}}{8}+C\left(\frac{1}{\tau^{2}}+\mathsf{k}^{2}+k_{2}^{2}\right). \tag{3.25}\]
Now referring to (3.16), noting the inequality \(1-h\geq 1\) and making use of the bounds obtained in (3.17)-(3.25), it follows after reverting to \(u=De^{h}\), writing \(k=(k_{1}^{2}+k_{2}^{2})^{1/2}\) and noting \(\mathsf{k}=(n-1)k_{1}+k_{2}\), that the following upper bound holds for \(\phi w^{2}\) at \((x_{1},t_{1})\),
\[\phi w^{2}\leq C\left\{\frac{1+R^{2}[\gamma_{\Delta_{f}}]_{+}^{2}}{R^{4}}+\frac{1} {\tau^{2}}+k^{2}+\sup_{Q_{R,T}}\left(\mathsf{R}_{\Sigma}^{2}(u)+\mathsf{P}_{ \Sigma}^{4/3}(u)\right)\right\}. \tag{3.26}\]
Recalling the maximality of \(\phi w\) at \((x_{1},t_{1})\) along with \(\phi\equiv 1\) when \(d(x,x_{0};t)\leq R/2\) and \(\tau\leq t\leq T\), it follows that \(w^{2}(x,\tau)=(\phi^{2}w^{2})(x,\tau)\leq(\phi^{2}w^{2})(x_{1},t_{1})\leq( \phi w^{2})(x_{1},t_{1})\) when \(d(x,x_{0};\tau)\leq R/2\). Hence upon noting \(w=|\nabla h|^{2}/(1-h)^{2}\), the above gives
\[\frac{|\nabla\log u|}{1-\log(u/D)}\leq C\left\{\frac{1}{R}+\sqrt{\frac{[\gamma_{\Delta_{f}}]_{+}}{R}}+ \frac{1}{\sqrt{\tau}}+\sqrt{k}+\sup_{Q_{R,T}}\left(\mathsf{R}_{\Sigma}^{1/2}( u)+\mathsf{P}_{\Sigma}^{1/3}(u)\right)\right\}. \tag{3.27}\]
Thus in either case the estimate is true at \((x,\tau)\) and so the desired conclusion when \(q\equiv 0\) and hence \(\mathsf{N}\equiv 0\) follows from the arbitrariness of \(\tau\in(0,T]\).
The passage to the general case (non-zero \(q\)) now follows by replacing \(\Sigma\) with \(\Sigma+qu\). Indeed by the sub-additivity of \([\cdot]_{+}\) and the triangle inequality we have \(\mathsf{R}_{\Sigma+qu}\leq\mathsf{R}_{\Sigma}+\mathsf{R}_{qu}\) and \(\mathsf{P}_{\Sigma+qu}\leq\mathsf{P}_{\Sigma}+\mathsf{P}_{qu}\).
Moreover, referring to (2.3) it is easily seen that,
\[\mathsf{R}_{qu}=\left[\frac{q}{1-h}+\frac{hqu}{u(1-h)^{2}}\right]_{+}=\left[\frac {qu(1-h)+hqu}{u(1-h)^{2}}\right]_{+}=\frac{q_{+}}{(1-h)^{2}}\leq q_{+}, \tag{3.28}\]
and
\[\mathsf{P}_{qu}=\frac{|u\nabla q|}{u(1-h)^{2}}=\frac{|\nabla q|}{(1-h)^{2}} \leq|\nabla q|. \tag{3.29}\]
Thus invoking the conclusion of the theorem from the first part (when \(q\equiv 0\)) and using the above calculation, the desired estimate (2.2) for general \(q\) follows by noting,
\[\mathsf{R}^{1/2}_{\Sigma+qu}+\mathsf{P}^{1/3}_{\Sigma+qu} \leq[\mathsf{R}_{\Sigma}+\mathsf{R}_{qu}]^{1/2}+[\mathsf{P}_{ \Sigma}+\mathsf{P}_{qu}]^{1/3}\] \[\leq\mathsf{R}^{1/2}_{\Sigma}+\mathsf{R}^{1/2}_{qu}+\mathsf{P}^ {1/3}_{\Sigma}+\mathsf{P}^{1/3}_{qu}\] \[\leq\mathsf{R}^{1/2}_{\Sigma}+q_{+}^{1/2}+\mathsf{P}^{1/3}_{ \Sigma}+|\nabla q|^{1/3}=\mathsf{N}_{q}+\mathsf{R}^{1/2}_{\Sigma}+\mathsf{P}^ {1/3}_{\Sigma}. \tag{3.30}\]
The proof is thus complete.
### Proof of the elliptic Harnack inequality in Theorem 2.3
We now come to the proof of the Harnack inequality as formulated in Theorem 2.3. In its local form this is a consequence of the local estimate in Theorem 2.1 and in its global form this is a consequence of the global estimate in Theorem 2.2. Below we give the proof in the local case as the other is similar.
Towards this end fix \(t\in(0,T)\) and pick \((x_{1},t)\) and \((x_{2},t)\) in \(Q_{R/2,T}\) as in the theorem. Let \(\zeta=\zeta(s)\) with \(0\leq s\leq 1\) be a shortest curve with respect to \(g=g(t)\) joining \(x_{1}\), \(x_{2}\) [that is, \(\zeta(0)=x_{1}\), \(\zeta(1)=x_{2}\)] with \((\zeta(s),t)\) lying entirely in \(Q_{R/2,T}\). Put \(d=d(x_{1},x_{2},t)\) and assume that \(\mathsf{R}_{\Sigma},\mathsf{P}_{\Sigma}<\infty\) (otherwise the right-hand side of the asserted inequality is infinite and there is nothing to prove). Now by utilising the estimate (2.2) in Theorem 2.1 we can write
\[\log\frac{1-h(x_{2},t)}{1-h(x_{1},t)} =\int_{0}^{1}\frac{d}{ds}\log[1-h(\zeta(s),t)]\,ds=\int_{0}^{1}-\frac{\langle\nabla h(\zeta(s),t),\zeta^{\prime}(s)\rangle}{1-h(\zeta(s),t)}\,ds\] \[\leq\int_{0}^{1}\frac{|\nabla h||\zeta^{\prime}|}{1-h}\,ds\leq\left[\sup_{Q_{R/2,T}}\frac{|\nabla h|}{1-h}\right]\int_{0}^{1}|\zeta^{\prime}|\,ds\] \[\leq C\,d\left\{\frac{1}{R}+\frac{1}{\sqrt{t}}+\sqrt{k}+\sup_{Q_{R,T}}\left[\mathsf{N}_{q}+\mathsf{R}^{1/2}_{\Sigma}+\mathsf{P}^{1/3}_{\Sigma}\right]+\sqrt{\frac{[\gamma_{\Delta_{f}}]_{+}}{R}}\right\}. \tag{3.31}\]
Here we have used \(|\nabla h|/(1-h)=|\nabla\log u|/[1-\log(u/D)]\). Therefore a direct calculation gives,
\[\frac{1-h(x_{2},t)}{1-h(x_{1},t)} =\frac{1-\log[u(x_{2},t)/D]}{1-\log[u(x_{1},t)/D]}=\frac{\log[eD/ u(x_{2},t)]}{\log[eD/u(x_{1},t)]} \tag{3.32}\] \[\leq\exp\left[Cd\left(\frac{1}{R}+\sqrt{k}+\frac{1}{\sqrt{t}}+ \sup_{Q_{R,T}}\left[\mathsf{N}_{q}+\mathsf{R}^{1/2}_{\Sigma}+\mathsf{P}^{1/3 }_{\Sigma}\right]+\sqrt{\frac{[\gamma_{\Delta_{f}}]_{+}}{R}}\right)\right]\]
and so the assertion follows. As indicated earlier the proof of the global version of the inequality is similar and is hence abbreviated.
## 4. Proof of the elliptic Hamilton estimate in Theorem 2.4
### Some intermediate parabolic lemmas (II)
Before moving onto presenting the proof of Theorem 2.4 we gather together some useful components and tools that are needed later. In the spirit of what was done earlier in Section 3 we present here a set of parabolic identities for suitable quantities built out of the solution.
**Lemma 4.1**.: _Suppose \(u\) is a positive solution to \((\partial_{t}-\Delta_{f})u=\Sigma(t,x,u)\) and put \(h=u^{\beta}\) where \(\beta\in(0,1)\). Then \(h\) satisfies the equation_
\[\square h=(\partial_{t}-\Delta_{f})h=(1-\beta)|\nabla h|^{2}/(\beta h)+\beta h ^{1-1/\beta}\Sigma(t,x,h^{1/\beta}). \tag{4.1}\]
Proof.: A basic calculation gives \(\Delta_{f}h=\beta(\beta-1)u^{\beta-2}|\nabla u|^{2}+\beta u^{\beta-1}\Delta_{ f}u\) and \(\partial_{t}h=\beta u^{\beta-1}\partial_{t}u\). Therefore (4.1) follows at once by substitution in (1.1).
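In more detail, since \(|\nabla h|^{2}=\beta^{2}u^{2\beta-2}|\nabla u|^{2}\) and \(u=h^{1/\beta}\), the two formulae above combine to give
\[(\partial_{t}-\Delta_{f})h=\beta u^{\beta-1}(\partial_{t}-\Delta_{f})u+\beta(1-\beta)u^{\beta-2}|\nabla u|^{2}=\beta h^{1-1/\beta}\Sigma(t,x,h^{1/\beta})+\frac{(1-\beta)|\nabla h|^{2}}{\beta h},\]
which is exactly (4.1).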
**Lemma 4.2**.: _Under the assumptions of Lemma 4.1 on \(u\), \(h\), the function \(|\nabla h|^{2}\) satisfies the equation_
\[\square|\nabla h|^{2}=(\partial_{t}-\Delta_{f})|\nabla h|^{2}= -[\partial_{t}g(\nabla h,\nabla h)+2\mathscr{R}ic_{f}(\nabla h, \nabla h)]\] \[-2|\nabla^{2}h|^{2}+\frac{2(\beta-1)}{\beta h^{2}}\left[|\nabla h |^{4}-h\langle\nabla h,\nabla|\nabla h|^{2}\rangle\right]\] \[+2\beta\langle\nabla h,\nabla[h^{1-1/\beta}\Sigma(t,x,h^{1/\beta })]\rangle. \tag{4.2}\]
Proof.: By making use of the weighted Bochner-Weitzenbock formula (1.7) and the evolution of \(h\) described by (4.1) we have
\[(\partial_{t}-\Delta_{f})|\nabla h|^{2}= \ \partial_{t}|\nabla h|^{2}-2|\nabla^{2}h|^{2}-2\langle\nabla h, \nabla\Delta_{f}h\rangle-2\mathscr{R}ic_{f}(\nabla h,\nabla h)\] \[= \ -g_{t}(\nabla h,\nabla h)+2\langle\nabla h,\nabla(\partial_{t} -\Delta_{f})h\rangle-2|\nabla^{2}h|^{2}-2\mathscr{R}ic_{f}(\nabla h,\nabla h)\] \[= \ -[g_{t}(\nabla h,\nabla h)+2\mathscr{R}ic_{f}(\nabla h,\nabla h )]-2|\nabla^{2}h|^{2}\] \[+\langle 2(1-\beta)\nabla h,\nabla[|\nabla h|^{2}/(\beta h)] \rangle+2\beta\langle\nabla h,\nabla[h^{1-1/\beta}\Sigma(t,x,h^{1/\beta})]\rangle.\]
Expanding the gradient and the inner product then gives the desired conclusion.
With the conclusions of the above two lemmas at hand let us now calculate \(\square(h|\nabla h|^{2})\). To this end first using the weighted Bochner-Weitzenbock formula we can write
\[\Delta_{f}(h|\nabla h|^{2})= \ h\Delta_{f}|\nabla h|^{2}+|\nabla h|^{2}\Delta_{f}h+2\langle \nabla h,\nabla|\nabla h|^{2}\rangle\] \[= \ 2h|\nabla^{2}h\big{|}^{2}+2h\langle\nabla h,\nabla\Delta_{f}h \rangle+2h\mathscr{R}ic_{f}(\nabla h,\nabla h)\] \[+|\nabla h|^{2}\Delta_{f}h+2\langle\nabla h,\nabla|\nabla h|^{2}\rangle,\]
and next we have
\[\partial_{t}(h|\nabla h|^{2})=|\nabla h|^{2}\partial_{t}h+h\partial_{t}|\nabla h |^{2}=|\nabla h|^{2}\partial_{t}h-h[\partial_{t}g](\nabla h,\nabla h)+2h \langle\nabla h,\nabla\partial_{t}h\rangle.\]
Thus putting the two together gives
\[\square(h|\nabla h|^{2})= -2h|\nabla^{2}h|^{2}+2h\langle\nabla h,\nabla\square h\rangle-2 h\mathscr{R}ic_{f}(\nabla h,\nabla h)\] \[+|\nabla h|^{2}\square h-2\langle\nabla h,\nabla|\nabla h|^{2} \rangle-h[\partial_{t}g](\nabla h,\nabla h). \tag{4.3}\]
### Proof of the local estimate in Theorem 2.4
We consider first the case \(q\equiv 0\). From (1.19) and \(h\geq 0\) we have \(h[\partial_{t}g](\nabla h,\nabla h)+2h\mathscr{R}ic_{f}(\nabla h,\nabla h)\geq-2 \mathsf{k}h|\nabla h|^{2}\) and so substituting into (4.3) and making use of Lemma 4.1 results in
\[(\partial_{t}-\Delta_{f})[h|\nabla h|^{2}]\leq -2h|\nabla^{2}h|^{2}-2(\beta-1)[h\langle\nabla h,\nabla|\nabla h| ^{2}\rangle-|\nabla h|^{4}]/(\beta h)+2\mathsf{k}h|\nabla h|^{2}\] \[+2\beta h\langle\nabla h,\nabla[h^{1-1/\beta}\Sigma(t,x,h^{1/ \beta})]\rangle-(\beta-1)|\nabla h|^{4}/(\beta h)\] \[+\beta h^{1-1/\beta}\Sigma(t,x,h^{1/\beta})|\nabla h|^{2}-2 \langle\nabla h,\nabla|\nabla h|^{2}\rangle. \tag{4.4}\]
Using \(2|\sqrt{h}\nabla^{2}h+[\nabla h\otimes\nabla h]/\sqrt{h}|^{2}=2h|\nabla^{2}h |^{2}+2\langle\nabla h,\nabla|\nabla h|^{2}\rangle+2|\nabla h|^{4}/h\geq 0\) we can rewrite (4.4) as
\[(\partial_{t}-\Delta_{f})[h|\nabla h|^{2}]\leq -[2(\beta-1)/\beta]\langle\nabla h,\nabla|\nabla h|^{2}\rangle+[ (3\beta-1)/(\beta h)]|\nabla h|^{4}\] \[+2\mathsf{k}h|\nabla h|^{2}+2\beta h\langle\nabla h,\nabla[h^{1- 1/\beta}\Sigma(t,x,h^{1/\beta})]\rangle\] \[+\beta h^{1-1/\beta}\Sigma(t,x,h^{1/\beta})|\nabla h|^{2}, \tag{4.5}\]
and upon making note of \(\langle\nabla h,\nabla(h|\nabla h|^{2})\rangle=|\nabla h|^{4}+h\langle\nabla h,\nabla|\nabla h|^{2}\rangle\) and substituting in (4.5) we can write
\[(\partial_{t}-\Delta_{f})[h|\nabla h|^{2}]\leq -[2(\beta-1)/(\beta h)]\langle\nabla h,\nabla(h|\nabla h|^{2})\rangle\] \[-[(3-5\beta)/(\beta h^{3})](h|\nabla h|^{2})^{2}+2\mathsf{k}(h| \nabla h|^{2})\] \[+2\beta h\langle\nabla h,\nabla[h^{1-1/\beta}\Sigma(t,x,h^{1/ \beta})]\rangle\] \[+\beta h^{1-1/\beta}\Sigma(t,x,h^{1/\beta})|\nabla h|^{2}. \tag{4.6}\]
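Indeed, the passage from (4.5) to (4.6) is a matter of substituting the gradient identity just noted: one has
\[-\frac{2(\beta-1)}{\beta h}\langle\nabla h,\nabla(h|\nabla h|^{2})\rangle=-\frac{2(\beta-1)}{\beta}\langle\nabla h,\nabla|\nabla h|^{2}\rangle-\frac{2(\beta-1)}{\beta h}|\nabla h|^{4},\]
and since \(-(3-5\beta)|\nabla h|^{4}/(\beta h)-2(\beta-1)|\nabla h|^{4}/(\beta h)=(3\beta-1)|\nabla h|^{4}/(\beta h)\) [note \((h|\nabla h|^{2})^{2}/h^{3}=|\nabla h|^{4}/h\)], the two formulations agree.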
Let \(\mathsf{Z}_{\Sigma}\) denote the sum of the last two terms on the right-hand side of (4.5) and let us abbreviate the arguments of \(\Sigma,\Sigma_{x},\Sigma_{u}\) for convenience. A basic calculation gives
\[\langle\nabla h,\nabla[h^{1-1/\beta}\Sigma]\rangle=\frac{(\beta-1)}{\beta h^ {1/\beta}}|\nabla h|^{2}\Sigma+h^{1-1/\beta}\langle\nabla h,\nabla\Sigma\rangle, \tag{4.7}\]
where the last term on the right in (4.7) can in turn be calculated as
\[\langle\nabla h,\nabla\Sigma\rangle =\langle\nabla h,\Sigma_{x}+\nabla(h^{1/\beta})\Sigma_{u}\rangle\] \[=\langle\nabla h,\Sigma_{x}\rangle+\beta^{-1}h^{1/\beta-1}|\nabla h |^{2}\Sigma_{u}\] \[=\langle\nabla h,\Sigma_{x}\rangle+\beta^{-1}h^{1/\beta-2}(h| \nabla h|^{2})\Sigma_{u}. \tag{4.8}\]
Substituting (4.7)-(4.8) back into the expression for \(\mathsf{Z}_{\Sigma}\) leads to
\[\mathsf{Z}_{\Sigma} =2\beta h\langle\nabla h,\nabla[h^{1-1/\beta}\Sigma]\rangle+ \beta h^{1-1/\beta}|\nabla h|^{2}\Sigma\] \[=(3\beta-2)h^{1-1/\beta}|\nabla h|^{2}\Sigma+2\beta h^{2-1/\beta} \langle\nabla h,\nabla\Sigma\rangle\] \[=\frac{(3\beta-2)\Sigma+2u\Sigma_{u}}{u}(h|\nabla h|^{2})+\frac{2 \beta h^{2}\langle\nabla h,\Sigma_{x}\rangle}{u}. \tag{4.9}\]
We now take a space-time cut-off function \(\phi\) as in (3.11) and consider the localised function \(\phi h|\nabla h|^{2}\). Then as in the proof of Theorem 2.1 we can write
\[(\partial_{t}-\Delta_{f})[\phi h|\nabla h|^{2}]\leq -\left\langle 2\left[\frac{(\beta-1)\nabla h}{\beta h}+\frac{ \nabla\phi}{\phi}\right],\nabla(\phi h|\nabla h|^{2})\right\rangle\] \[+2(h|\nabla h|^{2})\left\langle\frac{(\beta-1)\nabla h}{\beta h}+ \frac{\nabla\phi}{\phi},\nabla\phi\right\rangle \tag{4.10}\] \[-\frac{3-5\beta}{\beta h^{3}}\phi(h|\nabla h|^{2})^{2}+h|\nabla h |^{2}(\partial_{t}-\Delta_{f}+2\mathsf{k})\phi+\mathsf{Z}_{\Sigma}\phi.\]
For fixed \(\tau\in(0,T]\) let \((x_{1},t_{1})\) be a maximum point for the localised function \(\phi h|\nabla h|^{2}\) in the compact set \(\{d(x,x_{0},t)\leq R,0\leq t\leq\tau\}\subset M\times[0,T]\). Without loss of generality we can take \(t_{1}>0\) and for the sake of establishing the estimate at \((x,\tau)\) in \(Q_{R/2,T}\) it suffices to confine to the case \(d(x_{1},x_{0},t_{1})\geq 1\). Now at \((x_{1},t_{1})\) we have the relations \(\partial_{t}(\phi h|\nabla h|^{2})\geq 0\), \(\nabla(\phi h|\nabla h|^{2})=0\) and \(\Delta_{f}(\phi h|\nabla h|^{2})\leq 0\). Therefore applying these to (4.10) and rearranging the inequality result in
\[\frac{3-5\beta}{\beta}(h|\nabla h|^{2})^{2}\phi\leq \ 2h^{3}(h|\nabla h|^{2})\left\langle\frac{(\beta-1)\nabla h}{\beta h }+\frac{\nabla\phi}{\phi},\nabla\phi\right\rangle\] \[+h^{3}(h|\nabla h|^{2})(\partial_{t}-\Delta_{f}+2\mathsf{k})\phi +h^{3}\mathsf{Z}_{\Sigma}\phi. \tag{4.11}\]
We now proceed onto bounding from above each of the terms on the right-hand side of (4.11). Again the argument proceeds by considering the two cases \(d(x_{1},x_{0},t_{1})\leq 1\) and \(d(x_{1},x_{0},t_{1})\geq 1\). However in view of certain similarities with the proof of Theorem 2.1, we shall remain brief, focusing on case two only and mainly on the differences. Towards this end the first two terms on the right-hand side of (4.11) are bounded directly in modulus by using the Cauchy-Schwarz inequality followed by Young's inequality:
\[2\frac{\beta-1}{\beta}h^{2}(h|\nabla h|^{2})\langle\nabla h, \nabla\phi\rangle \leq 2\left|\frac{1-\beta}{\beta}h^{2}(h|\nabla h|^{2})\langle \nabla h,\nabla\phi\rangle\right|\] \[\leq\frac{2(1-\beta)}{\beta}\phi^{3/4}(h|\nabla h|^{2})^{3/2}h^{3 /2}\frac{|\nabla\phi|}{\phi^{3/4}}\] \[\leq\frac{1-\beta}{4\beta}(h|\nabla h|^{2})^{2}\phi+C(\beta) \frac{|\nabla\phi|^{4}}{\phi^{3}}h^{6}\] \[\leq\frac{1-\beta}{4\beta}(h|\nabla h|^{2})^{2}\phi+\frac{C(\beta )}{R^{4}}h^{6}, \tag{4.12}\]
where we have used \(\sqrt{h|\nabla h|^{2}}=\sqrt{h}|\nabla h|\) and in much the same way
\[2\frac{|\nabla\phi|^{2}}{\phi}h^{3}(h|\nabla h|^{2}) \leq 2\phi^{1/2}(h|\nabla h|^{2})\frac{|\nabla\phi|^{2}}{\phi^{3/ 2}}h^{3}\] \[\leq\frac{1-\beta}{4\beta}(h|\nabla h|^{2})^{2}\phi+C(\beta)\frac{ |\nabla\phi|^{4}}{\phi^{3}}h^{6}\] \[\leq\frac{1-\beta}{4\beta}(h|\nabla h|^{2})^{2}\phi+\frac{C(\beta )}{R^{4}}h^{6}. \tag{4.13}\]
For the \(\mathsf{Z}_{\Sigma}\) term by noting \(h^{5}|\langle\nabla h,\Sigma_{x}\rangle|\leq h^{5}|\nabla h||\Sigma_{x}|=h^{9/2} \sqrt{h}|\nabla h||\Sigma_{x}|\) we have
\[h^{3}\mathsf{Z}_{\Sigma}\phi =\frac{2u\Sigma_{u}-(2-3\beta)\Sigma}{u}h^{3}(h|\nabla h|^{2}) \phi+\frac{2\beta\langle\nabla h,\Sigma_{x}\rangle}{u}h^{5}\phi\] \[\leq\left[\frac{2u\Sigma_{u}-(2-3\beta)\Sigma}{u}\right]_{+}h^{3} (h|\nabla h|^{2})\phi+2\beta h^{9/2}\sqrt{h|\nabla h|^{2}}\frac{|\Sigma_{x}|}{u}\phi\] \[\leq\frac{1-\beta}{4\beta}(h|\nabla h|^{2})^{2}\phi+C(\beta)[ \mathsf{T}_{\Sigma}^{2}+\mathsf{S}_{\Sigma}^{4/3}]h^{6}, \tag{4.14}\]
where in the last line we have written \(\mathsf{T}_{\Sigma}=\{[2u\Sigma_{u}-(2-3\beta)\Sigma]/u\}_{+}\) and \(\mathsf{S}_{\Sigma}=|\Sigma_{x}|/u\). For the terms involving \(\Delta_{f}\phi\) and \(\partial_{t}\phi\) we proceed similarly to the proof of Theorem 2.1. Indeed recall from the Laplacian comparison theorem and the discussion on \((\partial_{t}-\Delta_{f})\phi\) in Section 3 that here we have the bounds
\[-\Delta_{f}\phi\leq C\left(\frac{1}{R^{2}}+\frac{[\gamma_{\Delta_{f}}]_{+}}{R }+k_{1}\right)\sqrt{\phi}\qquad\text{and}\qquad\partial_{t}\phi\leq C\left( \frac{1}{\tau}+k_{2}\right)\sqrt{\phi}.\]
Hence, putting together all three terms in this set in (4.11), we have,
\[h^{4}|\nabla h|^{2}\left(\partial_{t}-\Delta_{f}+2\mathsf{k}\right) \phi \leq Ch|\nabla h|^{2}\sqrt{\phi}\left(\frac{1}{R^{2}}+\frac{[ \gamma_{\Delta_{f}}]_{+}}{R}+\frac{1}{\tau}+k_{1}+k_{2}+\mathsf{k}\right)h^{3}\] \[\leq\frac{1-\beta}{4\beta}(h|\nabla h|^{2})^{2}\phi+C(\beta)\left( \frac{1}{R^{4}}+\frac{[\gamma_{\Delta_{f}}]_{+}^{2}}{R^{2}}+\frac{1}{\tau^{2}} +k^{2}\right)h^{6}. \tag{4.15}\]
Having now estimated each of the individual terms on the right-hand side of (4.11) we proceed next by substituting these back into the inequality and finalising the estimate. To this end noting that the term \((3-5\beta)\phi(h|\nabla h|^{2})^{2}/\beta\) on the left majorises, after summation, the term \((1-\beta)\phi(h|\nabla h|^{2})^{2}/\beta\) on the right subject to \(1-2\beta>0\), it follows from (4.11)-(4.15), upon rearranging terms and a basic set of calculations that at the point \((x_{1},t_{1})\) we have
\[(h|\nabla h|^{2})^{2}\phi\leq C(\beta)\left\{\frac{1}{R^{4}}+\frac{[\gamma_{ \Delta_{f}}]_{+}^{2}}{R^{2}}+\frac{1}{\tau^{2}}+k^{2}+[\mathsf{T}_{\Sigma}^{2} +\mathsf{S}_{\Sigma}^{4/3}]\right\}h^{6}. \tag{4.16}\]
By the maximality of \(\phi h|\nabla h|^{2}\) at \((x_{1},t_{1})\) we have for any \(x\) with \(d(x,x_{0},\tau)\leq R/2\) the chain of inequalities
\[(h^{2}|\nabla h|^{4})(x,\tau)\leq(\phi^{2}h^{2}|\nabla h|^{4})(x,\tau)\leq( \phi^{2}h^{2}|\nabla h|^{4})(x_{1},t_{1})\leq(\phi h^{2}|\nabla h|^{4})(x_{1},t_{1}), \tag{4.17}\]
[recall that \(\phi(x,\tau)=1\) when \(d(x,x_{0};\tau)\leq R/2\)]. Hence combining the latter with (4.16) and making note of the relations \(h=u^{\beta}\) and \(h|\nabla h|^{2}=\beta^{2}h^{3}|\nabla u|^{2}/u^{2}\) this gives
\[\frac{|\nabla u|^{2}}{u^{2-3\beta}}\leq C(\beta)\Big{(}\sup_{Q_{R,T}}u\Big{)}^ {3\beta}\left\{\frac{1}{R^{2}}+\frac{[\gamma_{\Delta_{f}}]_{+}}{R}+\frac{1}{ \tau}+k+\sup_{Q_{R,T}}[\mathsf{T}_{\Sigma}+\mathsf{S}_{\Sigma}^{2/3}]\right\}.\]
The arbitrariness of \(\tau\in(0,T]\) now gives the desired estimate for every \((x,t)\in Q_{R/2,T}\) with \(t>0\). In particular setting \(\beta=1/3\) and rearranging terms gives (2.10) when \(q\equiv 0\) and \(\mathsf{N}\equiv 0\). The passage to the general case with non-zero \(q\) is analogous to Theorem 2.1 upon noting \(\mathsf{T}_{\Sigma+qu}\leq\mathsf{T}_{\Sigma}+\mathsf{T}_{qu}=\mathsf{T}_{ \Sigma}+q_{+}\) and \(\mathsf{S}_{\Sigma+qu}\leq\mathsf{S}_{\Sigma}+\mathsf{S}_{qu}=\mathsf{S}_{ \Sigma}+|\nabla q|\). The rest of the proof is similar and thus abbreviated.
## 5. Proof of the parabolic Li-Yau type estimate in Theorem 2.6
This section is devoted to the proof of the nonlinear version of the Li-Yau estimate (also known as the differential Harnack inequality) in Theorem 2.6. As the proof is quite involved and requires several intermediate steps, for the sake of clarity and convenience, we break this into three subsections, focusing first on deriving and establishing some of the necessary tools and identities and then finalising the proof in the last subsection.
### Some basic identities on evolutionary metrics and potentials
The results and identities proved here will be repeatedly used throughout the section. Our first task is to obtain a relationship between \(\partial_{t}\Delta_{f}h\) and \(\Delta_{f}\partial_{t}h\) for a smooth function \(h=h(x,t)\) given that both the metric \(g\) and the potential \(f\) are time dependent. The following lemma gives the required ingredients. For convenience in writing we hereafter set
\[\frac{\partial g}{\partial t}(x,t)=2v(x,t). \tag{5.1}\]
**Lemma 5.1**.: _With the notation introduced above for every smooth function \(h=h(x,t)\) we have the relations_
\[\partial_{t}|\nabla h|^{2}=-2v(\nabla h,\nabla h)+2\langle\nabla h,\nabla \partial_{t}h\rangle, \tag{5.2}\]
\[\partial_{t}\Delta h=\Delta\partial_{t}h-2\langle v,\nabla^{2}h\rangle- \langle 2\mathrm{div}\,v-\nabla(\mathrm{Tr}_{g}\,v),\nabla h\rangle, \tag{5.3}\]
_where \((\mathrm{div}\,v)_{k}=g^{ij}\nabla_{i}v_{jk}\) and \(\mathrm{Tr}_{g}\,v=g^{ij}v_{ij}\)._
Proof.: The identity in (5.2) follows by first writing \(|\nabla h|^{2}=g^{ij}\nabla_{i}h\nabla_{j}h\) and then taking \(\partial_{t}\) making note of \(\partial_{t}g^{ij}=-2g^{ik}g^{j\ell}v_{k\ell}=-2v^{ij}\). For the identity in (5.3) we first recall the relation
\[\frac{\partial}{\partial t}\Gamma^{k}_{ij}=g^{k\ell}(\nabla_{i}v_{j\ell}+ \nabla_{j}v_{i\ell}-\nabla_{\ell}v_{ij}). \tag{5.4}\]
Then direct differentiation and making use of (5.4) leads to
\[\partial_{t}\Delta h =\partial_{t}[g^{ij}(\nabla_{i}\nabla_{j}-\Gamma^{k}_{ij}\nabla _{k})h]\] \[=(\partial_{t}g^{ij})(\nabla_{i}\nabla_{j}-\Gamma^{k}_{ij}\nabla _{k})h+g^{ij}(\nabla_{i}\nabla_{j}-\Gamma^{k}_{ij}\nabla_{k})\partial_{t}h-g^ {ij}(\partial_{t}\Gamma^{k}_{ij})\nabla_{k}h\] \[=-2\langle v,\nabla^{2}h\rangle+\Delta\partial_{t}h-g^{ij}g^{k \ell}[\nabla_{i}v_{j\ell}+\nabla_{j}v_{i\ell}-\nabla_{\ell}v_{ij}]\nabla_{k}h\] \[=\Delta\partial_{t}h-2\langle v,\nabla^{2}h\rangle-g^{k\ell}[2g^ {ij}\nabla_{i}v_{j\ell}-\nabla_{\ell}(\mathrm{Tr}_{g}\,v)]\nabla_{k}h\] \[=\Delta\partial_{t}h-2\langle v,\nabla^{2}h\rangle-\langle 2 \mathrm{div}\,v-\nabla(\mathrm{Tr}_{g}\,v),\nabla h\rangle \tag{5.5}\]
which is the desired identity.
**Lemma 5.2**.: _Subject to the notation in (5.1), for every pair of smooth functions \(f=f(x,t)\) and \(h=h(x,t)\) we have_
\[\partial_{t}\langle\nabla f,\nabla h\rangle=-2v(\nabla f,\nabla h)+\langle \nabla\partial_{t}f,\nabla h\rangle+\langle\nabla f,\nabla\partial_{t}h\rangle, \tag{5.6}\]
\[\partial_{t}(\Delta_{f}h)= \ \Delta_{f}(\partial_{t}h)-2\langle v,\nabla^{2}h\rangle- \langle 2\mathrm{div}v-\nabla(\mathrm{Tr}_{g}\,v),\nabla h\rangle\] \[-\langle\nabla\partial_{t}f,\nabla h\rangle+2v(\nabla f,\nabla h). \tag{5.7}\]
Proof.: The first identity follows similar to (5.2) in Lemma 5.1. For the second identity we proceed by directly calculating \(\partial_{t}\Delta_{f}h\) whilst making note of (5.3) and (5.6)
\[\partial_{t}(\Delta_{f}h) =\partial_{t}(\Delta h-\langle\nabla f,\nabla h\rangle)\] \[=\Delta(\partial_{t}h)-2\langle v,\nabla^{2}h\rangle-\langle 2 \mathrm{div}v-\nabla(\mathrm{Tr}_{g}v),\nabla h\rangle\] \[\quad-\langle\nabla\partial_{t}f,\nabla h\rangle-\langle\nabla f,\nabla\partial_{t}h\rangle+2v\langle\nabla f,\nabla h\rangle\] \[=\Delta_{f}(\partial_{t}h)-2\langle v,\nabla^{2}h\rangle- \langle 2\mathrm{div}v-\nabla(\mathrm{Tr}_{g}v),\nabla h\rangle\] \[\quad-\langle\nabla\partial_{t}f,\nabla h\rangle+2v(\nabla f, \nabla h). \tag{5.8}\]
The proof is thus complete.
**Lemma 5.3**.: _Suppose \(a,b,z\in\mathbb{R}\), \(c,y>0\) and \(\lambda>1\) are arbitrary constants such that \(y-\lambda z>0\). Then for any \(\varepsilon\in(0,1)\) we have_
\[(y-z)^{2} -a\sqrt{y}(y-\lambda z)-by-c\sqrt{y}\] \[\geq(y-\lambda z)^{2}/\lambda^{2}-a^{2}\lambda^{2}(y-\lambda z)/ [8(\lambda-1)]\] \[-(\lambda^{2}b^{2})/[4(1-\varepsilon)(\lambda-1)^{2}]-(3/4)c^{4 /3}[\lambda^{2}/(4\varepsilon(\lambda-1)^{2})]^{1/3}. \tag{5.9}\]
Proof.: Starting from the expression on the left-hand side in (5.9) we can write for any \(\delta,\varepsilon\) by basic considerations
\[(y-z)^{2} -a\sqrt{y}(y-\lambda z)-by-c\sqrt{y}\] \[=(1-\varepsilon-\delta)y^{2}-(2-\varepsilon\lambda)yz+z^{2}+( \varepsilon y-a\sqrt{y})(y-\lambda z)+\delta y^{2}-by-c\sqrt{y}\] \[=(1/\lambda-\varepsilon/2)(y-\lambda z)^{2}+(1-\varepsilon- \delta-1/\lambda+\varepsilon/2)y^{2}+(1-\lambda+\varepsilon\lambda^{2}/2)z^{2}\] \[\quad+(\varepsilon y-a\sqrt{y})(y-\lambda z)+\delta y^{2}-by-c \sqrt{y}. \tag{5.10}\]
In particular setting \(\delta=(1/\lambda-1)^{2}\) and \(\varepsilon=2-2/\lambda-2(1/\lambda-1)^{2}=2(\lambda-1)/\lambda^{2}\) gives \(1-\varepsilon-\delta-1/\lambda+\varepsilon/2=0\) and \(1-\lambda+\varepsilon\lambda^{2}/2=0\) and so by making note of the inequality \(\varepsilon y-a\sqrt{y}\geq-a^{2}/(4\varepsilon)\) with \(\varepsilon=2(\lambda-1)/\lambda^{2}>0\) we can deduce from (5.10) that
\[(y-z)^{2} -a\sqrt{y}(y-\lambda z)-by-c\sqrt{y} \tag{5.11}\] \[\geq(y-\lambda z)^{2}/\lambda^{2}-a^{2}\lambda^{2}(y-\lambda z)/[ 8(\lambda-1)]+(\lambda-1)^{2}y^{2}/\lambda^{2}-by-c\sqrt{y}.\]
Next, considering the last three terms in the above inequality only, we can write, for any \(\varepsilon\in(0,1)\) (a new free parameter, not to be confused with the specific choice of \(\varepsilon\) made above),
\[(\lambda-1)^{2} y^{2}/\lambda^{2}-by-c\sqrt{y}\] \[=(\lambda-1)^{2}y^{2}/\lambda^{2}-(1-\varepsilon)(\lambda-1)^{2} y^{2}/\lambda^{2}+(1-\varepsilon)(\lambda-1)^{2}y^{2}/\lambda^{2}-by-c\sqrt{y}\] \[\geq(\lambda-1)^{2}y^{2}/\lambda^{2}-(1-\varepsilon)(\lambda-1)^{ 2}y^{2}/\lambda^{2}-(\lambda^{2}b^{2})/[4(1-\varepsilon)(\lambda-1)^{2}]-c \sqrt{y}\] \[\geq\varepsilon(\lambda-1)^{2}y^{2}/\lambda^{2}-(\lambda^{2}b^{2} )/[4(1-\varepsilon)(\lambda-1)^{2}]-c\sqrt{y}\] \[\geq-(3/4)c^{4/3}[\lambda^{2}/(4\varepsilon(\lambda-1)^{2})]^{1/3 }-(\lambda^{2}b^{2})/[4(1-\varepsilon)(\lambda-1)^{2}] \tag{5.12}\]
where above we have made use of \((1-\varepsilon)(\lambda-1)^{2}y^{2}/\lambda^{2}-by\geq-(\lambda^{2}b^{2})/[4(1 -\varepsilon)(\lambda-1)^{2}]\) and \(\varepsilon(\lambda-1)^{2}y^{2}/\lambda^{2}-c\sqrt{y}\geq-(3/4)c^{4/3}[\lambda ^{2}/(4\varepsilon(\lambda-1)^{2})]^{1/3}\) to deduce the first and last inequalities respectively. Substituting back in (5.11) gives the desired inequality.
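As a quick check on the choice of parameters made above, note that with \(\delta=(1/\lambda-1)^{2}\) and \(\varepsilon=2(\lambda-1)/\lambda^{2}\) one has
\[1-\varepsilon-\delta-\frac{1}{\lambda}+\frac{\varepsilon}{2}=\frac{\lambda^{2}-2\lambda+1-(\lambda-1)^{2}}{\lambda^{2}}=0,\qquad 1-\lambda+\frac{\varepsilon\lambda^{2}}{2}=1-\lambda+(\lambda-1)=0,\]
while \(1/\lambda-\varepsilon/2=1/\lambda^{2}\), which accounts for the coefficient of \((y-\lambda z)^{2}\) in (5.11).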
### Evolution of a Harnack quantity
In this subsection we introduce a Harnack quantity built out of the solution \(u\) and establish a parabolic inequality by considering its evolution under the weighted heat operator.
**Lemma 5.4**.: _Let \(u\) be a positive solution to the equation \((\partial_{t}-\Delta_{f})u=\Sigma(t,x,u)\) and let \(G=G(x,t)\) be defined by_
\[G=t[|\nabla h|^{2}-\lambda\partial_{t}h+\lambda e^{-h}\Sigma(t,x,e^{h})],\qquad t \geq 0, \tag{5.13}\]
_where \(h=\log u\) and \(\lambda>1\) is a fixed constant. Suppose that the metric-potential pair \((g,f)\) is time dependent and of class \(\mathscr{C}^{2}\) and that we have (5.1). Then \(G\) satisfies the evolution equation_
\[\Box G=(\partial_{t}-\Delta_{f})G= \ 2\langle\nabla h,\nabla G\rangle-2t|\nabla^{2}h|^{2}-2t \mathscr{R}ic_{f}^{m}(\nabla h,\nabla h)\] \[+2t(\lambda-1)v(\nabla h,\nabla h)-2t\langle\nabla f,\nabla h \rangle^{2}/(m-n)+G/t\] \[+2\lambda t[\langle v,\nabla^{2}h\rangle+\langle\mathrm{div}v-(1 /2)\nabla(\mathrm{Tr}_{g}v),\nabla h\rangle] \tag{5.14}\] \[+\lambda t[\langle\nabla\partial_{t}f,\nabla h\rangle-2v(\nabla f,\nabla h)]\] \[-2t(\lambda-1)\langle\nabla h,\nabla[e^{-h}\Sigma(t,x,e^{h})] \rangle-\lambda t\Delta_{f}[e^{-h}\Sigma(t,x,e^{h})].\]
Proof.: Referring to the equation for \(u\) an easy calculation shows that \(h\) in turn satisfies the equation
\[\Box h=(\partial_{t}-\Delta_{f})h=|\nabla h|^{2}+e^{-h}\Sigma(t,x,e^{h}). \tag{5.15}\]
Moreover, using (5.13) and (5.15) it is easily seen that the following relation emerges between \(G\), \(|\nabla h|^{2}\) and \(\Delta_{f}h\):
\[\Delta_{f}h =-\left[|\nabla h|^{2}/\lambda-\partial_{t}h+e^{-h}\Sigma(t,x,e^{ h})\right]-(\lambda-1)|\nabla h|^{2}/\lambda\] \[=-G/(\lambda t)-(\lambda-1)|\nabla h|^{2}/\lambda,\qquad t>0. \tag{5.16}\]
Now having these identities and relations in place we next proceed onto applying the weighted heat operator \(\partial_{t}-\Delta_{f}\) to the Harnack quantity \(G\) given by (5.13). Towards this end we first note that
\[\Delta_{f}G=t(\Delta_{f}|\nabla h|^{2}-\lambda\Delta_{f}(\partial_{t}h)+ \lambda\Delta_{f}[e^{-h}\Sigma(t,x,e^{h})]). \tag{5.17}\]
As for the first term on the right, by recalling the weighted Bochner-Weitzenbock formula as applied to \(h\), we have
\[\Delta_{f}|\nabla h|^{2}/2=|\nabla^{2}h|^{2}+\langle\nabla h,\nabla\Delta_{f}h \rangle+\mathscr{R}ic_{f}^{m}(\nabla h,\nabla h)+\langle\nabla f,\nabla h \rangle^{2}/(m-n), \tag{5.18}\]
and so upon substituting back in (5.17) and making use of (5.7) this gives
\[\Delta_{f}G= \ t\left[2|\nabla^{2}h|^{2}+2\langle\nabla h,\nabla\Delta_{f}h \rangle+2\mathscr{R}ic_{f}^{m}(\nabla h,\nabla h)+2\frac{\langle\nabla f,\nabla h \rangle^{2}}{m-n}\right]\] \[-\lambda t\partial_{t}(\Delta_{f}h)-2\lambda t\left[\langle v, \nabla^{2}h\rangle+\langle\mathrm{div}v-\frac{1}{2}\nabla(\mathrm{Tr}_{g}v), \nabla h\rangle\right]\] \[-\lambda t[\langle\nabla\partial_{t}f,\nabla h\rangle-2v(\nabla f,\nabla h)]+\lambda t\Delta_{f}[e^{-h}\Sigma(t,x,e^{h})]. \tag{5.19}\]
Now referring to the sum on the right, the contributions of the second and fifth terms, modulo a factor \(t\) and upon using (5.15) and (5.16), can be simplified and re-written as,
\[2\langle\nabla h,\nabla\Delta_{f}h\rangle -\lambda\partial_{t}(\Delta_{f}h)\] \[= \ 2\langle\nabla h,\nabla\Delta_{f}h\rangle-2(\lambda-1)v(\nabla h,\nabla h)\] \[\ -[-(t\partial_{t}G-G)/t^{2}-2(\lambda-1)\langle\nabla h,\nabla( \partial_{t}h)\rangle]\] \[= \ 2(\lambda-1)\langle\nabla h,\nabla[\Delta_{f}h+|\nabla h|^{2}+ e^{-h}\Sigma(t,x,e^{h})]\rangle\] \[\ +2\langle\nabla h,\nabla\Delta_{f}h\rangle+(t\partial_{t}G-G)/ t^{2}-2(\lambda-1)v(\nabla h,\nabla h)\] \[= \ 2\lambda\langle\nabla h,\nabla[-G/(\lambda t)-(\lambda-1)| \nabla h|^{2}/\lambda]\rangle\] \[\ +2(\lambda-1)\langle\nabla h,\nabla|\nabla h|^{2}\rangle+(t \partial_{t}G-G)/t^{2}\] \[\ +2(\lambda-1)\langle\nabla h,\nabla[e^{-h}\Sigma(t,x,e^{h})] \rangle-2(\lambda-1)v(\nabla h,\nabla h)\] \[= \ 2(\lambda-1)[\langle\nabla h,\nabla[e^{-h}\Sigma(t,x,e^{h})] \rangle-v(\nabla h,\nabla h)]\] \[\ +(t\partial_{t}G-G)/t^{2}-2\langle\nabla h,\nabla G\rangle/t. \tag{5.20}\]
Therefore substituting this back into (5.19) gives
\[\Delta_{f}G= \ \partial_{t}G+2t|\nabla^{2}h|^{2}+2t(\lambda-1)[\langle\nabla h,\nabla[e^{-h}\Sigma(t,x,e^{h})]\rangle-v(\nabla h,\nabla h)]\] \[-G/t-2\langle\nabla h,\nabla G\rangle+2t\mathscr{R}ic^{m}_{f}( \nabla h,\nabla h)+2t\langle\nabla f,\nabla h\rangle^{2}/(m-n)\] \[-2\lambda t\langle v,\nabla^{2}h\rangle-\lambda t\langle 2 \mathrm{div}v-\nabla(\mathrm{Tr}_{g}v),\nabla h\rangle-\lambda t\langle\nabla \partial_{t}f,\nabla h\rangle\] \[+2\lambda tv(\nabla f,\nabla h)+\lambda t\Delta_{f}[e^{-h}\Sigma (t,x,e^{h})] \tag{5.21}\]
which upon a rearrangement of terms leads to the desired conclusion.
**Lemma 5.5**.: _Let \(u\) be a positive solution to \((\partial_{t}-\Delta_{f})u=\Sigma(t,x,u)\) and let \(G=G(x,t)\) be as in (5.13). Suppose that the metric-potential pair \((g,f)\) is time dependent and of class \(\mathscr{C}^{2}\). Assume additionally that \(\mathscr{R}ic^{m}_{f}(g)\geq-(m-1)k_{1}g\) and we have [cf. (5.1)]_
\[-\underline{k}_{2}g\leq v\leq\overline{k}_{2}g,\qquad|\nabla v|\leq k_{3} \tag{5.22}\]
_for suitable \(k_{1},\underline{k}_{2},\overline{k}_{2}\) and \(k_{3}\geq 0\). Then_
\[\Box G=(\partial_{t}-\Delta_{f})G\leq -t(\Delta_{f}h)^{2}/m+G/t+2\langle\nabla h,\nabla G\rangle\] \[+2t[(m-1)k_{1}+(\lambda-1)\overline{k}_{2}]|\nabla h|^{2}\] \[+\lambda^{2}nt(\underline{k}_{2}+\overline{k}_{2})^{2}+3\lambda t \sqrt{n}k_{3}|\nabla h|\] \[+\lambda t\langle\nabla\partial_{t}f,\nabla h\rangle-2\lambda tv (\nabla f,\nabla h)\] \[-2(\lambda-1)t\langle\nabla h,\nabla[e^{-h}\Sigma(t,x,e^{h})]\rangle\] \[-\lambda t\Delta_{f}[e^{-h}\Sigma(t,x,e^{h})]. \tag{5.23}\]
Proof.: Firstly from (5.22) we deduce that \(|v|^{2}\leq(\underline{k}_{2}+\overline{k}_{2})^{2}|g|^{2}=n(\underline{k}_{ 2}+\overline{k}_{2})^{2}\) and so
\[|\lambda\langle v,\nabla^{2}h\rangle|\leq\frac{1}{2}|\nabla^{2}h|^{2}+\frac{1} {2}\lambda^{2}|v|^{2}\leq\frac{1}{2}|\nabla^{2}h|^{2}+\frac{1}{2}\lambda^{2}n( \underline{k}_{2}+\overline{k}_{2})^{2}. \tag{5.24}\]
Next in view of \(|g^{ij}(2\nabla_{i}v_{j\ell}-\nabla_{\ell}v_{ij})|\leq 3|g||\nabla v|\), \(|g|=\sqrt{n}\) and \(|\nabla v|\leq k_{3}\) we can write
\[\left|\mathrm{div}v-\frac{1}{2}\nabla(\mathrm{Tr}_{g}v)\right|= \left|g^{ij}\nabla_{i}v_{j\ell}-\frac{1}{2}g^{ij}\nabla_{\ell}v_{ij}\right|= \frac{1}{2}\left|g^{ij}(2\nabla_{i}v_{j\ell}-\nabla_{\ell}v_{ij})\right|\leq \frac{3}{2}\sqrt{n}k_{3}.\]
The conclusion now follows at once by recalling (5.14) in Lemma 5.4, making note of
\[|\nabla^{2}h|^{2}+\frac{\langle\nabla f,\nabla h\rangle^{2}}{m-n} \geq\frac{(\Delta h)^{2}}{n}+\frac{\langle\nabla f,\nabla h\rangle^{2}}{m-n} \geq\frac{(\Delta_{f}h)^{2}}{m}, \tag{5.25}\]
and the Bakry-Emery curvature lower bound \(\mathscr{R}ic_{f}^{m}(g)\geq-(m-1)k_{1}g\) in the lemma.
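We also note, for the reader's convenience, that the inequality (5.25) is a consequence of the Cauchy-Schwarz inequality: for real numbers \(a,b\) and \(m>n>0\),
\[\frac{(a-b)^{2}}{m}\leq\frac{n+(m-n)}{m}\left(\frac{a^{2}}{n}+\frac{b^{2}}{m-n}\right)=\frac{a^{2}}{n}+\frac{b^{2}}{m-n},\]
which upon taking \(a=\Delta h\), \(b=\langle\nabla f,\nabla h\rangle\) and combining with \(|\nabla^{2}h|^{2}\geq(\Delta h)^{2}/n\) gives (5.25).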
### Proof of the local estimate in Theorem 2.6
Having all the ingredients and necessary tools at our disposal we now come to the proof of the main estimate. The proof of the theorem is broken into two parts: first, the case \(q\equiv 0\), and then the general case, which follows from the first by incorporating the \(qu\) term in the nonlinearity \(\Sigma\) and then re-calculating the resulting bounds accordingly. The idea for the first part is to combine the inequality established in Lemma 5.5 along with a localisation argument by utilising a suitable cut-off function. The estimates on the cut-off function in turn make use of the generalised Laplacian comparison theorem and the Bakry-Emery generalised Ricci curvature lower bound as will be described in detail in the course of the proof. To this end we first set \(q\equiv 0\). We pick a reference point \(x_{0}\in M\) and fix \(R,T>0\) and \(0<\tau\leq T\). As before we denote by \(\varrho(x,t)=d(x,x_{0},t)\) the geodesic radial variable at time \(t\) in reference to \(x_{0}\). For the sake of localisation we consider first a function \(\bar{\psi}=\bar{\psi}(s)\) on the half-line \(s\geq 0\) (_see_ Lemma 5.6 below) and then for \(x\in M\) and \(t\geq 0\) set
\[\psi(x,t)=\bar{\psi}\left(\frac{\varrho(x,t)}{R}\right). \tag{5.26}\]
The existence of \(\bar{\psi}=\bar{\psi}(s)\) as used in (5.26) and its properties are granted by the following straightforward and standard statement.
**Lemma 5.6**.: _There exists a function \(\bar{\psi}:[0,\infty)\to\mathbb{R}\) verifying the following properties:_
* \(\bar{\psi}\) _is of class_ \(\mathscr{C}^{2}[0,\infty)\)_,_
* \(0\leq\bar{\psi}(s)\leq 1\) _for_ \(0\leq s<\infty\) _with_ \(\bar{\psi}\equiv 1\) _for_ \(s\leq 1\) _and_ \(\bar{\psi}\equiv 0\) _for_ \(s\geq 2\)_,_
* \(\bar{\psi}\) _is non-increasing_ (_specifically,_ \(\bar{\psi}^{\prime}\leq 0\)_) _and additionally, for suitable constants_ \(c_{1},c_{2}>0\)_, satisfies the bounds_ \[-c_{1}\leq\frac{\bar{\psi}^{{}^{\prime}}}{\sqrt{\bar{\psi}}}\leq 0,\qquad and \qquad\bar{\psi}^{{}^{\prime\prime}}\geq-c_{2},\] (5.27) _on the half-line_ \([0,\infty)\)_._
It is evident from \((ii)\) that \(\psi\equiv 1\) when \(0\leq\varrho(x,t)\leq R\) and \(\psi\equiv 0\) when \(\varrho(x,t)\geq 2R\). Let us now consider the spatially localised function \(\psi G\) where \(G\) is as in (5.13). We denote by \((x_{1},t_{1})\) the point where this function attains its maximum over the compact set \(\{d(x,x_{0},t)\leq 2R,0\leq t\leq\tau\}\). We will also assume that \([\psi G](x_{1},t_{1})>0\) as otherwise the desired estimate is trivially true as a result of \(G\leq 0\). It thus follows
that \(t_{1}>0\) and \(d(x_{1},x_{0},t_{1})<2R\) and so at the maximum point \((x_{1},t_{1})\) we have the relations
\[\nabla(\psi G)=0,\qquad\partial_{t}(\psi G)\geq 0,\qquad\Delta(\psi G)\leq 0, \qquad\Delta_{f}(\psi G)\leq 0. \tag{5.28}\]
Starting with the basic identity \(\Delta_{f}(\psi G)=G\Delta_{f}\psi+2\langle\nabla\psi,\nabla G\rangle+\psi \Delta_{f}G\) and making note of the relations (5.28) at the maximum point \((x_{1},t_{1})\) we can write
\[0 \geq\Delta_{f}(\psi G)=G\Delta_{f}\psi+(2/\psi)\langle\nabla\psi, \nabla(\psi G)\rangle-2(|\nabla\psi|^{2}/\psi)G+\psi\Delta_{f}G\] \[\geq G\Delta_{f}\psi-2(|\nabla\psi|^{2}/\psi)G+\psi\Delta_{f}G. \tag{5.29}\]
Now from (5.26) we deduce \(\nabla\psi=(\bar{\psi}^{\prime}/R)\nabla\varrho\) and \(\Delta\psi=\bar{\psi}^{\prime\prime}|\nabla\varrho|^{2}/R^{2}+\bar{\psi}^{ \prime}\Delta\varrho/R\) and so
\[\Delta_{f}\psi=\Delta\psi-\langle\nabla f,\nabla\psi\rangle=(\bar{\psi}^{ \prime\prime}/R^{2})|\nabla\varrho|^{2}+(\bar{\psi}^{\prime}/R)\Delta_{f}\varrho. \tag{5.30}\]
Since \(\mathscr{R}ic_{f}^{m}\geq-(m-1)k_{1}g\) we have \(\Delta_{f}\varrho\leq(m-1)\sqrt{k_{1}}\coth(\sqrt{k_{1}}\varrho)\) (_cf._[59]) and so from (5.30) we have:
\[\Delta_{f}\psi\geq(\bar{\psi}^{\prime\prime}/R^{2})+(m-1)(\bar{\psi}^{\prime} /R)\sqrt{k_{1}}\coth(\sqrt{k_{1}}\varrho). \tag{5.31}\]
Next \(\coth(\sqrt{k_{1}}\varrho)\leq\coth(\sqrt{k_{1}}R)\) and \(\sqrt{k_{1}}\coth(\sqrt{k_{1}}R)\leq(1+\sqrt{k_{1}}R)/R\) for \(R\leq\varrho\leq 2R\) and therefore \((m-1)\bar{\psi}^{\prime}\sqrt{k_{1}}\coth(\sqrt{k_{1}}\varrho)\geq(m-1)[1+ \sqrt{k_{1}}R](\bar{\psi}^{\prime}/R)\). Hence
\[\Delta_{f}\psi \geq\frac{1}{R^{2}}\bar{\psi}^{\prime\prime}+\frac{(m-1)}{R} \left(\frac{1}{R}+\sqrt{k_{1}}\right)\bar{\psi}^{\prime}\] \[\geq-\frac{1}{R^{2}}[c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})]. \tag{5.32}\]
Thus returning to (5.29), invoking (5.23) and making note of (5.32), we obtain, at the maximum point \((x_{1},t_{1})\), the inequality
\[0 \geq\Delta_{f}(\psi G)=G\Delta_{f}\psi-2(|\nabla\psi|^{2}/\psi)G+ \psi\Delta_{f}G\] \[\geq-(1/R^{2})[c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})]G-2(|\nabla\psi |^{2}/\psi)G+\psi\partial_{t}G\] \[\quad+\psi\bigg{[}(t_{1}/m)(\Delta_{f}h)^{2}-(G/t_{1})-2\langle \nabla h,\nabla G\rangle-2[(m-1)k_{1}+(\lambda-1)\overline{k}_{2}]t_{1}| \nabla h|^{2}\] \[\quad-\lambda^{2}nt_{1}(\underline{k}_{2}+\overline{k}_{2})^{2} -3\sqrt{n}k_{3}\lambda t_{1}|\nabla h|-\lambda t_{1}\langle\nabla\partial_{t} f,\nabla h\rangle+2\lambda t_{1}v(\nabla f,\nabla h)\] \[\quad+2(\lambda-1)t_{1}\langle\nabla h,\nabla(e^{-h}\Sigma) \rangle+\lambda t_{1}\Delta_{f}(e^{-h}\Sigma)\bigg{]}, \tag{5.33}\]
where for the sake of convenience we have abbreviated the arguments of \(\Sigma=\Sigma(t,x,e^{h})\). We now proceed by bounding the individual terms in the last inequality. Here, starting from the last term on the first line, upon recalling (5.26), we have,
\[\psi\partial_{t}G=\partial_{t}(\psi G)-G\partial_{t}\psi=\partial_{t}(\psi G)-G \bar{\psi}^{\prime}\left(\frac{\varrho}{R}\right)\frac{\partial_{t}\varrho}{R}, \tag{5.34}\]
and so recalling that at the maximum point \((x_{1},t_{1})\) we have \(\partial_{t}(\psi G)\geq 0\), by restricting to this point the latter results in,
\[\psi\partial_{t}G \geq-G\partial_{t}\psi=-G\bar{\psi}^{\prime}\,\left(\frac{\varrho} {R}\right)\frac{\partial_{t}\varrho}{R}\geq G\bar{\psi}^{\prime}\,\left(\frac {\varrho}{R}\right)\frac{\underline{k}_{2}\varrho}{R}\] \[\geq-c_{1}\underline{k}_{2}\sqrt{\bar{\psi}\left(\frac{\varrho} {R}\right)}\frac{\varrho}{R}G\geq-c_{1}\underline{k}_{2}G. \tag{5.35}\]
Here we point out that we have used \((iii)\) in the set of assumptions on \(\bar{\psi}\), specifically, the first inequality in (5.27). Note also that in deducing the second inequality on the first line we have made use of the relation
\[\frac{\partial}{\partial t}\varrho(x,t) =\frac{\partial}{\partial t}\int_{0}^{1}|\gamma^{\prime}|_{g_{t}}\,ds\] \[=\frac{\partial}{\partial t}\int_{0}^{1}\sqrt{g_{t}(\gamma^{ \prime},\gamma^{\prime})}\,ds=\frac{1}{2}\int_{0}^{1}\frac{[\partial_{t}g_{t }](\gamma^{\prime},\gamma^{\prime})}{\sqrt{g_{t}(\gamma^{\prime},\gamma^{ \prime})}}\,ds\] \[=\int_{0}^{1}\frac{v(\gamma^{\prime},\gamma^{\prime})}{|\gamma^{ \prime}|_{g_{t}}}\,ds\geq-\underline{k}_{2}\int_{0}^{1}|\gamma^{\prime}|_{g_{t }}\,ds=-\underline{k}_{2}\varrho(x,t), \tag{5.36}\]
where \(\gamma=\gamma(s)\) with \(0\leq s\leq 1\) is a geodesic curve with respect to \(g_{t}\) at fixed \(t\) joining \(x_{0}\) to \(x\), that is, \(\gamma(0)=x_{0}\) and \(\gamma(1)=x\). Lastly we have used \(v\geq-\underline{k}_{2}g\) to obtain the final inequality.
Referring again to the last inequality in (5.33) and bounding the individual terms it is evident that \(|\nabla\psi|^{2}/\psi=(\bar{\psi}^{\prime 2}/\bar{\psi})(|\nabla\varrho|^{2}/R^{2}) \leq c_{1}^{2}/R^{2}\), where we have again made use of \((iii)\) in the set of assumptions on \(\bar{\psi}\). As a result using the above in (5.33) and rearranging terms we have
\[0\geq -\frac{1}{R^{2}}[c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})]G-2\frac{c_{1} ^{2}}{R^{2}}G-c_{1}\underline{k}_{2}G\] \[+\frac{t_{1}}{m}\psi(\Delta_{f}h)^{2}-\psi\frac{G}{t_{1}}-2\psi \langle\nabla h,\nabla G\rangle-2t_{1}[(m-1)k_{1}+(\lambda-1)\overline{k}_{2} ]\psi|\nabla h|^{2}\] \[-\psi t_{1}\bigg{[}\lambda^{2}n(\underline{k}_{2}+\overline{k}_{ 2})^{2}+3\sqrt{n}k_{3}\lambda|\nabla h|\bigg{]}+\lambda\psi t_{1}\bigg{[}2v( \nabla f,\nabla h)-\langle\nabla\partial_{t}f,\nabla h\rangle\bigg{]}\] \[+\psi t_{1}\bigg{[}2(\lambda-1)\langle\nabla h,\nabla(e^{-h} \Sigma)\rangle+\lambda\Delta_{f}(e^{-h}\Sigma)\bigg{]}. \tag{5.37}\]
Using \(\psi\langle\nabla h,\nabla G\rangle=-G\langle\nabla h,\nabla\psi\rangle\leq G |\nabla h||\nabla\psi|\leq c_{1}(\sqrt{\psi}/R)G|\nabla h|\) as \(\nabla(\psi G)=0\) at \((x_{1},t_{1})\) and using \(v(\nabla f,\nabla h)\geq-\underline{k}_{2}|\nabla f||\nabla h|\) and \(\langle\nabla\partial_{t}f,\nabla h\rangle\leq|\nabla\partial_{t}f||\nabla h|\) together with \(3k_{3}\sqrt{n}\lambda|\nabla h|\leq 2nk_{3}\lambda^{2}+2k_{3}|\nabla h|^{2}\) it then follows that
\[0\geq -[c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})+2c_{1}^{2}]G/R^{2}\] \[-c_{1}\underline{k}_{2}G-2c_{1}(\sqrt{\psi}/R)G|\nabla h|+t_{1}( \psi/m)(\Delta_{f}h)^{2}\] \[-\psi G/t_{1}-2t_{1}[(m-1)k_{1}+(\lambda-1)\overline{k}_{2}]\psi |\nabla h|^{2}\] \[-t_{1}\psi[\lambda^{2}n(\underline{k}_{2}+\overline{k}_{2})^{2}+2 nk_{3}\lambda^{2}+2k_{3}|\nabla h|^{2}]\] \[-\lambda t_{1}\psi[2\underline{k}_{2}|\nabla f||\nabla h|+|\nabla \partial_{t}f||\nabla h|]\] \[+t_{1}\psi[2(\lambda-1)\langle\nabla h,\nabla(e^{-h}\Sigma) \rangle+\lambda\Delta_{f}(e^{-h}\Sigma)]. \tag{5.38}\]
Next multiplying (5.38) through by \(t_{1}\psi\), making note of (5.15) and rearranging terms gives
\[0\geq -t_{1}\psi G([c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})]+2c_{1}^{2})/R^{2}- \psi^{2}G\] \[-c_{1}\underline{k}_{2}t_{1}\psi G+t_{1}^{2}(\psi^{2}/m)\left[| \nabla h|^{2}+e^{-h}\Sigma-\partial_{t}h\right]^{2}\] \[-2c_{1}t_{1}\psi(\sqrt{\psi}/R)G|\nabla h|-2t_{1}^{2}\left[(m-1) k_{1}+(\lambda-1)\overline{k}_{2}+k_{3}\right]\psi^{2}|\nabla h|^{2}\] \[-n\lambda^{2}t_{1}^{2}\psi^{2}[(\underline{k}_{2}+\overline{k}_ {2})^{2}+2k_{3}]-\lambda t_{1}^{2}\psi^{2}|\nabla\partial_{t}f||\nabla h|-2 \lambda t_{1}^{2}\psi^{2}\underline{k}_{2}|\nabla f||\nabla h|\] \[+t_{1}^{2}\psi^{2}[2(\lambda-1)\langle\nabla h,\nabla(e^{-h} \Sigma)\rangle+\lambda\Delta_{f}(e^{-h}\Sigma)]. \tag{5.39}\]
We now go through some calculations relating to the nonlinearity \(\Sigma=\Sigma(t,x,u)\) with \(u=e^{h}\) and \(h=h(x,t)\), abbreviating the arguments \((t,x,u)\) for convenience. Firstly, it is seen, by calculating in local coordinates or directly, that
\[\nabla\Sigma=\Sigma_{x}+e^{h}\Sigma_{u}\nabla h,\qquad\Sigma_{x}=(\Sigma_{x_{ 1}},\dots,\Sigma_{x_{n}}). \tag{5.40}\]
Next, writing \(\Sigma^{x}:x\mapsto\Sigma(t,x,u)\) [that is, viewing \(\Sigma\) as a function of \(x\) whilst freezing the remaining variables \((t,u)\)] we can differentiate (5.40) further to obtain
\[\Delta\Sigma =\Delta\Sigma^{x}+e^{h}\langle\Sigma_{xu},\nabla h\rangle+e^{h}|\nabla h|^{2}\Sigma_{u}\] \[+e^{h}\langle\Sigma_{xu},\nabla h\rangle+e^{2h}|\nabla h|^{2}\Sigma_{uu}+e^{h}\Sigma_{u}\Delta h\] \[=\Delta\Sigma^{x}+2e^{h}\langle\Sigma_{xu},\nabla h\rangle+e^{h}|\nabla h|^{2}(\Sigma_{u}+e^{h}\Sigma_{uu})+e^{h}\Sigma_{u}\Delta h. \tag{5.41}\]
Next, for \(\Delta_{f}\Sigma\), by using the above calculations and substituting accordingly, we have,
\[\Delta_{f}\Sigma =\Delta\Sigma-\langle\nabla f,\nabla\Sigma\rangle=\Delta\Sigma- \langle\nabla f,(\Sigma_{x}+e^{h}\Sigma_{u}\nabla h)\rangle\] \[=\Delta\Sigma-\langle\nabla f,\Sigma_{x}\rangle-e^{h}\Sigma_{u} \langle\nabla f,\nabla h\rangle \tag{5.42}\] \[=\Delta_{f}\Sigma^{x}+2e^{h}\langle\Sigma_{xu},\nabla h\rangle+e ^{h}|\nabla h|^{2}(\Sigma_{u}+e^{h}\Sigma_{uu})+e^{h}\Sigma_{u}\Delta_{f}h.\]
For the sake of future reference we also note that
\[\Delta_{f}e^{-h} =\Delta e^{-h}-\langle\nabla f,\nabla e^{-h}\rangle\] \[=-\text{div}(e^{-h}\nabla h)+e^{-h}\langle\nabla f,\nabla h\rangle\] \[=-e^{-h}\Delta h+e^{-h}|\nabla h|^{2}+e^{-h}\langle\nabla f, \nabla h\rangle=-e^{-h}(\Delta_{f}h-|\nabla h|^{2}). \tag{5.43}\]
Returning to (5.39) and picking up the estimate from where we left, for the last two terms inside the parentheses in the sum on the right, we can write
\[2(\lambda-1)\langle\nabla h,\nabla(e^{-h}\Sigma)\rangle +\lambda\Delta_{f}(e^{-h}\Sigma)\] \[=2(\lambda-1)[-e^{-h}\Sigma|\nabla h|^{2}+e^{-h}\langle\nabla h, \nabla\Sigma\rangle]\] \[\quad+\lambda[e^{-h}\Delta_{f}\Sigma+\Sigma\Delta_{f}e^{-h}-2e^{ -h}\langle\nabla h,\nabla\Sigma\rangle]\] \[= -2(\lambda-1)e^{-h}\Sigma|\nabla h|^{2}+2\lambda e^{-h}\langle \nabla h,\nabla\Sigma\rangle\] \[\quad-2e^{-h}\langle\nabla h,(\Sigma_{x}+e^{h}\Sigma_{u}\nabla h )\rangle+\lambda e^{-h}(\Delta_{f}\Sigma^{x}+2e^{h}\langle\Sigma_{xu},\nabla h\rangle)\] \[\quad+\lambda e^{-h}(e^{h}|\nabla h|^{2}(\Sigma_{u}+e^{h}\Sigma_{ uu})+e^{h}\Sigma_{u}\Delta_{f}h)\] \[\quad+\lambda\Sigma e^{-h}(-\Delta_{f}h+|\nabla h|^{2})-2\lambda e ^{-h}\langle\nabla h,\nabla\Sigma\rangle. \tag{5.44}\]
As according to (5.16) we have
\[\Delta_{f}h[\lambda\Sigma_{u}-\lambda\Sigma e^{-h}] =[-G/(\lambda t_{1})-(\lambda-1)|\nabla h|^{2}/\lambda]\left[\lambda( \Sigma_{u}-\Sigma e^{-h})\right]\] \[=-(G/t_{1})(\Sigma_{u}-\Sigma e^{-h})-(\lambda-1)|\nabla h|^{2}( \Sigma_{u}-\Sigma e^{-h}), \tag{5.45}\]
upon substitution back in (5.44) this gives
\[2(\lambda-1) \langle\nabla h,\nabla(e^{-h}\Sigma)\rangle+\lambda\Delta_{f}(e^{-h}\Sigma)\] \[= -2(\lambda-1)e^{-h}\Sigma|\nabla h|^{2}-2e^{-h}\langle\nabla h,\Sigma_{x}\rangle-2\Sigma_{u}|\nabla h|^{2}\] \[+\lambda e^{-h}\Delta_{f}\Sigma^{x}+2\lambda\langle\Sigma_{xu},\nabla h\rangle+\lambda|\nabla h|^{2}\Sigma_{u}+\lambda|\nabla h|^{2}e^{h}\Sigma_{uu}\] \[+\lambda\Sigma e^{-h}|\nabla h|^{2}-(G/t_{1})(\Sigma_{u}-\Sigma e^{-h})-(\lambda-1)|\nabla h|^{2}(\Sigma_{u}-\Sigma e^{-h})\] \[=|\nabla h|^{2}[-2(\lambda-1)e^{-h}\Sigma-2\Sigma_{u}+\lambda\Sigma_{u}+\lambda e^{h}\Sigma_{uu}]\] \[+|\nabla h|^{2}[\lambda e^{-h}\Sigma-(\lambda-1)\Sigma_{u}+(\lambda-1)\Sigma e^{-h}]\] \[-[2\langle\nabla h,(e^{-h}\Sigma_{x}-\lambda\Sigma_{xu})\rangle+(G/t_{1})(\Sigma_{u}-\Sigma e^{-h})-\lambda e^{-h}\Delta_{f}\Sigma^{x}].\]
Therefore, by taking into account the relevant cancellations, after simplifying terms and using basic inequalities, we can write
\[2(\lambda-1) \langle\nabla h,\nabla(e^{-h}\Sigma)\rangle+\lambda\Delta_{f}(e^{ -h}\Sigma)\] \[\geq|\nabla h|^{2}(e^{-h}\Sigma-\Sigma_{u}+\lambda e^{h}\Sigma_{ uu})-(G/t_{1})(\Sigma_{u}-e^{-h}\Sigma)\] \[-2|\nabla h||e^{-h}\Sigma_{x}-\lambda\Sigma_{xu}|+\lambda e^{-h} \Delta_{f}\Sigma^{x}. \tag{5.46}\]
As a result making use of the relations (5.40)-(5.43) and the inequality (5.46) above and substituting all back into (5.39) and recalling \(0\leq\psi\leq 1\) we obtain:
\[0\geq -\psi G([c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})+2c_{1}^{2}]t_{1}/R^{2}+ 1+c_{1}\underline{k}_{2}t_{1})\] \[-t_{1}\psi^{2}G(\Sigma_{u}-e^{-h}\Sigma)+t_{1}^{2}(\psi^{2}/m)[| \nabla h|^{2}+e^{-h}\Sigma-\partial_{t}h]^{2}-2c_{1}t_{1}\psi^{3/2}|\nabla h|G/R\] \[-2t_{1}^{2}\psi^{2}|\nabla h|^{2}[(m-1)k_{1}+(\lambda-1)\overline {k}_{2}+k_{3}-(e^{-h}\Sigma-\Sigma_{u}+\lambda e^{h}\Sigma_{uu})/2]\] \[-t_{1}^{2}\psi^{2}|\nabla h|(2|e^{-h}\Sigma_{x}-\lambda\Sigma_{xu} |+\lambda|\nabla\partial_{t}f|+2\lambda\underline{k}_{2}|\nabla f|)\] \[+\lambda t_{1}^{2}\psi^{2}(e^{-h}\Delta_{f}\Sigma^{x}-\lambda n[( \underline{k}_{2}+\overline{k}_{2})^{2}+2k_{3}]). \tag{5.47}\]
In order to obtain the desired bounds out of this it is more efficient to introduce the quantities \(y,z\) by setting [as before evaluated at the maximum point at \((x_{1},t_{1})\)]
\[y=\psi|\nabla h|^{2},\qquad z=\psi(\partial_{t}h-e^{-h}\Sigma). \tag{5.48}\]
Note in particular that \(y-\lambda z=\psi G/t_{1}>0\). Now referring to (5.47) and recalling the bounds (2.14) we introduce the constants
\[\mathsf{A}=\mathsf{A}^{\Sigma} =2[(m-1)k_{1}+(\lambda-1)\overline{k}_{2}+k_{3}]\] \[\quad-\inf_{\Theta_{2R,T}}\{[(\lambda u^{2}\Sigma_{uu}-u\Sigma_{ u}+\Sigma)/u]_{-}\}, \tag{5.49}\]
and
\[\mathsf{B}=\mathsf{B}^{\Sigma}=\lambda\ell_{2}+2\lambda\underline{k}_{2}\ell_ {1}+\sup_{\Theta_{2R,T}}\{2|(\Sigma_{x}-\lambda u\Sigma_{xu})/u|\}. \tag{5.50}\]
We remind the reader that here we are making use of the notation introduced earlier on
\[\Theta_{2R,T}=\{(t,x,u):(x,t)\in Q_{2R,T},\,\underline{u}\leq u\leq\overline{u}\} \subset[0,T]\times M\times(0,\infty), \tag{5.51}\]
where \(\overline{u}\), \(\underline{u}\) denote the maximum and minimum of \(u\) on the compact space-time cylinder \(Q_{2R,T}\). In particular since \(u\) is positive we have \([\underline{u},\overline{u}]\subset(0,\infty)\). Now substituting the quantities (5.48) back in (5.47), recalling again \(0\leq\psi\leq 1\), and utilising (5.49)-(5.50), basic considerations and bounds lead to
\[0\geq -\psi G([c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})+2c_{1}^{2}]t_{1}/R^{2}+1 +c_{1}\underline{k}_{2}t_{1})\] \[+(t_{1}^{2}/m)[(y-z)^{2}-(2mc_{1}/R)\sqrt{y}(y-\lambda z)-m \mathsf{A}y-m\mathsf{B}\sqrt{y}]\] \[-t_{1}\psi G[\Sigma_{u}-e^{-h}\Sigma]_{+}+\lambda t_{1}^{2}\psi^{ 2}[e^{-h}\Delta_{f}\Sigma^{x}]_{-}-\lambda t_{1}^{2}\psi^{2}[\lambda n( \underline{k}_{2}+\overline{k}_{2})^{2}+2\lambda nk_{3}]. \tag{5.52}\]
Next an application of Lemma 5.3 with the choices \(a=2mc_{1}/R\), \(b=m\mathsf{A}\) and \(c=m\mathsf{B}\) and with \(y,z\) as in (5.48) and \(\lambda>1\) as above gives, for any \(\varepsilon\in(0,1)\),
\[(y-z)^{2}- (2mc_{1}/R)\sqrt{y}(y-\lambda z)-m\mathsf{A}y-m\mathsf{B}\sqrt{y}\] \[\geq (y-\lambda z)^{2}/\lambda^{2}-m^{2}c_{1}^{2}\lambda^{2}(y-\lambda z )/[2(\lambda-1)R^{2}]\] \[-m^{2}\lambda^{2}\mathsf{A}^{2}/[4(1-\varepsilon)(\lambda-1)^{2}] -(3/4)(m^{4}\lambda^{2}\mathsf{B}^{4}/[4\varepsilon(\lambda-1)^{2}])^{1/3}. \tag{5.53}\]
Hence by substituting back in (5.52) it follows that
\[0\geq -\psi G([c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})+2c_{1}^{2}]t_{1}/R^{2}+ 1+c_{1}\underline{k}_{2}t_{1})\] \[+(t_{1}^{2}/m)[(\psi G)^{2}/(t_{1}^{2}\lambda^{2})-m^{2}c_{1}^{2} \lambda^{2}(\psi G)/(2(\lambda-1)R^{2}t_{1})]\] \[-(mt_{1}^{2}\lambda^{2}\mathsf{A}^{2})/[4(1-\varepsilon)(\lambda -1)^{2}]-t_{1}\psi G[\Sigma_{u}-e^{-h}\Sigma]_{+}\] \[-[(3t_{1}^{2})/(4m)](m^{4}\lambda^{2}\mathsf{B}^{4}/[4 \varepsilon(\lambda-1)^{2}])^{1/3}\] \[+\lambda t_{1}^{2}\psi^{2}[e^{-h}\Delta_{f}\Sigma^{x}]_{-}- \lambda t_{1}^{2}\psi^{2}[\lambda n(\underline{k}_{2}+\overline{k}_{2})^{2}+2 \lambda nk_{3}]. \tag{5.54}\]
Upon setting
\[\mathsf{D}= [c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})+2c_{1}^{2}]t_{1}/R^{2}+1\] \[+c_{1}\underline{k}_{2}t_{1}+mt_{1}c_{1}^{2}\lambda^{2}/[2( \lambda-1)R^{2}]+t_{1}\gamma_{1}^{\Sigma}, \tag{5.55}\]
and
\[\mathsf{E}= m\lambda^{2}\mathsf{A}^{2}/[4(1-\varepsilon)(\lambda-1)^{2}]\] \[+(3/4)[m\lambda^{2}\mathsf{B}^{4}/(4\varepsilon(\lambda-1)^{2})] ^{1/3}\] \[+\lambda^{2}n(\underline{k}_{2}+\overline{k}_{2})^{2}+2\lambda^{2 }nk_{3}+\lambda\gamma_{2}^{\Sigma}, \tag{5.56}\]
where
\[\gamma_{1}^{\Sigma}=\sup_{\Theta_{2R,T}}\{[(u\Sigma_{u}-\Sigma)/u]_{+}\}, \qquad\gamma_{2}^{\Sigma}=-\inf_{\Theta_{2R,T}}\{[\Delta_{f}\Sigma^{x}/u]_{-}\}, \tag{5.57}\]
we can rewrite (5.54) after rearranging terms as
\[0\geq(\psi G)^{2}/(m\lambda^{2})-(\psi G)\mathsf{D}-t_{1}^{2}\mathsf{E}. \tag{5.58}\]
As a result basic considerations on the inequality (5.58) lead to the conclusion
\[\psi G \leq(m\lambda^{2}/2)\left(\mathsf{D}+\sqrt{\mathsf{D}^{2}+4t_{1}^{2} \mathsf{E}/(m\lambda^{2})}\right)\] \[\leq(m\lambda^{2}/2)\left(2\mathsf{D}+\sqrt{(4t_{1}^{2}\mathsf{E} )/(m\lambda^{2})}\right)\] \[=m\lambda^{2}\mathsf{D}+t_{1}\lambda\sqrt{m\mathsf{E}}. \tag{5.59}\]
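For the reader's convenience we record the elementary step behind this conclusion: viewing (5.58) as a quadratic inequality in the non-negative quantity \(x=\psi G\), namely \(x^{2}/(m\lambda^{2})-\mathsf{D}x-t_{1}^{2}\mathsf{E}\leq 0\), the larger root of the associated quadratic gives
\[x\leq\frac{m\lambda^{2}}{2}\left(\mathsf{D}+\sqrt{\mathsf{D}^{2}+\frac{4t_{1}^{2}\mathsf{E}}{m\lambda^{2}}}\right),\]
while the remaining inequalities in (5.59) follow from \(\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}\) for \(a,b\geq 0\).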
Since \(\psi\equiv 1\) for \(d(x,x_{0},\tau)\leq R\) and \((x_{1},t_{1})\) is the point where \(\psi G\) attains its maximum on \(\{d(x,x_{0},t)\leq 2R,0\leq t\leq\tau\}\) we have
\[G(x,\tau)=[\psi G](x,\tau)\leq[\psi G](x_{1},t_{1})\leq m\lambda^{2}\mathsf{D} +t_{1}\lambda\sqrt{m\mathsf{E}}. \tag{5.60}\]
Therefore, recalling (5.13), substituting for \(\mathsf{D}\) and \(\mathsf{E}\) from (5.55) and (5.56) above and noting \(t_{1}\leq\tau\), we can write, after dividing both sides by \(\lambda\tau\),
\[\lambda^{-1}|\nabla h|^{2}-\partial_{t}h+e^{-h}\Sigma \leq(m\lambda/\tau)\mathsf{D}+\sqrt{m\mathsf{E}}\] \[\leq(m\lambda)[c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})+2c_{1}^{2}]/R^{2}\] \[\quad+(m\lambda/\tau)+(m\lambda)(\gamma_{1}^{\Sigma}+c_{1} \underline{k}_{2}+mc_{1}^{2}\lambda^{2}/[2(\lambda-1)R^{2}])\] \[\quad+\sqrt{m}\{m\lambda^{2}\mathsf{A}^{2}/[4(1-\varepsilon)( \lambda-1)^{2}]\] \[\quad+(3/4)[m\lambda^{2}\mathsf{B}^{4}/(4\varepsilon(\lambda-1)^ {2})]^{1/3}\] \[\quad+\lambda^{2}n(\underline{k}_{2}+\overline{k}_{2})^{2}+2 \lambda^{2}nk_{3}+\lambda\gamma_{2}^{\Sigma}\}^{1/2}. \tag{5.61}\]
Finally using the arbitrariness of \(0<\tau\leq T\) it follows after reverting back to \(u\) upon noting the relation \(h=\log u\) and rearranging terms that
\[\frac{|\nabla u|^{2}}{\lambda u^{2}}-\frac{\partial_{t}u}{u}+ \frac{\Sigma}{u} \leq(m\lambda)[1/t+\gamma_{1}^{\Sigma}+c_{1}\underline{k}_{2}]\] \[\quad+(m\lambda)[mc_{1}^{2}\lambda^{2}/[2(\lambda-1)]+c_{2}+(m-1 )c_{1}(1+R\sqrt{k_{1}})+2c_{1}^{2}]/R^{2}\] \[\quad+\sqrt{m}\{m\lambda^{2}\mathsf{A}^{2}/[4(1-\varepsilon)( \lambda-1)^{2}]+(3/4)[m\lambda^{2}\mathsf{B}^{4}/(4\varepsilon(\lambda-1)^{2} )]^{1/3}\] \[\quad+\lambda^{2}n(\underline{k}_{2}+\overline{k}_{2})^{2}+2 \lambda^{2}nk_{3}+\lambda\gamma_{2}^{\Sigma}\}^{1/2}. \tag{5.62}\]
This is the desired estimate (2.15) with \(q\equiv 0\). Now to establish the estimate in its full strength (for general \(q\)) it suffices to replace \(\Sigma\) with \(\Sigma+qu\) and use the conclusion from the first part. Then making note of [_cf._ (2.19) and (5.57)]
\[\gamma_{1}^{\Sigma+qu}=\gamma_{1}^{\Sigma},\qquad\gamma_{2}^{\Sigma+qu}\leq \gamma_{2}^{\Sigma}+\gamma_{2}^{qu}, \tag{5.63}\]
along with \(\mathsf{A}^{\Sigma+qu}=\mathsf{A}^{\Sigma}\)[_cf._ (2.16) and (5.49)] and \(\mathsf{B}\) in (5.50) changing to (2.17) gives the estimate (2.15). The proof is thus complete.
## 6. Proof of the parabolic Harnack inequality in Theorem 2.8
With the aid of the estimates established in Theorem 2.6 we can now prove the desired parabolic Harnack inequality in Theorem 2.8. Towards this end it suffices to integrate the former estimate along suitable space-time curves in \(Q_{R,T}\subset M\times[0,T]\). Here we shall prove only the local Harnack inequality. The global inequality is similar
(see the comments at the end). Towards this end let us first move on to rewriting the Li-Yau Harnack inequality (2.15) as follows:
\[\frac{|\nabla u|^{2}}{\lambda u^{2}}-\frac{\partial_{t}u}{u}+q+ \frac{\Sigma(t,x,u)}{u} \leq\frac{m\lambda}{t}+m\lambda(\gamma_{1}^{\Sigma}+c_{1}\underline {k}_{2})\] \[\quad+\frac{m\lambda}{R^{2}}\left[\frac{mc_{1}^{2}\lambda^{2}}{2( \lambda-1)}+c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})+2c_{1}^{2}\right]\] \[\quad+\sqrt{m}\biggl{\{}\frac{m\lambda^{2}\mathsf{A}^{2}}{4(1- \varepsilon)(\lambda-1)^{2}}+\frac{3}{4}\left[\frac{m\lambda^{2}\mathsf{B}^{4 }}{4\varepsilon(\lambda-1)^{2}}\right]^{1/3}\] \[\quad+\lambda^{2}n(\underline{k}_{2}+\overline{k}_{2})^{2}+2 \lambda^{2}nk_{3}+\lambda(\gamma_{2}^{\Sigma}+\gamma_{2}^{qu})\biggr{\}}^{1/2}. \tag{6.1}\]
Put \(\overrightarrow{k}=(k_{1},\underline{k}_{2},\overline{k}_{2},k_{3})\), \(\overrightarrow{\gamma}=(\gamma_{1}^{\Sigma},\gamma_{2}^{\Sigma},\gamma_{2} ^{qu},\gamma_{3}^{\Sigma})\) and let \(\mathsf{S}=\mathsf{S}(m,\varepsilon,\lambda,\mathsf{A},\mathsf{B},R,T, \overrightarrow{k},\overrightarrow{\gamma},\underline{q})\) be defined by
\[\mathsf{S}= -\frac{m\lambda}{R^{2}}\left[\frac{mc_{1}^{2}\lambda^{2}}{2( \lambda-1)}+c_{2}+(m-1)c_{1}(1+R\sqrt{k_{1}})+2c_{1}^{2}\right]\] \[-\sqrt{m}\biggl{\{}\frac{m\lambda^{2}\mathsf{A}^{2}}{4(1- \varepsilon)(\lambda-1)^{2}}+\frac{3}{4}\left[\frac{m\lambda^{2}\mathsf{B}^{ 4}}{4\varepsilon(\lambda-1)^{2}}\right]^{1/3}\] \[+\lambda^{2}n(\underline{k}_{2}+\overline{k}_{2})^{2}+2\lambda^{ 2}nk_{3}+\lambda(\gamma_{2}^{\Sigma}+\gamma_{2}^{qu})\biggr{\}}^{1/2}\] \[-m\lambda(\gamma_{1}^{\Sigma}+c_{1}\underline{k}_{2})+\underline {q}+\gamma_{3}^{\Sigma}, \tag{6.2}\]
where
\[\gamma_{3}^{\Sigma}=\inf_{\Theta_{2R,T}}\left\{\frac{\Sigma(t,x,u)}{u}\right\},\qquad\underline{q}=\inf_{Q_{2R,T}}q=\gamma_{3}^{qu}. \tag{6.3}\]
It follows from (6.1) that \(\partial_{t}u/u\geq|\nabla u|^{2}/(\lambda u^{2})-m\lambda/t+\mathsf{S}\). Suppose \(\gamma\in\mathscr{C}^{1}([t_{1},t_{2}];M)\) is an arbitrary curve lying entirely in \(Q_{R,T}\) with \(\gamma(t_{1})=x_{1}\) and \(\gamma(t_{2})=x_{2}\). Using the above it is seen that
\[d/dt[\log u(\gamma(t),t)] =\langle\nabla u/u,\dot{\gamma}(t)\rangle+\partial_{t}u/u\] \[\geq\langle\nabla u/u,\dot{\gamma}(t)\rangle+|\nabla u|^{2}/( \lambda u^{2})-(m\lambda)/t+\mathsf{S}\] \[=\lambda^{-1}|\nabla u/u+\lambda\dot{\gamma}(t)/2|^{2}-\lambda| \dot{\gamma}(t)|^{2}/4-(m\lambda)/t+\mathsf{S}\] \[\geq-\lambda|\dot{\gamma}(t)|^{2}/4-(m\lambda)/t+\mathsf{S}, \tag{6.4}\]
where the inner products are with respect to \(g(t)\). Therefore integrating the above inequality gives
\[\log\frac{u(x_{2},t_{2})}{u(x_{1},t_{1})} =\log u(\gamma(t),t)\bigg{|}_{t_{1}}^{t_{2}}=\int_{t_{1}}^{t_{2}} \frac{d}{dt}\log u(\gamma(t),t)\,dt\] \[\geq\int_{t_{1}}^{t_{2}}-\frac{\lambda}{4}|\dot{\gamma}(t)|^{2}\, dt-\int_{t_{1}}^{t_{2}}\frac{m\lambda}{t}dt+\int_{t_{1}}^{t_{2}}\mathsf{S}\,dt\] \[=-m\lambda\log(t_{2}/t_{1})-(\lambda/4)\int_{t_{1}}^{t_{2}}| \dot{\gamma}(t)|^{2}\,dt+(t_{2}-t_{1})\mathsf{S}. \tag{6.5}\]
Hence upon exponentiating we have
\[\frac{u(x_{2},t_{2})}{u(x_{1},t_{1})}\geq\left(\frac{t_{2}}{t_{1}} \right)^{-m\lambda}\exp\left[-\int_{t_{1}}^{t_{2}}\frac{\lambda}{4}|\dot{\gamma} (t)|^{2}\,dt\right]\exp[(t_{2}-t_{1})\mathsf{S}], \tag{6.6}\]
or upon rearranging terms and rescaling the integral:
\[u(x_{2},t_{2})\geq u(x_{1},t_{1})\left(\frac{t_{2}}{t_{1}}\right) ^{-m\lambda}e^{-\lambda L(x_{1},x_{2},t_{2}-t_{1})}e^{(t_{2}-t_{1})\mathsf{S}} \tag{6.7}\]
where
\[L(x_{1},x_{2},t_{2}-t_{1})=\inf_{\gamma}\left[\frac{1}{4(t_{2}-t_ {1})}\int_{0}^{1}|\dot{\gamma}(t)|^{2}\,dt\right]. \tag{6.8}\]
This gives the parabolic Harnack inequality in its local form. Now if the bounds are global by arguing exactly as above using the global estimate in Theorem 2.7 we obtain the global counterpart of the inequality. This therefore completes the proof.
## 7. Proof of the Liouville results in Theorem 2.9 and Theorem 2.10
**Proof of Theorem 2.9.** Under the stated assumptions and the fact that \(u\) and the metric-potential pair \((g,f)\) are time independent (i.e., \(\partial_{t}u\equiv 0\), \(\partial_{t}g\equiv 0\), \(\partial_{t}f\equiv 0\)), it follows from the global estimate (2.12) in Theorem 2.5 with \(q\equiv 0\) and \(k=0\) that
\[\sup_{M}\left(\frac{|\nabla u|}{\sqrt{u}}\right)\leq C\sup_{M}\left(\mathsf{T}_{\Sigma}^{1/2}(u)+\mathsf{S}_{\Sigma}^{1/3}(u) \right)\Big{(}\sup_{M}\sqrt{u}\Big{)}, \tag{7.1}\]
where \(\mathsf{T}_{\Sigma}(u)\) and \(\mathsf{S}_{\Sigma}(u)\) are the expressions given in (2.11). Now a close inspection of the latter expressions upon recalling the condition \(\Sigma=\Sigma(u)\) gives \(\mathsf{S}_{\Sigma}(u)=0\) while by virtue of the inequality \(\Sigma(u)-2u\Sigma_{u}(u)\geq 0\) we have
\[\mathsf{T}_{\Sigma}(u)=\left[\frac{2u\Sigma_{u}(u)-\Sigma(u)}{u} \right]_{+}=0. \tag{7.2}\]
As a result it follows from (7.1) and the global bound \(\sup_{M}\sqrt{u}=\sqrt{\sup_{M}u}<\infty\) that \(|\nabla u|\equiv 0\) and so the conclusion follows.
**Proof of Theorem 2.10.** The proof of (2.25) follows directly from the global estimate (2.20) in Theorem 2.7 upon noting that \(u\) and the metric-potential pair \((g,f)\) are time independent (i.e., \(\partial_{t}u\equiv 0\), \(\partial_{t}g\equiv 0\), \(\partial_{t}f\equiv 0\)). Now combining the latter with the assumptions on the nonlinearity \(\Sigma\) and the solution \(u\) as formulated in the theorem it follows that
\[\frac{|\nabla u|^{2}}{\lambda u^{2}}+\frac{\Sigma(u)}{u}\leq m\lambda\sup_{\Theta}\left\{\left[\frac{u\Sigma_{u}(u)-\Sigma(u)}{u} \right]_{+}\right\}\\ +\frac{m\lambda/\sqrt{1-\varepsilon}}{2(\lambda-1)}\sup_{\Theta} \left\{\left[\frac{-\Sigma(u)+u\Sigma_{u}(u)-\lambda u^{2}\Sigma_{uu}(u)}{u} \right]_{+}\right\}=0. \tag{7.3}\]
Since \(\Sigma(u)\geq 0\) we thus infer that \(|\nabla u|^{2}/(\lambda u^{2})+\Sigma(u)/u\equiv 0\) and therefore \(|\nabla u|\equiv 0\) on \(M\). The conclusion on \(u\) being a constant is now immediate.
## 8. Hamilton type estimates and universal global bounds on closed manifolds
**Lemma 8.1**.: _Let u be a bounded positive solution to \((\partial_{t}-q(x,t)-\Delta_{f})u=\Sigma(t,x,u)\) with \(0<u\leq D\) and suppose the metric-potential pair \((g,f)\) is of class \(\mathscr{C}^{2}\) and evolves under the \(\mathsf{k}\)-super Perelman-Ricci flow (1.19). Let_
\[\mathscr{F}_{\gamma}[u]=\gamma(t)|\nabla u|^{2}/u-u\log(D/u), \tag{8.1}\]
_where \(\gamma\) is a smooth, non-negative but otherwise arbitrary function. Then we have_
\[\Box_{q}\mathscr{F}_{\gamma}[u] \leq 2\gamma\langle\nabla u,\nabla q\rangle+(\gamma^{{}^{\prime}}+2 \mathsf{k}\gamma-1)|\nabla u|^{2}/u+2(\gamma/u)\langle\nabla u,\nabla\Sigma\rangle \tag{8.2}\] \[+[1-\log(D/u)]\Sigma(t,x,u)-\gamma(|\nabla u|^{2}/u^{2})\Sigma(t,x,u)+qu,\]
_where as before \(\Box_{q}=(\partial_{t}-q(x,t)-\Delta_{f})\)._
Proof.: Starting from (8.1) and working our way forward, it is firstly seen that,
\[\left[\begin{array}{c}\nabla\\ \partial_{t}\end{array}\right]\frac{|\nabla u|^{2}}{u}=\left[\begin{array}{c }(1/u)\nabla|\nabla u|^{2}-(|\nabla u|^{2}/u^{2})\nabla u\\ (2/u)\langle\nabla u,\nabla\partial_{t}u\rangle-(|\nabla u|^{2}/u^{2}) \partial_{t}u-(1/u)[\partial_{t}g](\nabla u,\nabla u)\end{array}\right]. \tag{8.3}\]
A straightforward calculation upon further differentiation and forming the expression \(\Delta_{f}(|\nabla u|^{2}/u)\) then gives
\[\Delta_{f}\frac{|\nabla u|^{2}}{u} =\Delta\frac{|\nabla u|^{2}}{u}-\left\langle\nabla f,\nabla\frac {|\nabla u|^{2}}{u}\right\rangle\] \[=\frac{1}{u}\Delta_{f}|\nabla u|^{2}-2\frac{\langle\nabla|\nabla u |^{2},\nabla u\rangle}{u^{2}}-\frac{|\nabla u|^{2}}{u^{2}}\Delta_{f}u+2\frac{| \nabla u|^{4}}{u^{3}}. \tag{8.4}\]
Similar calculations for the second term \(u\log(D/u)\) in (8.1) lead to
\[\left[\begin{array}{c}\nabla\\ \partial_{t}\end{array}\right][u\log(D/u)]=[\log(D/u)-1]\left[\begin{array}{ c}\nabla\\ \partial_{t}\end{array}\right]u, \tag{8.5}\]
and so after a further differentiation result in
\[\Delta_{f}[u\log(D/u)]=[\log(D/u)-1]\Delta_{f}u-|\nabla u|^{2}/u. \tag{8.6}\]
An application of the weighted \(q\)-heat operator \(\Box_{q}=(\partial_{t}-q-\Delta_{f})\) to \(\mathscr{F}_{\gamma}[u]\) in (8.1) and putting together the above fragments give
\[\Box_{q}\mathscr{F}_{\gamma}[u] =\gamma^{{}^{\prime}}[|\nabla u|^{2}/u]+\gamma(\partial_{t}-q- \Delta_{f})[|\nabla u|^{2}/u]-(\partial_{t}-q-\Delta_{f})[u\log(D/u)]\] \[=\gamma^{{}^{\prime}}\frac{|\nabla u|^{2}}{u}+\frac{\gamma}{u} \left[-\partial_{t}g(\nabla u,\nabla u)+2\langle\nabla u,\nabla\partial_{t}u \rangle-\frac{|\nabla u|^{2}}{u}\partial_{t}u\right]\] \[\quad-\gamma q\frac{|\nabla u|^{2}}{u}-\gamma\left[\frac{\Delta_ {f}|\nabla u|^{2}}{u}-\frac{2\langle\nabla|\nabla u|^{2},\nabla u\rangle}{u^{ 2}}-\Delta_{f}u\frac{|\nabla u|^{2}}{u^{2}}+2\frac{|\nabla u|^{4}}{u^{3}}\right]\] \[\quad-[\log(D/u)-1]\partial_{t}u+qu\log(D/u)+[\log(D/u)-1]\Delta_ {f}u-\frac{|\nabla u|^{2}}{u}\] \[\quad-qu+qu-\gamma q\frac{|\nabla u|^{2}}{u}+\gamma q\frac{| \nabla u|^{2}}{u}, \tag{8.7}\]
or after some calculation and rearrangement of terms
\[\square_{q}\mathscr{F}_{\gamma}[u]= \ (\gamma^{{}^{\prime}}-1)\frac{|\nabla u|^{2}}{u}+\frac{\gamma}{u}[- \partial_{t}g(\nabla u,\nabla u)+2\langle\nabla u,\nabla\partial_{t}u\rangle- \Delta_{f}|\nabla u|^{2}\] \[+\frac{2}{u}\langle\nabla|\nabla u|^{2},\nabla u\rangle-2\frac{| \nabla u|^{4}}{u^{2}}]-[\log\frac{D}{u}-1](\partial_{t}-q-\Delta_{f})u\] \[-\gamma\frac{|\nabla u|^{2}}{u^{2}}(\partial_{t}-q-\Delta_{f})u+ qu-2\gamma q\frac{|\nabla u|^{2}}{u}. \tag{8.8}\]
Now by recalling the equation \(\square_{q}u=(\partial_{t}-q-\Delta_{f})u=\Sigma(t,x,u)\) satisfied by \(u\) and hence that \(2\langle\nabla u,\nabla\partial_{t}u\rangle=2\langle\nabla u,\nabla\Delta_{f}u\rangle+2\langle\nabla u,\nabla(qu)\rangle+2\langle\nabla u,\nabla\Sigma\rangle\) and upon invoking the Bochner-Weitzenbock formula \(\Delta_{f}|\nabla u|^{2}=2|\nabla^{2}u|^{2}+2\langle\nabla u,\nabla\Delta_{f}u\rangle+2\mathscr{R}ic_{f}(\nabla u,\nabla u)\) we can rewrite (8.8) after substitution from the above as
\[\square_{q}\mathscr{F}_{\gamma}[u]= \ (\gamma^{{}^{\prime}}-1)\frac{|\nabla u|^{2}}{u}-\frac{\gamma}{u}[ \partial_{t}g(\nabla u,\nabla u)+2|\nabla^{2}u|^{2}+2\mathscr{R}ic_{f}(\nabla u,\nabla u)\] \[-\frac{2}{u}\langle\nabla|\nabla u|^{2},\nabla u\rangle]-2\frac{ \gamma}{u}\frac{|\nabla u|^{4}}{u^{2}}+2\frac{\gamma}{u}\langle\nabla u, \nabla(qu)\rangle+2\frac{\gamma}{u}\langle\nabla u,\nabla\Sigma\rangle\] \[-[\log(D/u)-1]\Sigma(t,x,u)-\gamma\frac{|\nabla u|^{2}}{u^{2}}[ \Sigma(t,x,u)+2qu]+qu. \tag{8.9}\]
Using basic tensor algebra and making note of the non-negativity of the expression
\[|\nabla^{2}u|^{2}-\frac{\langle\nabla|\nabla u|^{2},\nabla u\rangle}{u}+\frac {|\nabla u|^{4}}{u^{2}}=\left|\nabla^{2}u-\frac{\nabla u\otimes\nabla u}{u} \right|^{2}\geq 0, \tag{8.10}\]
followed by an application of the Perelman-Ricci flow inequality \(\partial_{t}g+2\mathscr{R}ic_{f}(g)\geq-2\mathsf{k}g\) as satisfied by \(g\), we arrive at
\[\square_{q}\mathscr{F}_{\gamma}[u]\leq \ 2\gamma\langle\nabla u,\nabla q\rangle+(\gamma^{{}^{\prime}}+2 \mathsf{k}\gamma-1)\frac{|\nabla u|^{2}}{u}+2\frac{\gamma}{u}\langle\nabla u, \nabla\Sigma\rangle\] \[+[1-\log(D/u)]\Sigma(t,x,u)-\gamma\frac{|\nabla u|^{2}}{u^{2}} \Sigma(t,x,u)+qu, \tag{8.11}\]
which is the required conclusion.
**Proof of Theorem 2.11.** The function \(\gamma(t)=t/(1+2\mathsf{k}t)\) is non-negative and satisfies \(\gamma^{{}^{\prime}}+2\mathsf{k}\gamma-1\leq 0\). Applying Lemma 8.1 with \(q=0\) and \(eD\) in place of \(D\) (note that \(u\leq D\implies u\leq eD\)) we have from (8.2)
\[\square_{q}\mathscr{F}_{\gamma}[u]\leq \ 2\gamma\langle\nabla u,\nabla q\rangle+(\gamma^{{}^{\prime}}+2 \mathsf{k}\gamma-1)\frac{|\nabla u|^{2}}{u}+2\frac{\gamma}{u}\langle\nabla u, \nabla\Sigma(t,x,u)\rangle\] \[+[1-\log(eD/u)]\Sigma(t,x,u)-\gamma\frac{|\nabla u|^{2}}{u^{2}} \Sigma(t,x,u)+qu\] \[\leq \ (\gamma^{{}^{\prime}}+2\mathsf{k}\gamma-1)\frac{|\nabla u|^{2}}{u }+2\frac{\gamma}{u}\langle\nabla u,\Sigma_{x}(t,x,u)\rangle\] \[+2\frac{\gamma}{u}|\nabla u|^{2}\Sigma_{u}(t,x,u)-[\log(D/u)] \Sigma(t,x,u)-\gamma\frac{|\nabla u|^{2}}{u^{2}}\Sigma(t,x,u)\] \[\leq 2\frac{\gamma|\nabla u|^{2}}{u}\Sigma^{\prime}(u)-\frac{ \gamma|\nabla u|^{2}}{u^{2}}\Sigma(u)-\log(D/u)\Sigma(u)\leq 0.\]
Next, by an easy inspection, \(\mathscr{F}_{\gamma}[u](x,0)\leq 0\) for all \(x\in M\). Furthermore, as seen above, \(\square\mathscr{F}_{\gamma}[u]=(\partial_{t}-\Delta_{f})\mathscr{F}_{\gamma}[u]\leq 0\). Thus, since \(M\) is closed, an application of the maximum principle gives \(\mathscr{F}_{\gamma}[u](x,t)\leq 0\) for all \((x,t)\) in \(M\times[0,T]\), from which the desired estimate (2.27) follows. Next, to prove (2.28), set \(\mathsf{U}(x,t)=\log[eD/u(x,t)]\). Then by (2.27),
\[\left|\nabla\sqrt{\mathsf{U}}\right|=\left|\frac{\nabla u/u}{\sqrt{4\mathsf{U }}}\right|\leq\frac{\sqrt{1+2\mathsf{k}t}}{\sqrt{4t}}. \tag{8.12}\]
Integrating the above along a minimising geodesic joining a pair of points \(x_{1},x_{2}\) in \(M\) then gives
\[\sqrt{\log(eD/u(y,t))}-\sqrt{\log(eD/u(x,t))}\leq d(x,y;t)\frac{\sqrt{1+2 \mathsf{k}t}}{\sqrt{4t}}. \tag{8.13}\]
Thus, for any \(s>0\), \(\log(eD/u(y,t))\leq(1+s)[\log(eD/u(x,t))+d^{2}(x,y;t)(1+2\mathsf{k}t)/(4st)]\). Exponentiating and rearranging yields the desired inequality (2.28).
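We remark that the last step is the elementary inequality \((a+b)^{2}\leq(1+s)a^{2}+(1+1/s)b^{2}\), valid for every \(s>0\), applied after squaring (8.13) with \(a=\sqrt{\log(eD/u(x,t))}\) and \(b=d(x,y;t)\sqrt{(1+2\mathsf{k}t)/(4t)}\); explicitly,
\[\log\frac{eD}{u(y,t)}\leq(1+s)\log\frac{eD}{u(x,t)}+\frac{1+s}{s}\cdot\frac{d^{2}(x,y;t)(1+2\mathsf{k}t)}{4t}.\]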
### Acknowledgement
The authors gratefully acknowledge support from EPSRC.
|
2305.08070 | A Survey of Federated Evaluation in Federated Learning | In traditional machine learning, it is trivial to conduct model evaluation
since all data samples are managed centrally by a server. However, model
evaluation becomes a challenging problem in federated learning (FL), which is
called federated evaluation in this work. This is because clients do not expose
their original data to preserve data privacy. Federated evaluation plays a
vital role in client selection, incentive mechanism design, malicious attack
detection, etc. In this paper, we provide the first comprehensive survey of
existing federated evaluation methods. Moreover, we explore various
applications of federated evaluation for enhancing FL performance and finally
present future research directions by envisioning some challenges. | Behnaz Soltani, Yipeng Zhou, Venus Haghighi, John C. S. Lui | 2023-05-14T04:55:13Z | http://arxiv.org/abs/2305.08070v2 | # A Survey of Federated Evaluation in Federated Learning
###### Abstract
In traditional machine learning, it is trivial to conduct model evaluation since all data samples are managed centrally by a server. However, model evaluation becomes a challenging problem in federated learning (FL), which is called _federated evaluation_ in this work. This is because clients do not expose their original data to preserve data privacy. Federated evaluation plays a vital role in client selection, incentive mechanism design, malicious attack detection, etc. In this paper, we provide the first comprehensive survey of existing federated evaluation methods. Moreover, we explore various applications of federated evaluation for enhancing FL performance and finally present future research directions by envisioning some challenges.
## 1 Introduction
Recently, federated learning (FL) has emerged as a privacy-preserving framework, in which clients collaboratively train shared machine learning models without exposing their own local data during the training process [16]. FL can extensively exploit massive data samples scattered on decentralized clients such as Internet-of-Things (IoTs) and mobile devices for model training [22]. With FL, clients only expose model information rather than original data samples for training models. More specifically, the FL server distributes the global model to selected clients; participants train local models iteratively on their own data and send their local models to the server; the server aggregates the local models to generate the updated global model. The above steps are repeated for a certain number of iterations.
In traditional machine learning, it is trivial to conduct model evaluation with centrally collected data samples from clients. Yet, the model evaluation problem becomes very challenging in FL since all data samples are owned and privately retained by clients. Without owning any data, the server cannot manipulate data for model evaluation.
In FL, model evaluation plays a significant role in model training, which is much more complicated than traditional machine learning. On the one hand, evaluating a model accurately in FL is essential for designing incentive mechanisms by reasonably rewarding each participating client [18, 1, 13], devising efficient client selection strategies [11, 12], detecting malicious attacks [14, 15] and deriving personalized models [16, 17] based on evaluation results. On the other hand, each FL client has a local model trained on their own local data implying that each individual FL client can be evaluated independently. However, without the knowledge about clients' data, it is a challenging problem to evaluate the importance of clients.
To make model evaluation feasible in FL, tremendous efforts have been dedicated by existing works. We propose two different ways to categorize existing methods. Firstly, based on architecture, methods can be categorized into centralized federated evaluation and decentralized federated evaluation. The former assumes that a single FL server or task owner evaluates the quality of FL models. The latter recruits a number of independent clients to conduct federated evaluation of models in a distributed fashion. Secondly, federated evaluation methods can be categorized based on their evaluation approaches, such as data-level evaluation, utility-based approaches, Shapley value approaches, and statistical metric-based approaches.
To the best of our knowledge, there are no existing works that explore federated evaluation in different scenarios. To bridge this gap, we review existing methods, survey the applications of federated evaluation results, discuss the challenges of federated evaluation and envision potential future work.
## 2 Federated Evaluation Architecture
In this section, we briefly introduce the workflow of FL and discuss two kinds of federated evaluation architectures: centralized architectures and decentralized architectures.
### FL System
In a FL system, there are typically multiple decentralized clients that participate in the training process. Each client owns a training dataset and a test dataset. The objective of FL clients is to collaboratively train a shared model. FL is usually conducted for multiple global iterations (a.k.a. rounds). At the beginning of each global iteration, a global model is
distributed by the server to participating clients. On each participating client, the global model is updated with their local dataset to obtain a local model. Then, each client returns model information (e.g. model parameters and model gradients) to the server. The server aggregates collected models from participating clients to update the global model.
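To make the above workflow concrete, the following minimal sketch (not taken from any specific system surveyed here; the linear model, client data and hyperparameters are purely illustrative) simulates several FedAvg-style global iterations with data-size-weighted aggregation.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """A few gradient-descent steps for a linear regression model on one client."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One global iteration: broadcast, local training, size-weighted aggregation."""
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    fracs = sizes / sizes.sum()                    # FedAvg aggregation weights
    return sum(f * w for f, w in zip(fracs, local_ws))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for n in (30, 60, 90):                             # heterogeneous data sizes
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=n)))

global_w = np.zeros(2)
for _ in range(20):
    global_w = fedavg_round(global_w, clients)
print("estimated parameters:", global_w)
```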
In Fig. 1, we present a snapshot of FL for a particular global iteration. It is worth noting that:
* In FL, each individual client contributes a local model trained based on private local data. It implies that each model can be evaluated independently.
* The FL server cannot touch data samples owned by clients, and hence the server is unable to directly conduct evaluation of models. To overcome this shortcoming, the FL server can exploit its auxiliary data or employ statistical methods to evaluate clients.
Based on Fig. 1, we broadly discuss how existing works conduct federated evaluation to evaluate models including both the global model and local models in a FL system.
### Centralized Federated Evaluation
Clients can be evaluated using a single FL server or task owner based on statistical information uploaded by clients. The most straightforward way is to evaluate clients based on their data sizes [14]. The server can evaluate clients more accurately if clients can share label distributions to the server [15]. More complicated statistical metrics can be designed to evaluate local models on the server side based on model information (e.g. model parameters and model gradients), which are uploaded from clients to the server [13, 1].
If test data is available at the server, the server can evaluate local models using test data directly, though the assumption that the server holds test data is strong and impracticable in many scenarios. A number of centralized evaluation methods are introduced as follows. In [13], local models are evaluated using the validation set on the server. A key idea here is that if a high-quality local model participates in the aggregation process, the loss value of the global model should be decreased. FIFL [1] defines the marginal test loss to detect malicious clients in FL. It uses Taylor's first-order expansion to simplify the calculation. Therefore, the similarity distance between the local gradients and the server gradient obtained from the test dataset at the server is computed for detecting abnormal local models. In [13], local models are evaluated based on the difference between the average loss value of the global model on the test dataset and the average training loss value of the local models. The evaluation process also takes historical records into account. It allocates larger weights to recent records due to their higher level of informativeness. Clients are selected with the aim of maximizing the sum of evaluated qualities subject to a budget. The test performance difference with or without a local model is another metric to evaluate the importance of clients [12].
Even if the test data is not available on the server, clients can still be centrally evaluated using their local model information. For example, the difference between local model parameters before and after a training round is considered as a quality metric evaluated by the server [13].
### Decentralized Federated Evaluation
Federated evaluation can be performed on multiple decentralized clients or third parties instead of a single server because of two main reasons: serverless FL and decentrally distributed test data across clients.
In decentralized FL (DFL), there is no dedicated server to conduct centralized evaluation of models [20]. For example, blockchain-based FL consists of miners and devices without relying on a central server [17]. Miners, possibly from clients or third parties' devices such as base stations, are responsible for evaluating local models in a decentralized manner to exclude malicious attackers. Only local models that are verified by miners can be recorded in a generated block with a consensus algorithm. A hierarchical blockchain-based architecture is proposed by [12] which utilizes multiple consortium blockchains as subchains to conduct decentralized model evaluation based on model accuracy. A committee-based serverless FL is designed by [1], in which honest clients are selected as committee members to decentrally evaluate local models based on the difference between the local gradients and the committee gradients. Similarly, Refiner [13] selects a committee of randomly selected validators to evaluate local models based on loss values.
When test data is privately distributed on decentralized clients, model evaluation using test data must be performed in a distributed manner on clients. In this case, it is common to select clients to evaluate model performance using their own test data. The test results can be collected by the server for other purposes. For example, Oort [1] selects participants to serve the developer-specified criteria on testing data. FedFomo [13] and L2C [1] are personalized FL algorithms, which locally evaluate models collected from other clients to locally customize aggregation weights so as to pursue personalized models.
Figure 1: An overview of the federated learning process.
## 3 Federated Evaluation Approaches
Due to the inaccessibility of local data in FL, it is challenging to directly evaluate local models. Therefore, we introduce various approaches for indirect evaluation of models.
### Data-level Evaluation
Although accessing original data is forbidden in FL, it is still possible to obtain data quantity information from FL clients. To a certain extent, the local model quality can be evaluated by data quantity information.
In original FL, FedAvg simply uses the local dataset size to determine the aggregation weight of a particular client's local model [15]. However, the non-IID (Identical and Independently Distributed) distribution of data across clients can degrade the utility of the model. In [13], the server negotiates with clients about the sizes of their data, and in return, clients receive rewards based on their data sizes. The goal is to maximize the total amount of training data in order to achieve higher learning accuracy.
Later on, more advanced methods are proposed that evaluate the quality of clients based on their data distribution. In [14], prior to the training process, the server quantifies the intersection between the label sets of clients and the target label set. Clients with an intersection higher than a specified threshold are considered as relevant clients. To preserve privacy, the calculation of intersection is performed using a private set intersection (PSI) method [1]. Next, the server selects clients for training based on their high statistical homogeneity and content diversity. Statistical homogeneity is evaluated using the similarity between a uniform distribution generated on the server and clients' distributions based on homomorphic encryption. Content diversity is evaluated by computing the similarity of clients' data using a noisy content sketch, which is obtained as follows. Each client generates a content embedding vector for each sample using a general deep learning model. The client's data is encoded into a low-dimensional vector based on JL-transformation [15] as a content sketch.
Label quantity information can also be utilized to evaluate clients. In a grouping-based mechanism, clients are divided into multiple groups based on their label quantity information shared with the server. Only clients in the same group are selected for training [16]. This approach introduces a new metric named Group Earth Mover's Distance (GEMD) inspired by Earth Mover's Distance (EMD) [11] to evaluate the difference between the global data distribution and the selected local distributions. A smaller GEMD implies that the data distribution is closer to IID. A pair-wise grouping mechanism is proposed in which each client is initially considered as a separate group. Based on GEMD, groups are iteratively merged in pairs to complement missing labels. The objective is to make the aggregated data distribution close to the global distribution.
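As an illustration of comparing aggregated label distributions against a global reference, the sketch below pools the labels of a candidate group of clients and measures its distance to a uniform target; the 0.5 * L1 distance used here is only an EMD-style proxy for categorical labels, and the exact GEMD definition of the cited work may differ.

```python
import numpy as np

def label_histogram(labels, num_classes):
    """Normalized label distribution of one client."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    return counts / counts.sum()

def group_distance(client_label_sets, global_dist, num_classes):
    """Distance between the pooled label distribution of a group and the
    global distribution (0.5 * L1, an EMD-style proxy for categorical labels)."""
    pooled = np.concatenate(client_label_sets)
    group_dist = label_histogram(pooled, num_classes)
    return 0.5 * np.abs(group_dist - global_dist).sum()

num_classes = 4
global_dist = np.full(num_classes, 1.0 / num_classes)   # assume a uniform target

# Two clients with complementary (non-IID) labels.
client_a = np.array([0] * 40 + [1] * 10)
client_b = np.array([2] * 35 + [3] * 15)

print("client A alone :", group_distance([client_a], global_dist, num_classes))
print("A and B grouped:", group_distance([client_a, client_b], global_dist, num_classes))
```

Grouping the two complementary clients yields a smaller distance to the global distribution than either client alone, which is exactly the effect the grouping mechanism exploits.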
### Model Utility
Similar to traditional machine learning, the quality of a local model can be evaluated based on its utility, which can be measured in terms of the loss value or model accuracy.
A new client selection framework called Oort has been introduced in [14], which tries to select the most significant clients for training in each global iteration. Its metric to evaluate a client's importance is the loss value obtained by training the model with local data on each client. Based on the aggregate training loss across all data samples, Oort can preferentially select the most important clients to participate in FL. Similarly, FedSAE [14] evaluates the importance of each client based on the local training loss and the number of local samples. According to the importance values, the server determines the selection probability of each client per global iteration.
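A minimal sketch of loss-based client selection in the spirit of Oort is shown below; the statistical utility (data size times the root-mean-square sample loss) is one commonly used form and is an assumption here, and the system utility, exploration and staleness handling of the original design are omitted.

```python
import numpy as np

def statistical_utility(sample_losses):
    """Loss-based utility: |B_i| * sqrt(mean squared per-sample loss)."""
    losses = np.asarray(sample_losses, dtype=float)
    return len(losses) * np.sqrt(np.mean(losses ** 2))

def select_clients(per_client_losses, k):
    """Pick the k clients with the highest loss-based utility."""
    utilities = {cid: statistical_utility(l) for cid, l in per_client_losses.items()}
    ranked = sorted(utilities, key=utilities.get, reverse=True)
    return ranked[:k], utilities

rng = np.random.default_rng(1)
per_client_losses = {
    f"client_{i}": rng.exponential(scale=0.5 + i * 0.3, size=50) for i in range(6)
}
chosen, utilities = select_clients(per_client_losses, k=3)
print("selected:", chosen)
```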
In [21], an optimal aggregation mechanism is proposed to reduce overall data heterogeneity by excluding adverse local models with the objective of enlarging the expected decrement of the global loss. The proposed method iteratively removes a local model and compares the expected inner product between the local gradients and the global gradient before and after excluding the local model by assuming that the inner product implicitly represents the difference between local data distributions and global data distribution. To ensure that excluding local models leads to a faster convergence, global losses are measured for both global models (i.e. with and without the aforementioned local model) using a test dataset. Based on the change of loss values, the decision is made on whether to remove a local model. A tier-based FL system is proposed in [14], which divides clients into tiers based on their response latencies. Clients from the same tier are selected to participate in FL so that the training process is not slowed down by slow clients. However, a tier-based client selection can incur training bias since a faster tier is prone to be selected with a higher probability. To eliminate bias, the global model is evaluated by each tier to estimate the importance of that tier. The selection probability of a tier is adjusted based on the test accuracy obtained at different rounds.
Blockchain-based FL is proposed to make the FL process traceable and tamper-proof without relying on a single server [13, 20]. Blockchain-based FL widely adopts model utility to evaluate the quality of models contributed by clients. Refiner [11] introduces an FL system implemented upon Ethereum, a public blockchain, to deal with self-interested and malicious devices. A committee of randomly selected validators is employed to evaluate local models and prevent corrupted local models from participating in the aggregation process. Local models are evaluated by computing the loss function on the validation dataset provided by the FL task owner. If the loss values of the local models are lower than a specified threshold, they will be considered qualified and included in the aggregated global model. A hierarchical blockchain framework has been introduced in [17], which consists of a public blockchain as a main blockchain, and multiple consortium blockchains as subchains to store local models for model quality evaluation. In a subchain, the miners evaluate the quality of local models by evaluating their accuracy on a test dataset provided by the FL task owner. Local models are qualified if their accuracy is higher than a defined threshold, and qualified models will be recorded in a pending block later.
### Shapley Values
FL can be regarded as a cooperative game played by multiple FL clients. It is proposed that Shapley values (SV) [14, 1], a method in cooperative game theory, can efficiently evaluate the merit of each FL client. The computation of SV is based on the average contribution (in terms of model utility) of a data source to every subset of data sources. In the computation of SV, it is unnecessary to consider the order of data sources during training. However, the computation complexity of SV is exponential, which makes it unaffordable in reality. Different variants have been devised to approximately compute SV in FL.
A variant named federated SV has been introduced in [13] which computes SV from local models without extra communication costs. It also captures the effect of the order of participation because data sources employed earlier have more impact on the model performance compared to those used at the end of the training. The federated SV is estimated using algorithms such as permutation sampling-based approximation and group testing-based approximation. However, federated SV may lead to unfairness on a large scale since only a subset of clients are selected in each round and non-selected ones receive zero credit in the corresponding global iteration. Therefore, clients with identical local data may receive different credits. Completed federated SV has been introduced in [10] to address the aforementioned challenges. It introduces a utility matrix that consists of the contributions of all possible subsets of clients across all training iterations. Due to the partially observed utility matrix caused by the partial selection of clients in each round, the goal is to complete missing entries in the utility matrix. To achieve this, a low-rank matrix completion problem is designed. A group-based SV computation for blockchain-based FL is proposed in [15]. It divides clients into several groups according to a permutation sample, and the aggregated global model is obtained for each group. A new model is generated by aggregating different group models. Finally, the SV of each group is estimated based on model utility and assigned to its client members.
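The sketch below illustrates the generic permutation-sampling approximation of SV for client contribution evaluation; the `utility` function is a toy stand-in (in practice it would be the test performance of a model aggregated from the given subset of clients), and this estimator is not the exact federated, completed or group-based SV of the cited works.

```python
import numpy as np

def shapley_permutation_sampling(clients, utility, num_permutations=200, seed=0):
    """Monte Carlo estimate of Shapley values via random permutations of clients."""
    rng = np.random.default_rng(seed)
    clients = list(clients)
    sv = {c: 0.0 for c in clients}
    for _ in range(num_permutations):
        order = rng.permutation(clients)
        prefix, prev_utility = [], utility(frozenset())
        for c in order:
            prefix.append(c)
            cur_utility = utility(frozenset(prefix))
            sv[c] += cur_utility - prev_utility      # marginal contribution of c
            prev_utility = cur_utility
    return {c: v / num_permutations for c, v in sv.items()}

# Toy utility with diminishing returns in the total "data quality" of a subset.
quality = {"A": 1.0, "B": 1.0, "C": 0.2}             # hypothetical per-client quality
def utility(subset):
    return 1.0 - np.exp(-sum(quality[c] for c in subset))

print(shapley_permutation_sampling(["A", "B", "C"], utility))
```

By construction, for every sampled permutation the marginal contributions sum to the utility of the grand coalition, so the estimated values preserve this efficiency property on average.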
### Statistical Metric
Models can be indirectly evaluated based on statistical metrics. The most widely used one is model distance metrics such as distances between model parameters, gradients, or model performances.
[17] evaluates a local model using the model parameter difference metric before and after a training round. Clients providing higher model parameter divergence are considered to have higher quality, as clients with IID data have larger parameter differences than those with non-IID data. However, clients with large amounts of data can prolong the duration of a round, causing other clients to wait for those slow clients to complete their training. Therefore, a size-related ratio is added to the divergence metric to take into account both data distribution and data size. [17] further proposes to use the model parameter divergence between a local model and a model trained on an auxiliary IID dataset residing on the server to evaluate the degree of non-IID datasets. Clients with a lower degree of non-IID data lead to a lower divergence, which can accelerate the FL convergence.
The quality-aware framework proposed in [14] designs a mechanism to remove unreliable local models from the aggregation process. To evaluate the model quality, the framework measures the median, mean, and standard deviation of the cosine similarity between the local model parameters and the global model parameters. Since clients are selected based on the loss reduction during the learning process, the majority of the received updates are expected to be of high quality. Therefore, if the mean is greater than the median, then the similarity values of low-quality models are higher than the median; otherwise, they are lower. The distance between local and global gradients is computed in [1] using the square of the Euclidean norm to evaluate the contribution of clients. To differentiate between positive and negative contributions, a threshold is set based on the gradient distance. Clients with a gradient distance above the threshold are considered to have a negative contribution, while those below it have a positive contribution.
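The following sketch illustrates similarity-based screening of local updates: each update is compared with the mean update via cosine similarity and outliers are flagged; the flagging rule and threshold here are illustrative rather than those of the cited frameworks.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def screen_updates(local_updates, num_std=1.0):
    """Flag local updates whose cosine similarity to the mean update is unusually low."""
    mean_update = np.mean(local_updates, axis=0)
    sims = np.array([cosine(u, mean_update) for u in local_updates])
    threshold = sims.mean() - num_std * sims.std()   # simple illustrative rule
    flagged = [i for i, s in enumerate(sims) if s < threshold]
    return sims, flagged

rng = np.random.default_rng(2)
honest = [rng.normal(loc=1.0, scale=0.1, size=10) for _ in range(8)]
malicious = [-np.ones(10)]                           # an update pointing the opposite way
sims, flagged = screen_updates(honest + malicious)
print("similarities:", np.round(sims, 2))
print("flagged indices:", flagged)
```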
FOCUS [1] is proposed to evaluate local models by evaluating the quality of local data labels. Each client evaluates the performance of the global model on its local dataset and sends the evaluation results to the server. The server evaluates each local model on its benchmark dataset and calculates the cross entropy between the two sets of evaluation results to measure the quality of the clients' local labels. In [13], a deletion method is introduced where data samples from each client are deleted, the model is retrained, and the difference in prediction results between the new global model and the original one is computed to determine the contribution of each client.
A reputation system can be maintained to evaluate the reliability and quality of local models in FL. A client's reputation can be determined using the combination of a direct reputation (i.e. the reputation evaluated by the task requester) and an indirect reputation (i.e. the reputation evaluated by other requesters). Reputation values are used to conduct client selection in [10, 11], where the reputation of each client is calculated using the multiweight subjective logic model [10]. The model considers three weights: interaction effects (i.e. positive or negative interaction evaluated by quality measurement), interaction timeliness, and interaction frequency. To evaluate the quality of local models, attack detection mechanisms such as Reject on Negative Influence (RONI) [15] and FoolsGold [12] are used for IID and non-IID data distributions, respectively. RONI evaluates local models by computing the difference between the performance with and without a local model on a dataset specified by the task publisher. The corresponding local model is discarded from the aggregation process if the performance difference falls below a certain threshold. FoolsGold evaluates clients based on the gradient diversity of their local models. Clients uploading similar gradients in each round are identified as unreliable workers which may contribute unreliable models, and are excluded from the aggregation process. In the collaborative FL proposed in [17], each individual client can be a task requester or a participant. To evaluate the contribution
of local models, all local models and the global model are recorded at each global iteration. A local model that moves more towards the target model has a higher contribution to the global model, leading to a faster convergence speed. To measure the contribution of each client in each round, first, the direction vector is obtained between the initial global model and the final global model. Then, the local model is projected onto the direction vector and multiplied by the absolute value of the cosine of the angle between the local model and the direction vector. Eventually, the sum of the contributions of each client's local model across multiple rounds is calculated to obtain the overall contribution of clients.
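As a simple illustration of influence-based filtering in the RONI spirit, the sketch below re-aggregates the remaining models after removing each local model in turn and discards models whose inclusion hurts a toy validation score; the models, aggregation rule and score function are placeholders rather than any cited implementation.

```python
import numpy as np

def aggregate(models):
    return np.mean(models, axis=0)

def score(model, X_val, y_val):
    """Toy validation score: negative mean squared error of a linear model."""
    return -float(np.mean((X_val @ model - y_val) ** 2))

def roni_filter(local_models, X_val, y_val, threshold=0.0):
    """Keep a local model only if the performance difference between aggregating
    with and without it does not fall below `threshold`."""
    base = score(aggregate(local_models), X_val, y_val)
    kept = []
    for i in range(len(local_models)):
        rest = local_models[:i] + local_models[i + 1:]
        influence = base - score(aggregate(rest), X_val, y_val)
        if influence >= threshold:
            kept.append(i)
    return kept

rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0])
X_val = rng.normal(size=(200, 2))
y_val = X_val @ true_w + 0.05 * rng.normal(size=200)

honest = [true_w + 0.05 * rng.normal(size=2) for _ in range(4)]
poisoned = [-true_w]                                 # an adversarial, flipped model
print("kept indices:", roni_filter(honest + poisoned, X_val, y_val))
```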
In serverless FL, local models can also be evaluated based on statistical metrics. A committee-based serverless FL framework is proposed in [3] where honest clients are selected as committee members to filter local gradients for defending against Byzantine attacks or accelerating FL convergence. To exclude attackers, the local gradients close to the committee gradients are selected for model aggregation based on the Euclidean distance. This is because the Euclidean distance between a malicious gradient and an honest gradient is larger than the distance between two honest gradients. However, this strategy may degrade the performance of FL since honest clients with large gradient differences have less opportunity to participate in aggregation. Therefore, to accelerate FL convergence in a non-attack scenario, clients with different local updates are accepted. To obtain the final score of each client calculated using the Euclidean distance, the committee members broadcast their evaluation scores to each other. To reach a consensus, a client is selected randomly as the primary client, which sends a request to the other committee members to confirm the correctness of its aggregation set. If so, the aggregation process is performed on the committee clients, and if the result is consistent with the request, it is sent back to the primary client. If the primary client receives a sufficient number of consistent results, a consensus is reached. Otherwise, the primary committee member is reallocated and the process is repeated.
In FL, it is possible that there are multiple learning tasks such as personalization. The challenge for training multiple tasks is to customize the training progress for each individual task based on federated evaluation results. A personalized FL framework is introduced in [10] where clients have separate personalized target models for learning at the server. It investigates pair-wise collaborations among clients. Specifically, the server collects local personalized model parameters from clients to update the model for each client by a weighted convex combination of received model parameters. The weights (i.e. contributions) of clients for each personalized target model are evaluated by the similarity between the model parameters of two clients. Therefore, clients with similar model parameters have more contributions to each other. Each client can request its respective model parameters from the server to optimize its local personalized model using their private data. A similar personalized FL architecture (i.e. a personalized target model for each client) has been introduced in [11] which assigns different weights for model layers when aggregating personalized models. More specifically, local features are more related to shallow model layers while global features correspond to deeper model layers. The proposed method employs layer-wise aggregation to achieve higher performance for personalized model training. A hypernetwork [1] for each client is employed on the server to generate the aggregation weight for each layer of different clients. Clients with a similar data distribution have higher weights for aggregation. FedDist algorithm [2] uses distances between neurons to measure how different neurons are between the global and local models. First, the server performs weighted averaging which is similar to the generation of the initial aggregated model in FedAvg. Then, the pairwise Euclidean distance is computed for each neuron in a layer between the local models and the aggregated model to identify diverging neurons. If the Euclidean distance between a specific neuron in any local model and the aggregated model is above a predetermined threshold the neuron is added to the aggregated model in order to improve model generalization. Thus, neurons that are specific to clients are incorporated into the aggregated model.
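A simplified illustration of similarity-driven personalized aggregation is given below: each client's personalized model is a convex combination of all clients' models with weights that increase as the parameter distance decreases (a softmax over negative squared distances); the actual weighting rules of the cited methods differ in detail.

```python
import numpy as np

def personalized_aggregate(client_models, temperature=1.0):
    """For each client, combine all models with weights favouring similar parameters."""
    models = np.stack(client_models)                  # shape: (num_clients, dim)
    diffs = models[:, None, :] - models[None, :, :]
    d2 = (diffs ** 2).sum(axis=-1)                    # pairwise squared distances
    weights = np.exp(-d2 / temperature)
    weights /= weights.sum(axis=1, keepdims=True)     # convex combination per client
    return weights @ models                           # one personalized model per client

rng = np.random.default_rng(6)
cluster_a = [np.array([1.0, 1.0]) + 0.05 * rng.normal(size=2) for _ in range(3)]
cluster_b = [np.array([-1.0, 2.0]) + 0.05 * rng.normal(size=2) for _ in range(3)]
personalized = personalized_aggregate(cluster_a + cluster_b)
print(np.round(personalized, 2))
```

Clients whose parameters lie in the same cluster end up contributing most of the weight to each other's personalized models, which is the qualitative behaviour described above.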
A personalized FL algorithm based on local memorization has been proposed in [14]. It combines an aggregated global model with a k-nearest neighbors (kNN) model on each client. The global model is employed to compute the shared representation used by the local kNN. Each client computes and stores a representation-label pair for each sample. At inference time, the client queries the representation-label pair to obtain its k-nearest neighbors based on the model distance. Finally, the personalized model for a sample is obtained by interpolating the nearest neighbor distribution with the global model.
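A minimal sketch of this interpolation idea follows: a kNN label distribution computed over stored (representation, label) pairs is mixed with the global model's predictive distribution; the representations, the distance measure and the mixing coefficient below are illustrative assumptions, not those of the cited algorithm.

```python
import numpy as np

def knn_label_distribution(query, memory_reps, memory_labels, num_classes, k=5):
    """Label histogram of the k nearest stored representations."""
    dists = np.linalg.norm(memory_reps - query, axis=1)
    nearest = np.argsort(dists)[:k]
    hist = np.bincount(memory_labels[nearest], minlength=num_classes).astype(float)
    return hist / hist.sum()

def personalized_prediction(query, global_probs, memory_reps, memory_labels,
                            num_classes, mix=0.5, k=5):
    """Interpolate the kNN distribution with the global model's prediction."""
    knn_probs = knn_label_distribution(query, memory_reps, memory_labels, num_classes, k)
    return mix * knn_probs + (1.0 - mix) * global_probs

rng = np.random.default_rng(4)
num_classes = 3
memory_reps = rng.normal(size=(60, 8))            # stored per-sample representations
memory_labels = rng.integers(0, num_classes, size=60)
query = memory_reps[0] + 0.01 * rng.normal(size=8)
global_probs = np.array([0.2, 0.5, 0.3])          # hypothetical global model output

print(personalized_prediction(query, global_probs, memory_reps, memory_labels, num_classes))
```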
Mutual Information (MI) between model parameters or gradients is another kind of useful statistical metric for evaluating local models. In [13], a novel FL mechanism is introduced that exploits MI for both the client-side weight update and the server-side aggregation. The clients' model weights are updated by minimizing the MI between their local models and the aggregated global model. To extract distinct information, the correlation between two models must be minimized, which leads to minimized MI between them. For the model aggregation step, clients send MI values between their local models and the global model to the server. The server defines local models that are either similar to other models (with too high MI values) or significantly different from others (with too low MI values) as outliers. The FL server ranks the uploaded MI values to select the top useful local models for aggregation. Model-contrastive FL (MOON) [12] utilizes contrastive learning at the model level. MOON is built upon FedAvg which incorporates modifications in the local training phase. Its local objective is to decrease the distance between the representation learned by each local model and the global model while increasing the distance between the representation learned by the current local model and the previous local model.
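The sketch below shows a MOON-style model-contrastive term computed on representation vectors, pulling the local representation towards the global model's representation and pushing it away from the previous local model's representation; the cosine similarity, temperature and toy inputs are illustrative choices.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def model_contrastive_loss(z_local, z_global, z_prev, temperature=0.5):
    """Contrastive term: positive pair (local, global), negative pair (local, previous local)."""
    pos = np.exp(cosine_sim(z_local, z_global) / temperature)
    neg = np.exp(cosine_sim(z_local, z_prev) / temperature)
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(5)
z_global = rng.normal(size=16)
z_prev = rng.normal(size=16)
z_near_global = z_global + 0.1 * rng.normal(size=16)
z_near_prev = z_prev + 0.1 * rng.normal(size=16)

print("loss near global model  :", model_contrastive_loss(z_near_global, z_global, z_prev))
print("loss near previous model:", model_contrastive_loss(z_near_prev, z_global, z_prev))
```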
Class imbalance in FL has been investigated in [15] without having access to raw data in order to evaluate the importance of local models. The class distribution of clients can be obtained using their updated gradients, assuming that a balanced auxiliary dataset exists on the server for the classification problem. The correlation between the gradients
with respect to the corresponding classes brought by auxiliary data on the server and class distribution [1] allows for the calculation of the class imbalance of each client using the Kullback-Leibler (KL) divergence. The statistics of class distributions are learned using Combinatorial Multi-Armed Bandit (CMAB) [1], and the client selection process is considered as a CMAB problem in order to identify clients with minimal class imbalance.
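As a small illustration of scoring class imbalance with KL divergence, the sketch below compares (possibly estimated) client label distributions against a balanced reference and selects the least imbalanced clients; estimating these distributions from gradients and the CMAB-based selection of the cited work are beyond this toy example.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def select_balanced_clients(estimated_dists, k):
    """Select the k clients whose (estimated) label distribution is closest to uniform."""
    num_classes = len(next(iter(estimated_dists.values())))
    uniform = np.full(num_classes, 1.0 / num_classes)
    scores = {cid: kl_divergence(d, uniform) for cid, d in estimated_dists.items()}
    return sorted(scores, key=scores.get)[:k], scores

estimated_dists = {
    "client_0": [0.25, 0.25, 0.25, 0.25],   # balanced
    "client_1": [0.70, 0.10, 0.10, 0.10],   # heavily skewed
    "client_2": [0.40, 0.30, 0.20, 0.10],
}
chosen, scores = select_balanced_clients(estimated_dists, k=2)
print("selected:", chosen)
```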
## 4 Applications of Federated Evaluation
Federated evaluation can enhance FL performance from multiple aspects. In this section, we discuss the applications of federated evaluation results in FL to illustrate its importance.
### Understanding Global Model
It is vital to evaluate the global model performance on test datasets during training to understand the performance of FL and determine the cut-off accuracy.
In the ideal scenario, the server holds the test dataset and can centrally evaluate the global model. However, in most cases, the test data is not available on the FL server. As a result, the server resorts to conducting federated evaluation on selected clients. However, randomly selecting testing clients can lead to data deviations from the target distribution and may result in biased testing results. In [11], a method is proposed to evaluate data deviations from the global distribution before selecting clients to test the global model. If data characteristics are not available, the proposed method estimates the number of participants in a way that bounds the deviation. If data characteristics are provided, the method iteratively selects the clients with the largest number of samples until pre-defined conditions are met.
### Incentive Mechanism Design
FL is open in the sense that clients can depart the system at any time. Since the server does not centrally own the data, it is difficult to force clients to contribute their models. Due to the computation and communication costs of model training, clients are inherently reluctant to contribute to FL altruistically without any rewards. Thus, incentive mechanisms are indispensable to motivate clients to participate in FL. Pioneering works have designed incentive mechanisms to prevent free-riders and encourage the contribution of high-quality models by allocating rewards to clients in accordance with their contributions [15, 16, 17, 18].
A critical challenge in designing incentive mechanisms is how to allocate rewards among clients based on their contributions in a reasonable manner. The evaluation results of local models can directly guide the allocation of rewards to incentivize clients. Multiple works have investigated incentive mechanisms in FL. In the simplest case, data-level approaches consider only the data size to determine the reward of a client [14]. More sophisticated methods employ model distance metrics to design incentive mechanisms. Fair [15] establishes a reverse auction mechanism in which clients submit their bids to the server for participating in FL. It formulates a learning quality maximization (LQM) problem to maximize the sum of the qualities of all selected participants within the learning budget and uses a greedy algorithm to solve this problem. Another incentive mechanism based on reputation and reverse auction is proposed in [17]. Participants are rewarded by combining their bids and reputation scores. The reputation of clients is evaluated based on a model distance metric. FIFL [1] designs an incentive mechanism that rewards clients based on their contributions and reputations. It uses a model distance metric based on gradient similarity to evaluate clients' contributions. Some existing works use Shapley values to evaluate the contribution of clients and determine reward allocation [19, 16, 15]. In Refiner [17], clients are rewarded based on their contributions, which are evaluated using both a model distance metric and the marginal performance loss.
### Client Selection
Due to the limited processing capacity of the server in FL, only a limited number of clients can be selected to participate in FL at each global iteration. How to select clients is a crucial problem that can significantly influence the model utility [18, 1, 19, 16]. Advanced client selection schemes can be devised based on federated evaluation results. More specifically, clients that can contribute more valuable models should be selected with higher priority. A well-designed client selection scheme can not only improve the final model utility but also shorten the training time by expediting the convergence of FL [13]. For example, Oort [11] uses the local model utility evaluated on clients when making client selection decisions. Similarly, FedSAE [1] considers the local training loss as a utility metric to select clients. In a data-level approach, the server can select clients based on the statistical homogeneity and content diversity of their data [1]. The grouping-based scheduling method proposed in [16] divides clients into several groups to complement their missing labels. Clients within the same group are selected for training. Some existing works develop model distance metrics for designing client selection schemes. Fair [15] employs the loss reduction to select participants in a way that maximizes the sum of the qualities of all participants. Methods proposed in [17, 18] utilize the divergence of model parameters to evaluate the quality of local models.
### Malicious Attack Detection
In FL, malicious clients may easily launch attacks, such as poisoning attacks, by tampering with data labels or model gradients to deteriorate model utility. Since data is not exposed by FL clients, malicious attack detection algorithms designed for conventional machine learning are not applicable to FL.
Malicious attackers in FL can be identified and excluded from the model aggregation through accurate and efficient evaluation of local models. FIFL [1] employs gradient similarity as a model distance metric to detect abnormal gradients and malicious attackers. In [12, 13], a client contributing a very low-quality model is considered a malicious attacker and excluded from the aggregation process based on a model distance metric, i.e. the performance difference for IID datasets and the gradient diversity of local models for non-IID datasets.
### Personalized Federated Learning
It is well-known that the data distribution across different clients is often non-IID, which can lead to poor generalization performance of the global model on all local data distributions [11, 12]. Personalized federated learning (PFL) has been proposed to address this problem [12]. In PFL, each individual client has a learning objective slightly different from those of other clients. Thus, each client seeks to collaborate closely with clients owning a more similar data distribution. Learning the similarity between data distributions can be accomplished using federated evaluation. For example, in [10], the distance between models is used as a metric to evaluate the similarity of the data distributions on different clients. A personalized model aggregation algorithm is devised which enables each client to assign higher weights to more similar clients when aggregating models. Similarly, in layer-wise personalized FL [13], each layer from different clients is assigned a different aggregation weight based on the similarity between the data distributions of clients.
## 5 Summary and Challenges
Federated evaluation is indispensable for FL to achieve high-performance models without accessing clients' data. In Table 1, we summarize a number of existing federated evaluation studies, which are categorized based on their approaches. Their applications, evaluation architectures, and the key idea of each work are briefly illustrated.
In spite of tremendous efforts made by existing works, there are several challenges calling for more significant novel contributions, which are summarized as follows.
1. Differentially private FL, which injects zero-mean noise to obfuscate exposed information [23], can greatly complicate federated evaluation. Differentially private (DP) noise will disturb the accurate evaluation of model quality. Meanwhile, the noise scale is amplified with the number of model exposures, which considerably restricts the number of times a local model can be evaluated.
2. If federated evaluation tasks are offloaded to clients or third parties, it is difficult to guarantee that these evaluators will return genuine and accurate evaluation results. They can easily attack federated evaluation by returning falsified evaluation results.
3. For fully decentralized FL (DFL), clients contact each other in an ad hoc manner to exchange model parameters [14]. Without the coordination of a server, federated evaluation becomes more difficult since the information collected by each DFL client is too limited to fully support the evaluation of models.
4. In online FL, data is continuously collected and generated by clients [1]. To conduct federated evaluation in online FL, it is required to continuously track the change of model evaluation results with the arrival of new data.
## Acknowledgment
This work was supported by ARC DP210101723.
\begin{table}
Table 1: Summary of existing federated evaluation studies, grouped by evaluation approach (data-level evaluation, model utility, Shapley values and statistical metrics), listing for each work its application, key ideas and evaluation architecture (centralized or decentralized).
\end{table} |
2305.17150 | ModelFLOWs-app: data-driven post-processing and reduced order modelling
tools | This article presents an innovative open-source software named
ModelFLOWs-app, written in Python, which has been created and tested to
generate precise and robust hybrid reduced order models (ROMs) fully
data-driven. By integrating modal decomposition and deep learning methods in
diverse ways, the software uncovers the fundamental patterns in dynamic
systems. This acquired knowledge is then employed to enrich the comprehension
of the underlying physics, reconstruct databases from limited measurements, and
forecast the progression of system dynamics. These hybrid models combine
experimental and numerical databases, and serve as accurate alternatives to
numerical simulations, effectively diminishing computational expenses, and also
as tools for optimization and control. The ModelFLOWs-app software has
demonstrated in a wide range of applications its great capability to develop
reliable data-driven hybrid ROMs, highlighting its potential in understanding
complex non-linear dynamical systems and offering valuable insights into
various applications. This article presents the mathematical background, reviews
some examples of applications and introduces a short tutorial of
ModelFLOWs-app. | A. Hetherington, A. Corrochano, R. Abadía-Heredia, E. Lazpita, E. Muñoz, P. Díaz, E. Moira, M. López-Martín, S. Le Clainche | 2023-05-26T08:29:12Z | http://arxiv.org/abs/2305.17150v1 | # ModelFLOWs-app: data-driven post-processing and reduced order modelling tools
###### Abstract
This article presents an innovative open-source software named ModelFLOWs-app1, written in Python, which has been created and tested to generate precise and robust hybrid reduced order models (ROMs) fully data-driven. By integrating modal decomposition and deep learning methods in diverse ways, the software uncovers the fundamental patterns in dynamic systems. This acquired knowledge is then employed to enrich the comprehension of the underlying physics, reconstruct databases from limited measurements, and forecast the progression of system dynamics. These hybrid models combine experimental and numerical databases, and serve as accurate alternatives to numerical simulations, effectively diminishing computational expenses, and also as tools for optimization and control. The ModelFLOWs-app software has demonstrated in a wide range of applications its great capability to develop reliable data-driven hybrid ROMs, highlighting its potential in understanding complex non-linear dynamical systems and offering valuable insights into various applications. This article presents the mathematical background, reviews some examples of applications and introduces a short tutorial of ModelFLOWs-app.
Footnote 1: The website of the software is available at [https://modelflows.github.io/modelflowsapp/](https://modelflows.github.io/modelflowsapp/)
keywords: open-source software, reduced order models, data analysis, deep-learning, patterns identification, data-driven methods
Footnote †: journal: Journal of Computational Physics
## 1 Introduction
The availability of high-quality data has sparked a revolution in machine learning and reduced order modeling. Data-driven equation-free models offer a promising approach to understanding complex non-linear dynamical systems, even without prior knowledge of the governing equations. Machine learning tools allow us to extract knowledge directly from the data, rather than relying solely on theoretical principles. This shift encourages the use of data to uncover new hypotheses and models. While these techniques present challenges, they also provide significant opportunities for advancing our understanding of such systems, with applications in various industries including aerospace, automotive, construction, pharmaceuticals, chemicals, manufacturing, and more.
There are two primary approaches to data-driven modeling. The first approach involves data forecasting models, which focus on predicting future data using machine learning techniques like deep neural networks. These models do not incorporate explicit physical insights into their construction. The second approach is represented by reduced order models (ROMs) with physical insights. These hybrid models incorporate physical understanding and utilize pattern identification techniques such as proper orthogonal decomposition [1] or dynamic mode decomposition [2] to extract relevant spatio-temporal information from the data.
By employing hybrid data-driven ROMs in non-linear dynamical systems, several advantages emerge over relying solely on deep neural networks. These ROMs enable the identification of key instabilities and mechanisms within the studied database by capturing important information about the underlying physics. Furthermore, they facilitate the development of powerful tools for optimization and control. Having a deeper physical understanding of the problem allows for enhanced system phase prediction, adoption of more controlled and robust strategies, reduction in computational costs for numerical simulations, and streamlined information collection in experiments.
This article introduces the methodology and some relevant results obtained with a totally novel software, named ModelFLOWs-app, which has been shown suitable for developing accurate and robust fully data-driven hybrid ROMs. The software combines modal decomposition and deep learning strategies in several ways to reveal the main patterns in dynamical systems, and uses this information to understand the physics of the problem under study, to reconstruct databases from sparse measurements or to predict the evolution of the system dynamics, providing accurate alternatives to numerical simulations and reducing in this way the related computational cost.
This tool originated in the analysis of complex flows, where ModelFLOWs-app has been tested in a wide range of applications, for instance, (i) for temporal forecasting in reactive flows [3], wind turbines [4], compressible flows with buffeting [5], etc., (ii) for the identification of flow instabilities and patterns in wall-bounded turbulent flows [6; 7], synthetic jets [8], urban flows [9], etc., (iii) for data reconstruction in experiments modelling turbulent or complex flows [10; 11; 12], etc. Additionally, due to the robustness and exceptional properties of the models and algorithms composing ModelFLOWs-app, the software has been tested and extended to a wide range of industrial applications, solving problems in complex nonlinear dynamical systems. Some examples of the multiple applications of this tool include pattern identification and reconstruction in medical imaging [13; 14], wind velocity predictions with LiDAR experimental measurements [15], identification of cross-flow instabilities [4], and prediction of flutter instability in flight test experiments [16; 17], to name a few. We invite the readers to review carefully the applications presented in this article and to complement their knowledge with the tutorials and videos presented on the ModelFLOWs-app website [18].
The article is organized as follows. A general description of the methodological framework behind ModelFLOWs-app is introduced in §2, and the methodology connected to the two big modules forming this software is shown in §3 and §4. A review of some of the main results of ModelFLOWs-app is presented in §5. Finally, §6 explains the main conclusions.
## 2 Methodology
ModelFLOWs-app's methodological framework is formed by two big modules: Module 1 uses modal decomposition methods, and Module 2 is formed by hybrid machine learning tools, which combine modal decomposition with deep learning architectures. Each module solves three different applications: (1) patterns identification, suitable to study the physics behind the data analysed; (2) data reconstruction, capable of reconstructing two- or three-dimensional databases from a set of selected points, using data from sensors, or repairing missing data; (3) data forecasting, which builds reduced order models (ROMs) to predict the spatio-temporal evolution of the signal analysed. Figure 1 shows a sketch with the general organization of the software.
This section introduces the mathematical principles behind each Module, and also provides some examples of application. A detailed guide on how to treat and analyse the selected databases to exploit the properties of this software is presented on the ModelFLOWs-app website [18]. The software is presented in two different modalities. The first one is the web application, which is mounted on an interface built with _streamlit_ [19]. This modality is suitable for understanding the possibilities of ModelFLOWs-app, the setting parameters needed to run each application, the input and output files, and the general capabilities of the software. The second one is formed by the main code, written in Python [20], which calls the modules and applications. This modality is suitable for complex problems and big databases. The codes are open to the general public, so they can be locally modified and adapted to the user's needs.
### Data organization
ModelFLOWs-app is a fully data-driven software, so the user should first provide the desired databases to analyze and next select the application to solve.
Figure 1: General distribution of ModelFLOWs-app software.
In matrix form, a database is formed by a set of \(K\) snapshots \(\mathbf{v}_{k}=\mathbf{v}(t_{k})\), where \(t_{k}\) is the time measured at instant \(k\), that for convenience, are collected in the following _snapshot matrix_
\[\mathbf{V}_{1}^{K}=[\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\mathbf{v}_{k},\mathbf{v}_{k+1},\ldots, \mathbf{v}_{K-1},\mathbf{v}_{K}]. \tag{2.1}\]
For some applications, it is more convenient to re-organize the database into tensor form, called the _snapshot tensor_. In that case, the various components that constitute the database are separated and re-organized into different tensor components, and similarly, the different spatial coordinates are also separated. Generally, the database components are formed by velocity components (especially in fluid dynamics applications), although it is possible to consider any type of variable depending on the application solved. For instance, in combustion databases the components are formed by the different species [3]; to identify flutter instability in flight tests, the components are formed by the signals given by an array of accelerometers [17]; in atmospheric boundary layer flows, in addition to the velocity components, the pressure is included [12]; etcetera.
In the snapshot tensor, the snapshots form a multidimensional array, which depends on more than two indexes. The _fibers_ of the tensor are formed by the corresponding matrix columns and rows. Fig. 2 shows an example of a third order tensor, where the tensor fibers are identified.
Figure 2: The fibers of a third order tensor.
Generally, the algorithms presented in ModelFLOWs-app consider fourth and fifth order tensors for the analysis of two- and three-dimensional databases, respectively. For instance, let us consider a two-dimensional database (plane) formed by two velocity components (although, as mentioned before, the type and number of components are dependent on the database studied), the streamwise and normal velocity components \(v_{x}\) and \(v_{y}\) of the in-plane velocity \(\mathbf{v}\) in a Cartesian coordinate system with dimension \(J_{2}\times J_{3}\), as
\[\mathbf{v}(x_{j_{2}},y_{j_{3}},t_{k})\quad\text{for }j_{2}=1,\ldots,J_{2},\quad j_{3}=1, \ldots,J_{3},\quad k=1\ldots,K. \tag{2.2}\]
The snapshots can be re-organized in a fourth-order \(J_{1}\times J_{2}\times J_{3}\times K\)-tensor \(\mathbf{V}\), whose components \(V_{j_{1}j_{2}j_{3}k}\) are defined as
\[V_{1j_{2}j_{3}k}=v_{x}(x_{j_{2}},y_{j_{3}},t_{k}),\quad V_{2j_{2}j_{3}k}=v_{y}( x_{j_{2}},y_{j_{3}},t_{k}). \tag{2.3}\]
The indexes \(j_{1}\), \(j_{2}\), \(j_{3}\) and \(k\) label the velocity components (in this particular case \(j_{1}=1,2\), where \(J_{1}=2\)), the discrete values of the two spatial coordinates, and the values of time. Note that, although we present a particular case for simplicity, the snapshot tensor \(V_{j_{1}j_{2}j_{3}k}\) can be used with a different number of components \(J_{1}\), satisfying the needs of the problem under study.
For three-dimensional time-dependent databases, the database is organized in a fifth-order \(J_{1}\times J_{2}\times J_{3}\times J_{4}\times K\)-tensor \(\mathbf{V}\), whose components \(V_{j_{1}j_{2}j_{3}j_{4}k}\) are defined as
\[\begin{split} V_{1j_{2}j_{3}j_{4}k}&=v_{1}(x_{j_{2}},y_{j_{3}},z_{j_{4}},t_{k}),\\ V_{2j_{2}j_{3}j_{4}k}&=v_{2}(x_{j_{2}},y_{j_{3}},z_{j_{4}},t_{k}),\\ &\cdots\\ V_{j_{1}j_{2}j_{3}j_{4}k}&=v_{j_{1}}(x_{j_{2}},y_{j_{3}},z_{j_{4}},t_{k}),\\ &\cdots\\ V_{J_{1}j_{2}j_{3}j_{4}k}&=v_{J_{1}}(x_{j_{2}},y_{j_{3}},z_{j_{4}},t_{k}).\end{split} \tag{2.4}\]
\(J_{1}\) is the number of selected components forming the database (i.e., three velocity components), the indexes \(j_{2}\), \(j_{3}\) and \(j_{4}\) correspond to the discrete values of the three spatial coordinates, \(x\), \(y\) and \(z\), which are streamwise, normal and spanwise components, and \(k\) is the index representing the time instant.
Let us note the simple connection between the snapshot matrix, presented in eq. (2.1), and the present snapshot tensor formulation. In the snapshot matrix, the tensor indexes \(j_{1}\), \(j_{2}\), \(j_{3}\) (and \(j_{4}\) for three-dimensional databases) are folded together into a single index \(j\). So, \(\mathbf{V}_{1}^{K}\in\mathbb{R}^{J\times K}\), where \(J=J_{1}\times J_{2}\times J_{3}(\times J_{4})\). Hence, data can be easily transformed from matrix to tensor and tensor to matrix just by re-shaping the database, using the _reshape_ function found in Numpy in Python [20], as it is done in ModelFLOWs-app.
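To make this matrix-tensor correspondence explicit, the following minimal numpy sketch (array sizes are illustrative) folds the component and spatial indexes of a fourth-order snapshot tensor into the single index \(J=J_{1}J_{2}J_{3}\) and unfolds it back:

```python
import numpy as np

J1, J2, J3, K = 2, 64, 32, 100               # components, x, y, time (illustrative)
V_tensor = np.random.randn(J1, J2, J3, K)    # fourth-order snapshot tensor

# Fold component and spatial indexes into a single index J = J1*J2*J3
V_matrix = V_tensor.reshape(J1 * J2 * J3, K)  # snapshot matrix, J x K

# Unfold back to tensor form
V_back = V_matrix.reshape(J1, J2, J3, K)
assert np.array_equal(V_tensor, V_back)
```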
### Relative root mean square error (RRMSE)
The relative root mean square error (RRMSE) is computed to measure the quality of the ROMs developed in each one of the modules as
\[RRMSE=\sqrt{\frac{\sum_{k=1}^{K}||\mathbf{v}_{k}-\mathbf{v}_{k}^{approx.}||^{2}}{ \sum_{k=1}^{K}||\mathbf{v}_{k}||^{2}}}, \tag{2.5}\]
where \(||\cdot||\) is the usual Euclidean norm and vectors \(\mathbf{v}_{k}\) and \(\mathbf{v}_{k}^{approx.}\) correspond to the real and approximated solution.
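A direct numpy implementation of eq. (2.5), assuming the reference and approximated snapshots are stored column-wise in \(J\times K\) matrices, could read:

```python
import numpy as np

def rrmse(V_true, V_approx):
    """Relative root mean square error, eq. (2.5), for J x K snapshot matrices."""
    num = np.sum(np.linalg.norm(V_true - V_approx, axis=0) ** 2)
    den = np.sum(np.linalg.norm(V_true, axis=0) ** 2)
    return np.sqrt(num / den)
```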
## 3 Module 1: modal decomposition
### Singular value decomposition (SVD) and proper orthogonal decomposition (POD)
Lumley[21] introduced Proper Orthogonal Decomposition (POD) as a mathematical approach for extracting coherent structures from turbulent flows. The primary objective of POD is to decompose data into modes that optimize the mean square of a field variable under analysis. The classical method to calculate POD modal expansion is based on the covariance of a state vector that changes over time, with the size of the state vector being based on the spatial degrees of freedom of the data. This method becomes extremely computationally expensive for large two-dimensional or three-dimensional problems. In such cases, a different technique, singular value decomposition (SVD) is used to obtain the POD modes, introduced by Sirovich [1].
In fluid dynamics, specifically in the study of turbulent flows, SVD is a widely used factorization technique. SVD captures the principal directions of a matrix, i.e., the directions along which the data is stretched or shrunk, which are determined by the singular values and singular vectors of the (generally rectangular) data matrix. In addition to fluid dynamics, SVD has found a wide range of applications, particularly in low-rank matrix approximations. This approach is beneficial because it reduces the size of the data being analyzed, removes noise and filters spatial redundancies [6].
It is remarkable that the literature often uses the terms SVD and POD interchangeably, but SVD is one of the two possible techniques that can be applied to obtain a POD decomposition.
SVD (or POD) decomposes a collection of spatio-temporal data \(\boldsymbol{v}(x,y,z,t)\) into a set of proper orthogonal spatial modes, also known as SVD or POD
modes, represented by \(\mathbf{\Phi}_{n}(x,y,z)\), which are weighted by the temporal coefficients \(\mathbf{c}_{n}(t)\), as
\[\mathbf{v}(x,y,z,t)\simeq\sum_{n=1}^{N}\mathbf{c}_{n}(t)\mathbf{\Phi}_{n}(x,y,z). \tag{3.1}\]
The SVD algorithm factorizes the snapshot matrix \(\mathbf{V}_{1}^{K}\), eq. (2.1), which is decomposed into the spatial orthogonal modes \(\mathbf{W}\) (the SVD or POD modes), the temporal modes \(\mathbf{T}\) and the singular values \(\mathbf{\Sigma}\) as
\[\mathbf{V}_{1}^{K}\simeq\mathbf{W}\,\mathbf{\Sigma}\,\mathbf{T}^{\top}. \tag{3.2}\]
where \(()^{\top}\) denotes the matrix transpose. The diagonal of matrix \(\mathbf{\Sigma}\) contains the singular values \(\sigma_{1},\cdots,\sigma_{K}\), and \(\mathbf{W}^{\top}\mathbf{W}=\mathbf{T}^{\top}\mathbf{T}\) is the \(N\times N\) unit matrix, being \(N\) the number of SVD modes retained. This parameter is also called the _spatial complexity_, and will be referred to in the following sections, when DMD-based methods are introduced. It is worth remarking the difference between \(J\), the _spatial dimension_ of the database, and the spatial complexity \(N\), where \(N\leq\min(J,K)\).
SVD modes are ranked in descending order based on their singular values. Typically, the modes with the highest singular values embody the system's general dynamics, representing coherent structures or patterns; meanwhile, the modes with the smallest singular values may be omitted from the approximation, assuming a certain level of error. These modes could be related to noise, especially in the case of experimental databases, to spatial redundancies or, in fluid dynamics, they are generally connected (although not always) to small-size flow scales.
The number \(N\) of SVD modes to be retained in the approximation, to construct the expansion eq. (3.1), can be determined using different criteria, as discussed in Ref. [22] (in this context, the POD or SVD approaches are also referred to as _Principal Component Analysis_ - PCA). In the present context, the number of retained modes is estimated for a certain (tunable) tolerance, \(\varepsilon_{svd}\), which could be comparable to the level of noise (in the case of experimental results), could be connected to the size of the flow structures (in turbulent flows), et cetera, defined as
\[\sigma_{N+1}/\sigma_{1}\leq\varepsilon_{svd}. \tag{3.3}\]
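A minimal sketch of this truncation criterion applied to the snapshot matrix is shown below; function and variable names are illustrative and do not correspond to the ModelFLOWs-app implementation.

```python
import numpy as np

def pod_svd(V, eps_svd=1e-6):
    """SVD of the snapshot matrix with truncation sigma_{N+1}/sigma_1 <= eps_svd."""
    W, s, Tt = np.linalg.svd(V, full_matrices=False)
    # Number of retained modes N: keep modes with sigma_n/sigma_1 > eps_svd
    N = int(np.sum(s / s[0] > eps_svd))
    return W[:, :N], s[:N], Tt[:N, :].T      # spatial modes, singular values, temporal modes

# Toy usage: rank-deficient data is compressed to a few modes
V = np.random.randn(500, 3) @ np.random.randn(3, 80)
W, sigma, T = pod_svd(V, eps_svd=1e-8)
print(W.shape, sigma.shape, T.shape)         # (500, 3) (3,) (80, 3)
```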
### High order singular value decomposition (HOSVD)
Introduced by Tucker in 1966 [23], the HOSVD algorithm has gained popularity in recent years, particularly due to its implementation by de Lathauwer et al. [24; 25]. This algorithm has proven to be effective in diverse fields such as aeronautic database generation [26], database compression [27], conceptual aeronautic design [28], and real-time control of automotive engines [29].
HOSVD decomposes databases organized in tensor form, where SVD is applied to each one of the fibers of the tensor. For instance, HOSVD of the fifth order tensor defined in eq. (2.4) is presented as
\[V_{j_{1}j_{2}j_{3}j_{4}k}\simeq\sum_{p_{1}=1}^{P_{1}}\sum_{p_{2}=1}^{P_{2}} \sum_{p_{3}=1}^{P_{3}}\sum_{p_{4}=1}^{P_{4}}\sum_{n=1}^{N}S_{p_{1}p_{2}p_{3}p_{ 4}n}W_{j_{1}p_{1}}^{(1)}W_{j_{2}p_{2}}^{(2)}W_{j_{3}p_{3}}^{(3)}W_{j_{4}p_{4}}^ {(4)}T_{kn}, \tag{3.4}\]
where \(\mathbf{S}_{p_{1}p_{2}p_{3}p_{4}n}\) is the _core tensor_, another fifth-order tensor, and the columns of the matrices \(\mathbf{W}^{(1)}\), \(\mathbf{W}^{(2)}\), \(\mathbf{W}^{(3)}\), \(\mathbf{W}^{(4)}\) and \(\mathbf{T}\) are known as the _modes_ of the decomposition.
The first set of modes (i.e., the columns of the matrices \(\mathbf{W}^{(l)}\) for \(l=\)1, 2, 3 and 4) correspond to the components of the database and the spatial variables, so they are known as the spatial HOSVD modes, while the columns of the matrix \(\mathbf{T}\) correspond to the time variable; these are the temporal HOSVD modes.
The singular values of the decomposition are now formed by five sets of values,
\[\sigma_{p_{1}}^{(1)},\quad\sigma_{p_{2}}^{(2)},\quad\sigma_{p_{3}}^{(3)},\quad \sigma_{p_{4}}^{(4)},\quad\text{and }\sigma_{n}^{t}, \tag{3.5}\]
which are also sorted in decreasing order.
Similarly to SVD, without truncation the HOSVD (3.4) is exact. Nevertheless, truncation is advised to filter noise and spurious artifacts, or to reduce the data dimensionality, depending on the needs of the application. As in SVD, the number of modes retained in each case generally depends on a (tunable) tolerance as
\[\begin{split}\sigma_{P_{1}+1}/\sigma_{1}&\leq \varepsilon_{svd_{1}},\\ \sigma_{P_{2}+1}/\sigma_{1}&\leq\varepsilon_{svd_{2} },\\ \sigma_{P_{3}+1}/\sigma_{1}&\leq\varepsilon_{svd_{3} },\\ \sigma_{P_{4}+1}/\sigma_{1}&\leq\varepsilon_{svd_{4} },\\ \sigma_{N+1}/\sigma_{1}&\leq\varepsilon_{svd_{5} }.\end{split} \tag{3.6}\]
Generally, the tolerance is set the same for all the cases, so \(\varepsilon_{svd_{l}}=\varepsilon_{svd}\), for \(l=1,2,3,4,5\), although, for highly complex dynamics, these tolerances should be set differently, depending on the database studied. This is one of the main advantages of the HOSVD algorithm compared to SVD, where the dimensionality of all the directions and components of the database is reduced in the same way. Using HOSVD, it is possible to distinguish between different noise levels or component magnitudes.
After truncation, HOSVD (3.4) is written as
\[V_{j_{1}j_{2}j_{3}j_{4}k}\simeq\sum_{n=1}^{N}W_{j_{1}j_{2}j_{3}j_{4}n}\hat{V}_{ kn}, \tag{3.7}\]
where \(W_{j_{1}j_{2}j_{3}j_{4}n}\) and \(\hat{V}_{kn}\) are the spatial and temporal modes, and \(N\) is the spatial complexity defined above. The spatial modes are defined as
\[W_{j_{1}j_{2}j_{3}j_{4}n}=\sum_{p_{1}=1}^{P_{1}}\sum_{p_{2}=1}^{P_{2}}\sum_{p_{3}=1}^{P_{3}}\sum_{p_{4}=1}^{P_{4}}S_{p_{1}p_{2}p_{3}p_{4}n}W_{j_{1}p_{1}}^{(1)}W_{j_{2}p_{2}}^{(2)}W_{j_{3}p_{3}}^{(3)}W_{j_{4}p_{4}}^{(4)}/\sigma_{n}^{t},\quad\hat{V}_{kn}=\sigma_{n}^{t}T_{kn}. \tag{3.8}\]
### Gappy SVD and gappy HOSVD for data repairing and resolution enhancement
Repairing databases with corrupted or incomplete information, enlarging the dimension of the original database, and increasing its spatial resolution are some of the most relevant applications of SVD. In addition to the classical applications of SVD, pattern identification and data dimensionality reduction, when used properly, SVD is also a very useful tool for the post-processing and treatment of databases. The algorithm for this application is very simple and is based on the properties of the decomposition, which re-organizes the SVD modes as a function of their contribution to the reconstruction of the original database.
Starting from the snapshot matrix eq. (2.1), with dimension \(J\times K\), the algorithms for data repairing or resolution enhancement are presented below. For simplicity, the SVD-based algorithms introduced are particularized for a single two-dimensional snapshot, organized in matrix form, while for three-dimensional databases and/or temporal variations, the algorithms use HOSVD and the snapshot tensor (2.4), or its corresponding version adapted to the number of components composing the tensor.
#### 3.3.1 Gappy SVD
Gappy SVD, also known as Gappy POD, uses SVD iteratively to repair and reconstruct incomplete or corrupted databases. The database analysed is particularized for a single snapshot vector \(\mathbf{v}_{k}\) with dimension \(J\times\mathbf{1}\), which is re-organized in matrix form, named as \(\widehat{\mathbf{V}}^{0}\) with dimension \(\widehat{N}_{1}\times\widehat{N}_{2}=J\), being \(\widehat{N}_{1}\) and \(\widehat{N}_{2}\) the dimensions associated to the streamwise and normal directions. The initial database contains some information that is corrupted, for instance, given by \(NaN\) (_Not a Number_) information. To repair the database, Gappy SVD algorithm is as follows:
1. Initialize the database, \(\widehat{\mathbf{V}}^{0}\), giving the \(NaN\) points an initial value, which can be zero or can be calculated as the mean or as a linear or non-linear interpolation between the surrounding points.
2. Apply SVD to the previous matrix \(\widehat{\mathbf{V}}^{i}=\mathbf{W}^{i}\mathbf{\Sigma}^{i}(\mathbf{T}^{i})^{T}\), for \(i=0\) in the initial iteration, and reduce the previous matrix dimensions by retaining \(P^{\prime}\) singular values (tunable).
3. Reconstruct the new reduced snapshot matrix \[\widehat{\mathbf{V}}^{i+1}=\mathbf{W}^{i}_{P^{\prime}}\mathbf{\Sigma}^{i}_{P^{\prime}}( \mathbf{T}^{i}_{P^{\prime}})^{T},\] (3.9) with \(\mathbf{W}^{i}_{P^{\prime}}\in\mathbb{R}^{\widehat{N}_{1}\times P^{\prime}}\), \(\mathbf{\Sigma}^{i}_{P^{\prime}}\in\mathbb{R}^{P^{\prime}\times P^{\prime}}\), and \(\mathbf{T}^{i}_{P^{\prime}}\in\mathbb{R}^{\widehat{N}_{2}\times P^{\prime}}\).
4. Update the gaps with the values of \(\widehat{\mathbf{V}}^{i+1}_{k}\).
5. Calculate the MSE at the gaps between iterations as \[MSE_{gaps}=\frac{1}{N_{gaps}}\sqrt{\sum_{n=1}^{N_{gaps}}|\widehat{\mathbf{V}}^{i} -\widehat{\mathbf{V}}^{i-1}|},\] (3.10) where \(N_{gaps}\) is the number of gaps in the database. While \(MSE_{gaps}\geq 10^{-6}\), update \(i=i+1\) and repeat steps 2-4 on the new matrix from Step 4.
The final reconstructed matrix \(\widehat{\mathbf{V}}^{i+1}_{k}\) (for the last iteration \(i\)) is the new repaired matrix. Figure 3 shows a sketch representing the general methodology.
Figure 3: Gappy SVD: sketch summarizing the methodology.
For three-dimensional data, the algorithm uses databases organized in tensor form, which, particularized for a single snapshot, have dimension \(\widehat{N}_{1}\times\widehat{N}_{2}\times\widehat{N}_{3}\), with \(\widehat{N}_{3}\) as the spanwise component. The algorithm is then the same, but SVD is replaced by HOSVD to properly repair all the components of the database. This algorithm is also valid for larger dimension databases, or it can be applied to other components of the database, different from the spatial dimensions. See more details of the algorithm in Refs. [30; 31; 32].
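A minimal numpy sketch of the iterative loop in steps 1-5 for a single two-dimensional snapshot is given below; the initialization with the mean, the fixed number of retained singular values and the convergence check are simplified assumptions, not the ModelFLOWs-app implementation.

```python
import numpy as np

def gappy_svd(V0, P, tol=1e-6, max_iter=500):
    """Sketch of the iterative Gappy SVD repair (steps 1-5). NaN entries of V0
    are the gaps; P is the number of retained singular values (tunable)."""
    gaps = np.isnan(V0)
    V = V0.copy()
    V[gaps] = np.nanmean(V0)                               # step 1: initialize gaps
    for _ in range(max_iter):
        W, s, Tt = np.linalg.svd(V, full_matrices=False)   # step 2: SVD
        V_rec = (W[:, :P] * s[:P]) @ Tt[:P, :]             # step 3: rank-P reconstruction
        change = np.mean(np.abs(V_rec[gaps] - V[gaps]))    # step 5: change at the gaps
        V[gaps] = V_rec[gaps]                              # step 4: update gaps only
        if change < tol:
            break
    return V

# Toy usage: low-rank field with roughly 10% of missing entries
X = np.outer(np.sin(np.linspace(0, 3, 60)), np.cos(np.linspace(0, 5, 40)))
Xg = X.copy(); Xg[np.random.rand(*X.shape) < 0.1] = np.nan
X_repaired = gappy_svd(Xg, P=1)
```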
#### 3.3.2 Increasing resolution using modal decomposition
SVD can be used to increase the resolution of a database. The method uses SVD iteratively, and it also takes advantage of the properties of SVD to re-organize the SVD modes as a function of their contribution to the original database. The algorithm is particularized for a single snapshot \(\mathbf{v}_{k}\), which is organized in matrix form \(\mathbf{V}^{DS,i}\) (\(i\) is the iteration number of the algorithm) with (down-sampled) dimension \(N_{1}\times N_{2}<J\), being \(N_{1}\) and \(N_{2}\) the dimensions associated to the streamwise and normal directions. The main goal of this algorithm is to obtain a new database better resolved in space, with dimension \(\widehat{N}_{1}\times\widehat{N}_{2}=J\). The algorithm is as follows:
**Step 1.**: Apply SVD to the initial under-resolved or downsampled database, and set the number of singular values to \(P^{\prime}\) (tunable), as
\[\mathbf{V}^{DS,i}\simeq\mathbf{W}^{DS,i}_{P^{\prime}}\Sigma^{DS,i}_{P^{\prime}}(\mathbf{T} ^{DS,i}_{P^{\prime}})^{T}, \tag{3.11}\]
with \(\mathbf{W}^{DS,i}_{P^{\prime}}\in\mathbb{R}^{N_{1}\times P^{\prime}}\), \(\mathbf{\Sigma}^{DS,i}_{P^{\prime}}\in\mathbb{R}^{P^{\prime}\times P^{\prime}}\) and \(\mathbf{T}^{DS,i}_{P^{\prime}}\in\mathbb{R}^{N_{2}\times P^{\prime}}\), for \(i=0\) in the first iteration.
**Step 2.**: Enlarge the dimension of the matrices from the previous decomposition using a linear (or non-linear) interpolation as \(\mathbf{W}^{DS,i+1}_{P^{\prime}}\in\mathbb{R}^{2N_{1}\times P^{\prime}}\) and \(\mathbf{T}_{P^{\prime}}^{DS,i+1}\in\mathbb{R}^{2N_{2}\times P^{\prime}}\), interpolating between two points, and reconstruct the new enlarged matrix \[\mathbf{V}^{DS,i+1}\simeq\mathbf{W}_{P^{\prime}}^{DS,i+1}\Sigma_{P^{\prime}}^{DS,i}( \mathbf{T}_{P^{\prime}}^{DS,i+1})^{T},\] (3.12)
**Step 3.**: Update the iteration number as \(i=i+1\).
**Step 4.**: Repeat steps 1 - 3 \(i=s\) times, until \(2^{s}\times N_{1}=\widehat{N}_{1}\) and \(2^{s}\times N_{2}=\widehat{N}_{2}\).
The new matrix \(\mathbf{V}^{DS,s+1}\) is the matrix with enhanced resolution.
When working with three-dimensional data, the algorithm employs tensor databases that are organized in a specific way. For a single snapshot, the database dimension is \(\widehat{N}_{1}\times\widehat{N}_{2}\times\widehat{N}_{3}\), where \(\widehat{N}_{3}\) represents the spanwise component. To refine this process further, the previous algorithm is modified by replacing SVD with HOSVD. Also, when dealing with temporal components, HOSVD is used to enhance the resolution of the databases, and the associated temporal information (tunable) can either remain constant within the algorithm, only enhancing the resolution of the spatial components, or may also be enlarged, interpolating to time instants that were not included in the original dataset.
See more details regarding this algorithm and some applications in Refs. [33; 14].
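A minimal numpy sketch of one refinement iteration (steps 1-2) for a single two-dimensional snapshot is shown below, using linear interpolation of the retained modes to double the number of points in each direction; it is an illustration under simplified assumptions, not the ModelFLOWs-app implementation.

```python
import numpy as np

def enhance_resolution_2x(V_ds, P):
    """Sketch of one iteration of the SVD-based upsampling (steps 1-2):
    SVD of the coarse snapshot, linear interpolation of each retained
    mode to twice as many points, and reconstruction."""
    W, s, Tt = np.linalg.svd(V_ds, full_matrices=False)        # step 1
    N1, N2 = V_ds.shape
    x_old, x_new = np.linspace(0, 1, N1), np.linspace(0, 1, 2 * N1)
    y_old, y_new = np.linspace(0, 1, N2), np.linspace(0, 1, 2 * N2)
    # Step 2: enlarge both mode matrices column by column
    W_up = np.column_stack([np.interp(x_new, x_old, W[:, p]) for p in range(P)])
    T_up = np.column_stack([np.interp(y_new, y_old, Tt[p, :]) for p in range(P)])
    return (W_up * s[:P]) @ T_up.T                              # 2*N1 x 2*N2 snapshot

# Toy usage: refine a 20 x 15 snapshot to 40 x 30
V_coarse = np.random.randn(20, 5) @ np.random.randn(5, 15)
V_fine = enhance_resolution_2x(V_coarse, P=5)
print(V_fine.shape)     # (40, 30)
```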
### Higher order dynamic mode decomposition (HODMD)
Higher order dynamic mode decomposition (HODMD) [34] is an extension of dynamic mode decomposition (DMD) [2] introduced for the analysis of complex flows and highly non-linear dynamical systems [35].
HODMD decomposes spatio-temporal data \(\mathbf{v}_{k}\), as an expansion of \(M\) DMD modes \(\mathbf{u}_{m}\), weighted by their corresponding amplitude \(a_{m}\) as
\[\mathbf{v}(x,y,z,t_{k})\simeq\sum_{m=1}^{M}a_{m}\mathbf{u}_{m}(x,y,z)e^{(\delta_{m}+i \omega_{m})t_{k}}, \tag{3.13}\]
for \(k=1,\ldots,K\), where \(\omega_{m}\) is the oscillation frequency and \(\delta_{m}\) corresponds to the growth rate, indicating whether the mode grows, decays or remains neutral in time.
HODMD algorithm is briefly introduced here, where the method is summarized in two main steps. A detailed description of the algorithm can be found in Ref. [34]. We recommend the book by Vega & Le Clainche [35],
where it is possible to find a wide range of examples and applications, as well as the implementation of the algorithms in Matlab [36].
The algorithm is presented as follows.
* **Step 1: Dimension reduction via SVD.** SVD is applied to the snapshot matrix (2.1) to remove noise or spatial redundancies and reduce data dimensionality from the spatial dimension \(J\) to the spatial complexity \(N\) (number of SVD modes retained). At this step, tolerance \(\varepsilon_{svd}\) is fixed to select \(N\). Starting from eq. (3.2), it is possible to define the _reduced snapshot matrix_ as \[\widehat{\mathbf{V}}_{1}^{K}=\mathbf{\Sigma}\,\mathbf{T}^{T},\] (3.14) with \(\mathbf{V}_{1}^{K}=\mathbf{W}\widehat{\mathbf{V}}_{1}^{K}\) and \(\widehat{\mathbf{V}}_{1}^{K}\) with dimension \(N\times K\).
* **Step 2: The DMD-d algorithm.** The _high order Koopman assumption_ is applied to the reduced snapshot matrix as \[\widehat{\mathbf{V}}_{d+1}^{K}\simeq\widehat{\mathbf{R}}_{1}\widehat{\mathbf{V}}_{1}^{K-d} +\widehat{\mathbf{R}}_{2}\widehat{\mathbf{V}}_{2}^{K-d+1}+\ldots+\widehat{\mathbf{R}}_{d} \widehat{\mathbf{V}}_{d}^{K-1},\] (3.15) where \(\widehat{\mathbf{R}}_{k}=\mathbf{W}^{T}\mathbf{R}_{k}\mathbf{W}\) for \(k=1,\ldots,d\) are the Koopman operators, linear, which contain the system dynamics. The snapshot matrix eq. (2.1) is then divided into \(d\) blocks formed by \(K-d\) time-delayed snapshots each. For \(d=1\), the method is similar as standard DMD [2]. The previous equation is reorganized in the following way \[\left[\begin{array}{c}\widehat{\mathbf{V}}_{2}^{K-d+1}\\ \ldots\\ \widehat{\mathbf{V}}_{d}^{K-1}\\ \widehat{\mathbf{V}}_{d+1}^{K}\end{array}\right]=\left[\begin{array}{cccccc}\bm {0}&\mathbf{I}&\mathbf{0}&\ldots&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{I}&\ldots&\mathbf{0}&\mathbf{0}\\ \ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&\ldots&\mathbf{I}&\mathbf{0}\\ \widehat{\mathbf{R}}_{1}&\widehat{\mathbf{R}}_{2}&\widehat{\mathbf{R}}_{3}&\ldots& \widehat{\mathbf{R}}_{d-1}&\widehat{\mathbf{R}}_{d}\end{array}\right].\left[\begin{array} []{c}\widehat{\mathbf{V}}_{1}^{K-d}\\ \widehat{\mathbf{V}}_{2}^{K-d+1}\\ \ldots\\ \widehat{\mathbf{V}}_{d}^{K-1}\end{array}\right].\] (3.16) The previous expression can also be represented as \[\tilde{\mathbf{V}}_{2}^{K-d+1}=\tilde{\mathbf{R}}\,\tilde{\mathbf{V}}_{1}^{K-d},\] (3.17) where \(\tilde{\mathbf{V}}_{1}^{K-d+1}\) and \(\tilde{\mathbf{R}}\) correspond to the _modified snapshot matrix_ and _modified Koopman matrix_, respectively. SVD is again applied to this
matrix to remove expected spatial redundancies, where the tolerance \(\varepsilon_{svd}\) is again used to retain \(N^{\prime}>N\) SVD modes, as
\[\tilde{\sigma}_{N^{\prime}+1}/\tilde{\sigma}_{1}<\varepsilon_{svd}. \tag{3.18}\]
It is remarkable that the key of the good performance of HODMD algorithm lies in matrix \(\tilde{\mathbf{R}}\), which increases the spatial complexity of the data from \(N\) to \(N^{\prime}\).
The next step calculates the DMD modes to form the DMD expansion eq. (3.13). These modes are obtained via solving an eigenvalue problem in the new reduced version of the modified Koopman matrix, whose eigenvalues and eigenvectors are adapted to represent the DMD frequencies and growth rates, and the DMD modes, respectively. Finally, the mode amplitudes are calculated via least-squares fitting, following the optimized DMD method [37]. It is remarkable that depending on the way the amplitudes are calculated, the mode weight can be different. See for instance HODMD with criterion (HODMDc) [5], where the modes also consider the contribution of the growth rates in the amplitude calculations. A detailed description on different ways to calculate the DMD mode amplitudes can be found in the review paper Ref. [38].
Finally, sorting the mode amplitudes in decreasing order, the DMD modes are reordered. The \(M\leq N^{\prime}\) retained DMD modes are calculated based on a second tolerance \(\varepsilon_{a}\) (tunable) as
\[a_{M+1}/a_{1}<\varepsilon_{a}. \tag{3.19}\]
\(M=\min(K,N^{\prime})\) is known as the _spectral complexity_, different from the _spectral dimension_ \(K\). For a sufficiently large number of snapshots \(K\) (which is the common case), in cases in which the spatial complexity \(N\) is smaller than the spectral complexity \(M\) (for \(K>N\)), the high order Koopman assumption completes the lack of spatial information (reduced to \(N\)) and ensures the good performance of the DMD method. When using standard DMD, it is possible to find some cases in which \(N<M\), hence the algorithm fails. HODMD, in contrast, overcomes such limitation, extending the range of application of the standard algorithm. The HODMD method has shown potential in the study of various flow types such as the transition to turbulence [39; 40] and turbulent flows
[6; 7], identification of flow features from experimental data with noise [39; 41; 16], in the analysis of data obtained from limited spatial locations like field measurements [16; 42], or even in medical imaging [13].
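To make the high order Koopman assumption in eq. (3.15) more concrete, the following sketch only illustrates how the \(d\) time-delayed blocks of the reduced snapshot matrix entering the DMD-d step can be arranged; it is not a full HODMD implementation.

```python
import numpy as np

def delayed_blocks(V_hat, d):
    """Stack d time-delayed blocks of the reduced snapshot matrix (N x K):
    rows [V_1^{K-d}; V_2^{K-d+1}; ...; V_d^{K-1}] used in the DMD-d step."""
    N, K = V_hat.shape
    return np.vstack([V_hat[:, i:K - d + i] for i in range(d)])   # (d*N) x (K-d)

# Toy usage with N = 4 reduced modes, K = 50 snapshots and d = 5
V_hat = np.random.randn(4, 50)
V_delayed = delayed_blocks(V_hat, d=5)
print(V_delayed.shape)    # (20, 45)
```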
### Multi-dimensional iterative higher order dynamic mode decomposition (mdHODMD-it)
The multi-dimensional HODMD (mdHODMD) algorithm uses the snapshot tensor given in equation (2.4). So, instead of SVD, HOSVD is applied as a first step to reduce the data dimensionality. This method considers the variables and components of the database separately, so different numbers of modes can be selected to better represent each fiber forming the tensor.
The algorithm is summarized in two steps:
* **Step 1:** Application of HOSVD. 1.1 Perform HOSVD on the snapshot tensor \(\mathbf{V}\), eq. (2.4), without truncation. The ranks of the matrices, whose columns are the fibers, determine the number of modes in the tensor \(\mathbf{V}\). These ranks are \(P_{1}=\min(J_{1},J_{2}J_{3}J_{4}K)\), \(P_{2}=\min(J_{2},J_{1}J_{3}J_{4}K)\), \(P_{3}=\min(J_{3},J_{1}J_{2}J_{4}K)\), \(P_{4}=\min(J_{4},J_{1}J_{2}J_{3}K)\), and \(N=\min(K,J_{1}J_{2}J_{3}J_{4})\). The resulting singular values are selected as shown in equation (3.5). 1.2 Choose spatial and temporal tolerances, denoted as \(\varepsilon_{svd}\) and \(\varepsilon_{a}\) respectively, to determine the number of modes retained in each direction. These numbers are denoted by \(P_{1}\), \(P_{2}\), \(P_{3}\), \(P_{4}\) and \(N\). The smallest values of \(P_{1}\), \(P_{2}\), \(P_{3}\), \(P_{4}\) and \(N\) that satisfy the condition given by equation (3.6) are chosen. 1.3 Perform truncated HOSVD with the numbers of modes determined in the previous step. This gives the core tensor and the modes that define the truncated HOSVD on the right-hand side of equation (3.4). 1.4 Compute the spatial and temporal modes as defined in equation (3.8).
* **Step 2:** Calculation of the multidimensional DMD expansion. Step 2 of the HODMD algorithm described in §3.4 is applied to the reduced snapshot matrix defined by the temporal modes \(\hat{V}_{kn}\) calculated in the previous step, item 1.4. The mode amplitudes, frequencies, growth rates and DMD modes are calculated at this step.
When the data are too noisy or too complex, mdHODMD can be applied iteratively, and the algorithm is then called multi-dimensional iterative HODMD (mdHODMD-it). More specifically, as a first step, the mdHODMD technique is reapplied to the reconstructed snapshots with the same tolerances as in the first application, resulting in cleaner data. Next, the algorithm is applied to the newly reconstructed snapshots, and this process repeats until the number of HOSVD modes is maintained after two consecutive iterations. With each iteration, the algorithm recalculates and orders both HOSVD and DMD modes based on their new corresponding amplitudes, ultimately improving the quality of the DMD reconstructions. The primary benefit of the iterative approach is the removal of irrelevant or inconsistent HOSVD modes based on the tolerance \(\varepsilon_{svd}\).
## 4 Module 2: hybrid machine learning tools
Module 2 is formed by a group of deep learning algorithms that are combined with the modal decomposition methods presented in Module 1. Modal decomposition methods are suitable for identifying patterns that contain relevant information about the physics of the dynamical system. As previously mentioned, these techniques have excellent properties for reducing the data dimensionality, which is very convenient in fluid dynamics, other complex problems and industrial applications. The dimensionality of the original database is reduced from hundreds of thousands or even millions of degrees of freedom to a few POD or DMD modes, generally varying from dozens to hundreds depending on the problem under study. These modes represent the dynamics of the system, which is described by the modal decomposition with physical interpretability. As a second step, the reduced database is combined with different deep learning algorithms. ModelFLOWs-app in particular offers two possibilities, convolutional neural networks (CNNs) and recurrent neural networks (RNNs), although the algorithm is open to being combined with other types of architectures. This so-called hybrid ROM can be used for several applications, including data repairing and reconstruction (Module 2 - application 2) and temporal forecasting (Module 2 - application 3), which is used to predict the non-linear dynamics in complex systems. This application is able to predict saturated (converged or statistically steady) solutions from transient stages in numerical simulations, which generally results in a notable reduction of the computational time needed to generate new databases. In the examples presented, the computational cost is reduced from hundreds or even thousands of computational hours to a few seconds [43].
Module 2 - application 1 also explores the possibility of using deep learning architectures to identify patterns in the dynamical system under study. For such an aim, several autoencoder architectures have been tested and compared with results obtained with modal decompositions. This application completes the Deep Learning Module, which is complementary to the modal decomposition module. Module 1 and Module 2 solve similar applications using classical (modal decomposition) and more modern (neural networks) tools, respectively.
### Dimensionality reduction via autoencoders (AEs)
Autoencoders (AEs) are unsupervised neural networks designed to learn a compressed representation of a database. The encoder is responsible for projecting the data onto a low-dimensional nonlinear manifold, \(\mathbf{v}\mapsto\mathbf{r}\), while the decoder reconstructs the data from the latent space back to the reference space and reduces the reconstruction error; see Fig. 4.
Figure 4: Basic architecture of autoencoders.
By training the autoencoder model, it acquires the ability to identify the crucial features in the data necessary for reconstruction. This is accomplished by optimizing the model parameters \(\mathbf{w}\) to minimize the reconstruction loss \(\mathcal{L}_{\text{rec}}\). For \(\mathcal{E}\) and \(\mathcal{D}\) being the encoder and decoder respectively, this can be expressed as,
\[\mathcal{F}=\mathcal{D}\circ\mathcal{E}, \tag{4.1a}\] \[\tilde{\mathbf{v}}=\mathcal{F}(\mathbf{v};\mathbf{w}), \tag{4.1b}\] \[\mathcal{L}_{\text{rec}}=\epsilon(\mathbf{v},\tilde{\mathbf{v}}), \tag{4.1c}\]
being \(\epsilon\), \(\mathbf{v}\) and \(\tilde{\mathbf{v}}\) the loss function, the input data, and the reconstruction of the input data, respectively.
The simplest form of AE is formed by non-recurrent, feed-forward neural networks that comprise an input layer, one or more hidden layers, and an output layer with the same number of neurons as the input layer. Their primary objective is to reconstruct the input data, minimizing the difference between input and output through a loss function that can accommodate regularization and sparsity terms. ModelFLOWs-app uses this type of AE to identify patterns in the data analysed. Different types of activation functions, numbers of layers, or compression rates, among other parameters, can be selected. The snapshot matrix eq. (2.1) is the input of the AE architecture used, and the output provides the reconstructed matrix using a specific (tunable) number of AE modes, as well as the patterns collected in each mode. These modes are re-organized in decreasing order as a function of their influence on the reconstructed field, minimising the RRMSE reconstruction error eq. (2.5).
It is important to note that a shallow AE with linear activation functions is equivalent to principal component analysis (PCA), an extension of POD. Hence, in this case, similarities are found between AE modes and POD (or SVD) modes. Nevertheless, in the context of modal decomposition, the autoencoder architecture offers an appealing framework that can effectively incorporate non-linearity in mappings by utilizing non-linear activation functions [44].
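A minimal sketch of such a fully connected autoencoder, assuming a TensorFlow/Keras environment, is shown below; the layer sizes, activation functions and latent dimension are illustrative choices, not the ModelFLOWs-app defaults.

```python
import numpy as np
from tensorflow import keras

J, K, latent_dim = 2048, 500, 10              # spatial dimension, snapshots, latent size
V = np.random.randn(K, J).astype("float32")   # one sample per snapshot (rows)

encoder = keras.Sequential([
    keras.layers.Dense(256, activation="tanh", input_shape=(J,)),
    keras.layers.Dense(latent_dim, activation="tanh"),
])
decoder = keras.Sequential([
    keras.layers.Dense(256, activation="tanh", input_shape=(latent_dim,)),
    keras.layers.Dense(J, activation="linear"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")   # reconstruction loss, eq. (4.1c)
autoencoder.fit(V, V, epochs=50, batch_size=32, verbose=0)

latent = encoder.predict(V)   # compressed representation r of each snapshot
```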
### Hybrid reduced order models
Similarly to the HODMD algorithm (section 3.4), hybrid ROMs are defined by two main steps. The first step starts from eq. (3.2), repeated here for simplicity as
\[\mathbf{V}_{1}^{K}\simeq\mathbf{W}\,\mathbf{\Sigma}\,\mathbf{T}^{\top}, \tag{4.2}\]
to define the _reduced snapshot matrix_ (with dimension \(N\times K\)) as
\[\widehat{\mathbf{V}}_{1}^{K}=\mathbf{\Sigma}\,\mathbf{T}^{T}, \tag{4.3}\]
with \(\mathbf{V}_{1}^{K}=\mathbf{W}\widehat{\mathbf{V}}_{1}^{K}\). The second step applies deep learning architectures to this reduced matrix. Depending on the type of architecture, it is possible to predict the temporal evolution of a signal or to reconstruct two- or even three-dimensional databases from sensors, as presented in sections 4.2.1 and 4.2.2, respectively. Figure 5 shows a sketch summarizing the hybrid ROM methodology for temporal forecasting, where \(p\) snapshots are predicted. The initial snapshot matrix \(\mathbf{V}_{1}^{K}\), eq. (2.1), is then transformed into a new snapshot matrix containing \(p\) additional snapshots, \(\mathbf{V}_{1}^{K+p}\).
It is remarkable that using this method, one-dimensional architectures are required, since they are directly applied to the reduced snapshot matrix. In contrast, two- or even three-dimensional architectures should be applied to the original snapshot matrix, which also contains relevant spatial information of the database. Hence, the reduction in computational time and memory of this hybrid methodology is one of the key points of the algorithms presented.
Figure 5: Steps involved in the architecture of the hybrid reduced order models.
The performance of the architectures presented below is optimal for standard fluid dynamics problems. However, when the complexity of the database increases, it is necessary to take into account the magnitude and variance of the variables analysed. For instance, in reactive flows, where a large number of variables representing the multiple species of the flow are involved (more than 80), it is necessary to combine the previous methodology with other pre-processing techniques, such as centering and scaling. These two methods are applied to each one of the multiple variables of the database before forming the snapshot matrix \(\mathbf{V}_{1}^{K}\), eq. (2.1). Centering removes the temporal mean of each variable; in this way, the analysis only considers the fluctuation field. Scaling normalizes all the variables so they can be compared on the same basis. There are several possibilities to scale the data. The following equation represents centering and scaling of the data,
\[\tilde{\mathbf{v}}_{j}(t_{k})=\frac{\mathbf{v}_{j}(t_{k})-\bar{v}_{j}}{c_{j}}, \tag{4.4}\]
where \(\mathbf{v}_{j}\) is the \(j\)-_th_ variable, \(\bar{v}_{j}\) is the mean averaged in time, \(c_{j}\) is the scaling factor used and \(\tilde{\mathbf{v}}_{j}\) is the scaled variable. Three scaling techniques have been implemented in ModelFLOWs-app:
1. _Range_ scaling: The difference between the maximum and minimum value of each variable is used as the scaling factor.
2. _Auto_ scaling: the standard deviation of the \(j\)-th variable (\(\sigma_{j}\)) is used as the scaling factor.
3. _Pareto_ scaling: the square root of the standard deviation of the \(j\)-th variable (\(\sqrt{\sigma_{j}}\)) is used as the scaling factor.
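The three options reduce to eq. (4.4) with different scaling factors \(c_{j}\); a compact sketch, assuming one row per variable and one column per time instant, is:

```python
import numpy as np

def center_and_scale(V, method="auto", eps=1e-12):
    """Centering and scaling, eq. (4.4): rows of V are variables, columns are snapshots."""
    mean = V.mean(axis=1, keepdims=True)
    if method == "range":
        c = V.max(axis=1, keepdims=True) - V.min(axis=1, keepdims=True)
    elif method == "auto":
        c = V.std(axis=1, keepdims=True)
    elif method == "pareto":
        c = np.sqrt(V.std(axis=1, keepdims=True))
    else:
        raise ValueError(f"unknown scaling method: {method}")
    return (V - mean) / (c + eps)

# Toy usage: 80 variables (e.g. species), 200 snapshots
V = np.random.randn(80, 200) * np.arange(1, 81)[:, None]
V_scaled = center_and_scale(V, method="pareto")
```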
Several studies in the literature [22; 45; 46] have examined the impact of centering and scaling on modal decomposition techniques like PCA and HODMD in the context of combustion applications.
#### 4.2.1 Time-series forecasting models using convolutional and recurrent neural networks (CNN and RNN)
The proposed architectures have been chosen to best approximate the temporal evolution of high-dimensional time series driven by strongly sequential dynamics. The architectures are designed to predict the longest possible data sequence while minimizing the quantity of training data. In this way, the predictive hybrid ROM can be used to predict new databases in numerical simulations, which are generally associated with large computational costs. This strategy also helps to avoid overfitting.
Fig. 6 shows a sketch of these predictive deep learning models. Each column of the reduced matrix represents the temporal dynamics of one snapshot. The model predicts the snapshot \(k+1\), given at time \(t_{k+1}\) in the reduced matrix as \(\widehat{\mathbf{V}}_{k+1}\), by utilizing data from the \(q\) previous snapshots, defined as \(\widehat{\mathbf{V}}_{k}\), \(\widehat{\mathbf{V}}_{k-1}\), \(\cdots\), \(\widehat{\mathbf{V}}_{k-q+1}\).
The model using CNN architectures is composed of one-dimensional convolutional (Conv 1D) layers followed by FC layers. Conv 1D applies a one-dimensional kernel, with no padding and a stride of 1 [47]. A Flatten function is included in the intermediate layers that bridge the gap between convolutional and FC layers, transforming the matrix structure employed by Conv 1D into a vector structure compatible with the FC layers.
The RNN model is formed by long short-term memory (LSTM) layers and FC layers [48]. Different dimensions of the output space (number of units) for the LSTM layers have been considered to form this model. Nevertheless, in ModelFLOWs-app, different architecture details can be considered for this application as a function of the problem analysed. Details about the number and type of layers, activation functions, layer dimensions, etc., are presented in some examples in section 5.6. A hyper-parameter approach to automatically set these parameters and obtain the best predictions has also been implemented in ModelFLOWs-app. This function also sets the optimizer [49], the learning rate, the batch size and the number of epochs, used together with an early stopping method.
Figure 6: Hybrid predictive ROM. Recurrent and convolutional neural networks to predict a time-ahead snapshot, \(\widehat{\mathbf{V}}_{k+1}\), based on the previous \(q\) snapshots. N is the number of POD modes.
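A minimal Keras sketch of the two predictive models is given below; the number of layers, units and activation functions are illustrative placeholders and not the exact ModelFLOWs-app configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N, q = 18, 10   # number of retained SVD modes and of input snapshots (assumed values)

# CNN variant: Conv1D layers followed by fully connected layers.
cnn = models.Sequential([
    layers.Input(shape=(q, N)),
    layers.Conv1D(64, kernel_size=3, padding="valid", strides=1, activation="elu"),
    layers.Conv1D(32, kernel_size=3, padding="valid", strides=1, activation="elu"),
    layers.Flatten(),                 # bridge between convolutional and FC layers
    layers.Dense(64, activation="elu"),
    layers.Dense(N),                  # predicted reduced snapshot at t_{k+1}
])

# RNN variant: LSTM layers followed by fully connected layers.
rnn = models.Sequential([
    layers.Input(shape=(q, N)),
    layers.LSTM(100),
    layers.Dense(64, activation="elu"),
    layers.Dense(N),
])

cnn.compile(optimizer="adam", loss="mse")
rnn.compile(optimizer="adam", loss="mse")
```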
Before entering the deep learning models, the reduced snapshot matrix \(\widehat{\mathbf{V}}_{1}^{K}\) can be scaled to improve the performance of the neural network. Some scaling options have already been mentioned in the previous section. Apart from them, a new scaling method, _Max per Mode (MpM)_, has been implemented, in which each column of \(\widehat{\mathbf{V}}_{1}^{K}\) (\(\mathbf{v}_{j}\)) is scaled with the sum of the maximum values of all columns, as
\[\hat{\mathbf{v}}_{j}=\frac{\mathbf{v}_{j}}{\sum_{j=1}^{N}\max\lvert\mathbf{v}_{j}\rvert}. \tag{4.5}\]
This scaling method has been proved suitable for the prediction of non-periodic temporal modes [50; 3].
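A compact sketch of the MpM scaling of eq. (4.5) follows; it assumes that \(\mathbf{v}_{j}\) denotes the temporal coefficient of mode \(j\) (row \(j\) of the reduced matrix), so the denominator is a single scalar.

```python
import numpy as np

def max_per_mode(V_hat):
    # V_hat: reduced snapshot matrix, N modes x K snapshots.
    denom = np.abs(V_hat).max(axis=1).sum()   # sum of the per-mode maxima, eq. (4.5)
    return V_hat / denom
```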
For the training and validation, the \(K\) columns of matrix \(\widehat{\mathbf{V}}\) are separated into three blocks with dimensions \(N\times K_{training}\), \(N\times K_{validation}\) and \(N\times K_{test}\), for the training, validation and test sets, respectively, where \(K=K_{training}+K_{validation}+K_{test}\). A sketch is presented in Fig. 7.
Similar to the HODMD algorithm (section 3.4), a rolling-window method is used. The method generates one output, the model prediction, for \(q\) inputs (the model information) organized in data batches. As seen in the sketch of Fig. 8, the offset considered between successive rolling windows is 1.
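The rolling-window splitting can be sketched as follows (illustrative code, with an offset of 1 between windows as in Fig. 8):

```python
import numpy as np

def rolling_windows(V_hat, q):
    """Build (input, target) pairs from the reduced matrix V_hat (N x K):
    q consecutive snapshots form the input and the following one is the target."""
    X, y = [], []
    for k in range(V_hat.shape[1] - q):
        X.append(V_hat[:, k:k + q].T)   # shape (q, N), as expected by Conv1D/LSTM layers
        y.append(V_hat[:, k + q])       # snapshot at t_{k+q}
    return np.array(X), np.array(y)

# X_train, y_train = rolling_windows(V_hat_training, q=10)
```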
Figure 7: Sketch with the training, validation and test set distribution for the deep learning module.
The Mean Squared Error loss (\(MSE_{Loss}\)), comparing the real and predicted databases (\(\widehat{\mathbf{V}}_{t}^{real}\), \(\widehat{\mathbf{V}}_{t}^{predicted}\)), is minimized using a batch stochastic gradient descent algorithm. The global loss (\(MSE_{Loss}\)) is obtained by averaging in time the local loss, which is calculated for each prediction as
\[MSE_{Loss}(t)=\frac{1}{N}||\widehat{\mathbf{V}}_{t}^{predicted}-\widehat{\mathbf{V}}_{ t}^{real}||^{2}, \tag{4.6}\]
where \(N\) is the number of singular values retained. The local error is calculated at the end of each epoch, and an early stopping method is used on the validation set to obtain the best network parameters. See more details of this architecture in Ref. [43].
Finally, the RRMSE eq. (2.5) is also computed comparing the predictions with the real solution in the neural networks.
#### 4.2.2 Increasing resolution using hybrid deep learning models
This application reconstructs two- or three-dimensional databases using the information contained in a few sensors. As previously mentioned, the first step of the algorithm applies SVD to the snapshot matrix as defined in eq. (4.2). In this case, the snapshot matrix \(\mathbf{V}_{1}^{K}\) eq. (2.1) is now called \(\mathbf{V}^{DS}\), as it represents an under-resolved database, formed only by a few points at which temporal information about the velocity vector, the pressure field or other quantities is collected. For each temporal snapshot, the dimensions of the _down-sampled_ snapshot matrix \(\mathbf{V}^{DS}\) are \(N_{1}\times N_{2}\), with \(N_{1}\times N_{2}<J\). For two-dimensional databases, \(N_{1}\) and \(N_{2}\) correspond to the streamwise and normal components, respectively, while
Figure 8: Rolling window method calculating 1 output from \(q\) inputs.
for three-dimensional databases, \(N_{2}\) collects the information of the normal and spanwise components (although the three spatial components could be re-organized differently into \(N_{1}\) and \(N_{2}\), depending on the gaps to be filled in the database). Eq. (4.2), particularized for a single snapshot, re-organizes the information into the down-sampled matrix and can be re-written as
\[\mathbf{V}^{DS}\simeq\mathbf{W}^{DS}\,\mathbf{\Sigma}^{DS}\,(\mathbf{T}^{DS})^{\top}. \tag{4.7}\]
The number of singular values retained is \(P^{\prime}\), hence \(\mathbf{\Sigma}^{DS}\in\mathbb{R}^{[P^{\prime},P^{\prime}]}\). The matrices \(\mathbf{W}^{DS}\in\mathbb{R}^{[N_{1},P^{\prime}]}\) and \(\mathbf{T}^{DS}\in\mathbb{R}^{[N_{2},P^{\prime}]}\) are introduced into a deep learning architecture that enlarges the dimensions \(N_{1}\) and \(N_{2}\) to those of the original database, \(\widehat{N}_{1}\) and \(\widehat{N}_{2}\), with \(\widehat{N}_{1}\times\widehat{N}_{2}=J\); the dimensions of these new enlarged matrices are \(\mathbf{W}\in\mathbb{R}^{[\widehat{N}_{1},P^{\prime}]}\) and \(\mathbf{T}\in\mathbb{R}^{[\widehat{N}_{2},P^{\prime}]}\). Once these new matrices have been modelled, they are combined again to re-construct the database with spatial dimension \(J\) for each snapshot, as
\[\mathbf{V}_{k}\simeq\mathbf{W}\,\mathbf{\Sigma}^{DS}\,(\mathbf{T})^{\top}, \tag{4.8}\]
where \(\mathbf{V}_{k}\in\mathbb{R}^{[\widehat{N}_{1},\widehat{N}_{2}]}\). The deep learning architecture gives this reconstructed matrix as an output. The architecture is formed by two groups of neural networks, defined as decoders (the decoding part of an autoencoder), working in parallel and meeting in the output layer, where the reconstructed solution is calculated as in eq. (4.8). Hence, the reconstruction error (RRMSE, see eq. (2.5)) used to update the weights is calculated by comparing the reconstructed database with the original solution. Fig. 9 shows a sketch of the present architecture, formed by 5 layers, although this is a tunable parameter of ModelFLOWs-app.
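A minimal sketch of this two-branch architecture is shown below; layer counts, widths and the matrix sizes are placeholders (the real tool tunes them), and only the recombination of eq. (4.8) in the output layer is taken from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

P = 5                          # retained singular values (P')
N1, N2 = 15, 7                 # down-sampled (sensor) dimensions
N1_full, N2_full = 449, 199    # enlarged dimensions, with N1_full * N2_full = J

def decoder_branch(n_in, n_out, name):
    """One decoder: enlarges an (n_in x P) SVD matrix to (n_out x P)."""
    inp = layers.Input(shape=(n_in, P), name=name)
    x = layers.Flatten()(inp)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dense(n_out * P, activation="linear")(x)
    return inp, layers.Reshape((n_out, P))(x)

w_in, W_big = decoder_branch(N1, N1_full, "W_ds")
t_in, T_big = decoder_branch(N2, N2_full, "T_ds")
s_in = layers.Input(shape=(P,), name="sigma_ds")   # singular values of the snapshot

# Output layer: both branches meet and are recombined as in eq. (4.8), V ~ W Sigma T^T.
V_rec = layers.Lambda(
    lambda a: tf.einsum("bip,bp,bjp->bij", a[0], a[1], a[2]))([W_big, s_in, T_big])

model = Model(inputs=[w_in, s_in, t_in], outputs=V_rec)
model.compile(optimizer="adam", loss="mse")   # compared against the full-resolution snapshot
```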
To organize the database, the strategy followed divides the total number of available snapshots into training, validation and test sets, as in Fig. 7. It is also noteworthy that this architecture is able to predict the reconstructed database forward in time.
See more details about this architecture and the previously described model in Ref. [12].
## 5 Results
### Module 1 - application 1: patterns identification
The HODMD algorithm (Sec. 3.4) has been applied to identify temporal patterns in complex flows. The good performance of the HODMD and mdHODMD-it algorithms depends on the selection of the parameter \(d\), which defines the number and size of the windows selecting sub-areas of the analyzed matrix, and the tolerances \(\varepsilon_{svd}\) and \(\varepsilon_{a}\). As explained in Ref. [34], as a reference \(d\) should be set in the interval \(K/10<d<K/2\). There is not a single optimal value for \(d\): generally, robust results are obtained for several values. Also, in saturated flows or converged signals, robust results should be obtained using similar values of \(d\) when applying the algorithm in different time intervals. The optimal values of \(d\) also vary with the tolerances \(\varepsilon_{svd}\) and
Figure 9: Reconstruction of databases combining SVD and deep learning. Sketch of the methodology.
\(\varepsilon_{a}\), the time interval \(\Delta t\) and the number of snapshots \(K\). Moreover, \(d\) scales with the total number of snapshots, so if \(K\) is doubled, \(d\) should also be multiplied by 2. Finally, the tolerances \(\varepsilon_{svd}\) and \(\varepsilon_{a}\) should be comparable to the expected uncertainty of the measurements. If the noise level of the database is known, \(\varepsilon_{svd}\) should be set similar to, or slightly smaller than (when the iterative method is applied), such level. More details on the calibration of the algorithm can be found in Refs. [39, 35, 6].
This first example reviews an application of pattern identification using HODMD that has been published in Ref. [9]. The problem presented studies a turbulent flow in a simplified urban environment consisting of two buildings, modelled by wall-mounted obstacles, separated by a certain distance. The main goal of this study was to identify the main patterns connected to high concentrations of pollution. More specifically, it is known that the arch vortex (see Fig. 10 a), a pattern characteristic of this type of flows which forms downstream of the buildings, plays a crucial role in the dispersion of pollutants in urban areas. HODMD was applied to analyze the flow and to identify the main patterns and frequencies leading the flow dynamics connected to the presence of the arch vortex. The findings presented in this study are of utmost significance for urban sustainability, as they shed light on the critical factors that contribute to the concentration of pollutants in urban environments. We encourage the reader to consult the original article by Lazpita _et al._ [9] and other related articles for more information [51, 52, 53].
The database for this problem was obtained through numerical simulation applied to a simplified urban environment consisting of two wall-mounted obstacles, where the distance between buildings was varied to obtain different regimes. According to Oke [54], these regimes are called skimming flow, wake interference, and isolated roughness, ordered from shortest to longest distance between buildings. In the case of skimming flow, the spatial dimension of the computational domain is defined for the streamwise, normal and spanwise directions in the intervals \(x\in[-1,5]\), \(y\in[0,2]\) and \(z\in[-1.5,1.5]\), respectively. The temporal interval between snapshots is \(\Delta t=0.35\). The database used to obtain the results is composed of a spatial grid formed by \(100\times 125\times 50\) points (streamwise, normal and spanwise directions, respectively), and 224 snapshots. Therefore, the spatial dimension of the problem is \(J=1,875,000\), and the temporal dimension is \(K=224\).
HODMD was applied to analyse this database using the parameters listed in Table 1, selected based on the criteria discussed earlier.
From this analysis, two types of modes were identified: the so-called _generator modes_, associated with low frequencies in the computed DMD mode spectrum, which were connected to the presence of the main vortical structures found in the flow (i.e., the arch vortex [55; 56]), and the so-called _breaker modes_, which were high-frequency modes connected to the wake formed behind the buildings and related to flow dispersion. These two types of modes are presented in Fig. 10 b and c, showing the streamwise velocity component. The remaining modes forming the spectrum are combinations (formed by non-linear interaction) of these two types of modes.
### Module 1 - application 2: data reconstruction
The SVD algorithm has been applied to repair and to enhance the resolution of databases of complex flows (Sec. 3.3). These two applications are detailed below.
#### 5.2.1 Application 2.1: data repairing
Gappy SVD (and HOSVD for databases in tensor form), as introduced in Section 3.3.1, depends on two main parameters: the type of initial reconstruction of the database to fill the gaps generally identified with _NaN_ values (Step 1 of the algorithm), and the number of \(P^{\prime}\) modes that are retained after
\begin{table}
\begin{tabular}{|l l l|} \hline
**Parameter** & **Symbol** & **Value** \\ \hline \hline Number of windows & \(d\) & 50 \\ SVD tolerance & \(\varepsilon_{svd}\) & \(10^{-3}\) \\ DMD tolerance & \(\varepsilon_{a}\) & \(10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 1: Values of the parameters to choose on the HODMD algorithm for the database presented in this section.
Figure 10: Flow patterns in skimming flow regime in a simplified urban environment. (a) Velocity streamlines representing the arch vortex, (b) and (c) real part of the streamwise velocity of DMD modes: Iso-surfaces normalized with the \(L_{\infty}\)-norm for (b) the generator mode (\(\omega_{m}=0.11\)), and (c) the breaker mode (\(\omega_{m}=1.1\)). The values used are given by \(c_{max}\max(U)\) (yellow) and \(c_{min}\min(U)\) (blue)
applying SVD to the reconstructed database (Step 2). On the one hand, in the initial reconstruction, the gaps can be replaced by (i) 0 values, (ii) the mean value between consecutive points, or (iii) a linear (or non-linear) interpolation exploiting the information of the surrounding points. On the other hand, regarding the number of SVD modes \(P^{\prime}\) used in the reconstruction of the database (Step 3), using a large number could increase the reconstruction error, since some of these modes could contain information related to noise or spatial redundancies. On the contrary, using a small number of them could imply losing relevant information related to the dynamics of the system. Hence, calibration is crucial to ensure the proper performance of the method.
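One possible iterative implementation of these steps is sketched below; the fill value, the number of retained modes and the fixed iteration count are choices of this sketch, to be calibrated as discussed above.

```python
import numpy as np

def gappy_svd(A, n_modes=10, n_iter=50):
    """Fill NaN gaps of matrix A iteratively with a rank-n_modes SVD reconstruction."""
    gaps = np.isnan(A)
    X = np.where(gaps, 0.0, A)                  # Step 1: initial reconstruction (0 values)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)          # Step 2: SVD
        X_rec = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]     # Step 3: low-rank rebuild
        X = np.where(gaps, X_rec, A)            # keep known entries, update only the gaps
    return X
```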
Gappy SVD has been applied to repair a numerical database modelling the two-dimensional wake past a circular cylinder at Reynolds number (computed with the cylinder diameter) 100. Details about the numerical simulations and the generation of this database can be found in Ref. [35]. The spatial dimension of the computational domain is defined for the streamwise direction in the interval \(x\in[-1,8]d\) and for the normal direction in the interval \(y\in[-2,2]d\), where \(d\) is the diameter of the cylinder. The dataset analysed is composed of the two velocity components, 449 points in the streamwise direction, 199 in the normal direction and 150 snapshots, equi-distant with time interval \(\Delta t=0.2\). Therefore, the data can be organized into a fourth-order tensor with dimension \(2\times 449\times 199\times 100\). From this database, \(\sim 63\%\) of the values are selected randomly and removed to create gaps.
Gappy SVD is applied using 0 as the initial reconstruction value (replacing NaN), and the number of retained SVD modes selected is 10. Figure 11 shows the initial database and the initial reconstruction carried out by the algorithm (iteration i=1), and compares the real solution with the final reconstruction. The RRMSE eq. (2.5) of the reconstruction is smaller than 2%.
For the reconstruction of datasets considering the temporal component, similar results are obtained using the Gappy HOSVD algorithm instead. See details in Ref. [18].
#### 5.2.2 Application 2.2: superresolution
This second application of the SVD algorithm is tested to enhance the resolution of a database. The only parameter to set is the desired final resolution of the super-resolved database (see details in Section 3.3.2). As described in the methodology, the resolution
of the database is doubled in each iteration of the algorithm, so the final resolution is a power of 2 times the initial resolution.
As in the previous section, the method is tested to enhance the resolution of a database modelling the two-dimensional wake behind a circular cylinder at Reynolds number 100. The dimension of the database, organized in tensor form, is \(2\times 449\times 199\times 150\), corresponding to the velocity components, the grid points along the streamwise and normal directions, respectively, and the number of snapshots. For this application, the dimension of the database is reduced to \(2\times 63\times 63\times 100\), so the spatial dimensions have been downsampled by a factor of \(2^{3}=8\) (only 1 out of every 8 points is retained in the reduced database). In ModelFLOWs-app it is then necessary to provide the downsampled database and the parameter 3 to enhance the resolution back to the original dimension.
Figure 12 shows the downsampled database and compares the resolution enhanced database with the original one, where the RRMSE eq. (2.5) computed is smaller than 3%.
For reconstruction of datasets considering the temporal component, similar results are obtained, using the algorithm HOSVD instead. See details in Ref. [18].
Figure 11: Gappy SVD algorithm to reconstruct the two-dimensional wake behind a circular cylinder. NaN values are found in 63% of the points forming the initial database. From left to right and top to bottom: the corrupted snapshot, initial reconstruction, final reconstruction and real database.
### Module 1 - application 3: predictive models
Let us examine a specific data-oriented ROM developed using HODMD (Sec. 3.4) for temporal prediction. This example is presented in detail in Refs. [57; 35]. The data-driven predictive ROM is developed to reduce the computational cost of Nek5000 [58], an open source spectral element code employed for solving the incompressible continuity and Navier-Stokes equations [59], thereby enabling the computation of the system's final attractor for temporal forecasting. Our focus lies on the three-dimensional wake of a circular cylinder in the context of incompressible fluid dynamics, a well-established benchmark problem [60; 61]. For this problem, we define the Reynolds number (with the cylinder diameter) as 210. The spanwise length of the computational geometry studied corresponds to a spanwise wavenumber of 4. Descriptions of the computational domain, the mesh, and the analysis of the accuracy of the numerical simulations can be found in [57]. To construct the data-driven ROM, we first perform numerical simulations and then use a collection of 500 snapshots, equi-distant with time interval \(\Delta t=1\), obtained from the saturated regime of such simulations, specifically within the time interval \(575\leq t\leq 825\). These snapshots serve as the basis for developing the data-driven model.
\begin{table}
\begin{tabular}{|l l l|} \hline
**Parameter** & **Symbol** & **Value** \\ \hline \hline Number of windows & \(d\) & 250 \\ SVD tolerance & \(\varepsilon_{svd}\) & \(10^{-4}\) \\ DMD tolerance & \(\varepsilon_{a}\) & \(3\cdot 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 2: Values of the parameters to choose on the predictive HODMD algorithm for the database presented in this section.
Figure 12: Application of the superresolution algorithm to a database extracted from the flow past a cylinder. From left to right: the downsampled snapshot, the solution given by the algorithm and the real snapshot extracted from the simulation.
HODMD is then applied to analyse the previous dataset, where the calibration of the method involved a careful selection of values, presented in Tab. 2, ensuring consistency and robustness in the obtained results [39]. Within our analysis, we have distinguished two distinct types of DMD modes: transient modes characterized by \(\delta<0\) and permanent modes with \(\delta\simeq 0\) (see the DMD expansion eq. (3.13)). Visual representations of the relationships between damping rates, mode amplitudes, and retained frequencies can be found in Fig. 13.
To construct the ROM, permanent modes are selected based on a condition on their growth rate: \(|\delta_{m}|<\varepsilon=10^{-3}\). The HODMD method is then employed to extrapolate the results, excluding transient modes, to the attractor for \(t\geq 1900\). For such an application, ModelFLOWs-app follows these steps: (i) the original dataset is reconstructed utilizing the DMD expansion eq. (3.13) using only the selected DMD modes, (ii) the growth rate of the selected DMD modes is set to \(0\) (although it is also possible not to change the growth rate of the modes, as a function of the needs of the ROM), (iii) the temporal term of the DMD expansion is set to \(t\geq 1900\). The ROM then predicts the temporal evolution of the flow for temporal instants \(\geq 1900\).
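The extrapolation step can be illustrated with the short sketch below, which evaluates the standard DMD expansion with the retained permanent modes only; the variable names and the time origin are assumptions of this sketch, not the Nek5000/ModelFLOWs-app implementation.

```python
import numpy as np

def dmd_rom(modes, amps, deltas, omegas, times, eps=1e-3):
    """modes: J x M spatial DMD modes; amps/deltas/omegas: amplitudes, growth
    rates and frequencies. Evaluates the DMD expansion for the permanent modes."""
    keep = np.abs(deltas) < eps                 # (i) keep only permanent modes
    U, a, w = modes[:, keep], amps[keep], omegas[keep]
    d = np.zeros(keep.sum())                    # (ii) growth rates set to 0
    V_pred = np.zeros((modes.shape[0], len(times)))
    for m in range(U.shape[1]):
        V_pred += np.real(a[m] * U[:, m:m + 1]
                          * np.exp((d[m] + 1j * w[m]) * times[None, :]))
    return V_pred

# (iii) evaluate on the attractor:
# V_attractor = dmd_rom(U, a, delta, omega, np.arange(1900, 2901, 1.0))
```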
The predicted solution exhibits an RRMSE eq. (2.5) of \(\sim 6\%\) when compared to the real solution. Figure 14 shows a representative snapshot from the attractor, comparing the outcomes of the data-driven ROM with the original data. Notably, the ROM approximation performs worse in the
Figure 13: Frequencies vs. damping rates (left) and mode amplitudes (right) of the DMD modes calculated in the transient of a numerical simulation modelling the three-dimensional wake behind a circular cylinder. Red: modes with smaller damping rates, retained to form the ROM. Black: remaining modes. Fundamental frequencies are indicated by vertical lines.
spanwise velocity component compared to the other two components, mainly due to its significantly smaller magnitude. However, it can be inferred that the spatio-temporal symmetry in the spanwise velocity component is essentially preserved. The speed-up factor of this ROM compared to the numerical simulations is larger than 100.
### Module 2 - application 1: patterns identification
The autoencoder algorithm (Sec. 4.1) has been applied to identify temporal patterns in complex flows. More specifically, we applied this technique to the flow generated by two planar synthetic jets. These devices are characterized by the periodic movement of a membrane or piston inside a cavity. Each periodic cycle consists of injection and suction phases, in which the fluid is ejected and reintroduced into the cavity through a jet nozzle, respectively.
Figure 14: Predictive ROM using HODMD tested in the three-dimensional way of a circular cylinder. Streamwise (left), normal (center), and spanwise (right) velocity components in the mid \(x-z\) plane for a representative snapshot of the attractor at \(t=2900\). The data has been normalized using the maximum velocity value from the original data.
The database analysed models two synthetic jets working synchronously at Strouhal number (defined with the jet diameter) \(0.03\) and Reynolds number (defined with the jet diameter) \(100\) (from Ref. [62]). The spatial dimension of the computational domain is defined for the streamwise and normal directions in the intervals \(x\in[0.9,35.6]D\) and \(y\in[-10,5.7]D\), respectively, where \(D\) refers to the diameter of the nozzle and the origin of the axes is the center of the upper jet exit. The temporal interval between snapshots is \(\Delta t=5.34\times 10^{-2}\) \(U/D\), where \(U\) is the characteristic velocity of the flow.
The main patterns of the flow are analysed using the POD and HODMD methods (both from ModelFLOWs-app module 1), and this section shows a new application where AEs are also used to extract the main patterns connected to the flow dynamics. More details about the results presented in this section can be found in Ref. [63].
Figure 15 compares the dominant mode identified by the HODMD, AE and POD methods. In previous research [62], HODMD related this mode to the oscillation frequency driving the synthetic jet, Strouhal number \(0.03\). In the results obtained, these three modes present some differences, as expected since three different methodologies are used. Nevertheless, the high-intensity areas found in the three modes are similar: the three modes recognize two high-intensity regions in the form of two ovals at the jet exits. A region with still high (but lower) intensity extends further downstream of the two jet exits. This region is more pronounced in the AE mode. The differences found in the shape of the modes could be connected to the non-orthogonality behind the AE calculations. More results and a detailed explanation can be found in Ref. [63]. Also, Ref. [44] shows a detailed comparison of different types of AE and SVD modes, as well as their properties for reduced order modelling in turbulent urban flows.
The calibration of AEs depends on the characteristics of the database
Figure 15: a) AEs, b) HODMD and c) POD first contributing mode on two synthetic jets. Arrows represent the jets’ input. Legend from c) applies to all pictures.
analyzed and on its spatial and temporal dimensions, \(J\) and \(K\), respectively. The parameters to choose in ModelFLOWs-app are the training percentage, \(\%_{train}\), which splits the data into the training and test sets (\(K_{training}=\%_{train}K\), \(K_{test}=(1-\%_{train})K\), respectively) and is recommended to be smaller than or equal to \(80\%\); the batch size, \(N_{batch}\in[1,K_{training}]\), which is the size of the packages used for the training and is recommended to be a power of 2, with 32 and 64 usually being the best values [64]; the number of AE modes retained, \(M<K_{training}\), also called the encoding dimension; and the maximum number of epochs, \(N_{epochs}\), which is the maximum number of passes the algorithm takes over the training data, with possible values being 100, 200, 500, \(\cdots\). We propose high numbers for this parameter because early stopping has been implemented in the code. Therefore, if \(N_{epochs}\) is high, early stopping will stop the training when convergence is achieved, while if it is too low, the training could stop too early without ensuring convergence. Finally, it is worth noting that the number of AE modes retained is the parameter that most affects the results. If the main application of AEs is to observe the main dynamics of the flow, we advise retaining a small number (e.g., 5, 10, 20), but to obtain small reconstruction errors, the number of modes retained should be larger. Based on these recommendations, the parameters chosen for the database analysed in this section, with \(J=1980\) and \(K=4369\), are listed in Tab. 3.
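A minimal autoencoder matching these parameter choices could look as follows; the hidden-layer widths and activation functions are illustrative placeholders, not the exact ModelFLOWs-app defaults.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

J, M = 1980, 10   # spatial dimension and encoding dimension (number of AE modes)

autoencoder = models.Sequential([
    layers.Input(shape=(J,)),
    layers.Dense(256, activation="tanh"),
    layers.Dense(M, activation="tanh", name="encoding"),   # the AE modes live here
    layers.Dense(256, activation="tanh"),
    layers.Dense(J, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# A large N_epochs is safe because early stopping halts training once the
# validation loss stops improving.
early = callbacks.EarlyStopping(patience=20, restore_best_weights=True)
# autoencoder.fit(snapshots, snapshots, validation_split=0.2,
#                 batch_size=32, epochs=200, callbacks=[early])
```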
### Module 2 - application 2: data reconstruction
Hybrid ROMs combining SVD with deep learning architectures (Sec. 4.2.2) have been applied to reconstruct downsampled databases of complex flows. This application is explained in detail in Ref. [12], where the authors propose a hybrid model combining SVD with neural networks. The neural network considered consists of two decoders (the decoding part of an autoencoder) that work in parallel and are joined in the last layer. The architecture has been proved
\begin{table}
\begin{tabular}{|l l l|} \hline
**Parameter** & **Symbol** & **Value** \\ \hline \hline Training percentage & \(\%_{train}\) & 80 \(\%\) \\ Batch size & \(N_{batch}\) & 32 \\ \# modes (autoencoder) & \(M\) & 10 \\ Epoch number & \(N_{epochs}\) & 200 \\ \hline \end{tabular}
\end{table}
Table 3: Calibration of the AE algorithm for pattern identification in two planar synthetic jets. Database with dimensions \(J=1980\) and \(K=4369\).
to be robust and generalizable, which allows reconstructing databases with a low reconstruction error from databases containing information from only a few sensors (as is the case in experimental measurements).
As in the example presented in module 1 (Sec. 5.2.1), the case analysed is the saturated regime of the two-dimensional wake behind a circular cylinder at Reynolds number (defined with the cylinder diameter) 100. The database analysed has been obtained numerically; details about the numerical simulation can be found in Ref. [35]. The spatial dimension of the computational domain is defined for the streamwise direction in the interval \(x\in[-1,8]d\) and for the normal direction in the interval \(y\in[-2,2]d\), where \(d\) is the diameter of the cylinder. The database analysed is formed by 3 variables, the two velocity components (streamwise and normal) and the spanwise vorticity component, 449 spatial points along the streamwise direction and 199 points along the normal direction, and 150 snapshots equispaced in time with an interval of 0.2. From this database, a downsampled tensor is created, taking one of every 30 points in both spatial directions, resulting in a tensor with dimensions \(3\times 15\times 7\times 150\).
Figure 16 shows the downsampled database and compares the reconstruction of a representative snapshot of the database, carried out using the present methodology, with the original solution. The RRMSE eq. (2.5) in the reconstruction is smaller than 8% in the whole database analysed. The algorithm has also been successfully tested with initial databases composed of a very small number of grid points (i.e., 8 in each spatial direction). More details can be found in Ref. [12].
The calibration of the method depends on several parameters, which are listed in Tab. 4. If selected, the database can be pre-processed with the parameter \(1^{st}Scaling\); the matrices that enter the deep learning model can also be scaled, in which case the parameter is called \(2^{nd}Scaling\). The parameter \(\%_{train}\) refers to the fraction of the data used in the training process, which is recommended to be less than 80% of the samples. The batch size \(N_{batch}\) can also be selected, as in the previous section, as well as the number of epochs (\(N_{epochs}\)).
For the deep learning model, ModelFLOWs-app offers the possibility of automatically selecting and using the optimal hyperparameters of the deep learning architecture: we use the _RandomSearch Keras_ tuner. Alternatively, these parameters can be selected directly by the user. They are: the number of neurons \(N_{neurons}\), the activation function \(AF\), the loss function \(l_{f}\) and the learning rate \(l_{r}\).
### Module 2 - application 3: time-series forecasting models
Time-series forecasting models based on deep learning (Sec. 4.2.1) have been applied to two different problems modelling different types of complex flows. The main goal of this application is to extract data from the transient region of a numerical simulation and to predict the evolution of the simulation, so these ROMs have been developed to speed up numerical simulations.
\begin{table}
\begin{tabular}{|l l l|} \hline
**Parameter** & **Symbol** & **Value** \\ \hline \hline First Scaling & \(1^{st}Scaling\) & \(No\) \\ Second Scaling & \(2^{nd}Scaling\) & \(No\) \\ Training size & \(\%_{train}\) & 0.8 \\ Batch size & \(N_{batch}\) & 23 \\ Epoch number & \(N_{epochs}\) & 500 \\ Hyper parameterization & \(Hyper\) & No \\ Activation functions (only for \(Hyper\) = No) & \(AF\) & ReLU \\ \# Neurons (only for \(Hyper\) = No) & \(N_{neurons}\) & 13 \\ Learning rate (only for \(Hyper\) = No) & \(l_{r}\) & 0.002 \\ Loss function (only for \(Hyper\) = No) & \(l_{f}\) & \(mse\) \\ \hline \end{tabular}
\end{table}
Table 4: Parameters to choose on the data reconstruction algorithm and the values selected for the database presented in this section.
Figure 16: Reconstruction of a representative snapshot of the flow past a cylinder using ModelFLOWs-app. From left to right: original snapshot, the downsampled snapshot (input of the neural network) and the reconstructed snapshot (output of the algorithm). From top to bottom: Streamwise and normal velocity components.
To show the good properties and robustness of the proposed models, two different methodologies have been followed. We first apply the hybrid framework (HybridDL) presented in Refs. [43; 3], where the proposed models are a combination of SVD and deep learning architectures. Secondly, we follow a fully deep learning framework (FullyDL), as in Ref. [65], where the proposed models are based entirely on deep learning architectures. It is worth mentioning that for this second application, in Ref. [65], dimensionality reduction was also carried out with HODMD instead of SVD. Nevertheless, the present article only shows the application of the deep learning architecture itself. As seen in this section, the two methodologies available in ModelFLOWs-app to develop predictive ROMs can be applied to different kinds of problems.
As mentioned before, HybridDL models first apply SVD to the database. This pre-processing reduces the number of trainable parameters required by the models, simplifying the training, but at the cost of losing part of the spatial information, since the spatial dimensions of the tensor are flattened into a single vector before applying SVD, forming the snapshot matrix eq. (2.1). This methodology shows good performance in several examples: the three-dimensional wake of a circular cylinder, synthetic jets [43], and reactive flows (an axisymmetric, time-varying, non-premixed, co-flow nitrogen-diluted methane flame [3]). Results for the latter case are shown here. More information about the numerical simulation of the laminar flame and the numerical set-up can be found in Refs. [66; 67]. From this detailed simulation, a database is extracted, composed of 10 variables (the temperature and 9 chemical species). The spatial dimension of the computational domain is defined by 100 points in the interval \(z\in[0,0.12]\) for the axial direction and 75 points in the interval \(r\in[0,0.02]\) for the radial direction. The number of snapshots extracted is \(K=999\), equidistant in time with \(\Delta t=2.5\times 10^{-4}\). Figure 17 shows the reactive flow test case. It compares the real solution with the predictions carried out using the RNN and CNN architectures. The number of SVD modes retained in the first step of the model is 18. The figure shows a representative snapshot of the \(CO_{2}\) mass fraction and the evolution in time of the temperature field extracted at two representative points of the computational domain. The RRMSE eq. (2.5) in the predictions is smaller than 4% and 3% for the CNN and RNN architectures, respectively, and the speed-up factor with respect to the numerical simulations is higher than 100 in both cases. See more details about this example in Ref. [3].
The second example uses the FullyDL methodology in which, in contrast to HybridDL, no dimensionality reduction is carried out on the original database. This increases the number of trainable parameters and, therefore, the complexity of the neural network training. However, the spatial information contained in the snapshots, i.e., the flow structures that define the flow dynamics, is kept. This methodology has been successfully tested in several problems modelling complex flows, as explained in Refs. [68, 65]. This section presents the results of applying these models to predict the evolution of a two-phase concentric jet flow. The flow comprises two liquid jets, consisting of two incompressible, viscous, and immiscible fluids, with Reynolds numbers of \(Re_{1}=30\) and \(Re_{2}=200\) for each phase, respectively, and a Weber number of \(We=80\). Both jets arise from two nozzles separated by a gap, whose length can be close to zero. The spatial dimension of the computational domain is defined for the streamwise and normal directions in the intervals \(x\in[0,16]\) and \(y\in[0,8]\), respectively. The temporal interval between snapshots is \(\Delta t=0.005\). Details about the numerical simulations carried out to solve this problem can be found in Ref. [65]. The database used to obtain the results in Figure 18 is composed of a spatial grid formed by \(100\times 100\) points
Figure 17: Predictive hybrid ROM to predict reactive flows. Representative snapshot showing the concentration of \(CO_{2}\). Predictions carried out using RNN (left) and CNN (right) architectures. Right and left parts of each contour show the original snapshot and the prediction. Middle part of the figure: temporal evolution of temperature at two characteristic points of the computational domain. The points are extracted where the triangle and square are located in the contour plots.
for the streamwise velocity, and 301 snapshots. Therefore, in this case the spatial dimension of the problem is \(J=100\times 100\) (in this framework we do not flatten the spatial dimension), and the temporal dimension is \(K=301\). Figure 18 compares two representative snapshots of the predictions carried out using the CNN and RNN models. The RRMSE eq. (2.5) computed in these predictions is 0.044 and 0.1 for the CNN and RNN architectures, respectively. The speed-up in the numerical simulations is 9.88 and 9.03 for the CNN and RNN architectures, respectively.
In each methodology, we developed two different models: a CNN and an RNN, the latter based on the Long Short-Term Memory (LSTM) architecture [69]. It is important to note that the LSTM architecture only uses a vector structure, which is not a problem in HybridDL because the spatial dimension was already flattened to apply SVD. However, the FullyDL framework takes the original snapshots as input, which means that the spatial dimension needs to be flattened. Due to this, the RNN model exhibits significantly better performance in the HybridDL framework than in FullyDL. On the other hand, the CNN model shows better performance in the FullyDL framework than in HybridDL. This is because convolutional neural networks were developed for analyzing snapshots, particularly in the field of computer vision [70]. Therefore, this architecture is capable of better capturing the spatial structures present in the snapshots.
As seen in Tabs. 5 and 6, the calibration of the RNN and CNN architectures differs depending on the framework: HybridDL or FullyDL. Details are
Figure 18: Predictions carried out using models inside the FullyDL framework. Streamwise velocity in the original solution (left), and in the predictions using the CNN (middle) and RNN (right) architectures.
presented below.
#### 5.6.1 Calibration of models in the HybridDL framework
Both the RNN and CNN models share the same hyperparameters for calibration, listed in Table 5. Note that this calibration depends on the database. For the aforementioned case, the reactive flow presented in Ref. [3], the database analysed is formed by the temperature and the 9 most important chemical species, with 100 points in the streamwise direction, 75 in the normal direction and 999 snapshots. This gives a fourth-order tensor of dimensions \(10\times 100\times 75\times 999\).
The hyperparameter \(1^{st}Scaling\) indicates the general scaling to be applied to the database. This step is necessary in combustion databases, as the variables can differ significantly in magnitude among them. In this case, \(autoscaling\) has been used, which utilizes the standard deviation of each variable, ensuring that all variables have equal importance [22]. After this scaling step, the tensor is reshaped into a matrix, grouping all dimensions
\begin{table}
\begin{tabular}{|l l l|} \hline
**Parameter** & **Symbol** & **Value** \\ \hline \hline \# modes (SVD) & \(N\) & 18 \\ First scaling & \(1^{st}Scaling\) & Auto \\ Second scaling & \(2^{nd}Scaling\) & MpM \\ Testing size & \(\%_{test}\) & 0.2 \\ Validation size & \(\%_{val}\) & 0.15 \\ Batch size & \(N_{batch}\) & 12 \\ \# epochs & \(N_{epochs}\) & 400 \\ \# input samples & \(k\) & 10 \\ \# time ahead predictions & \(p\) & 6 \\ Hyper parameterization & \(Hyper\) & No \\ Hidden activation functions (only for \(Hyper\) = No) & \(HiddenAF\) & ELU \\ Output activation functions (only for \(Hyper\) = No) & \(OutAF\) & Tanh \\ \# Neurons (only for \(Hyper\) = No) & \(N_{neurons}\) & 100 \\ Shared dimension (only for \(Hyper\) = No) & \(SharedDim\) & 80 \\ Learning rate (only for \(Hyper\) = No) & \(l_{r}\) & 0.005 \\ Loss function (only for \(Hyper\) = No) & \(l_{f}\) & custom\_loss \\ \hline \end{tabular}
\end{table}
Table 5: Hyperparameters of models belonging to the HybridDL framework. The symbol # means number and the Value column shows the configuration used in Ref. [3].
except the temporal one into a single dimension. This results in a matrix with dimensions of \(75000\times 999\). Subsequently, SVD is applied, and the first \(N\) modes are retained, which contain the large scales.
The term \(2^{nd}Scaling\) refers to the scaling applied to the temporal coefficients of the SVD, specifically for the reduced database (\(\widehat{\mathbf{V}}_{1}^{K}\)) used for training the models. In our case, \(MpM\) scaling has been employed, as it has been demonstrated to be suitable when dealing with non-periodic databases [3].
The hyperparameters \(\%_{val}\) and \(\%_{test}\) refer to the percentage of the total number of samples (\(K\)) that is used respectively for validation and testing; the remaining samples are used for training. \(N_{batch}\) is used to select the batch size and \(N_{epochs}\) to select the number of epochs used for training, i.e., the training length. When using the rolling window technique (previous Fig. 8), it is necessary to specify the number of input samples for each window, \(k\), and the number of target samples (time-ahead predictions), \(p\).
Both the RNN and CNN models have a feature called \(Hyper\) that allows for automatic hyperparameterization of the remaining features. The \(HiddenAF\) parameter sets the activation function of the hidden layers, while \(OutAF\) sets the activation function of the output layer. In the case of the RNN model, the \(N_{neurons}\) parameter specifies the number of neurons to use in the LSTM layer. Additionally, the \(SharedDim\) hyperparameter modifies the number of neurons in the models. The \(lr\) parameter specifies the learning rate, and \(lf\) determines the loss function used for training the models.
For the \(lf\) parameter, a new physics-aware loss function called \(PA-MSE\) has been implemented specifically for reacting flow databases. This loss function aims to achieve a good reconstruction while ensuring that the sum of the species mass fractions is equal to 1. In ModelFLOWs-app, this loss function can be selected as \(custom\_loss\).
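A plausible form of such a loss is sketched below; the equal weighting of the two terms and the assumption that the predicted fields contain the species mass fractions directly are choices of this sketch, not the exact \(PA-MSE\) definition used in ModelFLOWs-app.

```python
import tensorflow as tf

def pa_mse(species_idx):
    """Physics-aware MSE sketch: standard MSE plus a penalty pushing the sum
    of the predicted species towards 1. species_idx selects the species channels."""
    def loss(y_true, y_pred):
        mse = tf.reduce_mean(tf.square(y_true - y_pred))
        species_sum = tf.reduce_sum(tf.gather(y_pred, species_idx, axis=-1), axis=-1)
        return mse + tf.reduce_mean(tf.square(species_sum - 1.0))
    return loss

# model.compile(optimizer="adam", loss=pa_mse(species_idx=list(range(1, 10))))
```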
If the \(Hyper\) parameter is set to \(Yes\), these last six hyperparameters can be automated. Otherwise, they must be explicitly specified.
See more details about this application in Ref. [3].
#### 5.6.2 Calibration of models in the FullyDL framework
Similar to the previous subsection, here we summarize the hyperparameters available, listed in Tab. 6, for both models, RNN and CNN, in the FullyDL framework. Some of them share the same meaning as in the previous subsection; these are \(N_{batch}\), \(N_{epochs}\), \(k\), \(p\), \(Hyper\), \(HiddenAF\), \(OutAF\), \(SharedDim\) and \(lr\). The hyperparameters \(\%_{train}\) and \(\%_{val}\) specify which percentage of samples will be used for training and validation, respectively, while the entire database is used for testing. Also, in this framework the hyperparameter \(Model_{T}\) specifies which model to use: RNN or CNN. Finally, \(N_{neurons}\) is the number of neurons used in the LSTM layer, inside the RNN model. Similar to the HybridDL framework, the four bottom hyperparameters in Tab. 6 can be automated if \(Hyper=\) Yes; otherwise they must be specified.
## 6 Conclusions
This article introduces the ModelFLOWs-app, a novel software tool that has proven to be effective in developing accurate and robust fully data-driven hybrid reduced order models. The software combines modal decomposition and deep learning strategies to uncover key patterns in dynamical systems,
\begin{table}
\begin{tabular}{|l l l|} \hline
**Parameter** & **Symbol** & **Value** \\ \hline \hline Training size & \(\%_{train}\) & 0.8 \\ Validation size & \(\%_{val}\) & 0.2 \\ Batch size & \(N_{batch}\) & 5 \\ Type of model & \(Model_{T}\) & rnn \\ \# epochs & \(N_{epochs}\) & 140 \\ \# input samples & \(k\) & 10 \\ \# time ahead predictions & \(p\) & 2 \\ Hyper parameterization & \(Hyper\) & No \\ Hidden activation functions (only for \(Hyper=\) No) & \(HiddenAF\) & Dense \\ Output activation functions (only for \(Hyper=\) No) & \(OutAF\) & None \\ \# Neurons (only for RNN and when \(Hyper=\) No) & \(N_{neurons}\) & 400 \\ Shared dimension (only for \(Hyper=\) No) & \(SharedDim\) & 80 \\ Learning rate (only for \(Hyper=\) No) & \(lr\) & 0.005 \\ \hline \end{tabular}
\end{table}
Table 6: Hyperparameters of models belonging to the FullyDL framework. The symbol # means number and the Value column shows the configuration used in Ref. [65].
enabling a deeper understanding of the underlying physics. It offers valuable capabilities such as database reconstruction from sparse measurements and accurate prediction of system dynamics, providing an alternative to computationally expensive numerical simulations.
Although initially developed for analyzing turbulent flows, ModelFLOWs-app has demonstrated its versatility and exceptional properties in a wide range of industrial applications involving complex non-linear dynamical systems. Examples include various applications in fluid dynamics such as temporal forecasting in wind turbines, predictions in reactive flows and future projections in compressible flows with buffeting, identification of flow instabilities in turbulent flows, medical imaging pattern identification and reconstruction, wind velocity prediction using lidar measurements, identification of flutter instability in flight tests, et cetera.
Readers are encouraged to thoroughly explore the applications presented in this article and further enhance their knowledge by accessing the tutorials and videos available on the ModelFLOWs-app website [18].
## 7 Acknowledgements
The authors would like to acknowledge the collaboration of the following researchers, who have contributed by sharing databases, writing articles, providing new ideas, and engaging in fruitful discussions. The collective work carried out with these researchers over the past years has greatly enhanced the robustness of the current codes. These researchers are: Prof. J.M. Vega (UPM), Dr. R. Vinuesa (KTH), Prof. A. Parente (ULB), Prof. L. Brant (KTH), Dr. M. Rosti (OIST), Prof. O. Tammisola (KTH), and Prof. J. Soria (Monash Uni.). The authors would also like to express their gratitude to the research group ModelFLOWs for their valuable discussions, assistance in generating new databases, and for their support in testing some of the developed tools. S.L.C., A.C. and S.R.A. acknowledge the grant PID2020-114173RB-I00 funded by MCIN/AEI/ 10.13039/501100011033 and the support of Comunidad de Madrid through the call Research Grants for Young Investigators from Universidad Politecnica de Madrid. A.C. also acknowledges the support of Universidad Politecnica de Madrid, under the programme 'Programa Propio'. E.L. and S.L.C. acknowledge the support provided by Grant No. TED2021-129774B-C21 and by Grant No. PLEC2022-009235, funded by MCIN/AEI/10.13039/501100011033 and by the European Union "NextGenerationEU"/PRTR. |
2310.13180 | Note on the group of vertical diffeomorphisms of a principal bundle, and
its relation to the Frölicher-Nijenhuis bracket | The group of vertical diffeomorphisms of a principal bundle forms the
generalised action Lie groupoid associated to the bundle. The former is
generated by the group of maps with value in the structure group, which is also
the group of bisections of the groupoid. The corresponding Lie algebra of
general vertical vector fields is generated by maps with value in the Lie
algebra of the structure group. The bracket on these maps, induced by the
bracket of vertical vector fields, is an ``extended" bracket on gauge
parameters: it has been introduced heuristically in physics, notably in the
study of asymptotic symmetries of gravity. Seeing the set of Lie algebra-valued
maps as sections of the action Lie algebroid associated to the bundle, the
extended bracket is understood to be a Lie algebroid bracket on those sections.
Here, we highlight that this bracket can also be seen to arise from the
Fr\"olicher-Nijenhuis bracket of vector-valued differential forms. The benefit
of this viewpoint is to insert this extended bracket within the general
framework of derivations of forms on the bundle. Identities relating it to
usual operations -- inner product, exterior and (Nijenhuis-) Lie derivative --
are immediately read as special cases of general results. We also look at the
generalised gauge transformations induced by vertical diffeomorphisms, and
discuss their peculiar features. In particular, locally, and contrary to
standard gauge transformations arising from vertical bundle automorphisms, they
are distinguishable from local gluings when iterated. Yet, the gauge principle
still holds. | Jordan François | 2023-10-19T22:27:40Z | http://arxiv.org/abs/2310.13180v2 |
# Note on the group of vertical diffeomorphisms of a principal bundle, and its relation to the Frölicher-Nijenhuis bracket
###### Abstract
We consider the group of vertical diffeomorphisms of a principal bundle which are not automorphisms, and the corresponding group of maps with value in the structure group. The associated Lie algebra of general vertical vector fields has a Lie bracket that is but an instance (in degree 0) of the Frolicher-Nijenhuis bracket for vector-valued differential forms. The induces bracket on the maps with value in the Lie algebra of the structure group happens to reproduces an "extended" bracket introduced heuristically in physics to study asymptotic symmetries of gravity. We thus correct the misconception that the latter is a Lie algebroid bracket. We detail the elementary underlying geometry, whose exposition is absent from most standard treatments of bundle geometry, and we discuss its relevance to gauge field theory.
**Keywords** : Bundle geometry, Frolicher-Nijenhuis bracket, Nijenhuis-Lie derivative, gauge field theory.
###### Contents
* 1 Introduction
* 2 Vertical diffeomorphisms of a principal bundle and the Frolicher-Nijenhuis bracket
* 2.1 Groups of vertical and gauge transformations
* 2.2 Linearisation and Lie algebraic structures
* 2.3 The Nijenhuis-Lie derivative and the Frolicher-Nijenhuis bracket on a principal bundle
* 3 General vertical transformations and the Nijenhuis-Lie derivative
* 3.1 Finite vertical and gauge transformations
* 3.2 Discussion
* 3.3 Infinitesimal vertical transformations
* 4 Local structure
* 4.1 Gluings and local active gauge transformations
* 4.2 Linear versions
* 4.3 Generalised local active gauge transformations
* 5 Conclusion
## 1 Introduction
In some of the literature on gauge field theory and gravity, field-dependent gauge parameters or diffeomorphisms are often considered. These are needed, it is often argued, because of gauge-fixing or when boundary conditions are imposed that need to be preserved by gauge symmetries. Consequently, a bracket extending the Lie algebra bracket of gauge parameters (the bracket of vector fields in the case of diffeomorphisms) is introduced heuristically, so as to take into account their field-dependence. Such a bracket often features, e.g., in the covariant phase space literature, for example in investigations of spacetime asymptotic symmetries (BMS and extensions). See e.g. [1, 2, 3, 4, 5] and references therein. Indeed this literature often traces back the introduction of this bracket to a paper by Barnich and Troessaert [6]. While these authors certainly deserve credit for independently coming up with it, the extended bracket was introduced earlier (maybe first) by Bergmann and Komar in [7], and then further used by Salisbury and Sundermeyer in [8]. Its geometric origin remained unclear, and few attempts have been made to clarify its mathematical status. Barnich, shortly after re-introducing it, proposed that it be understood as a Lie algebroid bracket [9].
In this note, we show that the extended bracket is actually the simplest instance of the Frolicher-Nijenhuis (FN) bracket of vector-valued differential forms [10] on a principal bundle: the principal bundle in question being the infinite-dimensional field space \(\Phi\) of a gauge theory, whose structure group is the gauge group of the theory. The field space Lie derivative along field-dependent vector fields is then simply the Nijenhuis-Lie derivative.
This observation is quite natural, and can be articulated already for a finite-dimensional principal bundle \(P\) with structure group \(H\). Yet it does not seem to appear in most available standard treatments of the differential geometry of principal bundles (probably explaining the confusion about the meaning of such extended brackets). So, here we lay out the case in some detail: We first describe the group \(\operatorname{Diff}_{\nu}(P)\) of vertical diffeomorphisms of \(P\), and characterise the corresponding group of \(H\)-valued maps on \(P\). Then, we consider their respective Lie algebras and show that the bracket of _general_ vertical vector fields is but an instance - in degree \(0\) - of the FN bracket, and that it corresponds to a natural _extended bracket_ on the associated Lie algebra of \(\operatorname{Lie}H\)-valued maps on \(P\). Finally, we consider the related issue of _general vertical transformations_ of forms on \(P\), resulting from the action of \(\operatorname{Diff}_{\nu}(P)\), and we show that the infinitesimal version of those is given by the Nijenhuis-Lie derivative along general vertical vector fields.
As mentioned, this applies with very few adjustments (mainly notational) to the infinite-dimensional case of the field space \(\Phi\) of gauge field theories, whose structure group can be either the internal gauge group \(\mathcal{H}\) of the theory (i.e. of \(P\)) or \(\operatorname{Diff}(M)\). We give a detailed account of the latter case in a separate work [11], for which this paper can be considered a preparatory technical note.
## 2 Vertical diffeomorphisms of a principal bundle and the Frolicher-Nijenhuis bracket
### Groups of vertical and gauge transformations
Classical gauge field theory is founded on the geometry of connections on fiber bundles, whose central objects are principal bundles. Consider an \(H\)-principal bundle \(P\) over a base manifold \(M\) ("spacetime" in gauge field theory), with \(H\) its structure (Lie) group. As a manifold, \(P\) has a group of diffeomorphisms \(\operatorname{Diff}(P)\), but as a bundle its maximal group of transformations is the group of bundle automorphisms \(\operatorname{Aut}(P):=\{\psi\in\operatorname{Diff}(P)\,|\,\psi\circ R_{h}=R_{h}\circ\psi\}\). The latter, commuting with the right action of \(H\), preserve fibers, and thus project to diffeomorphisms of \(M\).
The subgroup of _vertical_ diffeomorphisms of \(P\) is \(\operatorname{Diff}_{\nu}(P):=\{\psi\in\operatorname{Diff}(P)\,|\,\pi\circ\psi=\pi\}\). These are diffeomorphisms \(\psi\) moving points along fibers, therefore there is a unique smooth map \(\gamma:P\to H\) s.t. \(\psi(p)=R_{\gamma(p)}p=p\gamma(p)\). In other words, \(\psi\in\operatorname{Diff}_{\nu}(P)\) corresponds to a unique \(\gamma\in C^{\infty}(P,H)\).
This generalises the subgroup of vertical automorphisms \(\operatorname{Aut}_{\nu}(P):=\{\psi\in\operatorname{Aut}(P)\,|\,\pi\circ\psi=\pi\}\), isomorphic to the gauge group \(\mathcal{H}:=\{\gamma:P\to H\,|\,R_{h}^{*}\gamma=h^{-1}\gamma h\}\) via \(\psi(p)=R_{\gamma(p)}\,p\): the constraint of \(H\)-equivariance of \(\psi\) defining the group of bundle automorphisms \(\operatorname{Aut}(P)\) is responsible for the specific equivariance required of elements \(\gamma\) of the gauge group. Obviously then, \(\operatorname{Diff}_{\nu}(P)\cap\operatorname{Aut}(P)=\operatorname{Aut}_{\nu}(P)\).
We may also notice that bundle automorphisms belong to the normaliser of vertical diffeomorphisms: indeed, for \(\varphi\in\operatorname{Aut}(P)\) and \(\psi\in\operatorname{Diff}_{\nu}(P)\sim\gamma\in C^{\infty}(P,H)\) we have,
\[(\varphi^{-1}\circ\psi\circ\varphi)(p)=\varphi^{-1}\circ\psi(\varphi(p))= \varphi^{-1}(R_{\gamma(\varphi(p))}\,\varphi(p))=R_{\gamma(\varphi(p))}\, \varphi^{-1}\circ\varphi(p)=R_{\gamma(\varphi(p))}\,p.\]
Therefore, \(\pi\circ(\varphi^{-1}\circ\psi\circ\varphi)=\pi\), i.e. \(\varphi^{-1}\circ\psi\circ\varphi\in\mathrm{Diff}_{v}(P)\). Since naturally a group is a subgroup of its normaliser, we have that,
\[N_{\mathrm{Diff}(P)}(\mathrm{Diff}_{v}(P))=\mathrm{Diff}_{v}(P)\cup\mathrm{Aut}(P). \tag{1}\]
As a special case, we have \(N_{\mathrm{Diff}(P)}(\mathrm{Aut}_{v}(P))=\mathrm{Aut}(P)\), i.e. the well-known fact that \(\mathrm{Aut}_{v}(P)\trianglelefteq\mathrm{Aut}(P)\), which in turn gives us the short exact sequence (SES) of groups characteristic of a principal bundle:
\[\mathrm{id}\to\mathrm{Aut}_{v}(P)\simeq\mathcal{H}\stackrel{{ \iota}}{{\to}}\mathrm{Aut}(P)\xrightarrow{\pi}\mathrm{Diff}(M)\to\mathrm{id} \tag{2}\]
The linearised version is the SES of Lie algebras, a.k.a. the Atiyah Lie algebroid associated to a principal bundle \(P\).
\[0\to\mathrm{aut}_{v}(P)=\Gamma_{H}(VP)\simeq\mathrm{Lie}\mathcal{H}\xrightarrow{\iota}\mathrm{aut}(P)=\Gamma_{H}(TP)\xrightarrow{\pi_{*}}\Gamma(TM)\to 0. \tag{3}\]
### Linearisation and Lie algebraic structures
We now turn to the question of linearisation, i.e. of describing the Lie algebra \(\mathfrak{diff}_{v}(P)\): Consider a 1-parameter element \(\psi_{\tau}\in\mathrm{Diff}_{v}(P)\) s.t. \(\psi_{\tau}(p)=R_{\gamma_{\tau}(p)}p\) with \(\gamma_{\tau}\in C^{\infty}(P,H)\). We have by definition,
\[\tfrac{d}{d\tau}\gamma_{\tau}(p)\big{|}_{\tau=0} =:X(p)\in\mathrm{Lie}H, \tag{13}\]
and \(X\in C^{\infty}(P,\mathrm{Lie}H)=\Omega^{0}(P,\mathrm{Lie}H)\). By definition, an element of \(\mathfrak{diff}_{v}(P)\) is a _general_ vertical vector field:
\[\tfrac{d}{d\tau}\,\psi_{\tau}(p)\big{|}_{\tau=0}=:X(p)^{v}_{|p}\in V_{p}P,\qquad X(p)^{v}_{|p}=pX(p), \tag{14}\]
and \(X^{v}\in\mathfrak{diff}_{v}(P)\simeq\Omega^{0}(P,VP)\). Now,
\[\tfrac{d}{d\tau}\,\psi_{\tau}^{-1}\circ\psi_{\tau}(p)\big{|}_{\tau=0} =\tfrac{d}{d\tau}\,\psi_{\tau}(p)\,\tilde{\gamma}_{\tau}(\psi_{\tau}(p))\big{|}_{\tau=0},\]
\[0 =\tfrac{d}{d\tau}\,\psi_{\tau}(p)\big{|}_{\tau=0}\,\tilde{\gamma}_{0}(\psi_{0}(p))+\psi_{0}(p)\,\tfrac{d}{d\tau}\,\tilde{\gamma}_{\tau}(\psi_{\tau}(p))\big{|}_{\tau=0},\]
\[\Rightarrow\quad \psi_{0}(p)\,\tfrac{d}{d\tau}\,\tilde{\gamma}_{\tau}(\psi_{\tau}(p))\big{|}_{\tau=0} =-X(p)^{v}_{|p},\quad\text{i.e.}\quad \tfrac{d}{d\tau}\,\tilde{\gamma}_{\tau}(\psi_{\tau}(p))\big{|}_{\tau=0}=-X(p). \tag{15}\]
The bracket of vertical vector fields (14) must be a vertical vector field itself. We want the expression of its generating element in \(\mathcal{C}^{\infty}(P,\mathrm{Lie}H)\). Consider \(X^{v}\) generated by \(\psi_{\tau}\in\mathrm{Diff}_{v}(P)\) with \(X(p)=\tfrac{d}{d\tau}\gamma_{\tau}(p)\big{|}_{\tau=0}\in C^{\infty}(P,\mathrm{ Lie}H)\), and \(Y^{v}\) generated by \(\varphi_{\tau}\in\mathrm{Diff}_{v}(P)\) with \(Y(p)=\tfrac{d}{d\tau}\eta_{\tau}(p)\big{|}_{\tau=0}\in C^{\infty}(P,\mathrm{ Lie}H)\). Their bracket is:
\[[X^{v},Y^{v}]_{|p} :=\tfrac{d}{d\tau}\tfrac{d}{ds}\,\psi_{\tau}^{-1}\circ\varphi_{s}\circ\psi_{\tau}(p)\big{|}_{s=0}\big{|}_{\tau=0},\]
\[=\tfrac{d}{d\tau}\,\Big{\{}p\,\mathrm{Ad}(\gamma_{\tau}(p))\,Y(\psi_{\tau}(p))+\psi_{\tau}(p)\cdot\big{[}Y^{v}(\tilde{\gamma}_{\tau})\big{]}(\psi_{\tau}(p))\Big{\}}\Big{|}_{\tau=0},\]
\[=p\,\Big{\{}\tfrac{d}{d\tau}\,\mathrm{Ad}(\gamma_{\tau}(p))\,Y(p)\big{|}_{\tau=0}+\mathrm{Ad}(\underbrace{\gamma_{0}(p)}_{\mathrm{id}_{H}})\,\tfrac{d}{d\tau}\,Y(\psi_{\tau}(p))\big{|}_{\tau=0}+\tfrac{d}{d\tau}\,[Y^{v}(\tilde{\gamma}_{\tau})](\psi_{\tau}(p))\big{|}_{\tau=0}\Big{\}},\]
\[=p\,\Big{\{}\mathrm{ad}(X(p))\,Y(p)+[X^{v}(Y)](p)+[Y^{v}(-X)](p)\Big{\}},\]
\[=:[X(p),Y(p)]^{v}_{|p}+\big{(}[X^{v}(Y)](p)\big{)}^{v}_{|p}-\big{(}[Y^{v}(X)](p)\big{)}^{v}_{|p}.\]
Thus we obtain, for \(X^{v},Y^{v}\in\Omega^{0}(P,VP)\) and \(X,Y\in C^{\infty}(P,\mathrm{Lie}H)\):
\[[X^{v},Y^{v}] =\{X,Y\}^{v}, \tag{16}\] \[\text{with}\quad\{X,Y\}:=[X,Y]_{\mathrm{Lie}H}+X^{v}(Y)-Y^{v}(X).\]
The sought-after generating element of the bracket of general vertical vector fields is then \(\{X,Y\}\in C^{\infty}(P,\mathrm{Lie}H)\). This _extended_ bracket \(\{\,,\,\}\), reflecting the recursive nested structure (11)-(12) on the elements of \(C^{\infty}(P,H)\) generating \(\mathrm{Diff}_{v}(P)\), is manifestly antisymmetric in \(X,Y\) and satisfies the Jacobi identity (as is easily proven). Therefore, \(C^{\infty}(P,\mathrm{Lie}H)\) equipped with \(\{\,,\,\}\) is a Lie algebra: We have thus the Lie algebra isomorphism \(\mathfrak{diff}_{v}(P)\simeq C^{\infty}(P,\mathrm{Lie}H)\) (respective brackets tacitly understood). It follows in particular that the Lie derivative and inner product along \(\mathfrak{diff}_{v}(P)\) satisfy:
\[[L_{X^{v}},L_{Y^{v}}]=L_{[X^{v},Y^{v}]}=L_{\{X,Y\}^{v}}, \tag{17}\]
\[[\iota_{X^{v}},L_{Y^{v}}]=\iota_{[X^{v},Y^{v}]}=\iota_{\{X,Y\}^{v}}.\]
Notice that for \(X,Y\in\mathrm{Lie}\mathcal{H}\), we have \(R_{h}^{*}Y=\mathrm{Ad}_{h^{-1}}Y\), so \(X^{v}(Y)=[Y,X]_{\mathrm{Lie}H}\). Therefore, the extended bracket reduces to \(\{X,Y\}=-[X,Y]_{\mathrm{Lie}H}\) and \([X^{v},Y^{v}]=(-[X,Y]_{\mathrm{Lie}H})^{v}\) as it should. This is the standard result that the "verticality map" \({}^{v}:\mathrm{Lie}\mathcal{H}\to\mathfrak{aut}_{v}(P)\simeq\Gamma_{H}(VP)\), \(X\mapsto X^{v}\), is an anti-isomorphism.
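As an elementary cross-check of this remark, here is a minimal numerical sketch (ours, not part of the text or of any library) on the trivial bundle \(P=\mathbb{R}^{2}\times SU(2)\): the maps `X`, `Y` below are built equivariant, i.e. elements of \(\mathrm{Lie}\mathcal{H}\), the vertical derivative \(X^{v}(Y)\) is computed by finite differences, and one checks \(X^{v}(Y)=[Y,X]\) and \(\{X,Y\}=-[X,Y]\) at a sample point. All sample profiles and helper names are illustrative choices.

```python
import numpy as np

# Sketch on the trivial bundle P = R^2 x SU(2); su(2) realised as anti-Hermitian 2x2 matrices.
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def lie(v):
    # R^3 -> su(2)
    return 0.5j * sum(c * s for c, s in zip(v, PAULI))

def expm(a, terms=40):
    # Truncated exponential series; adequate for the small matrices used here.
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ a / k
        out = out + term
    return out

def Ad(h, X):
    return h @ X @ np.linalg.inv(h)

# Equivariant maps X, Y : P -> LieH, i.e. elements of Lie(gauge group): X(x,h) = Ad_{h^{-1}} xi(x).
def X(x, h): return Ad(np.linalg.inv(h), lie([np.sin(x[0]), x[1], 1.0]))
def Y(x, h): return Ad(np.linalg.inv(h), lie([x[0] * x[1], np.cos(x[1]), 0.3]))

def vert_deriv(F, G, x, h, eps=1e-6):
    # [F^v(G)](x,h) = d/dt G(x, h.exp(t F(x,h)))|_{t=0}, by central differences.
    step = lambda t: G(x, h @ expm(t * F(x, h)))
    return (step(eps) - step(-eps)) / (2 * eps)

x, h = np.array([0.4, -1.2]), expm(lie([0.3, 0.7, -0.2]))
comm = X(x, h) @ Y(x, h) - Y(x, h) @ X(x, h)                        # [X,Y]_{LieH} at (x,h)
extended = comm + vert_deriv(X, Y, x, h) - vert_deriv(Y, X, x, h)   # {X,Y} at (x,h)

print(np.allclose(vert_deriv(X, Y, x, h), -comm, atol=1e-5))  # X^v(Y) = [Y,X]
print(np.allclose(extended, -comm, atol=1e-5))                # {X,Y} = -[X,Y]
```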
We may also observe that given a connection \(\omega\) on \(P\), satisfying \(\omega(X^{v})=X\), its curvature is a tensorial 2-form that can be expressed via Cartan's structure equation \(\Omega=d\omega+\frac{1}{2}[\omega,\omega]_{\mathrm{Lie}H}\). And indeed one may check (in the very process of proving that equation) that, on \(X^{v},Y^{v}\in\mathfrak{diff}_{v}(P)\):
\[\Omega(X^{v},Y^{v}) =d\omega(X^{v},Y^{v})+[\omega(X^{v}),\omega(Y^{v})]_{\mathrm{Lie}H},\]
\[=X^{v}\cdot\omega(Y^{v})-Y^{v}\cdot\omega(X^{v})-\omega([X^{v},Y^{v}])\ +\ [X,Y]_{\mathrm{Lie}H},\]
\[=X^{v}(Y)-Y^{v}(X)-\omega(\{X,Y\}^{v})\ +\ [X,Y]_{\mathrm{Lie}H}\equiv 0. \tag{18}\]
As we are now about to show, the extended bracket featuring in (16) is but the degree \(0\) case of the Frolicher-Nijenhuis bracket of vector-valued forms on \(P\), while the Lie derivative along \(\mathfrak{diff}_{v}(P)\) is but a case of the Nijenhuis-Lie derivative.
### The Nijenhuis-Lie derivative and the Frolicher-Nijenhuis bracket on a principal bundle
We refer to [10] (chap. II, section 8) for a systematic presentation of the following notions (on a generic smooth manifold) and for proofs of the relations displayed.
As a manifold, \(P\) has a space of differential forms \(\Omega^{\bullet}(P)\). Its space of derivations forms a graded Lie algebra \(\operatorname{Der}_{\bullet}\left(\Omega^{\bullet}(P)\right)=\bigoplus_{k} \operatorname{Der}_{k}\left(\Omega^{\bullet}(P)\right)\), with graded bracket \([D_{k},D_{l}]=D_{k}\circ D_{l}-(-)^{kl}D_{l}\circ D_{k}\), with \(D_{i}\in\operatorname{Der}_{i}\left(\Omega^{\bullet}(P)\right)\).
The de Rham complex of \(P\) is \((\Omega^{\bullet}(P);d)\) with \(d\in\operatorname{Der}_{1}\) the de Rham (exterior) derivative, which is nilpotent - \(d^{2}=0=\,^{1}\!/_{2}[d,d]\) - and defined via the Koszul formula. Given the exterior product \(\wedge\) defined as usual on scalar-valued forms, we have that \((\Omega^{\bullet}(P,\mathbb{K}),\wedge,d)\) is a differential graded algebra.1
Footnote 1: The exterior product can also be defined on the space \(\Omega^{\bullet}(P,\mathbb{A})\) of differential forms with values in an algebra \((\mathbb{A},\cdot)\), using the product in \(\mathbb{A}\) instead of the product in the field \(\mathbb{K}\). So \((\Omega^{\bullet}(P,\mathbb{A}),\wedge,d)\) is again a differential graded algebra. On the other hand, an exterior product cannot be defined on \(\Omega^{\bullet}(P,\mathbb{V})\) where \(\mathbb{V}\) is merely a vector space.
One may define the subset of vector-field valued differential forms \(\Omega^{\bullet}(P,TP)=\Omega^{\bullet}(P)\otimes TP\). Then, the subalgebra of "_algebraic_" derivations is defined by the condition \(D|_{\Omega^{0}(P)}=0\); they have the form \(\iota_{K}\in\operatorname{Der}_{k-1}\) for \(K\in\Omega^{k}(P,TP)\), with \(\iota\) the inner product. For \(\omega\otimes X\in\Omega^{\bullet}(P,TP)\) we have: \(\iota_{K}(\omega\otimes X):=\iota_{K}\omega\otimes X=\omega\circ K\otimes X\). On \(\Omega^{\bullet}(P,TP)\), the Nijenhuis-Richardson bracket (or _algebraic_ bracket) is defined by:
\[[K,L]_{\mathrm{NR}}:=\iota_{K}L-(-)^{(k-1)(l-1)}\iota_{L}K. \tag{19}\]
It generalises the inner contraction of a form on a vector field, and it makes the map:
\[\iota:\Omega^{\bullet}(P,TP) \to\operatorname{Der}_{\bullet}\left(\Omega^{\bullet}(P)\right) \tag{20}\] \[K \mapsto\iota_{K}\]
a graded Lie algebra morphism, since:
\[[\iota_{K},\iota_{L}]=\iota_{[K,L]_{\mathrm{NR}}}. \tag{21}\]
The _Nijenhuis-Lie derivative_ is the map,
\[L:=[\iota,d]:\Omega^{\bullet}(P,TP) \to\operatorname{Der}_{\bullet}\left(\Omega^{\bullet}(P)\right)\] \[K \mapsto L_{K}:=\iota_{K}d-(-)^{k-1}d\iota_{K}\]
We have \(L_{K}\in\operatorname{Der}_{k}\) for \(K\in\Omega^{k}(P,TP)\). It generalises the Lie derivative along vector fields, \(L_{X}\in\operatorname{Der}_{0}\). It is such that \([L_{K},d]=0\), and it is a morphism of graded Lie algebras:
\[[L_{K},L_{J}]=L_{[K,J]_{\mathrm{FN}}}, \tag{22}\]
where \([K,J]_{\mathrm{FN}}\) is the _Frolicher-Nijenhuis bracket_. Explicitly, for \(K=\mathsf{K}\otimes X\in\Omega^{k}(P,TP)\) and \(J=\mathsf{J}\otimes Y\in\Omega^{l}(P,TP)\), it is:
\[[K,J]_{\mathrm{FN}}:=\mathsf{K}\wedge\mathsf{J}\otimes[X,Y]+\mathsf{K}\wedge L_{X}\mathsf{J}\otimes Y-L_{Y}\mathsf{K}\wedge\mathsf{J}\otimes X+(-)^{k}\big{(}d\mathsf{K}\wedge\iota_{X}\mathsf{J}\otimes Y+\iota_{Y}\mathsf{K}\wedge d\mathsf{J}\otimes X\big{)}. \tag{23}\]
We further have the relations:
\[[L_{K},\iota_{J}]=\iota_{[K,J]_{\rm FN}}-(-)^{k(l-1)}L_{(\iota_{K}J)}, \tag{24}\]
\[[\iota_{J},L_{K}]=L_{(\iota_{K}J)}+(-)^{k}\iota_{[J,K]_{\rm FN}}.\]
The FN bracket (23) reproduces as a special case the extended bracket (16). Indeed, specialising first (19) in degree 0, for \(f={\sf f}\otimes X\) and \(g={\sf g}\otimes Y\in\Omega^{0}(P,TP)\), we have:
\[[f,dg]_{\rm NR}=\iota_{f}dg-(-)^{0}\iota_{dg}f=({\sf f}\wedge\iota_{X}d{\sf g})\otimes Y=({\sf f}\wedge L_{X}{\sf g})\otimes Y, \tag{25}\]
\[[df,g]_{\rm NR}=\iota_{df}g-(-)^{0}\iota_{g}df=-[g,df]_{\rm NR}=-({\sf g}\wedge L_{Y}{\sf f})\otimes X.\]
So that (23) is:
\[[f,g]_{\rm FN}={\sf f}\wedge{\sf g}\otimes[X,Y]+{\sf f}\wedge L_{X}{\sf g}\otimes Y-L_{Y}{\sf f}\wedge{\sf g}\otimes X, \tag{26}\]
\[={\sf f}\wedge{\sf g}\otimes[X,Y]+[f,dg]_{\rm NR}-[g,df]_{\rm NR},\]
and (22)-(24) reduces to:
\[[L_{f},L_{g}]=L_{[f,g]_{\rm FN}}, \tag{27}\]
\[[L_{f},\iota_{g}]=\iota_{[f,g]_{\rm FN}}.\]
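For the degree-\(0\) case acting on scalar forms, (27) is just the classical relation \([L_{X},L_{Y}]=L_{[X,Y]}\) for ordinary vector fields. The following sketch (sample fields and names of our choosing, finite differences throughout) checks this numerically on \(\mathbb{R}^{2}\).

```python
import numpy as np

# Degree-0 instance of (27): for ordinary vector fields X, Y on R^2 acting on a function h,
# [L_X, L_Y] h = L_{[X,Y]} h. Sample fields are illustrative; derivatives by central differences.
X = lambda p: np.array([np.sin(p[1]), p[0] ** 2])
Y = lambda p: np.array([p[0] * p[1], np.cos(p[0])])
h = lambda p: np.exp(p[0]) * np.sin(2 * p[1])

def grad(f, p, eps=1e-5):
    e = np.eye(2)
    return np.array([(f(p + eps * e[i]) - f(p - eps * e[i])) / (2 * eps) for i in range(2)])

def lie_fn(V, f):
    # L_V f = df(V)
    return lambda p: grad(f, p) @ V(p)

def jac(V, p, eps=1e-5):
    # jac[i, j] = d_j V^i
    e = np.eye(2)
    return np.array([(V(p + eps * e[j]) - V(p - eps * e[j])) / (2 * eps) for j in range(2)]).T

def bracket(X, Y):
    # [X,Y]^i = X^j d_j Y^i - Y^j d_j X^i
    return lambda p: jac(Y, p) @ X(p) - jac(X, p) @ Y(p)

p = np.array([0.3, 0.7])
lhs = lie_fn(X, lie_fn(Y, h))(p) - lie_fn(Y, lie_fn(X, h))(p)
rhs = lie_fn(bracket(X, Y), h)(p)
print(abs(lhs - rhs) < 1e-4)
```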
Now, the map \({}^{v}:C^{\infty}(P,\mathrm{Lie}H)\to\Gamma(VP)\), \(X\mapsto X^{v}\), allows to think of a general vertical vector field \(X^{v}\in\mathfrak{diff}_{v}(P)\) as an element of \(\Omega^{0}(P,VP)\subset\Omega^{0}(P,TP)\), i.e. as a vector-valued \(0\)-form. Comparing (16) with (26), the extended bracket is thus recognised as the degree \(0\) case of the FN bracket,
\[[X^{v},Y^{v}]_{\rm FN}=\{X,Y\}^{v}, \tag{29}\]
so that, by (27), the Lie derivative along general vertical vector fields is an instance of the Nijenhuis-Lie derivative, with
\[[L_{X^{v}},L_{Y^{v}}]=L_{[X^{v},Y^{v}]_{\rm FN}}=L_{\{X,Y\}^{v}},\qquad [L_{X^{v}},\iota_{Y^{v}}]=\iota_{\{X,Y\}^{v}}. \tag{30}\]
## 3 General vertical transformations and the Nijenhuis-Lie derivative
Standard gauge transformations of a form \(\beta\in\Omega^{\bullet}(P)\) are defined as _vertical transformations given by the action of \(\operatorname{Aut}_{v}(P)\) via pullback_, expressible in terms of the associated elements in the gauge group \(\mathcal{H}\) of \(P\): \(\beta^{\gamma}:=\psi^{*}\beta\). Geometrically, this is computed by relying on the duality pullback/pushforward, \(\beta^{\gamma}_{|p}(\mathbf{X}_{|p}):=(\psi^{*}\beta)_{|p}(\mathbf{X}_{|p})=\beta_{|\psi(p)}(\psi_{*}\mathbf{X}_{|p})\), and one then only needs to find the result of the pushforward \(\psi_{*}\mathbf{X}\) of any vector field \(\mathbf{X}\in\Gamma(TP)\) by a \(\psi\in\operatorname{Aut}_{v}(P)\). This is a standard computation, which turns out not to depend on the \(H\)-equivariance of \(\psi\), so the result holds as well for \(\psi\in\operatorname{Diff}_{v}(P)\). This will thus allow us to define _general vertical transformations_ of \(\beta\) as the action of \(\operatorname{Diff}_{v}(P)\) by pullback, expressible in terms of the associated elements of \(C^{\infty}(P,H)\). The linearisation of these is treated next, and seen to be exactly given by the Nijenhuis-Lie derivative.
### Finite vertical and gauge transformations
As a useful lemma, let us first derive the pushforward by the right \(H\)-action of \(X^{v}\in\mathfrak{diff}_{v}(P)\) generated by \(X=\frac{d}{d\tau}\,\gamma_{\tau}\,\big{|}_{\tau=0}\in C^{\infty}(P,\operatorname{Lie}H)\), for \(\gamma_{\tau}\in C^{\infty}(P,H)\):
\[R_{h*}X(p)^{v}_{|p} =R_{h*}\tfrac{d}{d\tau}\,R_{\gamma_{\tau}(p)}\,p\,\big{|}_{\tau=0}=\tfrac{d}{d\tau}\,R_{h}R_{\gamma_{\tau}(p)}\,p\,\big{|}_{\tau=0}=\tfrac{d}{d\tau}\,R_{\gamma_{\tau}(p)h}\,p\,\big{|}_{\tau=0}=\tfrac{d}{d\tau}\,R_{h^{-1}\gamma_{\tau}(p)h}\,R_{h}\,p\,\big{|}_{\tau=0}, \tag{31}\]
\[=:(\operatorname{Ad}_{h^{-1}}X(p))^{v}_{|ph}.\]
Remark that, for \(X\in\operatorname{Lie}\mathcal{H}\), we have \(R_{h}^{*}X=\operatorname{Ad}_{h^{-1}}X\), so \(R_{h*}X(p)_{[p}^{v}=X(ph)_{[ph]}^{v}\). That is, \(X^{v}\) is a right-invariant vertical vector field, those forming (as already observed) the Lie algebra of vertical automorphisms \(\Gamma_{H}(VP)\simeq\operatorname{aut}_{v}(P)\).
Now, the classic computation: For \(\mathbf{X}\in\Gamma(TP)\) with flow \(\phi_{\tau}\), and given \(\psi\in\operatorname{Diff}_{v}(P)\) to which corresponds \(\gamma\in C^{\infty}(P,H)\), we have,
\[\psi_{*}\mathbf{X}_{[p}:=\tfrac{d}{dt}\,\psi(\phi_{\tau}(p))\,\big{|} _{\tau=0} =\tfrac{d}{dt}\,R_{\gamma(\phi_{\tau}(p))}\,\phi_{\tau}(p)\,\big{|} _{\tau=0},\] \[=\tfrac{d}{dt}\,R_{\gamma(\phi_{\tau}(p))}\,p\,\big{|}_{\tau=0}+ \tfrac{d}{dt}\,R_{\gamma(p)}\,\phi_{\tau}(p)\,\big{|}_{\tau=0},\] \[=\tfrac{d}{dt}\,R_{\gamma(p)^{-1}\gamma(\phi_{\tau}(p))}\,p\, \big{|}_{\tau=0}+R_{\gamma(p)*}\mathbf{X}_{[p},\] \[=\tfrac{d}{dt}\,R_{\gamma(p)^{-1}\gamma(\phi_{\tau}(p))}\,\underbrace {R_{\gamma(p)}\,p}_{\psi(p)}\,\big{|}_{\tau=0}+R_{\gamma(p)*}\mathbf{X}_{[p}.\]
The first term manifestly is a vertical vector field at \(\psi(p)\), one needs only to find a nice way to write the associated Lie algebra element (corresponding to the curve \(\gamma(p)^{-1}\gamma(\phi_{\tau}(p))\) through \(e=\operatorname{id}_{H}\) in \(H\)). Let us first notice that, since \(\gamma:P\to H\), generically \(\tfrac{d}{dt}\,\gamma(\phi_{\tau}(p))\,\big{|}_{\tau=0}=d\gamma_{[p}(\mathbf{X}_{ [p]})=\gamma_{*}(\mathbf{X}_{[p]})\in T_{\gamma(p)}H\). While the Maurer-Cartan form on \(H\) is,
\[\theta_{\operatorname{MC},[h]}:=L_{h^{-1}*}:T_{h}H \to T_{e}H=:\operatorname{Lie}H,\] \[\chi_{[h]} \mapsto L_{h^{-1}*}\chi_{[h]}.\] \[\text{So},\quad\big{[}d\gamma_{[p}(\mathbf{X}_{[p]})\big{]}_{\gamma(p)} \mapsto L_{\gamma(p)^{-1}*}\big{[}d\gamma_{[p}(\mathbf{X}_{[p]})\big{]}_{ \gamma(p)}=\big{[}\gamma(p)^{-1}d\gamma_{[p}(\mathbf{X}_{[p]})\big{]}_{\epsilon^{ \prime}},\] \[L_{\gamma(p)^{-1}}\tfrac{d}{dt}\,\gamma(\phi_{\tau}(p))\,\big{|}_{ \tau=0}=\tfrac{d}{dt}\,\gamma(p)^{-1}\gamma(\phi_{\tau}(p))\,\big{|}_{\tau=0}.\]
We thus finally obtain, using the equivariance for vertical vector fields (31):
\[\psi_{*}\mathbf{X}_{|p} =R_{\gamma(p)*}\mathbf{X}_{|p}+\big{[}\gamma(p)^{-1}d\gamma_{|p}(\mathbf{X}_{|p})\big{]}^{v}_{|\psi(p)},\]
\[=R_{\gamma(p)*}\Big{(}\mathbf{X}_{|p}+\big{[}d\gamma_{|p}(\mathbf{X}_{|p})\,\gamma(p)^{-1}\big{]}^{v}_{|p}\Big{)}. \tag{32}\]
Obviously, iterations of generalised vertical transformations would rely on the same formula: e.g. for \(\psi^{*}\varphi^{*}=(\varphi\circ\psi)^{*}\) to which corresponds \(\gamma\,(\eta\circ R_{\gamma})\) by (7), one has,
\[(\varphi\circ\psi)_{*}\mathbf{X}=R_{\gamma\,(\eta\circ R_{\gamma})*}\left(\mathbf{X}+\big{[}d[\gamma\,(\eta\circ R_{\gamma})](\mathbf{X})\,[\gamma\,(\eta\circ R_{\gamma})]^{-1}\big{]}^{v}\right). \tag{33}\]
The case of \(k\)-iterations is obvious, though complicated, relying on (12).
To illustrate, consider the general vertical transformations of a (principal) connection \(\omega\in\mathcal{C}\) and of a tensorial form \(\alpha\in\Omega^{\bullet}_{\textit{tens}}(P,\rho)\) - for \(\rho:H\to GL(V)\) a representation of \(H\). By definition of a connection, \(R^{*}_{h}\omega_{|ph}=\mathrm{Ad}_{h^{-1}}\omega_{|p}\) and \(\omega_{|p}(X^{v}_{|p^{\prime}})=X\in\mathrm{Lie}H\), so one has:
\[\omega^{\gamma}_{|p}(X_{|p}):=\psi^{*}\omega_{|\psi(p)}(X_{|p}) =\omega_{|\psi(p)}(\psi_{*}X_{|p}),\]
\[=\omega_{|\psi(p)}\left(R_{\gamma(p)*}X_{|p}+\left[\gamma(p)^{-1}d\gamma_{|p}(X_{|p})\right]^{v}_{|\psi(p)}\right),\]
\[=R^{*}_{\gamma(p)}\omega_{|\psi(p)}(X_{|p})+\omega_{|\psi(p)}\left([\gamma(p)^{-1}d\gamma_{|p}(X_{|p})]^{v}_{|\psi(p)}\right),\]
\[=\mathrm{Ad}_{\gamma(p)^{-1}}\omega_{|p}(X_{|p})+\gamma(p)^{-1}d\gamma_{|p}(X_{|p}).\]
The generalised vertical transformation of a connection is thus,
\[\omega^{\gamma}=\mathrm{Ad}_{\gamma^{-1}}\omega+\gamma^{-1}d\gamma. \tag{34}\]
It is all but identical to a standard gauge transformation under \(\psi\in\mathrm{Aut}_{v}(P)\sim\gamma\in\mathcal{H}\). But, doing the same using (33), one obtains the less familiar result for two consecutive general gauge transformations:
\[(\omega^{\eta})^{\gamma}:=\psi^{*}(\varphi^{*}\omega)=(\varphi\circ\psi)^{*}\omega =\mathrm{Ad}_{[\gamma(\eta\circ R_{\gamma})]^{-1}}\,\omega+[\gamma(\eta\circ R_{\gamma})]^{-1}d[\gamma(\eta\circ R_{\gamma})], \tag{35}\]
\[=\mathrm{Ad}_{(\eta\circ R_{\gamma})^{-1}}\omega^{\gamma}+(\eta\circ R_{\gamma})^{-1}d(\eta\circ R_{\gamma}).\]
For \(\varphi,\psi\in\mathrm{Aut}_{v}(P)\sim\eta,\gamma\in\mathcal{H}\) we have \(\gamma(\eta\circ R_{\gamma})=\eta\gamma\), so that we get the standard result \((\omega^{\eta})^{\gamma}=\omega^{\eta\gamma}\), expressing the well-known fact that the gauge group \(\mathcal{H}\) acts on the right on the space of connections \(\mathcal{C}\) - Hence the fact that the latter can be seen (under proper restrictions) as a principal bundle \(\Phi=\mathcal{C}\) with structure group \(\mathcal{H}\) (see e.g. [12; 13], [14; 15], also [1]).
By definition of a tensorial form \(\alpha\in\Omega^{\bullet}_{\textit{tens}}(P,\rho)\): \(R^{*}_{h}\alpha_{|ph}=\rho(h^{-1})\alpha_{|p}\) and \(\alpha_{|p}(X^{v}_{|p})=0\). By the same method, one finds the general vertical transformations,
\[\alpha^{\gamma}=\rho(\gamma^{-1})\alpha,\qquad(\alpha^{\eta})^{\gamma}=\rho(\gamma\left(\eta\circ R_{\gamma}\right))^{-1}\alpha=\rho(\eta\circ R_{\gamma})^{-1}\alpha^{\gamma}. \tag{36}\]
As a special case, this gives the general vertical transformation of the curvature \(\Omega\in\Omega^{2}_{\textit{tens}}(P,\mathrm{Ad})\) of \(\omega\). Naturally, (36) generalises the well-known gauge transformations of \(\alpha\) under \(\mathrm{Aut}_{v}(P)\simeq\mathcal{H}\), \((\alpha^{\eta})^{\gamma}=\alpha^{\eta\gamma}\), showing that \(\mathcal{H}\) acts on the right on \(\Omega^{\bullet}_{\textit{tens}}(P,\rho)\) as it does on \(\mathcal{C}\).
### Discussion
At this point, it should be highlighted that a priori \(\omega^{\gamma}\not\in\mathcal{C}\) and \(\alpha^{\gamma}\not\in\Omega_{\textit{tens}}(P,\rho)\). To show this, let us first derive the following lemma: the special case of (32) for \(X=X^{v}\) is,
\[\psi_{*}X^{v}_{|p} =R_{\gamma(p)*}X^{v}_{|p}+[\gamma(p)^{-1}d\gamma_{|p}(X^{v}_{|p})]^{v}_{|\psi(p)},\]
\[=(\mathrm{Ad}_{\gamma(p)^{-1}}X)^{v}_{|\psi(p)}+\big{[}\gamma(p)^{-1}[X^{v}(\gamma)](p)\big{]}^{v}_{|\psi(p)},\quad\text{using (31)},\]
\[=\big{(}\mathrm{Ad}_{\gamma(p)^{-1}}X+\gamma(p)^{-1}[X^{v}(\gamma)](p)\big{)}^{v}_{|\psi(p)}\,. \tag{37}\]
The definition of \(\mathrm{Diff}_{v}(P)\simeq C^{\infty}(P,H)\) leaves the H-equivariance of \(\psi\sim\gamma\) unspecified, so \(X^{v}(\gamma)\) remains a priori unknown. Yet, there are two interesting special cases worth emphasizing:
\[\psi_{*}X^{v}_{|p}=\left\{\begin{array}{ll}X^{v}_{|\psi(p)}&\text{for $\psi\in \mathrm{Aut}_{v}(P)\sim\gamma\in\mathcal{H}$ (gauge transformation),}\\ 0&\text{for $\psi(p)=f(p):=pu(p)$ with $\gamma=u:P\to H$ s.t. $R^{*}_{h}u=h^{-1}u$ (dressing field).}\end{array}\right. \tag{38}\]
The second case involves the _dressing map_\(f:P\to P\), \(p\mapsto pu(p)\), associated to the _dressing field_\(u\), satisfying \(f\circ R_{h}=f\) and thus also \(f\circ\psi=f\) for \(\psi\in\mathrm{Diff}_{v}(P)\). It is key to the "dressing field method" of gauge symmetry reduction, a tool to build gauge-invariants in gauge field theory, see e.g. [11; 14; 15]. The map \(f\) is used to "_dress_" forms on \(P\), acting via \(f^{*}\), turning them into _basic_ forms (a.k.a "dressed/composite fields").
Indeed, for some \(\beta\in\Omega^{\bullet}(P)\) (horizontality and equivariance unspecified), we define its dressing by \(\beta^{u}:=f^{*}\beta\), satisfying:
\[\beta^{u}(X^{v},\ldots) :=f^{*}\beta(X^{v}\ldots)=\beta(f_{*}X^{v},\ldots)=\beta(0,\ldots)=0,\] \[R^{*}_{h}\beta^{u} :=R^{*}_{h}\,f^{*}\beta=(f\circ R_{h})^{*}\beta=f^{*}\beta=\beta^{ u}. \tag{39}\]
That is \(\beta^{u}\in\Omega^{\bullet}_{\rm basic}(P)\). For example, a dressed connection is \(\omega^{u}:=f^{*}\omega={\rm Ad}_{u^{-1}}\omega+u^{-1}du\), while a dressed tensorial form is \(\alpha^{u}=\rho(u^{-1})\alpha\) - e.g. the dressed curvature is \(\Omega^{u}=u^{-1}\Omega u=d\omega^{u}+\tfrac{1}{2}[\omega^{u},\omega^{u}]\). Any such dressed form is invariant under \({\rm Diff}_{v}(P)\simeq C^{\infty}(P,H)\), since
\[(\beta^{u})^{\gamma}:=\psi^{*}(\beta^{u})=\psi^{*}(f^{*}\beta)=(f\circ\psi)^ {*}\beta=f^{*}\beta:=\beta^{u}, \tag{40}\]
as can also be found via (32) and (39). Thus, dressed objects are in particular gauge invariant, i.e. invariant under \({\rm Aut}_{v}(P)\simeq\mathcal{H}\): hence their physical interest, as they may encode physical degrees of freedom (d.o.f.).
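As a concrete illustration of the invariance (40) at the local level, the following numerical sketch checks that the dressed potential \(A^{u}=u^{-1}Au+u^{-1}du\) is unchanged when \(A\mapsto A^{g}\) and \(u\mapsto g^{-1}u\); this transformation rule of the dressing field follows from its defining equivariance \(R^{*}_{h}u=h^{-1}u\), though it is not spelled out above. Field profiles and helper names are illustrative choices, with \(H=SU(2)\) on \(U=\mathbb{R}^{2}\).

```python
import numpy as np

# Checks that the dressed potential A^u = u^{-1} A u + u^{-1} du is insensitive to
# A -> A^g = g^{-1} A g + g^{-1} dg together with u -> g^{-1} u (assumed local rule).
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
lie = lambda v: 0.5j * sum(c * s for c, s in zip(v, PAULI))

def expm(a, terms=40):
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ a / k
        out = out + term
    return out

inv = np.linalg.inv

# Sample fields on U = R^2: gauge potential components, a gauge map g, a dressing field u.
A = [lambda x: lie([np.sin(x[0]), x[1], 0.2]),          # A_x
     lambda x: lie([x[0] * x[1], 0.5, np.cos(x[0])])]   # A_y
g = lambda x: expm(lie([x[0], np.sin(x[1]), 0.7 * x[0] * x[1]]))
u = lambda x: expm(lie([np.cos(x[0] * x[1]), x[1], 0.3]))

def partial(f, x, mu, eps=1e-6):
    dx = np.zeros(2); dx[mu] = eps
    return (f(x + dx) - f(x - dx)) / (2 * eps)

def gauge(A, g):
    # A -> g^{-1} A g + g^{-1} dg, componentwise
    return [lambda x, mu=mu: inv(g(x)) @ A[mu](x) @ g(x) + inv(g(x)) @ partial(g, x, mu)
            for mu in range(2)]

def dress(A, u):
    # A^u = u^{-1} A u + u^{-1} du, componentwise
    return [lambda x, mu=mu: inv(u(x)) @ A[mu](x) @ u(x) + inv(u(x)) @ partial(u, x, mu)
            for mu in range(2)]

x = np.array([0.3, -0.8])
Au = dress(A, u)                                       # dressed field from (A, u)
Agu = dress(gauge(A, g), lambda x: inv(g(x)) @ u(x))   # dressed field from (A^g, g^{-1}u)
print(all(np.allclose(Au[mu](x), Agu[mu](x), atol=1e-4) for mu in range(2)))
```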
Returning to the generic case, (37), for \(\omega\in\mathcal{C}\) we have:
\[\omega^{\gamma}(X^{v})=\psi^{*}\omega\left(X^{v}\right)=\omega( \psi_{*}X^{v})={\rm Ad}_{\gamma^{-1}}X+\gamma^{-1}X^{v}(\gamma),\qquad\text{ and} \tag{41}\] \[R^{*}_{h}\omega^{\gamma}=R^{*}_{h}\psi^{*}\omega=(\psi\circ R_{ h})^{*}\omega. \tag{42}\]
In general then, \(\omega^{\gamma}=\psi^{*}\omega\notin\mathcal{C}\) for \(\psi\in{\rm Diff}_{v}(P)\sim\gamma\in C^{\infty}(P,H)\). But per (38), we see the especially significant role played by the group \({\rm Aut}_{v}(P)\simeq\mathcal{H}\), since only for its elements do we have \(\psi\circ R_{h}=R_{h}\circ\psi\) and \(X^{v}(\gamma)=[\gamma,X]\), so that \(\omega^{\gamma}(X^{v})=X\) and \(R^{*}_{h}\omega^{\gamma}={\rm Ad}_{h^{-1}}\omega^{\gamma}\). That is, \({\rm Aut}_{v}(P)\simeq\mathcal{H}\) is the only subgroup of vertical transformations \({\rm Diff}_{v}(P)\simeq C^{\infty}(P,H)\) preserving the space of connections \(\mathcal{C}\) of \(P\).2 From the above, it is obvious that the second special case in (38) gives us \(\omega^{u}\not\in\mathcal{C}\): i.e. a dressed connection is not a connection.
Footnote 2: This is to be expected as \({\rm Aut}(P)\subset{\rm Diff}(P)\) is the largest natural transformation group of a bundle \(P\), preserving its fibration structure, and \({\rm Aut}(P)\cap{\rm Diff}_{v}(P)={\rm Aut}_{v}(P)\).
In the same way, for \(\alpha\in\Omega^{\bullet}_{\rm tens}(P,\rho)\) we have:
\[\alpha^{\gamma}(X^{v}) =\psi^{*}\alpha\left(X^{v}\right)=\alpha(\psi_{*}X^{v})=0,\qquad \text{and} \tag{43}\] \[R^{*}_{h}\alpha^{\gamma} =R^{*}_{h}\psi^{*}\alpha=(\psi\circ R_{h})^{*}\alpha. \tag{44}\]
So, if horizontality is preserved by the action of \({\rm Diff}_{v}(P)\simeq C^{\infty}(P,H)\), this is a priori not the case of \(H\)-equivariance. Again, only for \(\psi\in{\rm Aut}_{v}(P)\) do we get indeed \(R^{*}_{h}\alpha^{\gamma}=\rho(h^{-1})\,\alpha^{\gamma}\): i.e. \({\rm Aut}_{v}(P)\simeq\mathcal{H}\) is the only subgroup of vertical transformations \({\rm Diff}_{v}(P)\simeq C^{\infty}(P,H)\) preserving the space of tensorial forms \(\Omega^{\bullet}_{\rm tens}(P,\rho)\). Again, from above it is obvious that the second special case in (38) gives \(\alpha^{u}\not\in\Omega^{\bullet}_{\rm tens}(P,\rho)\).
This shows the importance of being mindful of the equivariance of \(\psi\sim\gamma\), if one wants to keep track of the mathematical space in which one evolves: in particular one must keep in mind the clear distinction between gauge transformations and dressing operations, as stressed by (38).
Connections \(\omega\in\mathcal{C}\) can be understood as necessary to obtain covariant derivatives preserving tensorial forms, i.e. \(D=d+\rho_{*}(\omega):\Omega^{\bullet}_{\rm tens}(P,\rho)\to\Omega^{\bullet+1}_ {\rm tens}(P,\rho)\) - which the de Rham derivative \(d\) does not (\(d\alpha\) fails to be horizontal). This implies that \(\alpha\) and \(D\alpha\) have the same gauge transformations: this is a way to phrase physics' _Gauge Principle_. But it also means that \(\alpha\) and \(D\alpha\) have the same generalised vertical transformations under \({\rm Diff}_{v}(P)\simeq C^{\infty}(P,H)\):
\[D\alpha\in\Omega^{\bullet+1}_{\rm tens}(P,\rho)\quad\Rightarrow\quad(D\alpha)^{ \gamma}=\rho(\gamma^{-1})D\alpha. \tag{45}\]
Yet, it is also the case that:
\[(D\alpha)^{\gamma}=\psi^{*}D\alpha=\psi^{*}(d\alpha+\rho_{*}( \omega)\alpha) =d\psi^{*}\alpha+\rho_{*}(\psi^{*}\omega)\psi^{*}\alpha, \tag{46}\] \[=:d\alpha^{\gamma}+\rho_{*}(\omega^{\gamma})\alpha^{\gamma}.\]
This goes to show that the covariant derivative \(D\) is compatible with the action of \({\rm Diff}_{v}(P)\simeq C^{\infty}(P,H)\), even if under its action it may be that \(\omega^{\gamma}\not\in\mathcal{C}\) and \(\alpha^{\gamma}\not\in\Omega^{\bullet}_{\rm tens}(P,\rho)\): In this case, if \(D^{\gamma}:=d+\rho_{*}(\omega^{\gamma})\) is not a covariant derivative in the geometrical sense of preserving \(\Omega^{\bullet}_{\rm tens}(P,\rho)\), it is in the algebraic sense - dear to physicists - of preserving the form of the \({\rm Diff}_{v}(P)\simeq C^{\infty}(P,H)\) transformation of \(\alpha^{\gamma}\) - by (35)-(36). If this transformation is instead a dressing operation, we have in particular that: \((D\alpha)^{u}=d\alpha^{u}+\rho_{*}(\omega^{u})\alpha^{u}\).
### Infinitesimal vertical transformations
Naturally, the infinitesimal version of the general vertical transformation \(\beta^{\gamma}=\psi^{*}\beta\) for \(\psi\in\mathrm{Diff}_{v}(P)\sim\gamma\in C^{\infty}(P,H)\) is given by the Lie derivative along the associated general vertical vector field \(X^{v}\in\mathfrak{diff}_{v}(P)\sim X\in C^{\infty}(P,\mathrm{Lie}H)\), (13)-(14):
\[L_{X^{\nu}}\,\beta:=\tfrac{d}{d\tau}\,\beta^{\gamma_{\tau}}\big{|}_{\tau=0}= \tfrac{d}{d\tau}\,\psi^{*}_{\tau}\,\beta\,\big{|}_{\tau=0}. \tag{47}\]
The Lie derivative belongs to the Lie algebra of derivations of \(\Omega^{\bullet}(P)\) and is given by Cartan's formula \(L_{X^{v}}=[\iota_{X^{v}},d]\). Since \(X^{v}\) can be seen as an element of \(\Omega^{0}(P,VP)\), \(L_{X^{v}}\) is to be understood as a special case of the Nijenhuis-Lie derivative along the vertical vector-valued \(0\)-form \(X^{v}\). Naturally then, by (30), the commutator of Nijenhuis-Lie derivatives involves the Frolicher-Nijenhuis bracket on \(\Omega^{0}(P,VP)\), i.e. the extended bracket \(\{\,,\,\}\) on \(C^{\infty}(P,\mathrm{Lie}H)\) (16)-(23):
\[[L_{X^{\nu}},L_{Y^{\nu}}]\,\beta=L_{[X^{\nu},Y^{\nu}]_{\mathrm{FN}}}\,\beta=L_{\{X,Y\}^{\nu}}\,\beta. \tag{48}\]
To illustrate, let us compute the \(\mathfrak{diff}_{v}(P)\simeq\Omega^{0}(P,VP)\simeq C^{\infty}(P,\mathrm{Lie}H)\) transformations of \(\omega\in\mathcal{C}\) and \(\alpha\in\Omega^{\bullet}_{\mathrm{tens}}(P,\rho)\). For a connection, using (34), we get:
\[L_{X^{\nu}}\omega:=\tfrac{d}{d\tau}\,\psi^{*}_{\tau}\,\omega\,\big{|}_{\tau=0}=\tfrac{d}{d\tau}\,\big{(}\mathrm{Ad}_{\gamma_{\tau}^{-1}}\omega+\gamma_{\tau}^{-1}d\gamma_{\tau}\big{)}\big{|}_{\tau=0}=-\mathrm{ad}_{X}\omega+dX=DX. \tag{49}\]
This is cross-checked via Cartan's formula, using Cartan's structure equation for the curvature, and the latter's tensoriality: For \(\mathbf{X}\in\Gamma(TP)\), we have in general
\[L_{X}\omega=\iota_{\mathbf{X}}d\omega+d(\iota_{\mathbf{X}}\omega)=\iota_{\mathbf{X}}( \Omega-\tfrac{1}{2}[\omega,\omega])+d(\omega(\mathbf{X}))=\iota_{\mathbf{X}}\Omega+d( \omega(\mathbf{X}))+[\omega,\omega(\mathbf{X})]. \tag{50}\]
Thus, for \(\mathbf{X}=X^{\nu}\) we get \(L_{X^{\nu}}\omega=dX+[\omega,X]=DX\). We must remain mindful that if \(X\notin\Omega^{0}_{\mathrm{tens}}(P,\mathrm{Ad})=\mathrm{Lie}\mathcal{H}\) then \(DX=dX+[\omega,X]\notin\Omega^{1}_{\mathrm{tens}}(P,\mathrm{Ad})\): only for \(X^{\nu}\in\mathrm{aut}_{\nu}(P)\sim X\in\mathrm{Lie}\mathcal{H}\) does \(DX\) answer the geometric definition of the covariant derivative on tensorial forms. This echoes the observation made earlier that \(\omega^{\gamma}=\psi^{*}\omega\notin\mathcal{C}\) for a generic \(\psi\in\mathrm{Diff}_{\nu}(P)\sim\gamma\in C^{\infty}(P,H)\).
As \(\mathcal{C}\) is an affine space modeled on \(\Omega^{1}_{\mathrm{tens}}(P,\mathrm{Ad})\), for \(\omega^{\prime},\omega\in\mathcal{C}\) it must be that \(\omega^{\prime}-\omega\in\Omega^{1}_{\mathrm{tens}}(P,\mathrm{Ad})\). So, in particular, the linearised action of a transformation group of \(\mathcal{C}\) would result in an element of \(\Omega^{1}_{\mathrm{tens}}(P,\mathrm{Ad})\). As a matter of fact, \(\mathrm{Aut}(P)\) is the maximal such group: Indeed, for \(\psi\in\mathrm{Aut}(P)\) it is the case that,
\[\psi_{*}X(p)^{\nu}_{|p}:=\tfrac{d}{d\tau}\,\psi(\phi_{\tau}(p))\,\big{|}_{\tau=0}=\tfrac{d}{d\tau}\,\psi(R_{\gamma_{\tau}(p)}\,p)\,\big{|}_{\tau=0}=\tfrac{d}{d\tau}\,R_{\gamma_{\tau}(p)}\psi(p)\,\big{|}_{\tau=0}=:X(p)^{\nu}_{|\psi(p)}. \tag{51}\]
So we have first \(\psi^{*}\omega_{|p}(X^{\nu}_{|p})=\omega_{|\psi(p)}(X^{\nu}_{|\psi(p)})=X\), and second \(R_{h}^{*}\psi^{*}\omega=\psi^{*}R_{h}^{*}\omega=\psi^{*}\mathrm{Ad}_{h^{-1}}\omega=\mathrm{Ad}_{h^{-1}}\psi^{*}\omega\): i.e. \(\psi^{*}\omega\in\mathcal{C}\). It follows indeed that \(L_{\mathbf{X}}\omega\in\Omega^{1}_{\mathrm{tens}}(P,\mathrm{Ad})\) for \(\mathbf{X}\in\mathrm{aut}(P)\), thus also for the special case \(\mathbf{X}=X^{\nu}\in\mathrm{aut}_{\nu}(P)\) with explicit result \(L_{X^{\nu}}\omega=DX\in\Omega^{1}_{\mathrm{tens}}(P,\mathrm{Ad})\).
Applying a second Nijenhuis-Lie derivative we get, using the fact that \([L_{\mathbf{X}},d]=0\):
\[L_{Y^{\nu}}L_{X^{\nu}}\omega=d(L_{Y^{\nu}}X)+[L_{Y^{\nu}}\omega,X] +[\omega,L_{Y^{\nu}}X] =d(Y^{\nu}(X))+[DY,X]+[\omega,Y^{\nu}(X)],\] \[=D(Y^{\nu}(X))+[dY,X]+[[\omega,Y],X]\]
From this follows that,
\[[L_{X^{\nu}},L_{Y^{\nu}}]\omega =D(X^{\nu}(Y))+[dX,Y]+[[\omega,X],Y]-D(Y^{\nu}(X))-[dY,X]-[[\omega,Y],X],\]
\[=D(X^{\nu}(Y))-D(Y^{\nu}(X))+d([X,Y])+[[\omega,X],Y]-[[\omega,Y],X],\]
\[=D(X^{\nu}(Y))-D(Y^{\nu}(X))+d([X,Y])+[\omega,[X,Y]],\]
\[=D(\{X,Y\})=L_{\{X,Y\}^{\nu}}\omega=L_{[X^{\nu},Y^{\nu}]_{\mathrm{FN}}}\,\omega, \tag{52}\]
by Jacobi identity in \(\mathrm{Lie}H\) from the second to third line, and by definition of the extended/FN bracket (16)-(29) from the third to the last. This result is as expected from (48).
Similarly for a \(\alpha\in\Omega^{\bullet}_{tens}(P,\rho)\) we have,
\[L_{X^{\nu}}\alpha:=\tfrac{d}{d\tau}\,\psi^{*}_{\tau}\,\alpha\, \big{|}_{\tau=0}=\tfrac{d}{d\tau}\,\rho(\gamma_{\tau}^{-1})\alpha\,\big{|}_{\tau= 0}=-\rho_{*}(X)\alpha. \tag{53}\]
This can be cross-checked via Cartan's formula: For \(X\in\Gamma(TP)\), we have in general
\[L_{\mathbf{X}}\alpha =\iota_{\mathbf{X}}d\alpha+d(\iota_{\mathbf{X}}\alpha)=\iota_{\mathbf{X}}\big{(}D\alpha-\rho_{*}(\omega)\alpha\big{)}+d(\iota_{\mathbf{X}}\alpha)=\iota_{\mathbf{X}}D\alpha-\rho_{*}(\iota_{\mathbf{X}}\omega)\alpha+\rho_{*}(\omega)\iota_{\mathbf{X}}\alpha+d(\iota_{\mathbf{X}}\alpha)\]
\[=\iota_{\mathbf{X}}D\alpha+D(\iota_{\mathbf{X}}\alpha)-\rho_{*}(\iota_{\mathbf{X}}\omega)\alpha. \tag{54}\]
Thus, for \(X=X^{v}\) we get \(L_{X^{v}}\alpha=-\rho_{*}(X)\alpha\). Here again, we notice that since generically \(X\not\in\Omega^{0}_{\rm tens}(P,\mathrm{Ad})\), generically \(L_{X^{v}}\alpha\notin\Omega^{\bullet}_{\rm tens}(P,\rho)\). Which echoes the earlier observation that \(\alpha^{\gamma}=\psi^{*}\alpha\notin\Omega^{\bullet}_{\rm tens}(P,\rho)\) for a generic \(\psi\in\mathrm{Diff}_{v}(P)\sim\gamma\in C^{\infty}(P,H)\). Upon applying the Nijenhuis-Lie derivative again we get,
\[L_{Y^{v}}L_{X^{v}}\alpha=-\rho_{*}(L_{Y^{v}}X)\alpha-\rho_{*}(X)L_{Y^{v}}\alpha =-\rho_{*}(Y^{v}(X))\alpha+\rho_{*}(X)\rho_{*}(Y)\alpha.\]
Then it follows that,
\[[L_{X^{v}},L_{Y^{v}}]\alpha =-\rho_{*}(X^{v}(Y))\alpha+\rho_{*}(Y)\rho_{*}(X)\alpha+\rho_{*}(Y^{v}(X))\alpha-\rho_{*}(X)\rho_{*}(Y)\alpha,\]
\[=-\rho_{*}([X,Y]+X^{v}(Y)-Y^{v}(X))\alpha,\]
\[=-\rho_{*}(\{X,Y\})\alpha=L_{\{X,Y\}^{v}}\alpha=L_{[X^{v},Y^{v}]_{\rm FN}}\alpha. \tag{55}\]
Which is as expected from (48).
In the next and final section, we show how the global structures on \(P\) exposed up to now descend as local structures on the base space \(M\). These are relevant to physics: e.g. vertical transformations generalise the usual gauge transformations familiar in gauge field theory.
## 4 Local structure
We first summarize the standard local structure, and extend it to general vertical transformations in the next section.
### Gluings and local active gauge transformations
Given an open subset \(U\subset M\) and a local section \(\sigma:U\to P_{|U}\), a local representative of \(\beta\in\Omega^{\bullet}(P)\) is \(b:=\sigma^{*}\beta\in\Omega^{\bullet}(U)\). Any other local section is \(\sigma^{\prime}=\sigma g\) with \(g:U\to H\) a transition function of the bundle (encoding its topology, from a local viewpoint), so another local representative is \(b^{\prime}:=\sigma^{\prime\,*}\beta=:b^{g}\). The notation \(b^{\prime}=b^{g}\) is meant to indicate that \(b^{\prime}\) can be seen as obtained from \(b\) by a right action transformation by a transition function \(g\). For yet another local section \(\sigma^{\prime\prime}=\sigma^{\prime}g^{\prime}=\sigma gg^{\prime}\), we have that \(b^{\prime\prime}=(b^{\prime})^{g^{\prime}}=b^{gg^{\prime}}\). These constitute the _gluing relations_ of local representatives3 and are known in gauge field theory as _passive gauge transformations_.
Footnote 3: The terminology stems from considering \(\sigma\), \(\sigma^{\prime}\) and \(\sigma^{\prime\prime}\) as local sections over overlapping opens \(U\), \(U^{\prime}\) and \(U^{\prime\prime}\) respectively, related by the transition functions \(g\) and \(g^{\prime}\). Then the gluing relations reflect the fact that the local representatives on \(M\), the \(b\)'s, come from the same global object \(\beta\) on \(P\) - which can be reconstructed from its local representatives.
To illustrate, for \(\beta=\omega\in\mathcal{C}\), we define \(A:=\sigma^{*}\omega\in\mathcal{A}\) (the gauge potential) and we have the well-known result \(A^{\prime}=\mathrm{Ad}_{g^{-1}}A+g^{-1}dg=:A^{g}\), and \(A^{\prime\prime}=\mathrm{Ad}_{g^{\prime\,-1}}A^{\prime}+g^{\prime\,-1}dg^{\prime}=A^{gg^{\prime}}\). In the case of \(\beta=\alpha\in\Omega^{\bullet}_{\rm tens}(P,\rho)\), one has \(a:=\sigma^{*}\alpha\in\Omega^{\bullet}_{\rm tens}(U,\rho)\) and it is well-known that \(a^{\prime}=\rho(g^{-1})\,a=:a^{g}\), and \(a^{\prime\prime}=\rho(g^{\prime\,-1})\,a^{\prime}=\rho({(gg^{\prime})}^{-1})\,a=a^{gg^{\prime}}\). The local representative of the curvature \(\Omega\in\Omega^{2}_{\rm tens}(P,\mathrm{Ad})\) of \(\omega\) is \(F:=\sigma^{*}\Omega\). As a special case of the above, we have then \(F^{\prime}=\mathrm{Ad}_{g^{-1}}F=:F^{g}\) and \(F^{\prime\prime}=\mathrm{Ad}_{g^{\prime\,-1}}F^{\prime}=F^{gg^{\prime}}\). A matter field would be some \(\alpha=\phi\in\Omega^{0}_{\rm tens}(P,\rho)\) with local representative \(\Phi:=\sigma^{*}\phi\). Its covariant derivative \(D^{\omega}\phi=d\phi+\rho_{*}(\omega)\phi\) has local representative \(D^{A}\Phi=d\Phi+\rho_{*}(A)\Phi\), which represents the minimal coupling to the gauge potential. It is well known that \((D^{A}\Phi)^{g}=D^{A^{g}}\Phi^{g}=\rho(g^{-1})D^{A}\Phi\), which expresses the requirement of the (passive) _gauge principle_.
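The following numerical sketch (illustrative sample fields, \(H=SU(2)\) in the fundamental representation on \(U=\mathbb{R}^{2}\), derivatives by finite differences) checks two of these statements at a sample point: the covariance \(F^{g}=\mathrm{Ad}_{g^{-1}}F\) of the field strength and the gauge principle \((D^{A}\Phi)^{g}=\rho(g^{-1})D^{A}\Phi\). Note that the script nowhere uses any equivariance property of \(g\), a point relevant to the generalised gauge transformations discussed below.

```python
import numpy as np

# Checks F^g = g^{-1} F g and (D^A phi)^g = g^{-1} D^A phi for SU(2) fields on R^2.
PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
lie = lambda v: 0.5j * sum(c * s for c, s in zip(v, PAULI))

def expm(a, terms=40):
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ a / k
        out = out + term
    return out

inv = np.linalg.inv

def partial(f, x, mu, eps=1e-5):
    dx = np.zeros(2); dx[mu] = eps
    return (f(x + dx) - f(x - dx)) / (2 * eps)

# Sample fields: gauge potential A, matter field phi (C^2-valued), gauge map g.
A = [lambda x: lie([np.sin(x[0]), x[1], 0.2]),
     lambda x: lie([x[0] * x[1], 0.5, np.cos(x[0])])]
phi = lambda x: np.array([x[0] + 1j * x[1], np.cos(x[0] * x[1])], dtype=complex)
g = lambda x: expm(lie([x[1], 0.4 * x[0], np.sin(x[0] + x[1])]))

def curvature(A, x):
    # F_xy = d_x A_y - d_y A_x + [A_x, A_y]
    return (partial(A[1], x, 0) - partial(A[0], x, 1)
            + A[0](x) @ A[1](x) - A[1](x) @ A[0](x))

def cov_deriv(A, phi, x, mu):
    # (D^A phi)_mu = d_mu phi + A_mu phi  (fundamental representation)
    return partial(phi, x, mu) + A[mu](x) @ phi(x)

def gauge_A(A, g):
    return [lambda x, mu=mu: inv(g(x)) @ A[mu](x) @ g(x) + inv(g(x)) @ partial(g, x, mu)
            for mu in range(2)]

x = np.array([0.6, -0.4])
Ag, phig = gauge_A(A, g), (lambda x: inv(g(x)) @ phi(x))

print(np.allclose(curvature(Ag, x), inv(g(x)) @ curvature(A, x) @ g(x), atol=1e-4))
print(all(np.allclose(cov_deriv(Ag, phig, x, mu), inv(g(x)) @ cov_deriv(A, phi, x, mu), atol=1e-4)
          for mu in range(2)))
```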
Of course, the group \(\mathrm{Aut}_{v}(P)\) cannot act on local representatives, but there is naturally such a thing as the local representative of the gauge transformed \(\beta^{\gamma}:=\psi^{*}\beta\), for \(\gamma\in\mathcal{H}\simeq\psi\in\mathrm{Aut}_{v}(P)\).
Given the defining equivariance of elements of \(\mathcal{H}\), making them tensorial \(0\)-forms for the conjugate action of \(H\), the \(\mathrm{Aut}_{v}(P)\simeq\mathcal{H}\)-transformation of \(\eta\in\mathcal{H}\) is - as a special case of the one discussed below (36) - \(\eta^{\gamma}:=\psi^{*}\eta=\gamma^{-1}\eta\gamma\). This relation defines the action of the gauge group \(\mathcal{H}\) on itself. Defining \(\gamma:=\sigma^{*}\gamma\) and \(\eta:=\sigma^{*}\eta\), we have \(\eta^{\gamma}:=\sigma^{*}(\eta^{\gamma})=\gamma^{-1}\eta\gamma\). Therefore, the local representative of \(\mathcal{H}\), the _local gauge group_, is defined as: \(\mathcal{H}_{\rm loc}:=\{\eta,\gamma:U\to H\,|\,\eta^{\gamma}=\gamma^{-1}\eta\gamma\}\).
We have therefore the local representative of a gauge transformed form: \(b^{\gamma}:=\sigma^{*}(\beta^{\gamma})\). Given that \((\beta^{\eta})^{\gamma}=\beta^{\eta\gamma}\), it is the case that \((b^{\eta})^{\gamma}=b^{\eta\gamma}\). This secures the consistency of a heuristic "field theoretic rule": Considering \(\gamma\) as acting on \(b^{\eta}\) as a concatenation of the fields \(b\) and \(\eta\), one may write \((b^{\eta})^{\gamma}=(b^{\gamma})^{\eta^{\gamma}}\); using then their respective \(\mathcal{H}_{\mathrm{loc}}\)-transformations, one writes the concatenation as \((b^{\gamma})^{\eta^{\gamma}}=(b^{\gamma})^{\gamma^{-1}\eta\gamma}=b^{\gamma\gamma^{-1}\eta\gamma}=b^{\eta\gamma}\).
The relation \((b^{\eta})^{\gamma}=b^{\eta\gamma}\) expresses the fact that the action of the local gauge group \(\mathcal{H}_{\mathrm{loc}}\) on local representatives - the fields of physics - is a right action. Hence, the field space \(\Phi=\{b\}\) can be seen (under proper restrictions) as an infinite dimensional bundle with structure group \(\mathcal{H}_{\mathrm{loc}}\). A fact often overlooked in the covariant phase space literature - with rare exceptions, see e.g. [11, 14, 15, 16].
To illustrate as above, we have that \(A^{\gamma}:=\sigma^{*}(\omega^{\gamma})=\mathrm{Ad}_{\gamma^{-1}}A+\gamma^{-1}d\gamma\). And \((A^{\eta})^{\gamma}=A^{\eta\gamma}\), as can be checked from using the previous definition and \((A^{\eta})^{\gamma}=(A^{\gamma})^{\eta^{\gamma}}\). Similarly for a tensorial form we have \(a^{\gamma}:=\sigma^{*}(\alpha^{\gamma})=\rho(\gamma^{-1})\,a\), from which one checks that \((a^{\eta})^{\gamma}=a^{\eta\gamma}\) via \((a^{\eta})^{\gamma}=(a^{\gamma})^{\eta^{\gamma}}\). As special cases, \(F^{\gamma}:=\sigma^{*}(\Omega^{\gamma})=\mathrm{Ad}_{\gamma^{-1}}F\), and \(\Phi^{\gamma}:=\sigma^{*}(\phi^{\gamma})=\rho(\gamma^{-1})\,\Phi\) while its covariant derivative is \((D^{A}\Phi)^{\gamma}:=\sigma^{*}((D^{\omega}\phi)^{\gamma})=\rho(\gamma^{-1})\,D^{A}\Phi\). Here, the local covariant derivative operator can be _defined_ as the expression \(D:=d\,+\,\rho_{*}(A)\). It is then easily checked that the previous result is also recovered from the field theoretic rule: \((D^{A}\Phi)^{\gamma}=D^{A^{\gamma}}\Phi^{\gamma}=\rho(\gamma^{-1})\,D^{A}\Phi\), which is yet another expression of the gauge principle.
Notice that, from the standpoint of gauge field theory, there is no way to distinguish between local gluings \(b^{\mathrm{q}}\) and the action \(b^{\gamma}\) of \(\mathcal{H}_{\mathrm{loc}}\), i.e. between passive gauge transformations and local active gauge transformations. The "gauge principle" therefore actually encapsulates two gauge principles: a passive and an active one. Yet, the conceptual meaning of each is quite different, as different as coordinate changes (a.k.a. passive diffeomorphisms) and (active) diffeomorphisms \(\mathrm{Diff}(M)\) are in GR.
A theory being given by a Lagrangian \(L\), invariance under gluings, \(L(b)=L(b^{g})\), means that a gauge field theory is only sensitive to the intrinsic geometry of the bundle \(P\), and to the global objects \(\beta\) living on it. Invariance under \(\mathcal{H}_{\mathrm{loc}}\), \(L(b)=L(b^{\gamma})\), means that a gauge field theory is sensitive only to the geometry of the \(\mathrm{Aut}_{v}(P)\)-class of \(P\), i.e. to the \(\mathcal{H}\)-orbits of global objects. In the same way that the combination of the hole argument and the point-coincidence argument [17] clarified that \(\mathrm{Diff}(M)\)-invariance in GR encodes the _relational_ nature of spacetime, and of general relativistic physics more generally, arguably a combination of an "internal hole argument" and "internal point-coincidence argument" suggests that \(\mathcal{H}_{\mathrm{loc}}\)-invariance encodes the _relational_ character of the enriched spacetime (represented by the principal bundle) and of gauge field physics more generally. We will elaborate on this in a forthcoming work.
### Linear versions
All of the above naturally have linear versions. Consider a transition function \(g_{\tau}:U\to H\), so that \(\sigma^{\prime}_{\tau}=\sigma g_{\tau}\), s.t. \(g_{\tau=0}=\mathrm{id}_{H}\). The object \(\lambda=\frac{d}{d\tau}g_{\tau}\big{|}_{\tau=0}:U\to\mathrm{Lie}H\) is an infinitesimal transition function of the bundle \(P\). We define \(\delta_{\lambda}b:=\frac{d}{d\tau}\,b^{g_{\tau}}\big{|}_{\tau=0}\): It is the limit of the difference between the local representatives obtained via \(\sigma^{\prime}_{\tau}\) and \(\sigma\) respectively. Such an infinitesimal gluing can be called an infinitesimal passive gauge transformation.
A commutator \([\delta_{\lambda},\delta_{\lambda^{\prime}}]b=\delta_{[\lambda,\lambda^{\prime}]_{\mathrm{Lie}H}}b\) arises from the definition \([\delta_{\lambda},\delta_{\lambda^{\prime}}]b:=\frac{d}{ds}\,\frac{d}{d\tau}\,b^{\,g_{\tau}\,g^{\prime}_{s}\,g_{\tau}^{-1}\,g^{\prime\,-1}_{s}}\big{|}_{\tau=0}\big{|}_{s=0}\) - i.e. the commutator in \(H\). It is of course recovered when one has concrete expressions for the result of \(\delta_{\lambda}b\), considering \(\delta_{\lambda}\) as an even graded derivation on \(\Omega^{\bullet}(U)\) s.t. \([\delta_{\lambda},d]=0\), whose action on the \(b\)'s is defined by such expressions. The commutator is then of course \([\delta_{\lambda},\delta_{\lambda^{\prime}}]=\delta_{\lambda}\delta_{\lambda^{\prime}}-\delta_{\lambda^{\prime}}\delta_{\lambda}\). Let us illustrate.
From above, we obtain well-known relations, for \(A\in\mathcal{A}\) and \(a\in\Omega^{\bullet}_{\mathrm{tens}}(U,\rho)\): First, \(\delta_{\lambda}A:=\frac{d}{d\tau}\,A^{g_{\tau}}\big{|}_{\tau=0}=d\lambda-\mathrm{ad}_{\lambda}A=d\lambda+[A,\lambda]=D^{A}\lambda\). Then, \(\delta_{\lambda}a:=\frac{d}{d\tau}\,a^{g_{\tau}}\big{|}_{\tau=0}=-\rho_{*}(\lambda)\,a\). As special cases we have \(\delta_{\lambda}F=-\mathrm{ad}_{\lambda}F=[F,\lambda]\), \(\delta_{\lambda}\Phi=-\rho_{*}(\lambda)\Phi\) and \(\delta_{\lambda}D^{A}\Phi=-\rho_{*}(\lambda)D^{A}\Phi\). The latter result is recovered, seeing \(\delta_{\lambda}\) as an even derivation, via \(\delta_{\lambda}D^{A}\Phi=d\delta_{\lambda}\Phi+[\delta_{\lambda}A,\Phi]+[A,\delta_{\lambda}\Phi]\). In the same algebraic way, it is easily verified on \(A\) and \(a\) that \([\delta_{\lambda},\delta_{\lambda^{\prime}}]A=\delta_{\lambda}\delta_{\lambda^{\prime}}A-\delta_{\lambda^{\prime}}\delta_{\lambda}A=D^{A}([\lambda,\lambda^{\prime}]_{\mathrm{Lie}H})=\delta_{[\lambda,\lambda^{\prime}]_{\mathrm{Lie}H}}A\), and that \([\delta_{\lambda},\delta_{\lambda^{\prime}}]a=\delta_{\lambda}\delta_{\lambda^{\prime}}a-\delta_{\lambda^{\prime}}\delta_{\lambda}a=-\rho_{*}([\lambda,\lambda^{\prime}]_{\mathrm{Lie}H})\,a=\delta_{[\lambda,\lambda^{\prime}]_{\mathrm{Lie}H}}a\). These computations are familiar in gauge field theory.
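These closure relations can be checked pointwise with random Lie-algebra-valued data standing in for the fields and their first derivatives, since only those enter. The following sketch (names and data ours, purely illustrative) encodes \(\delta_{\lambda}A_{\mu}=\partial_{\mu}\lambda+[A_{\mu},\lambda]\) and \(\delta_{\lambda}\Phi=-\lambda\Phi\) for field-independent parameters and verifies \([\delta_{\lambda},\delta_{\lambda^{\prime}}]=\delta_{[\lambda,\lambda^{\prime}]}\) on both.

```python
import numpy as np

# Pointwise check of [delta_lam, delta_lam'] A = delta_{[lam,lam']} A and the analogue on a
# matter field, with field-independent parameters. Random su(2)-like data stand in for the
# values and first derivatives of the fields at a point; all names are illustrative.
rng = np.random.default_rng(0)

def rand_alg():
    # random anti-Hermitian traceless 2x2 matrix (a stand-in for an su(2) value)
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    m = 0.5 * (m - m.conj().T)
    return m - 0.5 * np.trace(m) * np.eye(2)

com = lambda a, b: a @ b - b @ a

# Point data: A_mu, lam, lam', their partial derivatives, and phi.
A = [rand_alg() for _ in range(2)]
lam, lamp = rand_alg(), rand_alg()
dlam = [rand_alg() for _ in range(2)]
dlamp = [rand_alg() for _ in range(2)]
phi = rng.normal(size=2) + 1j * rng.normal(size=2)

for mu in range(2):
    d1 = dlam[mu] + com(A[mu], lam)     # delta_lam  A_mu
    d2 = dlamp[mu] + com(A[mu], lamp)   # delta_lam' A_mu
    lhs = com(d1, lamp) - com(d2, lam)  # [delta_lam, delta_lam'] A_mu
    rhs = (com(dlam[mu], lamp) + com(lam, dlamp[mu])  # d_mu [lam, lam']
           + com(A[mu], com(lam, lamp)))              # + [A_mu, [lam, lam']]
    print(np.allclose(lhs, rhs))

# On the matter field: [delta_lam, delta_lam'] phi = -[lam, lam'] phi
lhs_phi = lamp @ (lam @ phi) - lam @ (lamp @ phi)
print(np.allclose(lhs_phi, -com(lam, lamp) @ phi))
```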
Given the infinitesimal equivariance of elements of \(\mathrm{Lie}\mathcal{H}\) (see below (17)), making them tensorial \(0\)-forms for the \(\mathrm{Ad}\)-representation of \(H\), the \(\mathrm{Aut}_{v}(P)\simeq\mathcal{H}\)-transformation of \(Y\in\mathrm{Lie}\mathcal{H}\) is \(Y^{\gamma}:=\psi^{*}Y=\mathrm{Ad}_{\gamma^{-1}}Y=\gamma^{-1}Y\gamma\). The corresponding \(\mathrm{aut}_{v}(P)\simeq\mathrm{Lie}\mathcal{H}\)-transformation is \(L_{X^{v}}Y=X^{v}(Y)=-\mathrm{ad}_{X}Y=[Y,X]_{\mathrm{Lie}H}\). Thus, defining \(\xi:=\sigma^{*}X\) and \(\zeta:=\sigma^{*}Y\), we have locally: \(\zeta^{\gamma}=\mathrm{Ad}_{\gamma^{-1}}\zeta\) and the corresponding linearisation \(\delta_{\xi}\zeta:=\sigma^{*}(L_{X^{v}}Y)=[\zeta,\xi]_{\mathrm{Lie}H}\). We define the Lie algebra of the local gauge group as \(\mathrm{Lie}\mathcal{H}_{\mathrm{loc}}:=\{\xi,\zeta:U\to\mathrm{Lie}H\,|\,\delta_{\xi}\zeta=[\zeta,\xi]_{\mathrm{Lie}H}\}\).
Local active infinitesimal gauge transformations of \(b\) are defined as \(\delta_{\xi}b:=\sigma^{*}(L_{X^{v}}\beta)\). Here, \(\delta_{\xi}\) is immediately seen as an even derivation on \(\Omega^{\bullet}(U)\) s.t. \([\delta_{\xi},d]=0\), arising from the Lie derivative on \(P\) along \(X^{v}\in\operatorname{\mathfrak{aut}}_{v}(P)\). So, naturally, a Lie bracket arises by \([\delta_{\xi},\delta_{\zeta}]b:=\sigma^{*}([L_{X^{v}},L_{Y^{v}}]\beta)=\sigma^{*}(L_{(-[X,Y]_{\mathrm{Lie}H})^{v}}\beta)=-\delta_{[\xi,\zeta]_{\mathrm{Lie}H}}b\). See again below (17). This is of course cross-checked by writing the bracket as a commutator \([\delta_{\xi},\delta_{\zeta}]=\delta_{\xi}\delta_{\zeta}-\delta_{\zeta}\delta_{\xi}\), from explicit expressions for \(\delta_{\xi}b\) seen as defining the action of \(\delta_{\xi}\) on the \(b\)'s and by using the action \(\delta_{\xi}\zeta=[\zeta,\xi]_{\mathrm{Lie}H}\) of \(\operatorname{Lie}\mathcal{H}_{\mathrm{loc}}\) on its elements. Let us illustrate.
For \(A\in\mathcal{A}\) and \(a\in\Omega^{\bullet}_{\text{\rm tens}}(U,\rho)\), we obtain: \(\delta_{\xi}A:=\sigma^{*}(L_{X^{v}}\omega)=D^{A}\xi\) and \(\delta_{\xi}a:=\sigma^{*}(L_{X^{v}}\alpha)=-\rho_{*}(\xi)\,a\). As special cases we have \(\delta_{\xi}F=-\mathrm{ad}_{\xi}F=[F,\xi]\), \(\delta_{\xi}\Phi=-\rho_{*}(\xi)\,\Phi\) and \(\delta_{\xi}\,D^{A}\Phi=-\rho_{*}(\xi)\,D^{A}\Phi\). This last result is recovered algebraically by \(\delta_{\xi}\,D^{A}\Phi=d\delta_{\xi}\Phi+[\delta_{\xi}A,\Phi]+[A,\delta_{\xi}\Phi]\). In the same way, one easily shows: \([\delta_{\xi},\delta_{\zeta}]A=\delta_{\xi}\delta_{\zeta}A-\delta_{\zeta}\delta_{\xi}A=-D^{A}([\xi,\zeta]_{\mathrm{Lie}H})=-\delta_{[\xi,\zeta]_{\mathrm{Lie}H}}A\), and that \([\delta_{\xi},\delta_{\zeta}]a=\delta_{\xi}\delta_{\zeta}a-\delta_{\zeta}\delta_{\xi}a=\rho_{*}([\xi,\zeta]_{\mathrm{Lie}H})\,a=-\delta_{[\xi,\zeta]_{\mathrm{Lie}H}}a\).
We observe again that, from a gauge field theoretic perspective, \(\operatorname{Lie}\mathcal{H}_{\text{\rm{loc}}}\)-transformations are indistinguishable from infinitesimal gluings (apart from a sign difference in their commutators). Both can be encapsulated via the BRST framework [18]: There, one extends \(\Omega^{\bullet}(U)\) to the bigraded complex \(\Omega^{\bullet}(U,\rho)\otimes\wedge^{\bullet}c\), where \(c\) is the odd degree (Grassmann) \(\operatorname{Lie}H\)-valued ghost field, place-holder for the parameters \(\lambda\) or \(\xi\in\operatorname{Lie}\mathcal{H}_{\text{\rm{loc}}}\). The BRST differential \(s\), s.t. \(s^{2}=0\) and \(sd=-ds\), is introduced, playing the role of \(\delta_{\lambda}\) or \(\delta_{\xi}\). The action of \(s\) on \(b=\{A,a\}\) and \(c\) is by definition: \(sA:=-Dc=-dc-Ac-cA\) (a bigraded bracket is understood in \(Dc\)), \(sa:=-\rho_{*}(c)\,a\), and \(sc:=\,\nicefrac{{1}}{{2}}[c,c]_{\mathrm{Lie}H}\). The third relation enforces \(s^{2}=0\) on all fields, while the first two reproduce infinitesimal gauge transformations. This algebraic treatment, efficient as it is, erases the key conceptual difference between passive and active local gauge transformations, i.e. between gluings and \(\operatorname{Lie}\mathcal{H}_{\text{\rm{loc}}}\).
As we show below, the local version of general vertical transformations clearly distinguishes itself from local gluings. This opens the possibility for the BRST formalism to be modified accordingly, which we elaborate on in the conclusion.
### Generalised local active gauge transformations
Like \(\operatorname{Aut}_{\nu}(P)\simeq\mathcal{H}\), the group \(\operatorname{Diff}_{\nu}(P)\simeq C^{\infty}(P,H)\) cannot act on local representatives \(b\) of forms \(\beta\) on \(P\). Yet, one can define the local representatives of \(\beta^{\gamma}:=\psi^{*}\beta\) for \(\gamma\in C^{\infty}(P,H)\sim\psi\in\operatorname{Diff}_{\nu}(P)\).
As the equivariance of elements in \(C^{\infty}(P,H)\) is left unspecified, so are both their \(\operatorname{Aut}_{v}(P)\simeq\mathcal{H}\) and \(\operatorname{Diff}_{v}(P)\) transformations. We thus simply use the generic notation \(\eta^{\gamma}:=\psi^{*}\eta=\eta\circ R_{\gamma}\), for \(\gamma,\eta\in C^{\infty}(P,H)\). By (7) the composition law in \(\operatorname{Diff}_{v}(P)\) is thus represented by the element \(\gamma\,\eta^{\gamma}\in C^{\infty}(P,H)\).
We define \(\gamma:=\sigma^{*}\gamma\), and \(C^{\infty}(U,H):=\{\gamma:U\to H\}\) is the local version of the group of vertical transformations, so we may call it the group of _generalised local active gauge transformations_. Its action on local representatives is given by definition as \(b^{\eta}:=\sigma^{*}(\beta^{\eta})\). By (36), the iteration law is given by \((b^{\eta})^{\gamma}:=\sigma^{*}((\beta^{\eta})^{\gamma})=\sigma^{*}(\beta^{\gamma\,\eta^{\gamma}})=b^{\gamma\,\eta^{\gamma}}\). This again secures the consistency of the "field theoretic rule": Considering \(\gamma\) as acting on \(b^{\eta}\) as a concatenation of the fields \(b\) and \(\eta\), one may write \((b^{\eta})^{\gamma}=(b^{\gamma})^{\eta^{\gamma}}\), which results indeed in the concatenation \(b^{\gamma\,\eta^{\gamma}}\).
For example, by (34)-(35) and (36), the generalised gauge transformations of the gauge potential \(A\in\mathcal{A}\) and \(a\in\Omega^{\bullet}_{\text{\rm tens}}(U,\rho)\) are:
\[\begin{split}& A^{\eta}=\operatorname{Ad}_{\eta^{-1}}\!A+\eta^{-1}d \eta&\text{and}\quad(A^{\eta})^{\gamma}=\operatorname{Ad}_{( \gamma\,\eta^{\gamma})^{-1}}\!A+(\gamma\,\eta^{\gamma})^{-1}\!d(\gamma\,\eta^{ \gamma}),\\ & a^{\eta}=\rho(\eta)^{-1}a&\text{and}\quad(a^{\eta})^{ \gamma}=\rho(\gamma\,\eta^{\gamma})^{-1}a.\end{split} \tag{56}\]
The second line gives in particular the generalised transformations of the field strength \(F\), matter field \(\Phi\) and its covariant derivative \(D^{A}\Phi\). The latter is also found via the field-theoretic computation: \((D^{A}\Phi)^{\eta}=D^{A^{\eta}}\Phi^{\eta}\), idem for iterated transformations. This is noteworthy: It shows that the operator \(D:=d+\rho_{*}(A)\) preserves the covariance of \(a\), or \(\Phi\), and deserves the name "covariant derivative", even under the action of \(C^{\infty}(U,H)\), extending that of \(\mathcal{H}_{\text{\rm{loc}}}\). And this, as discussed at the end of section 3.2, despite the fact that \(A^{\eta}\) is not the local representative of a connection, since \(\omega^{\eta}\not\in\mathcal{C}\), and that \(a^{\eta}\) is not the local representative of a tensorial form, since \(\alpha^{\eta}\not\in\Omega^{\bullet}_{\text{\rm tens}}(P,\rho)\). This shows that a Gauge Principle, or Gauge Argument, applies still to generalised gauge transformations.
A Lagrangian invariant under gluings, \(L(b^{g})=L(b)\), will then not only be invariant under \(\mathcal{H}_{\mathrm{loc}}\) as previously mentioned, but also under \(C^{\infty}(U,H)\): \(L(b^{\gamma\,\eta^{\gamma}})=L(b^{\eta})=L(b)\). Gauge field theories thus naturally enjoy a larger group of gauge symmetries, arising from \(\operatorname{Diff}_{v}(P)\).
Let us consider the infinitesimal counterpart of the above. The Lie algebra of generalised gauge transformations is the local version of \(C^{\infty}(P,\mathrm{Lie}H)\) equipped with the extended bracket (16). Given that the infinitesimal equivariance of elements of \(C^{\infty}(P,\mathrm{Lie}H)\) is left unspecified, so are their \(\mathrm{Aut}_{v}(P)\simeq\mathcal{H}\) and \(\mathrm{Diff}_{v}(P)\) transformations. We thus have, for the action of \(\gamma\in C^{\infty}(P,H)\) on \(Y\in C^{\infty}(P,\mathrm{Lie}H)\), the generic notation \(Y^{\gamma}:=\psi^{*}Y\). Correspondingly, the action of \(X\in C^{\infty}(P,\mathrm{Lie}H)\) is noted \(L_{X^{v}}Y=X^{v}(Y)\). Thus, defining \(\xi:=\sigma^{*}X\) and \(\zeta:=\sigma^{*}Y\), we have locally: \(\zeta^{\gamma}:=\sigma^{*}(Y^{\gamma})\) and the corresponding linearisation \(\delta_{\xi}\,\zeta:=\sigma^{*}(L_{X^{v}}Y)\). The Lie algebra of infinitesimal generalised gauge transformations is then \(C^{\infty}(U,\mathrm{Lie}H)\) equipped with the local Frolicher-Nijenhuis bracket (16)-(29):
\[[\xi,\zeta]=[\xi,\zeta]_{\mathrm{Lie}H}+\delta_{\xi}\,\zeta-\delta_{\zeta}\,\xi, \tag{57}\]
We may call this a generalised gauge algebra.
Local infinitesimal generalised gauge transformations of \(b\) are defined by \(\delta_{\xi}b:=\sigma^{*}(L_{X^{v}}\beta)\), given (47), where \(\delta_{\xi}\) is immediately seen as an even derivation on \(\Omega^{\bullet}(U)\), s.t. \([\delta_{\xi},d]=0\), arising from the Nijenhuis-Lie derivative on \(P\) along \(X^{v}\in\mathfrak{diff}_{v}(P)\). Naturally, a Lie bracket for the \(\delta_{\xi}\)'s arises via \([\delta_{\xi},\delta_{\zeta}]\,b:=\sigma^{*}([L_{X^{v}},L_{Y^{v}}]\,\beta)\), i.e. from the commutator of the Nijenhuis-Lie derivatives (17), which by (48) gives:
\[[\delta_{\xi},\delta_{\zeta}]\,b=\delta_{[\xi,\zeta]}\,b. \tag{58}\]
Local fields are representations for the generalised gauge algebra. This is obtained algebraically too, writing the commutator \([\delta_{\xi},\delta_{\zeta}]=\delta_{\xi}\delta_{\zeta}-\delta_{\zeta}\delta _{\xi}\), and using explicit expressions for \(\delta_{\xi}b\) seen as defining the action of \(\delta_{\xi}\) on the \(b\)'s, as well as the action \(\delta_{\xi}\zeta\) of \(C^{\infty}(U,\mathrm{Lie}H)\) on its elements.
The illustration with \(A\in\mathcal{A}\) and \(a\in\Omega^{\bullet}_{\mathrm{tens}}(U,\rho)\) is formally as before: we obtain from (49)-(53),
\[\delta_{\xi}A:=\sigma^{*}(L_{X^{v}}\omega)=D^{A}\xi,\quad\text{and}\quad\delta_{\xi}a:=\sigma^{*}(L_{X^{v}}\alpha)=-\rho_{*}(\xi)\,a. \tag{59}\]
As special cases of the second equation we have \(\delta_{\xi}F=-\mathrm{ad}_{\xi}F=[F,\xi]\), \(\delta_{\xi}\Phi=-\rho_{*}(\xi)\,\Phi\) and \(\delta_{\xi}\,D^{A}\Phi=-\rho_{*}(\xi)\,D^{A}\Phi\). The last result is recovered algebraically by \(\delta_{\xi}\,D^{A}\Phi=d\delta_{\xi}\Phi+[\delta_{\xi}A,\Phi]+[A,\delta_{\xi}\Phi]\). In the same algebraic way, one checks the commutator on \(a\):
\[[\delta_{\xi},\delta_{\zeta}]\,a =-\delta_{\xi}\,\rho_{*}(\zeta)\,a+\delta_{\zeta}\,\rho_{*}(\xi)\,a,\]
\[=-\rho_{*}(\delta_{\xi}\,\zeta)\,a+\rho_{*}(\zeta)\rho_{*}(\xi)\,a+\rho_{*}(\delta_{\zeta}\,\xi)\,a-\rho_{*}(\xi)\rho_{*}(\zeta)\,a,\]
\[=-\rho_{*}\big{(}[\xi,\zeta]_{\mathrm{Lie}H}+\delta_{\xi}\,\zeta-\delta_{\zeta}\,\xi\big{)}\,a, \tag{60}\]
\[=-\rho_{*}([\xi,\zeta])\,a,\]
\[=\delta_{[\xi,\zeta]}\,a.\]
This is the local version of (55). Similarly, one finds:
\[[\delta_{\xi},\delta_{\zeta}]\,A=\delta_{\xi}\delta_{\zeta}A-\delta_{\zeta} \delta_{\xi}A=D^{A}([\xi,\zeta])=\delta_{[\xi,\zeta]}\,A, \tag{61}\]
as the local counterpart of (52).
The bracket (58) is found in the physics literature, notably in the covariant phase space literature, as pointed out already. It is often derived heuristically: Terms like \(\delta_{\xi}\,\zeta\) are assumed to arise because of a field-dependence of the gauge parameters, i.e. \(\zeta=\zeta(b)=\zeta(A,\Phi)\) - itself arising either because of gauge fixing or because boundary conditions must be preserved. As we have demonstrated in this paper, such is not necessarily the case.
Yet, there is indeed a geometric arena where field-dependent parameters \(\zeta=\zeta(b)\) arise naturally: when one considers the field space \(\Phi=\{b\}\) of a gauge theory as a principal bundle with structure group \(\mathcal{H}_{\mathrm{nc}}\). The gauge group \(\mathbf{Aut}_{v}(\Phi)\) of \(\Phi\), or its group of vertical diffeomorphisms \(\mathbf{Diff}_{v}(\Phi)\), are the geometric underpinning of the notion of field-dependent gauge transformations. Then, objects like \(\zeta=\zeta(b)\) belong to \(C^{\infty}(\Phi,\mathrm{Lie}\mathcal{H}_{\mathrm{nc}})\simeq\mathfrak{diff}_{v}(\Phi)\), and (58)-(57) arise from the Nijenhuis-Lie derivative and Frolicher-Nijenhuis bracket on \(\Phi\). See [11] for a detailed treatment of the case where \(\mathrm{Diff}(M)\) is the structure group of \(\Phi\).
## Conclusion
In this paper, we have detailed the global and local geometry arising from the group of vertical diffeomorphisms \(\mathrm{Diff}_{\nu}(P)\simeq C^{\infty}(U,H)\) of a principal bundle \(P\). Notably, we have shown that its Lie algebra is realised by the Nijenhuis-Lie derivative: this shows that the extended bracket often encountered in the gauge field theory literature, mainly on gravity [7; 8] and its asymptotic symmetries (BMS and extensions) [1; 2; 3; 4; 5; 6], is but an instance of the Frolicher-Nijenhuis bracket.
The local counterpart \(C^{\infty}(U,H)\) of the general vertical transformations of \(P\), that we called generalised local active gauge transformations, is manifestly distinct from the local gluings, contrary to standard local active gauge transformations \(\mathcal{H}_{\mathrm{loc}}\) arising from vertical automorphisms \(\mathrm{Aut}_{\nu}(P)\). In consequence, since the usual BRST framework [18] indistinguishably encodes both infinitesimal gluings and Lie\(\mathcal{H}_{\mathrm{loc}}\), one may inquire as to the possibility that it must be adjusted to encode \(C^{\infty}(U,\mathrm{Lie}H)\). It is known that BRST cohomology is just the Chevalley-Eilenberg (CE) cohomology of \(\mathrm{Lie}\mathcal{H}_{\mathrm{loc}}\), or \(\mathcal{H}\simeq\mathrm{Aut}_{\nu}(\mathcal{P})\), with coefficients in \(\Omega^{\bullet}(U)\) (or local polynomials in the fields \(\{b\}=\{A,\phi,\cdots\}\)) [19; 20]. Perhaps the extended BRST framework is just the CE-cohomology of \(C^{\infty}(U,H)\simeq\mathrm{Diff}_{\nu}(P)\). We will investigate this further elsewhere.
As we observed, most available treatments of bundle geometry do not mention \(\mathrm{Diff}_{\nu}(P)\). A good reason for this may be that, since its elements generically do not commute with the right action of the structure group \(H\), stricto sensu these are not bundle automorphisms: i.e. they are not morphisms in the category of principal bundles. They are not "relevant structures" from a categorical perspective. Indeed, the action of \(\mathrm{Diff}_{\nu}(P)\) on objects usually well defined for that category, such as the spaces of connections, equivariant and tensorial forms, is "problematic" (it doesn't preserve those spaces) even if definable and possible to work out. Still, these are maps from a bundle to itself, preserving the fibration. One may thus enquire as to what category these are natural arrows of.
At any rate, this is not an issue for local gauge field theory, which can accommodate this extended notion of gauge transformation: a gauge argument still carries through, and the covariant derivative still does its job of preserving covariance under the action of \(C^{\infty}(U,H)\). In a companion paper [11], we show how the finite-dimensional geometry exposed here may extend to the infinite-dimensional bundle geometry of the field space \(\Phi=\{b\}\) of a gauge theory: there, it further clarifies the literature mentioned above.
## Acknowledgment
This work was funded by the OP J.A.C MSCA grant, number CZ.02.01.01/00/22_010/0003229, co-funded by the Czech government Ministry of Education, Youth & Sports and the EU. This research was also funded in part by the Austrian Science Fund (FWF), [P 36542]. Support from the service _Physics of the Universe, Fields and Gravitation_ at UMONS (BE) is also acknowledged.
|
2308.09392 | Attacking logo-based phishing website detectors with adversarial
perturbations | Recent times have witnessed the rise of anti-phishing schemes powered by deep
learning (DL). In particular, logo-based phishing detectors rely on DL models
from Computer Vision to identify logos of well-known brands on webpages, to
detect malicious webpages that imitate a given brand. For instance, Siamese
networks have demonstrated notable performance for these tasks, enabling the
corresponding anti-phishing solutions to detect even "zero-day" phishing
webpages. In this work, we take the next step of studying the robustness of
logo-based phishing detectors against adversarial ML attacks. We propose a
novel attack exploiting generative adversarial perturbations to craft
"adversarial logos" that evade phishing detectors. We evaluate our attacks
through: (i) experiments on datasets containing real logos, to evaluate the
robustness of state-of-the-art phishing detectors; and (ii) user studies to
gauge whether our adversarial logos can deceive human eyes. The results show
that our proposed attack is capable of crafting perturbed logos subtle enough
to evade various DL models-achieving an evasion rate of up to 95%. Moreover,
users are not able to spot significant differences between generated
adversarial logos and original ones. | Jehyun Lee, Zhe Xin, Melanie Ng Pei See, Kanav Sabharwal, Giovanni Apruzzese, Dinil Mon Divakaran | 2023-08-18T08:49:11Z | http://arxiv.org/abs/2308.09392v2 | # Attacking logo-based phishing website detectors with adversarial perturbations
###### Abstract
Recent times have witnessed the rise of anti-phishing schemes powered by deep learning (DL). In particular, logo-based phishing detectors rely on DL models from Computer Vision to identify logos of well-known brands on webpages, to detect malicious webpages that imitate a given brand. For instance, Siamese networks have demonstrated notable performance for these tasks, enabling the corresponding anti-phishing solutions to detect even "zero-day" phishing webpages. In this work, we take the next step of studying the robustness of logo-based phishing detectors against adversarial ML attacks. We propose a novel attack leveraging generative adversarial perturbations to craft "adversarial logos" that, with no knowledge of phishing detection models, can successfully evade the detectors. We evaluate our attacks through: (i) experiments on datasets containing real logos, to evaluate the robustness of state-of-the-art phishing detectors; and (ii) user studies to gauge whether our adversarial logos can deceive human eyes. The results show that our proposed attack is capable of crafting perturbed logos subtle enough to evade various DL models--achieving an evasion rate of up to 95%. Moreover, users are not able to spot significant differences between generated adversarial logos and original ones.
Keywords: Phishing · Adversarial Machine Learning · Deep Learning
## 1 Introduction
Phishing attacks are on the rise [2], and they represent a serious threat to both organizations and individuals alike. While there have been numerous research efforts to counter this long-running security problem [25, 56, 30, 31], a universal solution against phishing has yet to be found, as new ways to lure unaware victims keep emerging [3]. We focus on the problem of detecting phishing _websites_, which has witnessed a 61% increase in 2022 [6].
The first line of defense against phishing websites is represented by blocklists, which are nowadays leveraged at scale [29]. Unfortunately, such rule-based countermeasures only work against the phishing entries in the blocklist, and attackers are well-aware of this (for a recent report, see [4]). To protect users against
evolving phishing websites, current anti-phishing schemes are now equipped with data-driven methods that detect malicious webpages by leveraging some heuristics [5]. In particular, the constant progress and successes of _machine learning_ (ML) algorithms in research [51, 57] led to the integration of ML-based phishing detectors also in popular browsers [33].
There are various ways in which ML is used to identify phishing websites, depending on the input analyzed by the ML model [22]: URL (e.g., [53, 30]), HTML contents (e.g., [56, 57, 32]), or visual representations (e.g., [20, 7]) of a webpage. Detection methods based on visual analytics are now receiving much attention (e.g., [20, 19, 7, 34, 28, 35]), likely due to the tremendous advancements in deep learning (DL). In this work, we delve into the application of DL for _logo-based_ phishing website detection--a state-of-the-art approach5 that is _(i)_ considered in recent researches (e.g., [19, 28, 34, 35]), and _(ii)_ deployed in practice [11].
Footnote 5: **Background:** in simple terms, logo-based phishing detection seeks to identify those (malicious) webpages that attempt to imitate a well-known brand. Intuitively, if a given webpage has the logo of a well-known brand (e.g., PayPal), but the domain does not correspond to the same brand (e.g., www.p4y-p4l.com), the webpage is classified as phishing. Though these approaches require maintenance of a database of logos for brands, such a task is not impractical given that the number of brands targeted by attackers is typically small (\(\approx 200\)) [7, 18, 34].
In logo-based detection, the first task is to extract the logo(s) from a webpage (typically from its screenshot); the subsequent task is to identify the brand of the logo. The latter task can be accomplished by means of DL today, as demonstrated by recent works, e.g., by employing Siamese neural networks [34, 35]. Given the relevance of these solutions in anti-phishing schemes, we scrutinize the robustness of DL models for logo identification against subtle adversarial perturbations. Even though many efforts in the DL community reveal the vulnerability of image classification models to adversarial examples [50, 26, 43, 38], to the best of our knowledge, there exists no work that studies the vulnerability of logo-based phishing detectors against such sophisticated attacks. Therefore, besides the Siamese models proposed by prior work, we also develop two new logo-identification solutions based on state-of-the-art transformer models from Computer Vision--namely, Vision Transformer ViT[23] and Swin[36].
Subsequently, we propose a novel attack using _generative adversarial perturbations_ (GAP) [43], to craft adversarial logos that simultaneously deceive _(i)_ DL models for logo identification, and _(ii)_ human users, i.e., potential victims. Through a comprehensive experimental study based on datasets of real logos, we demonstrate the quality of our proposed DL models for logo identification and the efficacy of the adversarial logos generated by our GAP attack to evade all three powerful models for logo identification (Siamese, ViT and Swin).
Finally, we carry out two user studies to assess the impact of our attack on real humans. We summarise our three major contributions:
1. We propose _a novel attack_, based on generative adversarial perturbations (GAP), against logo-based anti-phishing schemes (Section 4). Our proposed attack treats a phishing detection (specifically, logo-identification) model as a black-box and does not require any model-specific information.
2. We propose _two new logo-identification solutions_ leveraging transformer-based DL models: ViT and Swin (Section 3). We empirically demonstrate that both ViT and Swin achieve performance comparable to the state-of-the-art solutions relying on Siamese models [34; 35] (Section 5.3).
3. Through a reproducible evaluation on real data, we _evaluate the robustness of three DL models for logo-identification_ (ViT, Swin, Siamese) against our GAP-based attack (Section 5.4). We further validate the _impact of our attack on real humans_ through a user study entailing \(\sim\)250 people (Section 6).
We suggest potential countermeasures against our attack, and also discuss ways that attackers can use to circumvent such countermeasures (Section 7). Finally, we publicly release our resources to the scientific community [1].
## 2 Threat model
We describe the threat model by first summarizing the functionality of the target system, and then presenting the characteristic of our envisioned attacker.
### Target system: Logo-based phishing website detectors
Fig. 1 presents the general workflow of logo-based phishing detection systems. From a given webpage, the detection system first extracts the logo as an image; then, it identifies the brand the logo belongs to by using a discriminator. Such a discriminator can be implemented in various ways, e.g., earlier works employed methods based on SIFT (scale-invariant feature transformation) [9; 54]; however, current state-of-the-art methods use DL models [34; 16; 35], and we focus on these. Upon identifying the brand of a logo, the system determines if the webpage is legitimate or not by comparing the webpage's domain with the domain of the brand associated with the logo.
Since logo-identification is a multi-class classification problem, the DL model is trained on a static set of classes, i.e., the brands of the logos. Such a set of _protected brands_ determines the size of the prediction classes; one brand may have multiple logos. Previous research has shown that 99% of the attacks target less than 200 brands [7; 34; 35].
In practice, phishing detectors must exhibit low false-positive rates (FPR), typically below \(10^{-3}\)[31; 44]. To successfully detect phishing webpages while maintaining low FPR, logo-based detectors follow two principles [34]: _(a)_ the
Figure 1: Detection process of logo-based phishing detection systems
highest predicted class is decided as the target brand _if and only if_ the prediction probability is greater than a predefined _decision threshold_ (say, \(\theta\)); _(b)_ if the identified logo does not belong to any brand in the protected set, the webpage is considered benign to avoid triggering false positives (see Fig. 1). Unfortunately, these principles can be maliciously exploited: by lowering the prediction probability, it is possible to evade logo-based phishing detectors.
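For concreteness, the two decision principles can be condensed into the following sketch; the function and variable names, and the default threshold value, are illustrative choices of ours rather than details of any released implementation.

```python
def classify_webpage(logo_probs, page_domain, brand_domains, theta=0.8):
    """Final decision step of a logo-based phishing detector (illustrative sketch).

    logo_probs    : dict mapping each protected brand to its predicted probability
    page_domain   : domain hosting the analysed webpage
    brand_domains : dict mapping each protected brand to its legitimate domains
    theta         : decision threshold on the top-1 probability (assumed value)
    """
    brand, prob = max(logo_probs.items(), key=lambda kv: kv[1])
    if prob < theta:
        # Principle (b): the logo is not confidently matched to any protected brand.
        return "benign"
    if page_domain in brand_domains[brand]:
        return "benign"    # the domain is consistent with the identified brand
    return "phishing"      # a protected brand's logo is shown on a foreign domain
```

An attacker who pushes every per-brand probability below \(\theta\) never reaches the domain comparison at all, which is precisely the behaviour exploited by the attack introduced later.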
### Attack: Adversarial logos
The basic intuition behind our attack is to create an _adversarial logo_ that is _(i)_ minimally altered w.r.t. its original variant (to deceive the human eye); and that _(ii)_ misleads the phishing detector. Let us describe our attacker by using the well-known notion of adversarial ML attacks [17, 11].
* Goal: The attacker wants to craft an adversarial logo related to brand \(b\) which evades the phishing detector (at inference) while deceiving human eyes.
* Knowledge and Capabilities: To train a model for evasion, an attacker can collect authentic logos of any brand (e.g., of PayPal), via crawling or from public datasets (e.g., _Logo2K+_[55]). The attacker knows that their victims are protected by a logo-based phishing detector powered by ML. The attacker has a way to infer the decision result of the phishing detector (this is doable even if the detector is "invisible" [11], e.g., by inspecting visits to the hosted phishing webpage). The attacker does not i) require knowledge of the logo-identification model employed by the phishing detector, ii) manipulate the data used to train the ML model. In other words, the attack is neither a white-box attack nor does it rely on data poisoning. Note, the attacker targets a set of brands for phishing; if the targeted brand is not within the protected set, then that is already favorable for an attacker--there is no perturbation required! Finally, the attacker naturally has control over their phishing webpages.
* Strategy: The attacker manipulates the logo(s) of brand \(b\) in their phishing webpages by introducing perturbations so that the logo-identification model predicts with lower confidence, i.e., the probability of the logo being of any brand is lower than the decision threshold (\(\theta\)). This way, the phishing detector decides the logo _not_ to be one of the protected brands, which makes way for successful evasion.
#### 2.2.1 Scope of attack.
In our threat model, the attacker exploits the vulnerability of _logo-identification_ methods integrated into phishing detectors. We focus on logo-identification DL models because they are i) state-of-the-art research with phishing detecting capability in the wild ('zero-day' phishing) [34, 35], and ii) used in commercial phishing detectors [11]. Threats against logo extraction from a webpage, however interesting, are not within the scope of our current work. Lastly, we do not consider attacks to make an unknown logo be identified as one of the protected logos, as that is not beneficial for the attacker.
## 3 Deep Learning for Logo-based Phishing Detection
Development of the transformer architecture [52] paved the way for various state-of-the-art language models, such as BERT, ChatGPT, and PaLM. Dosovitskiy et al. [23] applied transformer to Computer Vision tasks with the introduction of Vision Transformer (ViT), demonstrating state-of-the-art performance on benchmark datasets [23]. The attention mechanism in transformers allows them to capture local and global contextual information effectively, resulting in superior performance on large-scale image classification tasks. This capability is also beneficial for logo identification, since logos of the same brand, while being visually distinct, share the same inherent design structure. Therefore, in this work, we propose, develop and evaluate two transformer-based models, ViT and Swin, for logo identification. To the best of our knowledge, we are the first to leverage transformers for logo-based phishing detection.
We now describe our proposed ViT (Section 3.1) and Swin (Section 3.2), for which we provide an overview in Figs 2 and 3. Then, we present our own implementation of Siamese (Section 3.3) neural networks. Altogether, these three DL models will represent the target of our attacks (Section 5).
### ViT for logo identification
As illustrated in Fig. 2, we develop a logo-identification model by fine-tuning a pre-trained ViT-base model [23] on our dataset (which we discuss in Section 5.1). The model takes as input an image of size \(3\times 224\times 224\). The image is then split into patches, each of size \(16\times 16\), for further processing. Each patch is then linearly embedded into a vector of size \(1\times 768\). An additional classification token is then added to the linear embedding to form an embedded vector of size \(197\times 768\). The embeddings are positionally encoded before being fed into the transformer encoder. Finally, a fully connected layer takes the output from the encoder and maps it to a 2-dimensional space. The resulting logits are passed through a softmax layer to produce the final prediction probabilities for each class (logo). We denote this new logo-identification model as \(\mathcal{D}_{\texttt{ViT}}\).
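A minimal fine-tuning sketch in PyTorch is given below; the use of the torchvision ViT-B/16 weights, the `ImageFolder` data layout and the optimiser settings are our illustrative assumptions rather than details stated above (the same pattern applies to Swin by loading `swin_s` instead of `vit_b_16`).

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_BRANDS = 181  # size of the protected-brand set

# Pre-trained ViT-B/16 with its classification head replaced for logo brands.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_BRANDS)

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Assumed layout: one sub-folder of logo images per protected brand.
train_set = datasets.ImageFolder("logos/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

optimiser = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown; Section 5.3 fine-tunes for 200 epochs
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```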
### Swin for logo identification
Next, we propose Swin-based logo-identification model that utilizes the Swin transformer, a hierarchical transformer architecture introduced by Liu et al. [36].
Unlike ViT, Swin uses shifted windows to efficiently compute local self-attentions and build hierarchical feature maps through patch merging techniques. As illustrated in Fig. 3, each window contains multiple non-overlapping patches, and each transformer block in the Swin architecture contains two attention layers: a window-based multi-head self-attention (W-MSA) layer that calculates local attention within a specific window, and a shifted window-based multi-head self-attention (SW-MSA) layer that introduces cross-window connections. This approach allows for more efficient computation while still extracting both local and global contextual information.
In our implementation, we use the Swin-Transformer-Small architecture proposed by Liu et al. [36]. The model takes an input image of size \(3\times 224\times 224\), which is split into patches of size \(4\times 4\). As depicted in Figure 3, the patches are fed sequentially into four encoding stages consisting of 2, 2, 18, and 2 encoder blocks. Each encoding stage merges and downsamples the size of the feature maps by a factor of two, while doubling the number of channels.
The final feature map, of size \(7\times 7\), is transformed by a fully connected layer followed by a softmax to obtain the output predictions. We denote this model as \(\mathcal{D}_{\text{Swin}}\).
### Siamese and Siamese++ for logo identification
The Siamese neural network is a state-of-the-art approach for image-based phishing detection, used both for comparing screenshots [7] and logos [34, 16, 35]. In logo-based phishing detectors, Siamese models measure the similarity of a given logo to those in the protected set. We train a Siamese model as proposed in Phishpedia [34] and PhishIntention [35], utilizing a transfer learning approach. Specifically, we train a logo classification model with the ResNetV2 network as the backbone, which effectively extracts different features from various logo variants. We then connect the trained ResNetV2 network to a Global Average Pooling layer to output a vector for any given logo. The learned vector representation is compared to those of the logos of protected brands using cosine similarity; the target with the highest similarity is identified as the brand the logo is trying to imitate.
We refer to our implementation of the Siamese model as \(\mathcal{D}_{\text{Siamese}}\). Additionally, Phishpedia [34] proposed an adversary-aware detector by replacing the ReLU activation function with a variant called step-ReLU (Appendix A). We also consider this robust version of Siamese, which we refer to as \(\mathcal{D}_{\text{Siamese++}}\).
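The matching step of the Siamese pipeline can be summarised by the sketch below, where `embed` stands for the trained backbone (ResNetV2 followed by global average pooling); the reference-embedding tensor and the similarity cut-off are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def identify_brand(logo_img, embed, protected_embs, protected_brands, sim_threshold=0.8):
    """Return the protected brand whose reference logo is most similar, or None.

    embed            : callable mapping a logo image to a 1-D feature vector
    protected_embs   : (k, d) tensor of reference-logo embeddings, one row per variant
    protected_brands : list of length k giving the brand of each reference row
    sim_threshold    : assumed cut-off playing the role of the decision threshold
    """
    v = F.normalize(embed(logo_img).flatten(), dim=0)
    refs = F.normalize(protected_embs, dim=1)
    sims = refs @ v                      # cosine similarity to every reference logo
    best = int(torch.argmax(sims))
    if sims[best] < sim_threshold:
        return None                      # treated as an unknown brand -> benign
    return protected_brands[best]
```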
## 4 Our Attack: Adversarial Logos
While recent logo-based phishing detection systems [34, 35] have demonstrated robustness against generic gradient-based attacks such as FGSM [26] and DeepFool [39],6 their resilience against more sophisticated adversarial attacks proposed in the literature [43, 38] remains unexplored. To this end, we propose a
DL-based generative framework inspired by Generative Adversarial Perturbations (GAP) [43], that specifically trains against logo identification models. This framework generates perturbation vectors that can be added to a target logo image, allowing the perturbed logo to evade phishing detection while remaining imperceptible to the human eye. We now describe our framework at a high-level (Section 4.1), for which we provide an overview in Fig. 4; and then provide low-level details on how to practically implement our attacks (Section 4.2).
### Framework: generative adversarial perturbations for logos
As illustrated in Fig. 4, our framework involves training a Generator that learns to generate perturbations. When added to a logo image, these perturbations can mislead a logo-identification model, which acts as the Discriminator, into lowering its prediction probability below the decision threshold. During the training process, the weights of the Discriminator are frozen, treating it as a black box to guide the training of the Generator.
**Generator.** We employ a Deep Residual Network with six residual blocks (ResNet-6) [27] as the core architecture of our Generator. Given a legitimate logo image as input, the Generator is trained to generate a _perturbation vector_. The generated perturbations undergo a _Scaling and Clipping_ stage. In this stage, the perturbation vector is first scaled and normalized based on the \(L_{\infty}\) norm to control the magnitude of the perturbations, so that they remain imperceptible to human viewers. Subsequently, the normalized perturbations are added pixel-wise to the legitimate logo image, resulting in the adversarial logo.
**Discriminator.** The Discriminator is a pre-trained multi-class classifier designed to process a logo image and estimate the probability that the image belongs to a target brand in the protected set. In our framework, we select one of the logo-identification models described in Section 3 to serve as the Discriminator.
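The scaling-and-clipping stage described above amounts to only a few lines; the sketch below assumes pixel values in \([0,1]\) and an \(L_{\infty}\) budget `eps`, both of which are illustrative choices.

```python
import torch

def apply_perturbation(logo, raw_delta, eps=8 / 255):
    """Scale a raw Generator output to an L-infinity budget and add it to the logo.

    logo      : (3, H, W) tensor with pixel values in [0, 1] (assumed range)
    raw_delta : Generator output of the same shape
    eps       : maximum per-pixel perturbation magnitude (assumed budget)
    """
    # Normalise so that the largest absolute perturbation equals eps.
    delta = raw_delta * (eps / raw_delta.abs().max().clamp(min=1e-12))
    return (logo + delta).clamp(0.0, 1.0)  # keep the result a valid image
```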
### Implementation
We utilize the pre-trained Discriminator as a black box to assess the effectiveness of the Generator in crafting adversarial logo images. The Discriminator predicts the probability of a given logo belonging to each of the \(k\) protected brands; \(\mathbf{V}_{\text{true}}:[p_{1},p_{2},p_{3},\ldots,p_{k}]\), where \(\sum_{i=1}^{k}p_{i}=1\). As mentioned in Section 2.1, for a webpage to be classified as phishing, the logo-identification model must
Figure 4: Generative adversarial perturbation workflow
confidently identify the logo as one of the target brands \(i\) from the protected set, with a probability \(p_{i}\) greater than the phishing detector's decision threshold \(\theta\).
Hence, to devise our Generator, we introduce a target probability \(p_{\text{adversarial}}\), such that \(p_{\text{adversarial}}<\theta\). The Generator is trained to craft adversarial logos that are classified with probabilities lower than \(p_{\text{adversarial}}\) for all of the protected brands, so as to evade phishing detection. Empirically, we observe that \(\theta\) is very high (above 0.8) for all discriminators, and for our attacks, \(p_{\text{adversarial}}\) can be much lower (in our experiments, it is 0.5; see Table 3 in Appendix B).
To guide the training process, the Generator is trained with a target probability vector \(\mathbf{V}_{\text{target}}:[p_{1}^{\prime},p_{2}^{\prime},p_{3}^{\prime},\ldots,p_{ k}^{\prime}]\), where each element \(p_{i}^{\prime}\) is defined such that \(p_{i}^{\prime}=\min(p_{i},p_{\text{adversarial}})\). This ensures that the generated adversarial logos are classified with probabilities below the threshold \(\theta\) for all protected brands.
The loss function is defined as a decreasing function of the cross entropy \(\mathcal{H}(V_{\text{true}},V_{\text{target}})\) between the target probability vector \(\mathbf{V}_{\text{target}}\) and \(\mathbf{V}_{\text{true}}\). The specific form of the loss function can be expressed as follows:
\[\text{loss}=\log\left(\mathcal{H}\left(\mathbf{V}_{\text{true}},\mathbf{V}_{ \text{target}}\right)\right) \tag{1}\]
By minimizing this loss, the Generator learns to craft adversarial logos that evade phishing detection7; furthermore, the perturbations preserve the visual similarity with the original logo, thereby facilitating deception of the human eye.
Footnote 7: **Remark:** Our attack relies on the logos generated by the Generator, which in turn depend on a Discriminator, i.e., a DL model for identifying logos. However, the Discriminator_does not_ necessarily have to be the identical one used in the targeted phishing detection system: as our experiments show, our adversarial logos evade even DL models that have not been used to develop the Generator (by leveraging the well-known transferability property of adversarial examples [21]).
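For reference, the target vector and the loss of Eq. (1) can be written as in the following sketch; `v_true` is the Discriminator's probability vector for the logo under consideration, and since the exact form of the cross entropy is not spelled out above, the usual \(-\sum_{i}p_{i}\log p_{i}^{\prime}\) is assumed here.

```python
import torch

def gap_loss(v_true, p_adversarial=0.5, eps=1e-12):
    """Target vector and loss of Eq. (1) (illustrative sketch).

    v_true        : (k,) tensor of Discriminator probabilities [p_1, ..., p_k]
    p_adversarial : cap on the per-brand probability (0.5 in the experiments)
    """
    # p'_i = min(p_i, p_adversarial)
    v_target = torch.minimum(v_true, torch.full_like(v_true, p_adversarial))
    # Cross entropy H(V_true, V_target); the assumed form is -sum_i p_i log p'_i.
    h = -(v_true * torch.log(v_target + eps)).sum()
    return torch.log(h + eps)
```

During training this value is back-propagated through the frozen Discriminator into the Generator, as described in Section 4.1.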
## 5 Experimental evaluations
We now empirically assess the quality of our contributions. We begin by describing the datasets used for our experiments (Section 5.1), and introduce the metrics used for our performance assessment (Section 5.2). Then, we first show that our two DL models for logo-identification achieve state-of-the-art performance (Section 5.3), and then demonstrate that our attacks can evade all our considered logo-identification models (Section 5.4). Our code, dataset used, as well as generated perturbed logos are available at [1].
### Dataset
To evaluate the performance of logo-based phishing detectors and their robustness against generative adversarial perturbations, we use two sets of logo images:
* **L**, **Protected brands:** The logo image set of protected brands, **L**, consists of images of 181 brands which are identical to the brands used in Phishpedia [34]. According to the empirical observation in [34], 99% of phishing
pages target one of these 181 brands. For these protected brands, we collected 28 263 public logo images from search engines and Pawar's logo image dataset [42]. Each brand's logo has 100-200 variants.
* \(\mathbf{\tilde{L}}\)**, Unprotected brands:** Logo image set \(\mathbf{\tilde{L}}\) is the set of 2 045 images from 2 000 brands that do not belong to the brands in \(\mathbf{L}\). The image samples are from the _Logo2K+_ dataset, which is publicly available [55].
The data was collected in the second half of January 2023.
### Performance Metrics
In what follows, we denote the logo-identification models as discriminators; the attack generators also use the discriminators in their training phase.
Logo identification performance: We provide the definitions of metrics for logo-based phishing webpage detection. Note that, for a discriminator used for phishing detection, the positives are the logos in \(\mathbf{L}\), the protected brand list, that need to be identified. If the highest prediction probability of a logo is below a certain decision threshold, it is classified as an unknown brand.
* _True positive (TP)_: A TP in our evaluation denotes the case of correct brand identification of the given logo (of a protected brand) by the discriminator.
* _False positive (FP)_: An FP denotes the case when the given logo image is wrongly identified as one of the protected brands when in reality, the given logo image does not belong to the protected brand set.
* _True negative (TN)_: A TN occurs when the brand of the given logo is not in the protected brand set and gets correctly classified as an unknown brand.
* _False negative (FN)_: An FN denotes when the brand of the given logo belonging to the protected brand set is classified as any other brand.
Denoting the actual brand of a given logo \(l\) as \(l_{b}\), and the predicted brand by the discriminator as \(l_{p}\), we define the True Positive Rate (TPR) and False Positive Rate (FPR):
\[\text{TPR}=\frac{|(l_{b}=l_{p})\wedge(l_{p}\in\mathbf{L})|}{|l_{b}\in\mathbf{ L}|};\hskip 28.452756pt\text{FPR}=\frac{|(l_{p}\in\mathbf{L})\wedge(l_{b} \in\mathbf{\tilde{L}})|}{|l_{b}\in\mathbf{\tilde{L}}|}\quad(2)\]
Impact of the attacks: Recall that our attacker aims to fool the discriminator into classifying a protected brand logo as an unknown brand. Hence, we introduce the _Fooling ratio_, which is the rate of adversarial logos classified as being of an unknown brand (out of all the phishing logos). Formally:
\[\text{Fooling ratio}=\frac{|l_{p}\notin\mathbf{L}\wedge l_{b}\in\mathbf{L}|}{| l_{b}\in\mathbf{L}|} \tag{3}\]
Intuitively, a higher fooling ratio denotes an attack with a higher impact.
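These rates can be computed from per-logo predictions as in the sketch below, where a predicted brand of `None` encodes the "unknown brand" outcome (our convention).

```python
def tpr_fpr(records, protected):
    """TPR and FPR from (true_brand, predicted_brand) pairs on unperturbed logos.

    predicted_brand is None whenever the top-1 probability falls below the
    decision threshold, i.e. the logo is classified as an unknown brand.
    """
    tp = fp = pos = neg = 0
    for true_brand, pred in records:
        if true_brand in protected:      # positives: logos of protected brands (set L)
            pos += 1
            tp += (pred == true_brand)
        else:                            # negatives: logos of unprotected brands
            neg += 1
            fp += (pred is not None)
    return tp / pos, fp / neg


def fooling_ratio(adv_records, protected):
    """Share of perturbed protected-brand logos classified as an unknown brand."""
    fooled = total = 0
    for true_brand, pred in adv_records:
        if true_brand in protected:
            total += 1
            fooled += (pred is None or pred not in protected)
    return fooled / total
```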
### Baseline: Analysis of logo-identification models
We assess the performance of the four DL models for logo-identification presented in Section 3. Specifically, we first measure the TPR and FPR of the state-of-the-art discriminators (i.e., Siamese and its robust version Siamese\({}^{++}\)[34]), and compare them with the transformer-based discriminators that we proposed in this work (i.e., ViT and Swin).
Setup. We use the datasets \(\mathbf{L}\) and \(\mathbf{\bar{L}}\) (see Section 5.1), with a train:test split of 85:15. For ViT and Swin, we apply the common model head fine-tuning for 50 epochs and then transfer training on the entire networks for the next 150 epochs, reducing computational time while improving performance. We provide hyper-parameters configurations of our discriminators in Table 2 (in the appendix).
Results. Fig. 5(a) shows the ROC curves of the four discriminators (the x-axis denoting FPR is in log-scale for visibility). Overall, Siamese and Siamese\({}^{++}\) show the best performance in terms of logo identification. All four models show comparable TPRs at FPR above \(10^{-2}\). For practical purposes, however, we have to evaluate the detection capability at low FPRs [44, 22]. Observe that the TPR values of the discriminators ViT and Swin at FPR below \(10^{-2}\) are worse than those of the Siamese models. Fig. 5(b) shows the gap in TPR between the discriminators at the more practical FPR value of \(10^{-3}\); Siamese and Siamese\({}^{++}\) show around six and twelve percentage points higher TPR than ViT and Swin, respectively.
Although Swin and ViT are not better than Siamese, they still achieve an appreciable degree of performance, and hence are used to evaluate our attacks.
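At such operating points, the decision threshold is chosen so that the FPR measured on unprotected-brand logos does not exceed the target value; one simple way of doing this, assuming a strict "greater than" comparison at classification time, is sketched below.

```python
import numpy as np

def threshold_at_fpr(unprotected_scores, target_fpr=1e-3):
    """Pick a decision threshold giving at most target_fpr on unprotected logos.

    unprotected_scores : top-1 probabilities assigned by a discriminator to logos
                         from outside the protected set (the set L-tilde)
    """
    scores = np.sort(np.asarray(unprotected_scores))
    k = int(np.ceil(len(scores) * (1.0 - target_fpr)))
    k = min(k, len(scores) - 1)
    return scores[k]   # scores strictly above this value occur at rate <= target_fpr
```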
### Attack: evasiveness of adversarial logos, and computational cost
We quantitatively analyze the effects of adversarial logos generated by our attack against DL models for logo identification. We do this through a cross-evaluation that captures both 'white-box' and 'black-box' adversarial settings. At the end of this section, we also discuss the computational cost of our attacks.
Setup. Recall that our attack (Section 4) entails training a generator by using a given discriminator (i.e., DL models for identifying logos). For our experiments, we consider three discriminators: ViT, Swin and Siamese, thereby yielding three corresponding generators: \(\mathcal{G}_{\text{ViT}}\), \(\mathcal{G}_{\text{Swin}}\) and \(\mathcal{G}_{\text{Siamese}}\). After training each generator, we assess the adversarial logos against _all our discriminators_. Such an
Figure 5: Comparing discriminators for logo identification
evaluation protocol allows one to analyze the effects of our attacks when the adversary does not know the DL model used for the defense.
For evaluations, we train our generators on the dataset \(\mathbf{L}\); we provide the hyperparameters of our generators in Table 3 (Appendix B). Subsequently, we test the discriminators with the adversarial logos crafted by each generator.
Results. The results are plotted in Fig. 6, where we compare the fooling ratio of discriminators against the different attacker models for varying FPRs (in log-scale). It stands out that each discriminator is much weaker against the adversarial logos created by the 'matching' generator compared to those created by generators trained on different discriminators. For instance, from Fig. 6(a), we observe that the adversarial logos generated by \(\mathcal{G}_{\text{ViT}}\) are more effective against ViT (blue line) than against Swin (green line). We observe from Fig. 6(b) and Fig. 6(c) that, if the attacker's generator model is not trained with ViT, the fooling ratio drops significantly for the defender with the ViT discriminator.
From the adversary's perspective, ViT is the most effective generator against all discriminators. Fig. 6(d) compares the fooling ratios of the four discriminators at a fixed FPR of \(10^{-3}\); note, **fooling ratios against \(\mathcal{G}_{\text{ViT}}\) are high, ranging from 42% to 95%**. In other words, with \(\mathcal{G}_{\text{ViT}}\), at least 42% of attacker-generated logos can evade phishing detectors, independent of the discriminator deployed. Against such an attacker, the defender might prefer to use Siamese (or Siamese++) as it achieves the lowest fooling ratio (of around 42% at \(10^{-3}\) FPR). Interestingly, the most robust model for the defender against an _arbitrary_ generator model would be ViT, since, on average, ViT achieves a lower fooling ratio against all generator models.
Computational cost. Two factors contribute to the computation time to realize our adversarial logos: i) generator training and ii) perturbed logo generation. We measure the generator training time of the three models, i.e., ViT, Swin, and Siamese, per training epoch, as well as the number of epochs required to reach a compelling performance, i.e., a fooling ratio of 0.9 against the discriminator built with the corresponding model. The experiments are performed on a system with NVIDIA RTX3090 GPU, 2.8GHz 32-core AMD CPU, 80GB RAM with Python 3.8.10, and PyTorch 1.2.0 on Ubuntu 20.04 OS. We report the results in Table 1.
From this table, we observe an apparent gap between the models in their training time. While the ViT-based generator, \(\mathcal{G}_{\text{ViT}}\), takes only half the training
Figure 6: Comparison of different generators against different discriminators
time per epoch in comparison to \(\mathcal{G}_{\text{Swin}}\), it requires five times more training epochs to reach the same level of performance (i.e., a fooling ratio of 0.9). \(\mathcal{G}_{\text{Siamese}}\) shows significantly less overhead than the other two, in both training time per epoch and the number of epochs required. \(\mathcal{G}_{\text{Siamese}}\) accomplishes a fooling ratio of 0.9 against \(\mathcal{D}_{\text{Siamese}}\) after just one epoch of training, which takes only eight minutes. Overall, training \(\mathcal{G}_{\text{ViT}}\) takes 744 minutes to reach a 0.9 fooling ratio, which is around 2.8 and 93 times longer than \(\mathcal{G}_{\text{Swin}}\) and \(\mathcal{G}_{\text{Siamese}}\), respectively. Although there are significant differences in training times, when it comes to generating perturbed logos, all three generators take only around 0.7 seconds per image on average; this negligible cost allows an attacker to generate a large number of samples to test against a deployed phishing detector.
**Takeaways.** i) An attacker with knowledge of the discriminator used for defense achieves more than 95% fooling ratio with our adversarial generator. ii) In the absence of knowledge of the discriminator (i.e., independent of the discriminator), an attacker choosing \(\mathcal{G}_{\text{ViT}}\) as the generator achieves a fooling ratio of at least 42% against the defender (see Fig. 5(d)).
## 6 User study: do adversarial logos trick humans?
We now provide a complementary evaluation of our proposed attack. Specifically, we seek to investigate _if our adversarial logos can be spotted by humans_. Indeed, even if a phishing detector can be evaded, this would be useless if the human, the actual target of the phishing attack, can clearly see that something is "phishy". Hence, we carry out **two user-studies**, which we describe (Section 6.1) and discuss (Section 6.2) in the remainder of this section.
### Methodology
Our goal is to assess if the perturbations entailed in an adversarial logo can be recognized by humans. There are many ways to perform such an assessment through a user-study, each with its own pros and cons8.
Footnote 8: Designing bias-free user-studies in the phishing context is an open problem [48, 10].
We build our user-studies around a central research question (RQ): _given a pair of logos (i.e., an 'original' one, and an 'adversarial' one), can the human spot any difference?_ Our idea is to design a questionnaire containing multiple pairs of logos, and ask the participants to rate (through a 1-5 Likert scale) the similarity of the logos in each pair. Intuitively, if the results reveal that users
| | \(\mathcal{G}_{\text{ViT}}\) | \(\mathcal{G}_{\text{Swin}}\) | \(\mathcal{G}_{\text{Siamese}}\) |
| --- | --- | --- | --- |
| Avg. training time per epoch (min.) | 12 | 23 | 8 |
| No. of epochs for 0.9 fooling ratio | 62 | 12 | 1 |
| Training time for 0.9 fooling ratio (min.) | 744 | 277 | 8 |

Table 1: Training time for the perturbation generators
perceive the logos to be "different", then it would mean that our adversarial logos are not effective against humans.
To account for the fact that the responses we would receive are entirely subjective, we carry out (in April 2023) two quantitative user studies:
1. _Vertical Study_ (VS), which entails a small population (N=30) of similar users (students of a large university, aged 20-30). The questionnaire has ten questions (each being a pair of logos to rate), wherein each participant is shown a different set of questions. The purpose of VS is to capture the responses of a specific group of humans across a large set of adversarial logos.
2. _Horizontal Study_ (HS), which entails a large population (N=287) of users with diverse backgrounds (Amazon Turk Workers with 95+% hit-rate, aged 18-70). The questionnaire includes 21 questions, which are always the same for each participant. The purpose of HS is to capture the response of various humans to a small set of adversarial logos.
For both VS and HS, participants were asked to provide a response within 5s of seeing the pair of logos (because, realistically, users do not spend much time looking at the logo on a website). We also included control questions (e.g., pairs of identical logos, and pairs of clearly different logos) as a form of attention check9. Finally, we shuffled the questions to further reduce bias. For transparency, we provide our questionnaire at [1].
Footnote 9: For HS, we received 322 responses, but we removed 35 because some users took too little time to answer the entire questionnaire, or did not pass our attention checks.
For VS (resp. HS), we included 2 (resp. 3) "identical" pairs as baseline; and 5 (resp. 12) "original-adversarial" pairs to answer our RQ.
### Results
We present the results of both of our user studies in Fig. 7. Specifically, Fig. 7(a) shows the cumulative distribution of the scores for the three 'identical' pairs and the five 'original-adversarial' pairs in VS, whereas the boxplots in Fig. 7(b) show how the participants of HS rated the 12 "original-adversarial" pairs; the rightmost boxplot aggregates all results. In our rating definition, 5 means 'similar', and 1 means 'different'.
From Figure 7(a), we observe that 95% of all responses (30 users \(\times\) 10 questions) rated all 'identical' pairs (left bin) between 4 and 5 (only 5% answered with a 3). That is to say, they correctly guessed that all identical pairs were indeed very similar, thereby also confirming that this population was very reliable. For this reason, we find it noteworthy that **our adversarial logos are able to deceive them**: in the right bin, 66% rated the 'original-adversarial' pairs with either 4 or 5, and only 10% rated them with a 1 or 2.
Figure 7(b) shows the results for the 'adversarial-original' pairs (we already removed some clearly noisy answers, as stated in Section 6.1). We observe that the wide majority of the HS population rated the pairs as similar (the average is always below the middle point, 3). Hence, we can conclude: HS also reveals that **our adversarial logos are barely detected by humans as perturbed**.
## 7 Countermeasures (and counter-countermeasures)
Given that our adversarial logos can simultaneously fool state-of-the-art DL models for logo-identification and human eyes, we ask ourselves: _how can adversarial logos be countered?_ One potential mitigation is to leverage _adversarial learning_ by injecting evasive logos in the training set [12], thereby realizing an _adversarially robust_ discriminator. However, an expert attacker may anticipate this and can hence attempt to circumvent such a robust discriminator by developing a new generator, thereby crafting more evasive adversarial logos (e.g., as demonstrated in other domains [49, 45]). We now investigate both of these scenarios through additional proof-of-concept experiments, which involve the strongest discriminator of our evaluation: ViT.
Countermeasure: building robust discriminator. Adversarial training is one of the most well-known techniques to defend against adversarial examples [46, 12]. The idea is to update a given ML model by training it on adversarial examples that can mislead its predictions. We build our robust discriminators, \(\mathcal{D^{\prime}}_{\text{ViT}}^{0.3}\), \(\mathcal{D^{\prime}}_{\text{ViT}}^{0.5}\), and \(\mathcal{D^{\prime}}_{\text{ViT}}^{0.7}\), by replacing 30%, 50%, and 70% of the logos in the training dataset \(\mathbf{L}\) with their adversarial variants, respectively. In particular, we use the adversarial logos generated with \(\mathcal{G}_{\text{ViT}}\), i.e., trained with the vanilla ViT discriminator. Then, we compare these three robust discriminators with the vanilla ViT discriminator \(\mathcal{D}_{\text{ViT}}\), against the same attack presented in Section 5.4. The results are shown in Fig. 8(a). We observe that the robust discriminators exhibit much lower fooling ratios: while the vanilla ViT has a fooling ratio above 0.8, the robust discriminators have fooling ratios below 0.2 even at a low FPR of \(10^{-3}\).
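A sketch of the data preparation behind such adversarial training is given below; `perturb` stands for a trained generator composed with the scaling-and-clipping stage of Section 4, and the random per-logo replacement is simply an illustrative way of hitting the stated ratios.

```python
import random

def adversarial_training_set(train_logos, perturb, r=0.5):
    """Replace a fraction r of protected-brand training logos by adversarial variants.

    train_logos : list of (logo_image, brand_label) pairs from the set L
    perturb     : callable producing an adversarial variant of a logo
    r           : replacement ratio; 0.3, 0.5 and 0.7 are the values used above
    """
    return [
        (perturb(logo) if random.random() < r else logo, label)
        for logo, label in train_logos
    ]
```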
Counter-countermeasure: evading robust discriminators. An attacker is also capable of taking a sophisticated strategy to counter a robust logo-identification discriminator built via adversarial training. To do this, the attacker must obtain such a robust discriminator--this can be done through well-known black-box strategies [41, 15], or the attacker could even build one on their own. The attacker must then use the robust discriminator to train an 'adaptive' generator that can yield more evasive perturbations. For this experiment, we consider the case wherein the attacker trains the adaptive generator by using \(\mathcal{D^{\prime}}_{\text{ViT}}^{0.3}\), \(\mathcal{D^{\prime}}_{\text{ViT}}^{0.5}\), and
Figure 7: Results of our two user-studies: vertical study and horizontal study
\(\mathcal{D^{\prime}}^{0.7}_{\text{ViT}}\), thereby realizing \(\mathcal{G^{\prime}}^{0.3}_{\text{ViT}}\), \(\mathcal{G^{\prime}}^{0.5}_{\text{ViT}}\), and \(\mathcal{G^{\prime}}^{0.7}_{\text{ViT}}\), respectively. The results are shown in Fig. 8(b), which plots the fooling ratio of the _adaptive_ generator against the corresponding _robust_ discriminator.
Compared to the attacks from the 'vanilla' generator \(\mathcal{G}_{ViT}\) in Fig. 8(a) (which achieves below 20% of fooling ratio at \(10^{-3}\) FPR), the adaptive generators in Fig. 8(b) are much more effective. Yet, we observe that discriminators trained with more adversarial logos tend to be more robust: at \(10^{-3}\) FPR, \(\mathcal{D^{\prime}}^{0.3}_{\text{ViT}}\) has a fooling ratio of 0.9, whereas \(\mathcal{D^{\prime}}^{0.5}_{\text{ViT}}\) and \(\mathcal{D^{\prime}}^{0.7}_{\text{ViT}}\) have 0.8 and 0.6, respectively.
We find it enticing that this continuous game between attacker and defender, reflected in the generator (attacker) and discriminator (defender), eventually forms the concept of the Generative Adversarial Network (GAN). Indeed, a question arises: "what happens if this process is repeated many times?" We plan to address this intriguing research question in our future work.
## 8 Related works
Phishing Website Detection via ML. Many works leveraged statistical models, including ML, for phishing website detection (e.g., [8, 56, 57, 37, 51]). Typically, these models are trained on labeled datasets to learn to discriminate between phishing and benign webpages. There also exists an orthogonal family of countermeasures, referred to as reference-based phishing detectors, that identify visually similar webpages. This is based on the notion that phishing webpages are more successful when they imitate a legitimate website. This characteristic has been extensively scrutinized by prior literature [24, 9, 54, 19, 7, 28, 34, 35]. For example, VisualPhishNet trains a Siamese model to detect visually similar _screenshots_ between a given webpage and those in a set of well-known brands [7]. Other works (e.g., [9, 54, 19, 34, 35]) focus on identifying visually invariant _logos_.
Attacks against ML-based Phishing Website Detectors. Expert attackers are aware of the development of anti-phishing solutions and constantly refine their techniques to avoid being taken down. For instance, phishers can use cloaking to evade automated crawlers often used by security vendors [59]; alternatively,
Figure 8: Performance of discriminator and generator due to adversarial training
they can exploit 'squatting' to evade detectors analyzing the URL [51]. It is also easy to change the HTML contents to evade HTML-based phishing detectors [32, 13]. Researchers have also examined the impact of adversarial perturbations on image-based phishing detectors [7, 34, 35, 20]. However, these attacks assume that the attacker possesses complete knowledge of the deployed model and can access the model gradients, enabling manipulations in the feature-space (for further details, refer to [13]). We demonstrate a successful attack conducted by an attacker lacking both knowledge of and access to the deployed model. Furthermore, none of the prior works have conducted user studies to validate the practicality of their attacks.
Adversarial Perturbations. Moving away from gradient-based perturbations, Moosavi et al. introduced Universal Adversarial Perturbations [38], a framework for learning perturbations that are image-agnostic and generalized across various image classification models. This work sparked further proposals [47, 40, 58] aiming to enhance universal perturbations. Subsequently, Poursaeed et al. proposed Generative Adversarial Perturbations [43]. The generative model achieved state-of-the-art performance, unifying the framework for image-agnostic and image-dependent perturbations and considering both targeted and non-targeted attacks. We draw inspiration from their framework to develop a generative network specifically for crafting adversarial logos.
## 9 Conclusions
Logo-based phishing detectors have shown significant capabilities with the employment of DL models. In this work, we developed and presented a novel attack against logo-based phishing detection systems. Our experiments demonstrate the capability of an attacker equipped with a generative adversarial model in defeating the detection systems as well as human users. We hope this will trigger further research and development of phishing detection solutions that are robust to adversarial ML attacks.
**Ethical Statement.** Our institutions do not require any formal IRB approval to carry out the research discussed herein. We always followed the guidelines of the Menlo report [14]. For our user-studies, we never asked for sensitive data or PII. Finally, although we publicly release our code for the sake of science, as mentioned on the GitHub page [1], such code should not be used for any unethical or illegal purposes.
**Acknowledgment.** We thank the Hilti Corporation, Trustwave, NUS (National University of Singapore) and Acronis, for supporting this research. |
2309.00921 | An iterative scheme for finite horizon model reduction of
continuous-time linear time-varying systems | In this paper, we obtain the functional derivatives of a finite horizon error
norm between a full-order and a reduced-order continuous-time linear
time-varying (LTV) system. Based on the functional derivatives, first-order
necessary conditions for optimality of the error norm are derived, and a
projection-based iterative scheme for model reduction is proposed. The
iterative scheme upon convergence produces reduced-order models satisfying the
optimality conditions. Finally, through a numerical example, we demonstrate the
better performance of the proposed model reduction scheme in comparison to the
finite horizon balanced truncation algorithm for continuous-time LTV systems. | Kasturi Das, Srinivasan Krishnaswamy, Somanath Majhi | 2023-09-02T12:02:18Z | http://arxiv.org/abs/2309.00921v1 | An iterative scheme for finite horizon model reduction of continuous-time linear time-varying systems
###### Abstract
In this paper, we obtain the functional derivatives of a finite horizon error norm between a full-order and a reduced-order continuous-time linear time-varying (LTV) system. Based on the functional derivatives, first-order necessary conditions for optimality of the error norm are derived, and a projection-based iterative scheme for model reduction is proposed. The iterative scheme upon convergence produces reduced-order models satisfying the optimality conditions. Finally, through a numerical example, we demonstrate the better performance of the proposed model reduction scheme in comparison to the finite horizon balanced truncation algorithm for continuous-time LTV systems.
Keywords: model reduction; time-varying systems; iterative schemes; simulation of dynamic systems.
## 1 Introduction
Linear dynamical models are used to capture the behaviour of physical systems. Complex systems require large models to capture the system dynamics accurately. However, simulating, analyzing and designing controllers for such large models is computationally intensive. To overcome this problem, large models are approximated by smaller models based on various performance measures.
The balanced truncation (BT) algorithm [11] is a Singular Value Decomposition (SVD)-based model reduction method. It involves obtaining balanced reduced-order models (with respect to the system gramians) via projection. The \(H_{2}\) optimal model reduction problem, which is based on minimizing well-defined error criteria, is widely discussed in literature [2, 3, 5, 15]. As the \(H_{2}\) model reduction problem is non-convex, the model reduction problem involves deriving gradients of a \(H_{2}\) error norm and obtaining first-order necessary conditions for optimality. Then, model reduction methods are developed to obtain reduced-order models satisfying the optimality conditions. Examples of such model reduction schemes include the iterative rational Krylov algorithm (IRKA) [5] and the two-sided iterative algorithm (TSIA) [17]. In [4], the problem of \(H_{2}\) optimal model reduction for discrete-time LTI systems is investigated, and a model reduction algorithm called MIMO iterative rational interpolation algorithm (MIRIAm) is proposed. The BT algorithm for LTI systems has been extended to LTV systems. Finite horizon balanced truncation and some of its generalizations for continuous-time LTV systems are explored in [16, 14, 13]. The same algorithm is extended to discrete-time LTV systems in [7, 14]. In [9, 10], a finite horizon \(H_{2}\) error norm for discrete-time LTV systems is proposed. Conditions for optimality of the error norm are obtained, and an iterative algorithm for obtaining reduced-order models satisfying the optimality conditions is proposed. Such an algorithm, unlike BT, is based on minimising a well-defined error criterion. To the best of the authors' knowledge, an analogous investigation for continuous-time LTV systems is not available.
This paper proposes an error norm between a full-order and a reduced-order LTV system, minimizing which ensures that the reduced-order system is a good approximation of the full-order LTV system over a finite time interval. The functional derivatives of the error norm are derived, and first-order necessary conditions for the optimality of the error norm are obtained. The optimality conditions are used to propose a projection-based iterative scheme for model reduction, which is a generalization of the TSIA algorithm for LTI systems to continuous-time LTV systems.
The rest of the paper is arranged as follows. In Section 2, a few properties of a continuous-time LTV system and its modified adjoint are presented, and some essential results used in the later sections are established. Section 3 introduces a finite horizon \(H_{2}\) error norm, derives the functional derivatives of the error norm and proposes an iterative technique for finite horizon model reduction. In Section 4, the performance of the iterative model reduction scheme is demonstrated with the help of a numerical example. The paper is concluded in Section 5.
## 2 Modified adjoint of a continuous-time LTV system and some associated results
This section discusses the modified adjoint of a continuous-time LTV system and studies the relation between their state transition matrices. Further, the relation between the gramians of the original LTV system and its modified adjoint system is also established.
### State-space realization and gramians of LTV systems
The state-space realization of a continuous-time LTV system is as follows:
\[\frac{dx}{dt}(t) =A(t)x(t)+B(t)u(t),\] \[y(t) =C(t)x(t). \tag{1}\]
Here, \(A(\cdot):[t_{0},t_{f}]\rightarrow\mathbb{R}^{n\times n}\), \(t\mapsto A(t)\), \(B(\cdot):[t_{0},t_{f}]\rightarrow\mathbb{R}^{n\times m}\), \(t\mapsto B(t)\) and \(C(\cdot):[t_{0},t_{f}]\rightarrow\mathbb{R}^{p\times n}\), \(t\mapsto C(t)\) are continuous and bounded. This ensures the existence and uniqueness of the solutions of the LTV system over \([t_{0},t_{f}]\). Let \(\phi(t,\tau)\) be the state transition matrix (STM), and \(h(t,\tau)\) be the impulse response of the system.
A few properties of the STM for \(t_{0}\leq t_{1}\leq t_{3}\leq t_{2}\leq t_{f}\) are as follows:
\[a) \phi(t,t)=I_{n},\quad t\in[t_{0},t_{f}], \tag{2}\] \[b) \phi(t_{1},t_{2})=\phi(t_{1},t_{3})\phi(t_{3},t_{2}),\] (3) \[c) \det(\phi(t_{1},t_{2}))\neq 0,\quad\text{and}\] (4) \[d) (\phi(t_{1},t_{2}))^{-1}=\phi(t_{2},t_{1}). \tag{5}\]
For \(t_{0}\leq\tau\leq t\leq t_{f}\), \(\phi(t,\tau)\) is the unique solution of the differential equation
\[\frac{\partial}{\partial t}\phi(t,\tau)=A(t)\phi(t,\tau), \tag{6}\]
with initial condition \(\phi(\tau,\tau)=I_{n}\). For the input \(u(t)=\delta(t-\tau)\), the impulse response matrix of the LTV system is
\[h(t,\tau)=\begin{cases}0,&t_{0}\leq t<\tau,\\ C(t)\phi(t,\tau)B(\tau),&\tau\leq t\leq t_{f}.\end{cases} \tag{7}\]
For \(t\in[t_{0},t_{f}]\), the reachability gramian of the LTV system \(\Sigma\) is given by
\[P(t)=\int_{t_{0}}^{t}\phi(t,\tau)B(\tau)(B(\tau))^{T}(\phi(t,\tau))^{T}d\tau. \tag{8}\]
The above gramian is obtained by solving the following differential Lyapunov equation (DLE) from \(t=t_{0}\) to \(t=t_{f}\) with \(P(t_{0})=0\).
\[\frac{dP(t)}{dt}=A(t)P(t)+P(t)(A(t))^{T}+B(t)(B(t))^{T}. \tag{9}\]
The observability gramian for \(t\in[t_{0},t_{f}]\) is given by
\[Q(t)=\int_{t}^{t_{f}}(\phi(\tau,t))^{T}(C(\tau))^{T}C(\tau)\phi(\tau,t)d\tau. \tag{10}\]
The above gramian is computed by solving the following DLE from \(t=t_{f}\) to \(t=t_{0}\) with \(Q(t_{f})=0\).
\[\frac{dQ(t)}{dt}=-(A(t))^{T}Q(t)-Q(t)A(t)-(C(t))^{T}C(t). \tag{11}\]
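In practice, both DLEs can be integrated numerically by vectorizing the matrix unknowns. The sketch below is illustrative only: the solver, the tolerances, and the zero initial/terminal conditions are choices made here rather than prescriptions from the text, with \(A(\cdot)\), \(B(\cdot)\), \(C(\cdot)\) passed as callables returning matrices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def reachability_gramian(A, B, t0, tf, t_eval):
    """Integrate dP/dt = A(t) P + P A(t)^T + B(t) B(t)^T forward with P(t0) = 0 (Eq. (9))."""
    n = A(t0).shape[0]

    def rhs(t, p):
        P = p.reshape(n, n)
        return (A(t) @ P + P @ A(t).T + B(t) @ B(t).T).ravel()

    sol = solve_ivp(rhs, (t0, tf), np.zeros(n * n), t_eval=t_eval, rtol=1e-8, atol=1e-10)
    return sol.y.T.reshape(-1, n, n)  # samples of P(t) on t_eval

def observability_gramian(A, C, t0, tf, t_eval):
    """Integrate dQ/dt = -A(t)^T Q - Q A(t) - C(t)^T C(t) backward with Q(tf) = 0 (Eq. (11))."""
    n = A(t0).shape[0]

    def rhs(t, q):
        Q = q.reshape(n, n)
        return (-A(t).T @ Q - Q @ A(t) - C(t).T @ C(t)).ravel()

    # integrate from tf down to t0, then flip so the samples are in increasing time
    sol = solve_ivp(rhs, (tf, t0), np.zeros(n * n), t_eval=t_eval[::-1], rtol=1e-8, atol=1e-10)
    return sol.y.T[::-1].reshape(-1, n, n)  # samples of Q(t) on t_eval
```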
### Modified adjoint of a continuous-time LTV system and its state transition matrix
The adjoint dynamical system \(\Sigma_{a}\) associated with the LTV system given by (1) is a finite-dimensional state-space realization of order \(n\) and is as follows:
\[\frac{dx_{a}}{dt}(t) =-A^{T}(t)x_{a}(t)-C^{T}(t)u_{a}(t),\] \[y_{a}(t) =B^{T}(t)x_{a}(t), \tag{12}\]
with \(t\) varying from \(t=t_{f}\) to \(t=t_{0}\). Let \(\phi_{a}(t,\tau)\) and \(h_{a}(t,\tau)\) be the STM and the impulse response matrix, respectively, of the adjoint system.
For simulating the adjoint system, the "modified adjoint system", denoted by \(\Sigma_{ma}\), is used [6]. This system has order \(n\), and \(t\) varies from \(t_{0}\) to \(t_{f}\), similar to the original system \(\Sigma\). For the finite time horizon \([t_{0},t_{f}]\), the modified adjoint is given by the following realization.
\[\dot{x}_{ma}(t) =(A(T_{i}-t))^{T}x_{ma}(t)+(C(T_{i}-t))^{T}u_{ma}(t),\] \[y_{ma}(t) =(B(T_{i}-t))^{T}x_{ma}(t), \tag{13}\]
where \(T_{i}:=t_{0}+t_{f}\). Let \(\phi_{ma}(t,\tau)\) and \(h_{ma}(t,\tau)\) be the state transition matrix and impulse response matrix, respectively, of the modified adjoint system. Let \(P_{ma}(t)\) and \(Q_{ma}(t)\) be the reachability and the observability gramian of the modified adjoint system, defined over \([t_{0},t_{f}]\).
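Constructing the modified adjoint from a given realization is purely mechanical; the short sketch below (illustrative, with toy system data invented here) wraps \((A,B,C)\) into the triple \((A_{ma},B_{ma},C_{ma})\) of (13).

```python
import numpy as np

def modified_adjoint(A, B, C, t0, tf):
    """Return callables (A_ma, B_ma, C_ma) realizing the modified adjoint of Eq. (13)."""
    Ti = t0 + tf
    A_ma = lambda t: A(Ti - t).T
    B_ma = lambda t: C(Ti - t).T
    C_ma = lambda t: B(Ti - t).T
    return A_ma, B_ma, C_ma

# toy usage with an arbitrary two-state LTV system (data chosen only for illustration)
A = lambda t: np.array([[-1.0, np.sin(t)], [0.0, -2.0]])
B = lambda t: np.array([[1.0], [0.5]])
C = lambda t: np.array([[1.0, 0.0]])
A_ma, B_ma, C_ma = modified_adjoint(A, B, C, 0.0, 2.0)
```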
The following propositions establish the relation between the STMs of the original LTV system, its adjoint and modified-adjoint.
**Proposition 1**: _If \(\phi_{a}(t,\tau)\), \(\phi(t,\tau)\) and \(\phi_{ma}(t,\tau)\) are the STMs of the LTV systems \(\Sigma_{a}\), \(\Sigma\) and \(\Sigma_{ma}\), respectively, then_
\[\phi_{a}(t,\tau) =(\phi(\tau,t))^{T}. \tag{14}\] \[\phi_{ma}(t,\tau) =(\phi(T_{i}-\tau,T_{i}-t))^{T}. \tag{15}\]
**PROOF.** See Appendix A.
### Relation between the gramians of the modified adjoint system and the original LTV system
In this subsection, the connection of the system gramians of the modified adjoint system \(\Sigma_{ma}\) and the original LTV system \(\Sigma\) is established.
**Theorem 2**.: _For \(t\in[t_{0},t_{f}]\), the reachability gramian \(P_{ma}(t)\) and the observability gramian \(Q_{ma}(t)\) of the modified adjoint system \(\Sigma_{ma}\) are related to the observability gramian \(Q(t)\) and the reachability gramian \(P(t)\) of the original system \(\Sigma\) in the following way:_
\[P_{ma}(t) =Q(T_{i}-t)\quad\text{and} \tag{16}\] \[Q_{ma}(t) =P(T_{i}-t). \tag{17}\]
**PROOF.** The reachability gramian of the modified adjoint system \(\Sigma_{ma}\) is
\[P_{ma}(t) =\int_{t_{0}}^{t}\phi_{ma}(t,\tau)B_{ma}(\tau)(B_{ma}(\tau))^{T}( \phi_{ma}(t,\tau))^{T}d\tau. \tag{18}\]
Using Equation (15) and changing the variable of integration to \(z=T_{i}-\tau\) gives the following result
\[P_{ma}(t)=\int_{T_{i}-t}^{t_{f}}\left(\phi(z,T_{i}-t)\right)^{T}(C(z))^{T}C(z)\phi(z,T_{i}-t)\,dz=Q(T_{i}-t).\]
The observability gramian of the modified adjoint system \(\Sigma_{ma}\) is
\[Q_{ma}(t)=\int_{t}^{t_{f}}(\phi_{ma}(\tau,t))^{T}(C_{ma}(\tau))^{T}C_{ma}(\tau)\phi_{ma}(\tau,t)\,d\tau.\]
Similar to the first case, using Equation (15) and replacing \(C_{ma}(\tau)\) as \((B(T_{i}-\tau))^{T}\) gives
\[Q_{ma}(t)=\int_{t}^{t_{f}}\left(\phi_{ma}(\tau,t)\right)^{T}(C_{ ma}(\tau))^{T}C_{ma}(\tau)\phi_{ma}(\tau,t)d\tau\] \[=\int_{t_{0}}^{T_{i}-t}\phi(T_{i}-t,z)B(z)(B(z))^{T}\left(\phi(T_ {i}-t,z)\right)^{T}dz\] \[=P(T_{i}-t).\]
This completes the proof.
## 3 Finite horizon model order reduction for LTV systems
This section presents a finite horizon \(H_{2}\) error norm between a continuous-time LTV system and a reduced-order LTV approximation. Further, the functional derivatives of the error norm are derived and a projection-based model reduction technique is proposed.
### The finite horizon \(H_{2}\) error norm
Consider the continuous-time LTV system \(\Sigma_{r}\) of order \(r\), where \(r<n\):
\[\dot{x}_{r}(t) =A_{r}(t)x_{r}(t)+B_{r}(t)u(t),\] \[y_{r}(t) =C_{r}(t)x_{r}(t). \tag{19}\]
Here, \(A_{r}(\cdot):[t_{0},t_{f}]\rightarrow\mathbb{R}^{r\times r}\), \(t\mapsto A_{r}(t)\), \(B_{r}(\cdot):[t_{0},t_{f}]\rightarrow\mathbb{R}^{r\times m}\), \(t\mapsto B_{r}(t)\) and \(C_{r}(\cdot):[t_{0},t_{f}]\rightarrow\mathbb{R}^{p\times r}\), \(t\mapsto C_{r}(t)\) are continuous and bounded. If \(\Sigma_{r}\) is a good reduced-order approximation of \(\Sigma\), then it is expected that \(y(t)\approx y_{r}(t)\) with respect to a suitable norm. Let \(\phi_{r}(t,\tau)\) and \(h_{r}(t,\tau)\) be the state transition and impulse response matrix, respectively. Let \(P_{r}(t)\) and \(Q_{r}(t)\) be the reachability and observability gramian of the reduced-order system, respectively.
For the finite time horizon \([t_{0},t_{f}]\), the modified adjoint of the reduced-order system \(\Sigma_{r}\) is as follows
\[\dot{x}_{rma}(t) =A_{rma}(t)x_{rma}(t)+B_{rma}(t)u_{ma}(t),\] \[y_{rma}(t) =C_{rma}(t)x_{rma}(t), \tag{20}\]
where \(A_{rma}(t)=(A_{r}(T_{i}-t))^{T}\), \(B_{rma}(t)=(C_{r}(T_{i}-t))^{T}\) and \(C_{rma}(t)=(B_{r}(T_{i}-t))^{T}\). Let \(\phi_{rma}(t,\tau)\) and \(h_{rma}(t,\tau)\) be the state transition matrix and the impulse response matrix, respectively. Let \(P_{rma}(t)\) and \(Q_{rma}(t)\) be the reachability and the observability gramian, respectively.
For a permissible input \(u(t)\), the output \(y(t)\) of the full-order system \(\Sigma\) is \(y(t)=\int_{t_{0}}^{t}h(t,\tau)u(\tau)d\tau\). For the same input \(u(t)\), the output \(y_{r}(t)\) of the reduced-order system \(\Sigma_{r}\) is given by \(y_{r}(t)=\int_{t_{0}}^{t}h_{r}(t,\tau)u(\tau)d\tau\). Taking the norm of the output error \(e(t)=y(t)-y_{r}(t)\), we have
\[\left\|y(t)-y_{r}(t)\right\|_{2} =\left\|\int_{t_{0}}^{t}\left(h(t,\tau)-h_{r}(t,\tau)\right)u( \tau)d\tau\right\|_{2}\] \[\leq\int_{t_{0}}^{t}\left\|h(t,\tau)-h_{r}(t,\tau)\right\|_{F} \left\|u(\tau)\right\|_{2}d\tau.\]
Applying the Cauchy-Schwarz inequality to the right-hand side of the above expression gives
\[\left\|e(t)\right\|_{2}\leq\left(\int_{t_{0}}^{t}\left\|h(t, \tau)-h_{r}(t,\tau)\right\|_{F}^{2}d\tau\right)^{\frac{1}{2}}\left\|u\right\|_{L _{2}^{m}[t_{0},t_{f}]}.\] \[\left\|e\right\|_{L_{2}^{p}[t_{0},t_{f}]}\leq\left(\int_{t_{0}}^ {t_{f}}\int_{t_{0}}^{t}\left\|h(t,\tau)-h_{r}(t,\tau)\right\|_{F}^{2}d\tau dt \right)^{\frac{1}{2}}\left\|u\right\|_{L_{2}^{m}[t_{0},t_{f}]}.\]
Based on the above inequality, we observe that minimizing the term \(\left(\int_{t_{0}}^{t_{f}}\int_{t_{0}}^{t}\left\|h(t,\tau)-h_{r}(t,\tau)\right\|_{F}^{2}d\tau dt\right)^{\frac{1}{2}}\) ensures that
the output error norm is minimized. This term is referred to as the finite horizon \(H_{2}\) error norm and is denoted by \(\|\Sigma-\Sigma_{r}\|_{H_{2}[0,t_{f}]}\).
Given initial time \(t_{0}\), and a time-instant \(t\in[t_{0},t_{f}]\), the matrices \(P_{r}(t)\), \(X(t)\), \(X_{ma}(t)\) and \(P_{rma}(t)\) are given by
\[P_{r}(t)=\int_{t_{0}}^{t}\phi_{r}(t,\tau)B_{r}(\tau)(B_{r}(\tau))^{T}(\phi_{r}(t,\tau))^{T}d\tau, \tag{21}\] \[X(t)=\int_{t_{0}}^{t}\phi(t,\tau)B(\tau)(B_{r}(\tau))^{T}(\phi_{r}(t,\tau))^{T}d\tau,\] (22) \[X_{ma}(t)=\int_{t_{0}}^{t}\phi_{ma}(t,\tau)B_{ma}(\tau)(B_{rma}(\tau))^{T}(\phi_{rma}(t,\tau))^{T}d\tau,\] (23) \[P_{rma}(t)=\int_{t_{0}}^{t}\phi_{rma}(t,\tau)B_{rma}(\tau)(B_{rma}(\tau))^{T}(\phi_{rma}(t,\tau))^{T}d\tau. \tag{24}\]
The above matrices can be computed by solving the following matrix differential equations from \(t=t_{0}\) to \(t=t_{f}\)
\[\frac{d}{dt}P_{r}(t)=A_{r}(t)P_{r}(t)+P_{r}(t)(A_{r}(t))^{T}+B_{r}(t)(B_{r}(t))^{T}, \tag{25}\] \[\frac{d}{dt}X(t)=A(t)X(t)+X(t)(A_{r}(t))^{T}+B(t)(B_{r}(t))^{T},\] (26) \[\frac{d}{dt}X_{ma}(t)=A_{ma}(t)X_{ma}(t)+X_{ma}(t)(A_{rma}(t))^{T}+B_{ma}(t)(B_{rma}(t))^{T},\quad\text{and}\] (27) \[\frac{d}{dt}P_{rma}(t)=A_{rma}(t)P_{rma}(t)+P_{rma}(t)(A_{rma}(t))^{T}+B_{rma}(t)(B_{rma}(t))^{T}, \tag{28}\]
with \(P_{r}(t_{0})=0_{r\times r}\), \(X(t_{0})=0_{n\times r}\), \(X_{ma}(t_{0})=0_{n\times r}\) and \(P_{rma}(t_{0})=0_{r\times r}\), respectively.
Similarly, given final time \(t_{f}\) and a time-instant \(t\), the matrices \(Q_{r}(t)\) and \(Y(t)\) are given by
\[Q_{r}(t)=\int_{t}^{t_{f}}(\phi_{r}(\tau,t))^{T}(C_{r}(\tau))^{T}C_{r}(\tau)\phi_{r}(\tau,t)d\tau, \tag{29}\] \[Y(t)=\int_{t}^{t_{f}}(\phi(\tau,t))^{T}(C(\tau))^{T}C_{r}(\tau)\phi_{r}(\tau,t)d\tau. \tag{30}\]
The above matrices are computed by solving the following matrix differential equations from \(t=t_{f}\) to \(t=t_{0}\):
\[-\frac{d}{dt}Y(t)=(A(t))^{T}Y(t)+Y(t)A_{r}(t)+(C(t))^{T}C_{r}(t), \quad\text{and} \tag{31}\] \[-\frac{d}{dt}Q_{r}(t)=(A_{r}(t))^{T}Q_{r}(t)+Q_{r}(t)A_{r}(t)+(C_ {r}(t))^{T}C_{r}(t), \tag{32}\]
with \(Y(t_{f})=0_{n\times r}\) and \(Q_{r}(t_{f})=0_{r\times r}\), respectively.
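The cross terms \(X(t)\) and \(Y(t)\) can be computed with the same vectorization idea used for the gramians; the sketch below is again illustrative (the tolerances and calling conventions are choices made here), integrating (26) forward and (31) backward.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cross_term_X(A, B, Ar, Br, t0, tf, t_eval):
    """Integrate dX/dt = A X + X Ar^T + B Br^T forward with X(t0) = 0 (Eq. (26))."""
    n, r = A(t0).shape[0], Ar(t0).shape[0]

    def rhs(t, x):
        X = x.reshape(n, r)
        return (A(t) @ X + X @ Ar(t).T + B(t) @ Br(t).T).ravel()

    sol = solve_ivp(rhs, (t0, tf), np.zeros(n * r), t_eval=t_eval, rtol=1e-8)
    return sol.y.T.reshape(-1, n, r)

def cross_term_Y(A, C, Ar, Cr, t0, tf, t_eval):
    """Integrate -dY/dt = A^T Y + Y Ar + C^T Cr backward with Y(tf) = 0 (Eq. (31))."""
    n, r = A(t0).shape[0], Ar(t0).shape[0]

    def rhs(t, y):
        Y = y.reshape(n, r)
        return (-(A(t).T @ Y + Y @ Ar(t) + C(t).T @ Cr(t))).ravel()

    sol = solve_ivp(rhs, (tf, t0), np.zeros(n * r), t_eval=t_eval[::-1], rtol=1e-8)
    return sol.y.T[::-1].reshape(-1, n, r)
```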
**Lemma 3**.: \(P_{rma}(t)\) _and \(Q_{r}(t)\), given by (24) and (29), respectively, are related as,_
\[P_{rma}(t)=Q_{r}(T_{i}-t). \tag{33}\]
_Similarly, \(X_{ma}(t)\) and \(Y(t)\), given by (23) and (30), respectively, are related as,_
\[X_{ma}(t)=Y(T_{i}-t). \tag{34}\]
**PROOF.** Similar to Theorem 2, the relation between the gramians \(Q_{r}(t)\) and \(P_{rma}(t)\) of the systems \(\Sigma_{r}\) and \(\Sigma_{rma}\), respectively, given by Equation (33), can be established.
Substituting \(\phi_{ma}(t,\tau)=(\phi(T_{i}-\tau,T_{i}-t))^{T}\), \(\phi_{rma}(t,\tau)=(\phi_{r}(T_{i}-\tau,T_{i}-t))^{T}\), \(B_{ma}(\tau)=(C(T_{i}-\tau))^{T}\) and \(B_{rma}(\tau)=(C_{r}(T_{i}-\tau))^{T}\) in (23) results in
\[X_{ma}(t)=\int_{T_{i}-t}^{t_{f}}\left(\phi(\tau,T_{i}-t)\right)^ {T}(C(\tau))^{T}C_{r}(\tau)\phi_{r}(\tau,T_{i}-t)d\tau\] \[=Y(T_{i}-t).\]
**Proposition 4**.: _The square of the finite horizon \(H_{2}\) error norm is expressed using the reachability gramians of \(\Sigma\) and \(\Sigma_{r}\) as follows,_
\[\|\Sigma-\Sigma_{r}\|_{H_{2}[0,t_{f}]}^{2}=\int_{t_{0}}^{t_{f}}Tr( C(t)P(t)(C(t))^{T}-\] \[2C(t)X(t)(C_{r}(t))^{T}+C_{r}(t)P_{r}(t)(C_{r}(t))^{T})dt. \tag{35}\]
_Similarly, the square of the finite horizon \(H_{2}\) error norm is expressed using the observability gramians of \(\Sigma\) and \(\Sigma_{r}\), and the reachability gramians of \(\Sigma_{ma}\) and \(\Sigma_{rma}\) as follows,_
\[\|\Sigma-\Sigma_{r}\|_{H_{2}[0,t_{f}]}^{2}=\int_{t_{0}}^{t_{f}}Tr( (B(t))^{T}Q(t)B(t)-\] \[2(B(t))^{T}Y(t)B_{r}(t)+(B_{r}(t))^{T}Q_{r}(t)B_{r}(t))dt. \tag{36}\] \[=\int_{t_{0}}^{t_{f}}Tr(C_{ma}(t)P_{ma}(t)(C_{ma}(t))^{T}-2C_{ma}( t)X_{ma}(t)\times\] \[(C_{rma}(t))^{T}+C_{rma}(t)P_{rma}(t)(C_{rma}(t))^{T})dt. \tag{37}\]
**PROOF.** The square of the error norm \(\|\Sigma-\Sigma_{r}\|_{H_{2}[0,t_{f}]}^{2}\) involves a double integration and can be expressed as follows:
\[\|\Sigma-\Sigma_{r}\|_{H_{2}[0,t_{f}]}^{2} =\int_{t_{0}}^{t_{f}}\int_{t_{0}}^{t}\|h(t,\tau)-h_{r}(t,\tau)\|_ {F}^{2}d\tau dt \tag{38}\] \[=\int_{t_{0}}^{t_{f}}\int_{\tau}^{t_{f}}\left\|h(t,\tau)-h_{r}(t, \tau)\right\|_{F}^{2}dtd\tau. \tag{39}\]
The integrand of the double integral in (38) is,
\[\|h(t,\tau)-h_{r}(t,\tau)\|_{F}^{2}\] \[=\mbox{Tr}((C(t)\phi(t,\tau)B(\tau)-C_{r}(t)\phi_{r}(t,\tau)B_{r}( \tau))\times\] \[((B(\tau))^{T}(\phi(t,\tau))^{T}(C(t))^{T}-(B_{r}(\tau))^{T}(\phi_ {r}(t,\tau))^{T}(C_{r}(t))^{T}))\]
Expanding the above expression and applying the double integral \(\int_{t_{0}}^{t_{f}}\int_{t_{0}}^{t}(\cdot)d\tau dt\), we obtain the following terms
\[\int_{t_{0}}^{t_{f}}\mbox{Tr}(C(t)(\int_{t_{0}}^{t}\phi(t,\tau)B(\tau)(B(\tau))^{T}(\phi(t,\tau))^{T}d\tau)(C(t))^{T})dt=\int_{t_{0}}^{t_{f}}\mbox{Tr}\left(C(t)P(t)(C(t))^{T}\right)dt. \tag{40}\] \[\int_{t_{0}}^{t_{f}}\mbox{Tr}(C(t)(\int_{t_{0}}^{t}\phi(t,\tau)B(\tau)(B_{r}(\tau))^{T}(\phi_{r}(t,\tau))^{T}d\tau)(C_{r}(t))^{T})dt=\int_{t_{0}}^{t_{f}}\mbox{Tr}\left(C(t)X(t)(C_{r}(t))^{T}\right)dt.\] (41) \[\int_{t_{0}}^{t_{f}}\mbox{Tr}(C_{r}(t)(\int_{t_{0}}^{t}\phi_{r}(t,\tau)B_{r}(\tau)(B_{r}(\tau))^{T}(\phi_{r}(t,\tau))^{T}d\tau)(C_{r}(t))^{T})dt=\int_{t_{0}}^{t_{f}}\mbox{Tr}\left(C_{r}(t)P_{r}(t)(C_{r}(t))^{T}\right)dt. \tag{42}\]
Combining (40), (41) and (42) according to the expansion above, (35) is obtained.
The integrand of the double integral in (39) can be written as,
\[\|h(t,\tau)-h_{r}(t,\tau)\|_{F}^{2}\] \[=\mbox{Tr}(((B(\tau))^{T}(\phi(t,\tau))^{T}(C(t))^{T}-(B_{r}(\tau ))^{T}(\phi_{r}(t,\tau))^{T}\times\] \[(C_{r}(t))^{T})(C(t)\phi(t,\tau)B(\tau)-C_{r}(t)\phi_{r}(t,\tau)B _{r}(\tau)))\]
Expanding the above expression, applying the double integral \(\int_{t_{0}}^{t_{f}}\int_{\tau}^{t_{f}}(\cdot)dt\,d\tau\) and following similar steps as the previous case gives the following result.
\[\|\Sigma-\Sigma_{r}\|_{H_{2}[0,t_{f}]}^{2}=\int_{t_{0}}^{t_{f}}\mbox{Tr}((B(t))^{T}Q(t)B(t)-2(B(t))^{T}Y(t)B_{r}(t)+(B_{r}(t))^{T}Q_{r}(t)B_{r}(t))dt.\]
Changing the variable of integration to \(z=T_{i}-t\) in the above expression, substituting \(C_{ma}(z)=(B(T_{i}-z))^{T}\) and \(C_{rma}(z)=(B_{r}(T_{i}-z))^{T}\) and using equations (16), (33) and (34) gives (37). This completes the proof.
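Given grid samples of \(P(t)\), \(X(t)\) and \(P_{r}(t)\) from the matrix differential equations above, (35) can be approximated by quadrature. The helper below is a sketch; the trapezoidal rule and the calling convention (callables for the output matrices, sample arrays for the gramian-type terms) are choices made here.

```python
import numpy as np
from scipy.integrate import trapezoid

def h2_error_sq(C, Cr, P, X, Pr, t_grid):
    """Approximate Eq. (35), the squared finite horizon H2 error norm.

    C, Cr are callables t -> matrix; P, X, Pr are sample arrays on t_grid
    with shapes (N, n, n), (N, n, r) and (N, r, r), respectively."""
    vals = []
    for k, t in enumerate(t_grid):
        Ct, Crt = C(t), Cr(t)
        vals.append(np.trace(Ct @ P[k] @ Ct.T)
                    - 2.0 * np.trace(Ct @ X[k] @ Crt.T)
                    + np.trace(Crt @ Pr[k] @ Crt.T))
    return trapezoid(vals, t_grid)
```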
### Functional derivatives of the finite horizon \(H_{2}\) error norm
This section obtains the functional derivatives of the finite horizon \(H_{2}\) error norm. To begin with, we obtain the perturbation in the STM due to perturbation in the state matrix of an LTV system.
**Lemma 5**.: _Consider the reduced-order LTV system given by (19). Let the state matrix \(A_{r}(t)\) be perturbed by a continuous mapping \(\Delta A_{r}(t):[t_{0},t_{f}]\rightarrow\mathbb{R}^{r\times r}\). Let \((A_{r}(t)+\Delta A_{r}(t))\) be the perturbed state matrix of the reduced-order system and \(\hat{\phi}_{r}(t,\tau)\) be the corresponding STM. The perturbation in the STM induced by the perturbation in the state matrix \(\Delta\phi_{r}(t,t_{0})=\hat{\phi}_{r}(t,t_{0})-\phi_{r}(t,t_{0})\) is as follows:_
\[\Delta\phi_{r}(t,t_{0})=\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_ {r}(\tau)\phi_{r}(\tau,t_{0})d\tau+\] \[\int_{t_{0}}^{t}\int_{t_{0}}^{\tau}\phi_{r}(t,\tau)\Delta A_{r}( \tau)\phi_{r}(\tau,s)\Delta A_{r}(s)\hat{\phi}_{r}(s,t_{0})ds\,d\tau. \tag{43}\]
**PROOF.** For \(u\equiv 0\), the solution of the LTV system \(\Sigma_{r}\) given by (19) becomes
\[x_{r}(t)=\phi_{r}(t,t_{0})x_{r}(t_{0}). \tag{44}\]
Let \(\hat{x}_{r}(t)\) be the state vector for the new state matrix \((A_{r}(t)+\Delta A_{r}(t))\). The new differential equation is as follows:
\[\frac{d\hat{x}_{r}(t)}{dt}=(A_{r}(t)+\Delta A_{r}(t))\hat{x}_{r}(t).\]
Let \(\hat{\phi}_{r}(t,s)\) be the STM for the new state matrix. For the same initial condition \(x_{r}(t_{0})\), the solution of the above differential equation is
\[\hat{x}_{r}(t)=\hat{\phi}_{r}(t,t_{0})x_{r}(t_{0}). \tag{45}\]
Differentiating \(\Delta x_{r}(t)=\hat{x}_{r}(t)-x_{r}(t)\) with respect to \(t\) results in
\[\frac{d\Delta x_{r}(t)}{dt} =\frac{d\hat{x}_{r}(t)}{dt}-\frac{dx_{r}(t)}{dt}\] \[=A_{r}(t)\Delta x_{r}(t)+\Delta A_{r}(t)\hat{x}_{r}(t).\]
For \(t=t_{0}\), \(\Delta x_{r}(t_{0})=\hat{x}_{r}(t_{0})-x_{r}(t_{0})=0\). Thus, the solution of the above differential equation is,
\[\Delta x_{r}(t) =\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_{r}(\tau)\hat{x}_{r}( \tau)d\tau\] \[=\left(\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_{r}(\tau)\hat{\phi} _{r}(\tau,t_{0})d\tau\right)x_{r}(t_{0}). \tag{46}\]
Subtracting (44) from (45) gives the following expression for \(\Delta x_{r}(t)\),
\[\Delta x_{r}(t) =\left(\hat{\phi}_{r}(t,t_{0})-\phi_{r}(t,t_{0})\right)x_{r}(t_{0})\] \[=\Delta\phi_{r}(t,t_{0})x_{r}(t_{0}). \tag{47}\]
Comparing (46) and (47) results in the following
\[\Delta\phi_{r}(t,t_{0})x_{r}(t_{0})=\left(\int_{t_{0}}^{t}\phi_{r}(t,\tau) \Delta A_{r}(\tau)\hat{\phi}_{r}(\tau,t_{0})d\tau\right)x_{r}(t_{0}).\]
Since the above equation holds for arbitrary \(x_{r}(t_{0})\),
\[\Delta\phi_{r}(t,t_{0})=\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_{r} (\tau)\hat{\phi}_{r}(\tau,t_{0})d\tau \tag{49}\] \[=\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_{r}(\tau)\phi_{r}(\tau,t _{0})d\tau+\] \[\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_{r}(\tau)\Delta\phi_{r}( \tau,t_{0})d\tau. \tag{50}\]
From (49), we get \(\Delta\phi_{r}(\tau,t_{0})=\int_{t_{0}}^{\tau}\phi_{r}(\tau,s)\Delta A_{r}(s) \hat{\phi}_{r}(s,t_{0})ds\). Using this expression in the second term on the right-hand side of (50) gives
\[\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_{r}(\tau)\Delta\phi_{r}( \tau,t_{0})d\tau\] \[=\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_{r}(\tau)\int_{t_{0}}^{ \tau}\phi_{r}(\tau,s)\Delta A_{r}(s)\hat{\phi}_{r}(s,t_{0})dsd\tau\] \[=\int_{t_{0}}^{t}\int_{t_{0}}^{\tau}\phi_{r}(t,\tau)\Delta A_{r}( \tau)\phi_{r}(\tau,s)\Delta A_{r}(s)\hat{\phi}_{r}(s,t_{0})dsd\tau.\]
Substituting the above expression in (50) gives (43). This completes the proof of the lemma.
Lemma 5 is essential for the derivation of the functional derivatives of the error norm \(\|\Sigma-\Sigma_{r}\|_{H_{2}[t_{0},t_{f}]}^{2}\) and also leads to the following corollary.
**Corollary 6**.: _Let \(\Delta_{1}\phi_{r}(t,t_{0})=\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_{r}(\tau )\phi_{r}(\tau,t_{0})d\tau\). Let \(L(e^{A_{r}(t-t_{0})},\Delta A_{r}(t-t_{0}))\) be the Frechet derivative of the matrix exponential \(e^{A_{r}t}\) along \(\Delta A_{r}(t-t_{0})\)[1]. If \(A_{r}(t)=A_{r}\) and \(\Delta A_{r}(t)=\Delta A_{r}\)\(\forall t\in[t_{0},t_{f}]\), then \(\Delta_{1}\phi_{r}(t,t_{0})=L(e^{A_{r}(t-t_{0})},\Delta A_{r}(t-t_{0}))\)._
**Proof.** For the given assumptions, we have
\[\Delta_{1}\phi_{r}(t,t_{0})=\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_{r}(\tau)\phi_{r}(\tau,t_{0})d\tau\] \[=\int_{t_{0}}^{t}e^{A_{r}(t-\tau)}\Delta A_{r}e^{A_{r}(\tau-t_{0})}d\tau\] \[=\int_{0}^{t-t_{0}}e^{A_{r}(t-t_{0}-l)}\Delta A_{r}e^{A_{r}l}dl\] \[=\int_{0}^{t-t_{0}}e^{A_{r}(t-t_{0})\left(1-\frac{l}{t-t_{0}}\right)}\Delta A_{r}(t-t_{0})e^{A_{r}(t-t_{0})\left(\frac{l}{t-t_{0}}\right)}\frac{dl}{t-t_{0}}\] \[=\int_{0}^{1}e^{A_{r}(t-t_{0})(1-s)}\Delta A_{r}(t-t_{0})e^{A_{r}(t-t_{0})s}ds\] \[=L(e^{A_{r}(t-t_{0})},\Delta A_{r}(t-t_{0})).\]
This proves the result.
Let \(M_{i}=\{f_{i}|f_{i}:[t_{0},t_{f}]\rightarrow\mathbb{R}^{m_{i}\times n_{i}}\) is continuous and bounded\(\}\) for \(i=1,2,\ldots,k\). Consider \(F\) as \(F:M_{1}\times M_{2}\times\ldots\times M_{k}\rightarrow\mathbb{R}\).
**Definition 7** ([12], Appendix A): _The functional derivative of \(F\) with respect to \(f_{i}\in M_{i}\) is a function given by \(\frac{\partial F}{\partial f_{i}}:[t_{0},t_{f}]\rightarrow\mathbb{R}^{m_{i}\times n_{i}}\) which satisfies_
\[\left\langle\frac{\partial F}{\partial f_{i}},\Delta f_{i}\right\rangle =\int_{t_{0}}^{t_{f}}Tr\left(\left(\frac{\partial F}{\partial f_{ i}}(t)\right)^{T}\Delta f_{i}(t)\right)dt\] \[=\lim_{\varepsilon\to 0}\frac{F[f_{i}+\varepsilon\Delta f_{i}]-F[f_{ i}]}{\varepsilon}, \tag{51}\]
_where \(\varepsilon\) is a scalar and \(\Delta f_{i}:[t_{0},t_{f}]\rightarrow\mathbb{R}^{m_{i}\times n_{i}}\) is a function in \(M_{i}\)._
Thus, given \(\Delta f_{i}\in M_{i}\), the functional derivative of \(F\) with respect to \(f_{i}\) is the function for which the inner product between \(\frac{\partial F}{\partial f_{i}}\) and \(\Delta f_{i}\) is the directional derivative of the functional \(F\) in the direction of \(\Delta f_{i}\).
**Theorem 8**.: _Consider \(J[A_{r},B_{r},C_{r}]=\|\Sigma-\Sigma_{r}\|_{H_{2}[0,t_{f}]}^{2}\) where \(A_{r}(t)\), \(B_{r}(t)\) and \(C_{r}(t)\) are given by (19). The functional derivatives of \(J\) with respect to \(A_{r}(t)\), \(B_{r}(t)\) and \(C_{r}(t)\), respectively, are as follows:_
\[\frac{\partial J}{\partial A_{r}}(t) =2(Q_{r}(t)P_{r}(t)-(Y(t))^{T}X(t)), \tag{52}\] \[\frac{\partial J}{\partial B_{r}}(t) =2(Q_{r}(t)B_{r}(t)-(Y(t))^{T}B(t))\quad\text{and}\] (53) \[\frac{\partial J}{\partial C_{r}}(t) =2(C_{r}(t)P_{r}(t)-C(t)X(t)). \tag{54}\]
**Proof.** The inner product of \(\frac{\partial J}{\partial B_{r}}\) and an arbitrary matrix-valued perturbation \(\Delta B_{r}:[t_{0},t_{f}]\rightarrow\mathbb{R}^{r\times m}\) is as follows:
\[\left\langle\frac{\partial J}{\partial B_{r}},\Delta B_{r}\right\rangle= \int_{t_{0}}^{t_{f}}\mathrm{Tr}\left(\left(\frac{\partial J}{\partial B_{r}}(t) \right)^{T}\Delta B_{r}(t)\right)dt \tag{55}\] \[=\lim_{\varepsilon\to 0+}\frac{1}{\varepsilon}\left(J[A_{r},B_{r}+ \varepsilon\Delta B_{r},C_{r}]-J[A_{r},B_{r},C_{r}]\right).\]
Substituting the expression of \(J\) given by (36) in the above equation and using the identity \(Tr(A^{T}B)=Tr(B^{T}A)\) results in
\[\lim_{\varepsilon\to 0+}\frac{1}{\varepsilon}\left(J[A_{r},B_{r}+ \varepsilon\Delta B_{r},C_{r}]-J[A_{r},B_{r},C_{r}]\right)\] \[=2\int_{t_{0}}^{t_{f}}\mathrm{Tr}\left(\left(Q_{r}(\tau)B_{r}( \tau)-(Y(\tau))^{T}B(\tau)\right)^{T}\Delta B_{r}(\tau)\right)d\tau\] \[=\left\langle 2\left(Q_{r}(\tau)B_{r}(\tau)-(Y(\tau))^{T}B(\tau) \right),\Delta B_{r}(\tau)\right\rangle.\]
Since \(\Delta B_{r}\) is arbitrary, comparing the above expression with (55), (53) is obtained.
The inner product of \(\frac{\partial J}{\partial C_{r}}\) and an arbitrary matrix-valued perturbation \(\Delta C_{r}:[t_{0},t_{f}]\rightarrow\mathbb{R}^{p\times r}\) is as follows
\[\left\langle\frac{\partial J}{\partial C_{r}},\Delta C_{r}\right\rangle =\int_{t_{0}}^{t_{f}}\mathrm{Tr}\left(\left(\frac{\partial J}{\partial C_{r}}(t )\right)^{T}\Delta C_{r}(t)\right)dt \tag{55}\] \[=\lim_{\varepsilon\to 0+}\frac{1}{\varepsilon}\left(J[A_{r},B_ {r},C_{r}+\varepsilon\Delta C_{r}]-J[A_{r},B_{r},C_{r}]\right).\]
Considering \(J\) given by (35) in the above limit and using the identity \(Tr(A^{T}B)=Tr(B^{T}A)\) gives
\[\lim_{\varepsilon\to 0+}\frac{1}{\varepsilon}\left(J[A_{r},B_ {r},C_{r}+\varepsilon\Delta C_{r}]-J[A_{r},B_{r},C_{r}]\right)\] \[=2\int_{t_{0}}^{t_{f}}\mathrm{Tr}\left(\left(C_{r}(t)P_{r}(t)-C( t)X(t)\right)\left(\Delta C_{r}(t)\right)^{T}\right)dt\] \[=2\int_{t_{0}}^{t_{f}}\mathrm{Tr}\left(\left(C_{r}(t)P_{r}(t)-C( t)X(t)\right)^{T}\Delta C_{r}(t)\right)dt.\]
Since \(\Delta C_{r}\) is arbitrary, comparing the above expression with (55) results in (54).
The inner product of \(\frac{\partial J}{\partial A_{r}}\) and an arbitrary matrix-valued perturbation \(\Delta A_{r}:[t_{0},t_{f}]\rightarrow\mathbb{R}^{r\times r}\) is as follows
\[\left\langle\frac{\partial J}{\partial A_{r}},\Delta A_{r}\right\rangle =\int_{t_{0}}^{t_{f}}\mathrm{Tr}\left(\left(\frac{\partial J}{ \partial A_{r}}(t)\right)^{T}\Delta A_{r}(t)\right)dt \tag{56}\] \[=\lim_{\varepsilon\to 0+}\frac{1}{\varepsilon}\left(J[A_ {r}+\varepsilon\Delta A_{r},B_{r},C_{r}]-J[A_{r},B_{r},C_{r}]\right).\]
Let \(A_{r}(t)\) be perturbed by \(\varepsilon\Delta A_{r}\). Therefore, by Equation (43), we get the following
\[\Delta\phi_{r}(t,t_{0})=\varepsilon\int_{t_{0}}^{t}\phi_{r}(t,\tau)\Delta A_{r }(\tau)\phi_{r}(\tau,t_{0})d\tau+\varepsilon^{2}\phi\,. \tag{57}\]
where \(\phi=\int_{t_{0}}^{t}\int_{t_{0}}^{\tau}\phi_{r}(t,\tau)\Delta A_{r}(\tau) \phi_{r}(\tau,s)\Delta A_{r}(s)\hat{\phi}_{r}(s,t_{0})dsd\tau\). Considering the expression of \(J\) given by Equation (35), we get the following
\[J[A_{r}+\varepsilon\Delta A_{r},B_{r},C_{r}]-J[A_{r},B_{r},C_{r}] =\int_{t_{0}}^{t_{f}}\mathrm{Tr}(C_{r}(t)\times\] \[\Delta P_{r}(t)(C_{r}(t))^{T})dt-2\int_{t_{0}}^{t_{f}}\mathrm{Tr} \left(C(t)\Delta X(t)(C_{r}(t))^{T}\right)dt, \tag{58}\]
where \(\Delta P_{r}(t)\) and \(\Delta X(t)\) are the perturbations in \(P_{r}(t)\) and \(X(t)\), respectively, due to the perturbation of the state matrix \(A_{r}(t)\). Using the expression of \(P_{r}(t)\) given by Equation (21), the first term on the right-hand side of Equation (58) can be simplified as follows,
\[\int_{t_{0}}^{t_{f}}\mathrm{Tr}(C_{r}(t)\int_{t_{0}}^{t}\Delta \phi_{r}(t,\tau)B_{r}(\tau)(B_{r}(\tau))^{T}(\phi_{r}(t,\tau))^{T}d\tau(C_{r} (t))^{T}\] \[+C_{r}(t)\int_{t_{0}}^{t}\phi_{r}(t,\tau)B_{r}(\tau)(B_{r}(\tau) )^{T}(\Delta\phi_{r}(t,\tau))^{T}d\tau(C_{r}(t))^{T}+\] \[C_{r}(t)\int_{t_{0}}^{t}\Delta\phi_{r}(t,\tau)B_{r}(\tau)(B_{r}( \tau))^{T}(\Delta\phi_{r}(t,\tau))^{T}d\tau(C_{r}(t))^{T})dt\] \[=2\int_{t_{0}}^{t_{f}}\mathrm{Tr}(C_{r}(t)\int_{t_{0}}^{t}\Delta \phi_{r}(t,\tau)B_{r}(\tau)(B_{r}(\tau))^{T}(\phi_{r}(t,\tau))^{T}d\tau\times\] \[(C_{r}(t))^{T})dt+\int_{t_{0}}^{t_{f}}\mathrm{Tr}(C_{r}(t)\int_{t _{0}}^{t}\Delta\phi_{r}(t,\tau)B_{r}(\tau)(B_{r}(\tau))^{T}\times\] \[(\Delta\phi_{r}(t,\tau))^{T}d\tau(C_{r}(t))^{T})dt.\]
Substituting \(\Delta\phi_{r}(t,\tau)\) from (57) in the above expression results in
\[2\varepsilon\int_{t_{0}}^{t_{f}}\mathrm{Tr}((C_{r}(t))^{T}C_{r}(t )\int_{t_{0}}^{t}\int_{\tau}^{t}\phi_{r}(t,s)\Delta A_{r}(s)\phi_{r}(s,\tau)B_{r }(\tau)\times\] \[(B_{r}(\tau))^{T}(\phi_{r}(t,\tau))^{T}dsd\tau)dt+\mathrm{Tr}( \Psi), \tag{59}\]
where \(\mathrm{Tr}(\Psi)=\mathcal{O}(\varepsilon^{2})\).
Consider the first term of the expression (59). Exchanging the order of integration of the variables \(s\) and \(\tau\) results in
\[2\varepsilon\int_{t_{0}}^{t_{f}}\mathrm{Tr}((C_{r}(t))^{T}C_{r}(t )\int_{t_{0}}^{t}\int_{t_{0}}^{s}\phi_{r}(t,s)\Delta A_{r}(s)\phi_{r}(s,\tau)B_{ r}(\tau)\times\] \[(B_{r}(\tau))^{T}(\phi_{r}(s,\tau))^{T}(\phi_{r}(t,s))^{T}d\tau ds )dt\] \[=2\varepsilon\mathrm{Tr}\int_{t_{0}}^{t_{f}}\int_{t_{0}}^{t}(\phi _{r}(t,s))^{T}(C_{r}(t))^{T}C_{r}(t)\phi_{r}(t,s)\Delta A_{r}(s)\times\] \[\int_{t_{0}}^{s}\phi_{r}(s,\tau)B_{r}(\tau)(B_{r}(\tau))^{T}(\phi _{r}(s,\tau))^{T}d\tau dsdt\] \[=2\varepsilon\mathrm{Tr}\int_{t_{0}}^{t_{f}}\int_{t_{0}}^{t}(\phi _{r}(t,s))^{T}(C_{r}(t))^{T}C_{r}(t)\phi_{r}(t,s)\Delta A_{r}(s)\times\] \[P_{r}(s)dsdt.\]
Further, exchanging the order of integration of the variables \(t\) and \(s\) in the above expression yields
\[2\varepsilon\mathrm{Tr}\int_{t_{0}}^{t_{f}}P_{r}(s)\left(\int_{s}^{t_{f}}(\phi_{r}(t,s))^{T}(C_{r}(t))^{T}C_{r}(t)\phi_{r}(t,s)dt\right)\Delta A_{r}(s)ds\] \[=2\varepsilon\int_{t_{0}}^{t_{f}}\mathrm{Tr}\left(\left(Q_{r}(s)P_{r}(s)\right)^{T}\Delta A_{r}(s)\right)ds.\]
The expression (59) can be simplified as
\[2\varepsilon\int_{t_{0}}^{t_{f}}\mathrm{Tr}\left(\left(Q_{r}(s)P_{r}(s)\right)^{T}\Delta A_{r}(s)\right)ds+\mathrm{Tr}(\Psi). \tag{60}\]
Similarly, using \(X(t)\) given by (22) and substituting \(\Delta\phi_{r}(t,\tau)\) from (57), the second term on the right-hand side of Equation (58) is simplified as follows
\[2\varepsilon\mathrm{Tr}\int_{t_{0}}^{t_{f}}(C_{r}(t))^{T}C(t)\int_{t_{0}}^{t}\phi(t,\tau)B(\tau)(B_{r}(\tau))^{T}\int_{\tau}^{t}(\phi_{r}(s,\tau))^{T}(\Delta A_{r}(s))^{T}(\phi_{r}(t,s))^{T}ds\,d\tau\,dt+\mathrm{Tr}(\eta)\] \[=2\varepsilon\mathrm{Tr}\int_{t_{0}}^{t_{f}}(C_{r}(t))^{T}C(t)\int_{t_{0}}^{t}\int_{\tau}^{t}\phi(t,s)\phi(s,\tau)B(\tau)(B_{r}(\tau))^{T}(\phi_{r}(s,\tau))^{T}(\Delta A_{r}(s))^{T}(\phi_{r}(t,s))^{T}ds\,d\tau\,dt+\mathrm{Tr}(\eta), \tag{61}\]
where \(\mathrm{Tr}(\eta)=\mathcal{O}(\varepsilon^{2})\). For the first term of the above expression, exchanging the order of integration of \(s\) and \(\tau\) results in
\[2\varepsilon\mathrm{Tr}\int_{t_{0}}^{t_{f}}(C_{r}(t))^{T}C(t)\int_{t_{0}}^{t}\phi(t,s)\int_{t_{0}}^{s}\phi(s,\tau)B(\tau)(B_{r}(\tau))^{T}(\phi_{r}(s,\tau))^{T}d\tau(\Delta A_{r}(s))^{T}(\phi_{r}(t,s))^{T}dsdt\] \[=2\varepsilon\mathrm{Tr}\int_{t_{0}}^{t_{f}}\int_{t_{0}}^{t}(\phi_{r}(t,s))^{T}(C_{r}(t))^{T}C(t)\phi(t,s)X(s)(\Delta A_{r}(s))^{T}dsdt.\]
Further, changing the order of integration of \(s\) and \(t\) gives
\[2\varepsilon\mathrm{Tr}\int_{t_{0}}^{t_{f}}\int_{s}^{t_{f}}(\phi_{r}(t,s))^{T}(C_{r}(t))^{T}C(t)\phi(t,s)dt\,X(s)(\Delta A_{r}(s))^{T}ds\] \[=2\varepsilon\mathrm{Tr}\int_{t_{0}}^{t_{f}}((Y(s))^{T}X(s))^{T}\Delta A_{r}(s)ds.\]
Thus, the expression (61) simplifies to
\[2\varepsilon\mathrm{Tr}\int_{t_{0}}^{t_{f}}((Y(s))^{T}X(s))^{T}\Delta A_{r}(s)ds+\mathrm{Tr}(\eta). \tag{62}\]
Substituting (60) and (62) in (58) results in
\[J[A_{r}+\varepsilon\Delta A_{r},B_{r},C_{r}]-J[A_{r},B_{r},C_{r}]=2\varepsilon\int_{t_{0}}^{t_{f}}\mathrm{Tr}((Q_{r}(s)P_{r}(s)-(Y(s))^{T}X(s))^{T}\Delta A_{r}(s))ds+\mathrm{Tr}(\Psi-\eta),\]
where \(\mathrm{Tr}(\Psi-\eta)=\mathcal{O}(\varepsilon^{2})\). Dividing the above expression by \(\varepsilon\) and taking the limit as \(\varepsilon\to 0\) yields
\[\lim_{\varepsilon\to 0+}\frac{1}{\varepsilon}(J[A_{r}+\varepsilon\Delta A_{r},B_{r},C_{r}]-J[A_{r},B_{r},C_{r}])\] \[=2\int_{t_{0}}^{t_{f}}\mathrm{Tr}((Q_{r}(s)P_{r}(s)-(Y(s))^{T}X(s))^{T}\Delta A_{r}(s))ds.\]
As \(\Delta A_{r}(t)\) is arbitrary, comparing the above expression with (56) results in (52).
The functional derivatives obtained above are used in the following theorem.
**Theorem 9**.: _Let the continuous time-varying matrices \(A_{r}^{*}(t)\), \(B_{r}^{*}(t)\) and \(C_{r}^{*}(t)\) be a stationary point of the functional \(J[A_{r},B_{r},C_{r}]\). Let \(P_{r}^{*}(t)\), \(Q_{r}^{*}(t)\), \(X^{*}(t)\) and \(Y^{*}(t)\) be the solutions of (25), (32), (26) and (31), respectively, for \(A_{r}(t)=A_{r}^{*}(t)\), \(B_{r}(t)=B_{r}^{*}(t)\) and \(C_{r}(t)=C_{r}^{*}(t)\). If \(P_{r}^{*}(t)\) and \(Q_{r}^{*}(t)\) are invertible at every instant \(t\in[t_{0},t_{f}]\), then_
\[A_{r}^{*}(t) =(W_{r}(t))^{T}\left(A(t)V_{r}(t)-\frac{dV_{r}(t)}{dt}\right),\quad\mathrm{or}\] \[=\left((W_{r}(t))^{T}A(t)+\frac{d}{dt}(W_{r}(t))^{T}\right)V_{r}(t),\] \[B_{r}^{*}(t) =(W_{r}(t))^{T}B(t),\quad\text{and}\] \[C_{r}^{*}(t) =C(t)V_{r}(t), \tag{63}\]
_where \((W_{r}(t))^{T}V_{r}(t)=I_{r}\) with \(W_{r}(t)=Y^{*}(t)\left(Q_{r}^{*}(t)\right)^{-1}\) and \(V_{r}(t)=X^{*}(t)\left(P_{r}^{*}(t)\right)^{-1}\)._
**PROOF.** Let \((A_{r}^{*}(t),B_{r}^{*}(t),C_{r}^{*}(t))\) be a stationary point of the functional \(J[A_{r},B_{r},C_{r}]\). Let \(\frac{\partial J}{\partial A_{r}}^{*}(t)=\frac{\partial J}{\partial A_{r}}\big{|}_{(A_{r}^{*}(t),B_{r}^{*}(t),C_{r}^{*}(t))}\), \(\frac{\partial J}{\partial B_{r}}^{*}(t)=\frac{\partial J}{\partial B_{r}}\big{|}_{(A_{r}^{*}(t),B_{r}^{*}(t),C_{r}^{*}(t))}\) and \(\frac{\partial J}{\partial C_{r}}^{*}(t)=\frac{\partial J}{\partial C_{r}}\big{|}_{(A_{r}^{*}(t),B_{r}^{*}(t),C_{r}^{*}(t))}\). For arbitrary \(\Delta A_{r}(t)\), \(\Delta B_{r}(t)\) and \(\Delta C_{r}(t)\) with appropriate dimensions, continuous over \([t_{0},t_{f}]\), the following relations hold.
\[\left\langle\frac{\partial J}{\partial A_{r}}^{*},\Delta A_{r}\right\rangle=\left\langle \frac{\partial J}{\partial B_{r}}^{*},\Delta B_{r}\right\rangle=\left\langle \frac{\partial J}{\partial C_{r}}^{*},\Delta C_{r}\right\rangle=0.\]
As \(\frac{\partial J}{\partial A_{r}}^{*}(t)\), \(\frac{\partial J}{\partial B_{r}}^{*}(t)\) and \(\frac{\partial J}{\partial C_{r}}^{*}(t)\) are continuous in \([t_{0},t_{f}]\), we get,
\[\frac{\partial J}{\partial A_{r}}^{*}(t) =Q_{r}^{*}(t)P_{r}^{*}(t)-(Y^{*}(t))^{T}X^{*}(t)=0, \tag{64}\] \[\frac{\partial J}{\partial B_{r}}^{*}(t) =Q_{r}^{*}(t)B_{r}^{*}(t)-(Y^{*}(t))^{T}B(t)=0,\] (65) \[\frac{\partial J}{\partial C_{r}}^{*}(t) =C_{r}^{*}(t)P_{r}^{*}(t)-C(t)X^{*}(t)=0. \tag{66}\]
From (65) and (66), we have
\[B_{r}^{*}(t) =\left(Y^{*}(t)\left(Q_{r}^{*}(t)\right)^{-1}\right)^{T}B(t)=\left(W_{r}(t)\right)^{T}B(t),\quad\text{and}\] \[C_{r}^{*}(t) =C(t)X^{*}(t)\left(P_{r}^{*}(t)\right)^{-1}=C(t)V_{r}(t),\]
where \(W_{r}(t)=Y^{*}(t)\left(Q_{r}^{*}(t)\right)^{-1}\) and \(V_{r}(t)=X^{*}(t)\left(P_{r}^{*}(t)\right)^{-1}\). Using (64), we get
\[\left(W_{r}(t)\right)^{T}V_{r}(t)=I_{r}. \tag{67}\]
Left multiplying (26) by \((W_{r}(t))^{T}\), substituting \(X^{*}(t)=V_{r}(t)P_{r}^{*}(t)\) and using (67) gives
\[(W_{r}(t))^{T}\frac{d}{dt}\left(V_{r}(t)P_{r}^{*}(t)\right)=(W_{r}(t))^{T}A(t)V_{r}(t)P_{r}^{*}(t)+(W_{r}(t))^{T}V_{r}(t)P_{r}^{*}(t)(A_{r}^{*}(t))^{T}+(W_{r}(t))^{T}B(t)(B_{r}^{*}(t))^{T}\] \[\Rightarrow\frac{d}{dt}P_{r}^{*}(t)=\left((W_{r}(t))^{T}A(t)V_{r}(t)-(W_{r}(t))^{T}\frac{d}{dt}V_{r}(t)\right)P_{r}^{*}(t)+P_{r}^{*}(t)(A_{r}^{*}(t))^{T}+B_{r}^{*}(t)(B_{r}^{*}(t))^{T}.\]
Comparing the above matrix differential equation with (25) results in
\[A_{r}^{*}(t)=\left(W_{r}(t)\right)^{T}A(t)V_{r}(t)-\left(W_{r}(t)\right)^{T}\frac{d}{dt}V_{r}(t)=(W_{r}(t))^{T}\left(A(t)V_{r}(t)-\frac{dV_{r}(t)}{dt}\right).\]
Similarly, taking the transpose of (31), right multiplying it by \(V_{r}(t)\), substituting \(Y^{*}(t)=W_{r}(t)Q_{r}^{*}(t)\), and using (67) gives
\[-\frac{d}{dt}(Q_{r}^{*}(t))^{T}=(Q_{r}^{*}(t))^{T}\left((W_{r}(t))^{T}A(t)+\frac{d}{dt}(W_{r}(t))^{T}\right)V_{r}(t)+(A_{r}^{*}(t))^{T}(Q_{r}^{*}(t))^{T}+(C_{r}^{*}(t))^{T}C_{r}^{*}(t).\]
The following result is obtained by comparing the above matrix differential equation with (32).
\[A_{r}^{*}(t)=\left((W_{r}(t))^{T}A(t)+\frac{d}{dt}(W_{r}(t))^{T}\right)V_{r}(t).\]
**Remark 10**: _The conditions (64), (65) and (66) are first-order necessary conditions for optimality of the finite horizon \(H_{2}\) error norm \(J=\left\|\Sigma-\Sigma_{r}\right\|_{H_{2}[0,t_{f}]}^{2}\). As the conditions are expressed using a gramian framework, they can be considered as the generalization of the Lyapunov-based \(H_{2}\) optimality conditions for continuous-time LTI systems [15] to LTV systems._
### A projection-based model reduction scheme
This section proposes an iterative algorithm for obtaining the optimal values of the reduced-order matrices (with respect to the finite horizon \(H_{2}\) error norm) starting from non-optimal values. The starting values are obtained by reducing the original LTV system by the finite horizon balanced truncation algorithm (Algorithm 1, [8]). Then, \(P_{r}(t)\), \(X(t)\), \(Q_{r}(t)\) and \(Y(t)\) are computed (explained in detail in Remark 11). Further, \(A_{r}(t)\), \(B_{r}(t)\) and \(C_{r}(t)\) are updated by computing \(W_{r}(t)=Y(t)\left(Q_{r}(t)\right)^{-1}\), \(V_{r}(t)=X(t)\left(P_{r}(t)\right)^{-1}\) and using (63). In this manner, \(A_{r}(t)\), \(B_{r}(t)\) and \(C_{r}(t)\) are iteratively calculated. If the iterations converge, they converge to the optimal values of the reduced-order matrices. An outline of the iterative method is given in Algorithm 1. The iterations are stopped when there is no significant change in the output error \(\delta\) for successive iterations.
```
Input: \(A(t),B(t),C(t)\); a finite time-interval \([t_{0},t_{f}]\); initial \(A_{r}(t),B_{r}(t),C_{r}(t)\), obtained by finite horizon balanced truncation over \([t_{0},t_{f}]\); the system input \(u(t)\) over \([t_{0},t_{f}]\);
Output: \(A_{r}(t),B_{r}(t),C_{r}(t)\) satisfying (64)-(66);
while (not converged) do
  1. Compute \(V_{r}(t)=X(t)(P_{r}(t))^{-1}\) and \(W_{r}(t)=Y(t)\left(Q_{r}(t)\right)^{-1}\);
  2. Update the reduced-order model as follows:
     \(A_{r}(t)=(W_{r}(t))^{T}\left(A(t)V_{r}(t)-\frac{dV_{r}(t)}{dt}\right)\) or \(\left((W_{r}(t))^{T}A(t)+\frac{d}{dt}(W_{r}(t))^{T}\right)V_{r}(t)\),
     \(B_{r}(t)=(W_{r}(t))^{T}B(t)\), and \(C_{r}(t)=C(t)V_{r}(t)\);
  3. Simulate the updated reduced-order model with initial condition \(x_{r}(t_{0})=0\) and input \(u(t)\) over \([t_{0},t_{f}]\) to obtain \(y_{r}(t)\) and compute \(\delta=\left\|y-y_{r}\right\|_{L^{2}[t_{0},t_{f}]}\).
end while
```
**Algorithm 1**Finite horizon TSIA for LTV systems
**Remark 11**: _The above algorithm is the generalization of the TSIA algorithm proposed by [17] for continuous-time LTI systems to continuous-time LTV systems._
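To make the iteration concrete, the sketch below strings the pieces together on a uniform grid. It is schematic rather than a reference implementation: it reuses the `reachability_gramian`, `observability_gramian`, `cross_term_X` and `cross_term_Y` helpers sketched earlier, approximates \(dV_{r}/dt\) by finite differences, regularizes \(P_{r}(t)\) and \(Q_{r}(t)\) near the ends of the horizon where they become singular (the paper's example instead initializes the gramians with \(0.001I\)), and runs a fixed number of sweeps instead of monitoring \(\delta\).

```python
import numpy as np

def interp_matrix(t_grid, M):
    """Wrap grid samples M[k] = M(t_grid[k]) into a callable via linear interpolation."""
    def f(t):
        return np.array([[np.interp(t, t_grid, M[:, i, j]) for j in range(M.shape[2])]
                         for i in range(M.shape[1])])
    return f

def tsia_ltv(A, B, C, Ar, Br, Cr, t0, tf, n_grid=201, sweeps=10, reg=1e-6):
    """Illustrative finite horizon TSIA loop; Ar, Br, Cr are initial callables (e.g. from BT)."""
    t_grid = np.linspace(t0, tf, n_grid)
    r = Ar(t0).shape[0]
    for _ in range(sweeps):
        Pr = reachability_gramian(Ar, Br, t0, tf, t_grid)     # Eq. (25)
        X = cross_term_X(A, B, Ar, Br, t0, tf, t_grid)        # Eq. (26)
        Qr = observability_gramian(Ar, Cr, t0, tf, t_grid)    # Eq. (32)
        Y = cross_term_Y(A, C, Ar, Cr, t0, tf, t_grid)        # Eq. (31)

        # projection matrices V_r(t), W_r(t); `reg` is a numerical safeguard not in the paper
        Vr = np.array([X[k] @ np.linalg.inv(Pr[k] + reg * np.eye(r)) for k in range(n_grid)])
        Wr = np.array([Y[k] @ np.linalg.inv(Qr[k] + reg * np.eye(r)) for k in range(n_grid)])
        dVr = np.gradient(Vr, t_grid, axis=0)                 # finite-difference dV_r/dt

        Ar_s = np.array([Wr[k].T @ (A(t_grid[k]) @ Vr[k] - dVr[k]) for k in range(n_grid)])
        Br_s = np.array([Wr[k].T @ B(t_grid[k]) for k in range(n_grid)])
        Cr_s = np.array([C(t_grid[k]) @ Vr[k] for k in range(n_grid)])

        Ar = interp_matrix(t_grid, Ar_s)
        Br = interp_matrix(t_grid, Br_s)
        Cr = interp_matrix(t_grid, Cr_s)
    return Ar, Br, Cr
```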
## 4 Numerical example
Assume the following LTV system with initial condition \(x(0)=\left[0\ \ 0\right]^{T}\).
\[\frac{d}{dt}x(t) =\begin{bmatrix}t&2e^{-t}\\ 1&te^{-t}\end{bmatrix}x(t)+\begin{bmatrix}1\\ 1\end{bmatrix}u(t),\] \[y(t) =\begin{bmatrix}1&1\end{bmatrix}x(t), \tag{68}\]
where \(x(t)=\left[x_{1}(t)\ x_{2}(t)\right]^{T}\). The input for this example is a step input. For an interval of \([-0.5,2.5]\) s, we obtain the time-varying gramians \(P(t)\) and \(Q(t)\) by assuming \(P(-0.5)=0.001I_{2}\) and \(Q(2.5)=0.001I_{2}\), respectively. Using \(\mathrm{diag}\{\sigma_{1}(t),\sigma_{2}(t)\}=\left(\lambda\left(P(t)Q(t)\right)\right)^{\frac{1}{2}}\), the Hankel singular values of the above system are computed and displayed in Fig. 1. From the figure, it is clear that \(\sigma_{1}(t)\) is considerably greater than \(\sigma_{2}(t)\) over \([0,2]\) s. Hence, we obtain an order one approximation of the LTV system for \([0,2]\) s using the finite horizon TSIA algorithm. For initializing the TSIA algorithm, we use an order one LTV approximation of the original system obtained by applying the finite horizon balanced truncation algorithm for \([0,2]\) s. For each iteration of the TSIA algorithm, the relative error between the outputs of the original system and the reduced-order approximation for a step input is computed. The iterations are stopped when the error, \(\delta\), is minimum for the given step input. In our case, the minimum value of \(\delta\) is obtained after nine iterations and is \(0.0013\). The outputs of the original system, \((y(t))\), and the reduced-order approximations obtained by balanced truncation, \((y_{r,BT}(t))\), and the proposed TSIA algorithm, \((y_{r,TSIA}(t))\), are displayed in Fig. 2. Also, the absolute errors between the outputs of the original system and the reduced-order systems, i.e., \(|y(t)-y_{r,BT}(t)|\) and \(|y(t)-y_{r,TSIA}(t)|\), are compared in Fig. 3.
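A minimal script reproducing the first step of this example, the time-varying Hankel singular values \(\sigma_{i}(t)=\sqrt{\lambda_{i}(P(t)Q(t))}\), is sketched below; the grid density and solver tolerances are choices made here and are not specified in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example system from Eq. (68)
A = lambda t: np.array([[t, 2.0 * np.exp(-t)], [1.0, t * np.exp(-t)]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

t0, tf = -0.5, 2.5
grid = np.linspace(t0, tf, 301)

# dP/dt = A P + P A^T + B B^T, with P(-0.5) = 0.001*I (forward in time)
def dP(t, p):
    P = p.reshape(2, 2)
    return (A(t) @ P + P @ A(t).T + B @ B.T).ravel()

# dQ/dt = -A^T Q - Q A - C^T C, with Q(2.5) = 0.001*I (integrated backward in time)
def dQ(t, q):
    Q = q.reshape(2, 2)
    return (-A(t).T @ Q - Q @ A(t) - C.T @ C).ravel()

P0 = 0.001 * np.eye(2)
Qf = 0.001 * np.eye(2)
P = solve_ivp(dP, (t0, tf), P0.ravel(), t_eval=grid, rtol=1e-8).y.T.reshape(-1, 2, 2)
Q = solve_ivp(dQ, (tf, t0), Qf.ravel(), t_eval=grid[::-1], rtol=1e-8).y.T[::-1].reshape(-1, 2, 2)

# Time-varying Hankel singular values sigma_i(t) = sqrt(lambda_i(P(t)Q(t)))
hsv = np.array([np.sort(np.sqrt(np.abs(np.linalg.eigvals(P[k] @ Q[k]))))[::-1]
                for k in range(len(grid))])
```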
## 5 Conclusion
This paper has proposed a finite horizon \(H_{2}\) error norm between the continuous-time LTV system and its reduced-order LTV approximation. The functional derivatives of the proposed error norm have been obtained. Further, they have been used to derive conditions for optimality of the finite horizon \(H_{2}\) error norm. Finally, a projection-based iterative scheme for model reduction of continuous-time LTV systems has been proposed based on the optimality conditions. The performance of the iterative scheme has been illustrated via a numerical example.
## Appendix A Proof of Proposition 1
For \(u\equiv 0\), the differential equation of the LTV system \(\Sigma\) becomes,
\[\frac{d}{dt}x(t)=A(t)x(t), \tag{A.1}\]
Let \(x(t_{0})\) be the state at time \(t=t_{0}\). For \(t\geq t_{0}\), \(x(t)=\phi(t,t_{0})x(t_{0})\).
Similarly, for \(u_{a}\equiv 0\), the differential equation of the adjoint LTV system \(\Sigma_{a}\) is
\[\frac{d}{dt}x_{a}(t)=-\left(A(t)\right)^{T}x_{a}(t), \tag{A.2}\]
Let \(x_{a}(t_{f})\) be the state at time \(t=t_{f}\). For \(t\leq t_{f}\), \(x_{a}(t)=\phi_{a}(t,t_{f})x_{a}(t_{f})\). Using (A.1) and (A.2), we have,
\[\frac{d}{dt}\left((x(t))^{T}x_{a}(t)\right)=\left(\frac{d}{dt}x( t)\right)^{T}x_{a}(t)+(x(t))^{T}\frac{d}{dt}x_{a}(t)\] \[=\left(x(t)\right)^{T}\left(A(t)\right)^{T}x_{a}(t)-\left(x(t) \right)^{T}\left(A(t)\right)^{T}x_{a}(t)=0.\]
This shows that the inner product of \(x(t)\) and \(x_{a}(t)\) is constant for all \(t\). Let \(k\in\mathbb{R}\) be the constant. Thus, we get
\[\left(x(t)\right)^{T}x_{a}(t)=k\] \[\Rightarrow\left(x(t_{0})\right)^{T}\left(\phi(t,t_{0})\right)^{T }\phi_{a}(t,t_{f})x_{a}(t_{f})=k.\quad\forall\,t\in[t_{0},t_{f}]\]
The above expression is true if \((\phi(t,t_{0}))^{T}\phi_{a}(t,t_{f})=M\) where \(M\) is a constant matrix. We obtain \(M\) by evaluating
Figure 1: Hankel singular values of the LTV system for \([-0.5,2.5]\) s.
Figure 3: Comparison of absolute output errors of the reduced-order models obtained by BT and TSIA.
Figure 2: Comparison of outputs of the original and the reduced-order approximation.
\((\phi(t,t_{0}))^{T}\phi_{a}(t,t_{f})\) at \(t=t_{0}\) as follows,
\[M=(\phi(t_{0},t_{0}))^{T}\phi_{a}(t_{0},t_{f})=\phi_{a}(t_{0},t_{f}).\]
Thus, we have
\[(\phi(t,t_{0}))^{T}\phi_{a}(t,t_{f})=\phi_{a}(t_{0},t_{f})\] \[\Rightarrow\phi_{a}(t,t_{0})=(\phi(t,t_{0}))^{-T}\,.\]
Using the above result, the STM \(\phi_{a}(t,\tau)\) of \(\Sigma_{a}\) is expressed as,
\[\phi_{a}(t,\tau) =\phi_{a}(t,t_{0})\phi_{a}(t_{0},\tau)\] \[=(\phi(t,t_{0}))^{-T}\,(\phi(\tau,t_{0}))^{T}\] \[=(\phi(t,\tau))^{-T}\,.\]
Thus, we obtain (14).
Further, let \(x_{a}(t)\) and \(x_{ma}(t)\) be the state vectors for the adjoint system \(\Sigma_{a}\) and the modified adjoint system \(\Sigma_{ma}\), respectively. By definition, the state vectors are related as
\[x_{ma}(t)=x_{a}(T_{i}-t).\]
For \(t=t_{0}\), \(x_{ma}(t_{0})=x_{a}(t_{f})\). The above equation can be rewritten as follows,
\[\phi_{ma}(t,t_{0})x_{ma}(t_{0})=\phi_{a}(T_{i}-t,t_{f})x_{a}(t_{f})\] \[\Rightarrow\left(\phi_{ma}(t,t_{0})-\phi_{a}(T_{i}-t,t_{f})\right) x_{ma}(t_{0})=0.\]
Since the above expression is true for arbitrary value of \(x_{ma}(t_{0})\), it follows that
\[\phi_{ma}(t,t_{0})=\phi_{a}(T_{i}-t,t_{f}).\] (A.3)
Using the properties of the STM given by (3) and (5), the STM of the modified adjoint system becomes
\[\phi_{ma}(t,\tau) =\phi_{ma}(t,t_{0})\left(\phi_{ma}(\tau,t_{0})\right)^{-1}\] \[=\phi_{a}(T_{i}-t,t_{f})\left(\phi_{a}(T_{i}-\tau,t_{f})\right)^{ -1}\] \[=\phi_{a}(T_{i}-t,T_{i}-\tau).\] (A.4)
Using (14) along with the above expression gives the following result
\[\phi_{ma}(t,\tau) =\left(\phi(T_{i}-t,T_{i}-\tau)\right)^{-T}\] \[=\left(\phi(T_{i}-\tau,T_{i}-t)\right)^{T}.\] (A.5)
Combining (A.4) and (A.5), we get (15).
|
2306.01504 | Système de recommandations basé sur les contraintes pour les
simulations de gestion de crise | In the context of the evacuation of populations, some citizens/volunteers may
want and be able to participate in the evacuation of populations in difficulty
by coming to lend a hand to emergency/evacuation vehicles with their own
vehicles. One way of framing these impulses of solidarity would be to be able
to list in real-time the citizens/volunteers available with their vehicles
(land, sea, air, etc.), to be able to geolocate them according to the risk
areas to be evacuated, and adding them to the evacuation/rescue vehicles.
Because it is difficult to propose an effective real-time operational system on
the field in a real crisis situation, in this work, we propose to add a module
for recommending driver/vehicle pairs (with their specificities) to a system of
crisis management simulation. To do that, we chose to model and develop an
ontology-supported constraint-based recommender system for crisis management
simulations. | Ngoc Luyen Le, Jinfeng Zhong, Elsa Negre, Marie-Hélène Abel | 2023-06-02T12:51:48Z | http://arxiv.org/abs/2306.01504v1 | # Systeme de recommandations base sur les contraintes pour les simulations de gestion de crise
###### Abstract
In the context of the evacuation of populations, some citizens/volunteers may want and be able to participate in the evacuation of populations in difficulty by coming to lend a hand to emergency/evacuation vehicles with their own vehicles. One way of framing these impulses of solidarity would be to be able to list in real-time the citizens/volunteers available with their vehicles (land, sea, air, etc.), to be able to geolocate them according to the risk areas to be evacuated, and adding them to the evacuation/rescue vehicles. Because it is difficult to propose an effective real-time operational system on the field in a real crisis situation, in this work, we propose to add a module for recommending driver/vehicle pairs (with their specificities) to a system of crisis management simulation. To do that, we chose to model and develop an ontology-supported constraint-based recommender system for crisis management simulations.
Knowledge graph, Constraint-based Recommender System, Ontology, Simulation, Crisis management
## 1 Introduction
In the context of evacuating populations, traditional public resources such as ambulances and gendarmerie helicopters (with professional drivers) may be limited and poorly positioned to reach everyone in need. In such situations, alternative evacuation resources must be explored. Citizen resources, on the other hand, are generally more dispersed and therefore more accessible. Moreover, many citizens/volunteers may be willing and able to help with the evacuation using their own vehicles. For example, the owner of a minibus with a capacity of 9 passengers could potentially evacuate 8 additional people, considerably increasing the capacity of the evacuation process. Similarly, the owner of a boat with a capacity of 6 passengers could help evacuate 5 people in the event of a flood. Unfortunately, existing research on crisis management (e.g., evacuation simulation) [3, 5, 7, 9, 11, 12] has mainly focused on the use of public resources such as ambulances and fire trucks. However, in some cases, these resources may not be available due to high demand or the remoteness of the affected area. In addition, the location of public resources can itself be impacted by a crisis, worsening the resource shortage.
In carrying out this work, we make the following contributions: (i) we study the development of an ontology to help organize shared vocabularies, standardize knowledge related to crisis management, and facilitate the implementation of a constraint-based recommender system. Using this ontology, we can streamline the reuse of information in order to improve the efficiency of the constraint-based recommender system; (ii) we formulate the problem of distributing vehicles during the sheltering of populations as a recommendation problem, which allows us to incorporate different recommendation techniques to allocate resources efficiently. More precisely, our ontology-supported crisis simulation system for sheltering populations aims to consolidate citizen resources to help shelter populations during a crisis, particularly in situations where public resources are insufficient. The rest of this article is organized as follows. The next section presents the construction of the ontology and our model of a constraint-based recommender system for crisis management simulations. In the third section, we present our prototype and its application to a detailed use case. Finally, we conclude by proposing some directions for future work.
## 2 Our approach
### Problem formulation
Our work aims to model and propose a driver/vehicle resource management system for evacuating populations affected by a crisis. Two key problems are addressed: \((P1)\) the organization of the data and information related to driver/vehicle resources, and \((P2)\) the recommendation of optimal solutions taking into account capacity, response time, and context constraints. Problem \((P1)\) focuses on choosing an appropriate model for organizing data and information in the context of crisis management. Ontological modeling is used to capture and represent the concepts and relations of the crisis management domain. Problem \((P2)\) concerns the design and development of a recommender system able to propose resource allocation solutions adapted to each situation. Knowledge-based and ontology-supported recommendation technologies make it possible to take into account the specific requirements of the rescue points 1 and to compute relevant recommendations using knowledge about the context and the available resources.
Footnote 1: A rescue point is a specifically designated place, within a crisis situation, where people can go to obtain help or medical care, or to be evacuated.
**Definition 1**: _A constraint-based recommender system for resource allocation is defined using 4 sets: the set of mobile resources 2 \(\mathcal{R}\) with their characteristics/attributes, the set of rescue points and their needs \(\mathcal{P}\), the set of shelters \(\mathcal{S}\) with their characteristics/attributes, and the set of constraints \(\mathcal{C}\). A relevant solution recommendation is computed on the basis of the concrete item and the sets \(\mathcal{R}\), \(\mathcal{P}\), and \(\mathcal{S}\), such that the specified constraints \(\mathcal{C}\) are satisfied._
Footnote 2: Mobile resources are limited to citizen resources within the scope of our work.
**Definition 2**: _A recommendation task for mobile resource allocation is defined as a constraint satisfaction problem \((\mathcal{R},\mathcal{P},\mathcal{S},\mathcal{C})\) based on assigning and computing the number of mobile resources in the set \(\mathcal{R}\) allocated to a rescue point in \(\mathcal{P}\) such that it satisfies and does not violate any of the constraints in \(\mathcal{C}\)._
An optimal solution recommendation for the allocation of the mobile resources in \(\mathcal{R}\) proposes a list of available mobile resources used to transfer evacuees from the rescue points \(\mathcal{P}\) to the shelters \(\mathcal{S}\) with minimal travel time. In the next section, we detail the development of a crisis management ontology for the resources and the related factors.
### Ontology construction
In our study, we used the Agile Methodology for Ontology Development (AMOD) [1], which comprises three distinct phases: preliminary, development, and post-development, each contributing to the progressive construction of the ontology. In the preliminary phase, the main objective of building the ontology is to provide a standard model with terminology and vocabulary for collecting information about the available resources (e.g., vehicles, drivers) that an organization (e.g., the City Council) will mobilize to evacuate affected people during a crisis. The core concepts of the ontology are defined on the basis of the important entities, with their characteristics, in the crisis management context.
During the development phase, we organize sprints and choose to develop our ontology by building on the _ISyCri_ ontology [2], using concepts related to the description of the crisis, the affected people, and the resources. More precisely, as illustrated in Figure 1, we adapt and develop our ontological model around three main entities: resources, people, and places. First, resources are distinguished according to whether they are human, material, or mobile. In our case, human resources are the citizens/volunteers who take part in rescue and evacuation operations, while material resources include the categories of vehicles and their descriptive information. A mobile resource is represented by default as the association of a vehicle and a driver (i.e., a driver/vehicle resource pair). Second, identifying and organizing places plays an extremely important role in our case. Each place must be specifically identified with location information. In general, places are separated into rescue points and shelters. Rescue points are sites where affected people gather and from which they are transported to a shelter by a mobile resource.
Figure 1: An excerpt of the ontology concerning driver/vehicle resources, places, and the people involved in managing a crisis.
Finally, people can be distinguished between affected populations and human resources. Affected populations are the populations that are vulnerable during the crisis and need to be moved to a shelter, while human resources can be drivers who use their vehicle to take part in evacuation activities. In general, representing a person in the ontology is useful for gathering human resources and information about affected people in the pre- and post-crisis stages.
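To make the three groups of entities concrete, the fragment below sketches how a few of these classes and relations could be declared with `rdflib`. The namespace, class names, and property names are illustrative placeholders: the paper does not publish the ontology's actual IRIs or vocabulary.

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal
from rdflib.namespace import XSD

# Hypothetical namespace and term names for illustration only
CM = Namespace("http://example.org/crisis-management#")
g = Graph()
g.bind("cm", CM)

# Core classes: resources, people, places
for cls in ("Resource", "HumanResource", "MaterialResource", "MobileResource",
            "Person", "AffectedPerson", "Driver", "Vehicle",
            "Place", "RescuePoint", "Shelter"):
    g.add((CM[cls], RDF.type, RDFS.Class))

g.add((CM.HumanResource, RDFS.subClassOf, CM.Resource))
g.add((CM.MaterialResource, RDFS.subClassOf, CM.Resource))
g.add((CM.MobileResource, RDFS.subClassOf, CM.Resource))
g.add((CM.RescuePoint, RDFS.subClassOf, CM.Place))
g.add((CM.Shelter, RDFS.subClassOf, CM.Place))

# A mobile resource pairs a driver with a vehicle
g.add((CM.hasDriver, RDF.type, RDF.Property))
g.add((CM.hasVehicle, RDF.type, RDF.Property))

# Example individual: a volunteer minibus with capacity 9
g.add((CM.resource_001, RDF.type, CM.MobileResource))
g.add((CM.resource_001, CM.hasVehicle, CM.minibus_42))
g.add((CM.minibus_42, CM.seatingCapacity, Literal(9, datatype=XSD.integer)))
```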
### Recommender system
During the sheltering of populations, efficiently distributing the available citizen/volunteer drivers/vehicles to the affected areas is a crucial research topic. This problem can be viewed as a recommendation problem, where the citizen/volunteer drivers/vehicles are treated as items to be recommended and the rescue points are treated as users to whom the drivers/vehicles must be recommended. In this section, we provide a detailed description of the constraint-based recommender system for crisis management simulations. Constraint-based recommender systems generate recommendations by identifying the items that satisfy a set of predefined explicit constraints. In our case, the recommender system aims to generate driver/vehicle pairs for each rescue point, ensuring that the drivers/vehicles assigned to each rescue point have sufficient capacity to evacuate the population while minimizing the time needed to reach the rescue points (see [10] for more details).
When several solutions are available, our algorithm returns the one that uses fewer vehicles, in order to reduce the total time required and the risk of traffic congestion. We compute \(T_{CV-RP}\) with _OSMnx_ [4], a Python package for downloading geospatial data from _OpenStreetMap_: the system exhaustively enumerates all possible solutions and selects the one that uses the minimal number of vehicles so as to relieve congestion. To reduce the computation time required to generate the recommendation list, we pre-compute the estimated travel time between each pair of points. We tried the _Google Maps API_, and it turned out to take longer than OpenStreetMap to compute the estimated time between two points, so we adopted the latter. More precisely, we used the operations research tools (OR-Tools) [8] for combinatorial optimization, designed to find the optimal solution to a problem from an extremely large set of possible solutions. It is worth noting that public resources can be considered as a special case in our context. Public vehicles are generally parked at fixed points, whereas citizen drivers/vehicles are often dispersed. Consequently, public vehicles such as ambulances can also be integrated into our setting.
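As a rough illustration of how such an allocation can be posed with OR-Tools, the sketch below assigns driver/vehicle pairs to rescue points with a CP-SAT model: each pair serves at most one point, each point must receive enough seats for its evacuees, and the total pre-computed travel time is minimized. The capacities, demands, travel times, and the single-trip assumption are toy data invented here; they are not the paper's actual constraint set or its \(T_{CV-RP}\) values.

```python
from ortools.sat.python import cp_model

# Toy data: travel_time[v][p] would come from the pre-computed OpenStreetMap estimates.
capacity = [8, 4, 3]                 # free seats per driver/vehicle pair
demand = [6, 5]                      # evacuees per rescue point
travel_time = [[7, 12],              # minutes from each pair to each rescue point
               [9, 4],
               [15, 6]]

model = cp_model.CpModel()
V, P = range(len(capacity)), range(len(demand))

# x[v][p] = 1 if pair v is sent to rescue point p (each pair serves at most one point)
x = [[model.NewBoolVar(f"x_{v}_{p}") for p in P] for v in V]
for v in V:
    model.Add(sum(x[v][p] for p in P) <= 1)

# Capacity constraint: assigned seats must cover each rescue point's demand
for p in P:
    model.Add(sum(capacity[v] * x[v][p] for v in V) >= demand[p])

# Objective: minimize total estimated travel time of the dispatched pairs
model.Minimize(sum(travel_time[v][p] * x[v][p] for v in V for p in P))

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for v in V:
        for p in P:
            if solver.Value(x[v][p]):
                print(f"pair {v} -> rescue point {p}")
```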
## 3 Prototype et cas d'etude
Suivant les directives de recherche en science de la conception proposees par [6] : La recherche en science de la conception doit produire un artefact viable sous la forme d'un construit, d'un modele, d'une methode, ou d'une instanciation. Dans cette section, nous presentons un prototype d'un systeme qui aide a recommander des ressources conducteur/vehicule pendant une crise. Nous presenterons d'abord l'architecture du systeme qui met en couvre le systeme de recommandations base sur des contraintes, comment le systeme fonctionne, puis nous decrivons un scenario pour illustrer l'utilite de notre systeme.
As illustrated in Figure 2, our system is based on a four-layer architecture. The interaction layer includes a mobile interface that collects the availability of citizen/volunteer drivers during a crisis, and a web interface that allows decision-makers to interact with the system and specify the information for each relief point. The intelligence layer, the core of the system, computes the list of recommendations satisfying certain constraints and displays it to the decision-makers. The service layer computes the coordinates of each relief point from its geographic location and estimates the time needed for each driver/vehicle pair to reach the relief point. Finally, the data layer contains a knowledge base supported by an ontology for the crisis management domain, modeling and storing all the necessary information and data. As an illustration, consider a flood crisis occurring in the city of Compiegne that requires a rapid evacuation of vulnerable people to shelters. Suppose the city council has (1) a list of registered citizen/volunteer drivers and their personal vehicles that can be called upon in an emergency, and (2) emergency shelters. Through a web interface, the decision-makers in charge of the evacuation provide the system with the required information for each relief point, in particular the number of people and of disabled people to evacuate, as well as their priority level. The system then consults its knowledge base to identify the available driver/vehicle resources. Using OpenStreetMap data, it computes the estimated travel times between the vehicles and the relief points, seeking to minimize them. Finally, the system recommends to the evacuation managers the optimal list of driver/vehicle pairs to support their decision-making. This approach makes it possible to optimize resource allocation so that vulnerable people are evacuated to shelters as quickly as possible.
## 4 Conclusion and perspectives
This article presents a recommender system intended to help decision-makers allocate citizen/volunteer driver/vehicle pairs when public resources are insufficient. The system, structured in four interconnected modular layers, uses an ontology for structuring and storing the data, applies OpenStreetMap to compute the time and distance between two geographic points, generates recommendations for each relief point, and facilitates the interactions between the decision-makers and the recommender system. In the future, we plan to enrich the ontology to better handle crises, add further constraints for a more realistic modeling of the crisis, build a dynamic system for reusing resources in real time, and integrate our system into an agent-based simulation in order to evaluate its socio-technical aspects in different crisis management scenarios.
## Acknowledgments
This research was funded by the Agence Nationale de la Recherche (ANR) and by the company Vivocaz under the France Relance project - preservation of R&D employment (ANR-21-PRRD-0072-01).
|
2305.18505 | Provable Reward-Agnostic Preference-Based Reinforcement Learning | Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL
agent learns to optimize a task using pair-wise preference-based feedback over
trajectories, rather than explicit reward signals. While PbRL has demonstrated
practical success in fine-tuning language models, existing theoretical work
focuses on regret minimization and fails to capture most of the practical
frameworks. In this study, we fill in such a gap between theoretical PbRL and
practical algorithms by proposing a theoretical reward-agnostic PbRL framework
where exploratory trajectories that enable accurate learning of hidden reward
functions are acquired before collecting any human feedback. Theoretical
analysis demonstrates that our algorithm requires less human feedback for
learning the optimal policy under preference-based models with linear
parameterization and unknown transitions, compared to the existing theoretical
literature. Specifically, our framework can incorporate linear and low-rank
MDPs with efficient sample complexity. Additionally, we investigate
reward-agnostic RL with action-based comparison feedback and introduce an
efficient querying algorithm tailored to this scenario. | Wenhao Zhan, Masatoshi Uehara, Wen Sun, Jason D. Lee | 2023-05-29T15:00:09Z | http://arxiv.org/abs/2305.18505v3 | # How to Query Human Feedback Efficiently in RL?
###### Abstract
Reinforcement Learning with Human Feedback (RLHF) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories, rather than explicit reward signals. While RLHF has demonstrated practical success in fine-tuning language models, existing empirical work does not address the challenge of how to efficiently sample trajectory pairs for querying human feedback. In this study, we propose an efficient sampling approach to acquiring exploratory trajectories that enable accurate learning of hidden reward functions before collecting any human feedback. Theoretical analysis demonstrates that our algorithm requires less human feedback for learning the optimal policy under preference-based models with linear parameterization and unknown transitions, compared to the existing literature. Specifically, our framework can incorporate linear and low-rank MDPs. Additionally, we investigate RLHF with action-based comparison feedback and introduce an efficient querying algorithm tailored to this scenario.
## 1 Introduction
Reinforcement learning algorithms train agents to optimize rewards of interest. However, setting an appropriate numerical reward can be challenging in practical applications (e.g., designing a reward function for a robot arm to learn to play table tennis), and optimizing hand-crafted reward functions can lead to undesirable behavior when the reward function does not align with human intention. To overcome this challenge, there has been a recent surge of interest in preference-based reinforcement learning with human feedback (RLHF). In RLHF, the agent does not receive a numerical reward signal, but rather receives feedback from a human expert
in the form of preferences, indicating which state-action trajectory is preferred in a given pair of trajectories. RLHF has gained considerable attention in various domains, including NLP (Ziegler et al., 2019; Stiennon et al., 2020; Wu et al., 2021; Nakano et al., 2021; Ouyang et al., 2022; Glaese et al., 2022; Ramamurthy et al., 2022; Liu et al., 2023), robot learning (Christiano et al., 2017; Brown et al., 2019; Shin et al., 2023), and recommender systems (Xue et al., 2022).
Despite the promising applications of RLHF in various areas, there are only a few provably efficient algorithms (also known as PAC RL) for this purpose (Pacchiano et al., 2021; Chen et al., 2022). These algorithms iterate through the following processes: collecting new trajectories from the environment, obtaining human feedback on the trajectories, and learning hidden reward functions from the human feedback. However, this approach can be slow and expensive in practice, as it requires humans in the loop of the learning process, which is not as easy as it may sound. Putting humans in the loop of the entire learning process typically means that humans must also be involved in hyperparameter tuning and model selection. For example, interactive decision-making algorithms such as DAgger (Ross et al., 2011) that put humans in the loop of training can become impractical when the expert is human, which has been observed in prior works (Ross et al., 2013; Laskey et al., 2016) when applying DAgger to some real-world robotics applications. In contrast, in InstructGPT (Ouyang et al., 2022), the majority of preference data are collected by crowdsourcing prompts from the entire world and from the supervised policies, so most of the human labeling process does not depend on the training steps afterward. Another line of work (Zhu et al., 2023) focuses on purely offline RL algorithms to learn a near-optimal policy from offline trajectories with good coverage (e.g., offline data that covers some high-quality policies' traces). Nevertheless, it does not address the question of how to obtain such high-quality offline data a priori (Chen and Jiang, 2019).
We propose a new method that lies in between purely online and purely offline methods for RLHF. Our algorithm first collects state-action trajectories from the environment without human feedback. In this step, we perform experimental design to acquire exploratory trajectories that facilitate the subsequent learning of reward functions. In the second step, we collect preference feedback on the collected trajectories from human experts. In the third step, we aim to learn the underlying hidden reward functions using the collected trajectories in the first step and preference feedback in the second step. In the fourth step, we learn the optimal policy by solving the offline RL problem under the learned reward function using the pre-collected trajectory data. Our approach can be understood as performing experimental design for RLHF, which allows us to separate the data-collection process from the process of querying human feedback, eliminating the need for constantly keeping human in the training loop. For instance, we only need to keep human experts in step 2 above, while we can freely perform hyperparameter tuning / model selection for the rest steps without requiring human experts sitting next to the computers. This process can significantly reduce the burden from human experts.
Our contributions can be summarized as follows:
* We propose an efficient experimental design algorithm for RLHF. Our algorithm is specifically designed for linear reward parametrization, which is commonly used in models such as the Bradley-Terry-Luce model, and can handle unknown transitions. This flexibility allows us to handle non-tabular transition models like low-rank MDPs (Agarwal et al., 2020; Uehara et al., 2021) and linear MDPs (Jin et al., 2020). To the best of our knowledge, existing works with statistical guarantees cannot incorporate these models efficiently. Notably, our experimental design algorithm does not depend on any information of the reward and is reward-agnostic. Therefore, the collected trajectories can indeed be reused for learning many reward functions at the same time.
* Our key idea is to decouple the interaction with the environment and the collection of human feedback. This decoupling not only simplifies the process of obtaining human feedback in practice but also results in a significant reduction in the sample complexity associated with human feedback compared to existing works (Pacchiano et al., 2021; Chen et al., 2022). This improvement is particularly valuable as collecting human feedback is often a resource-intensive process.
* To circumvent the scaling with the maximum per-trajectory reward in the trajectory-based comparison setting, we further investigate preference-based RL with action-based comparison and propose a provably efficient algorithm for this setting. We show that in this case the sample complexity only scales with the bound of the advantage functions of the optimal policy, which can be much smaller than the maximum per-trajectory reward, a standard assumption used in the imitation learning literature (Ross et al., 2011; Agarwal et al., 2019).
### Related Works
We refer the readers to Wirth et al. (2017) for an overview of Preference-based RL (PbRL). PbRL has been well-explored in bandit setting under the notion of dueling bandits (Yue et al., 2012; Zoghi et al., 2014; Dudik et al., 2015), where the goal is to find the optimal arm in the bandit given human preference over action pairs. For MDPs, in addition to Pacchiano et al. (2021); Chen et al. (2022), which we compare in the introduction, Novoseller et al. (2020); Xu et al. (2020) have also developed algorithms with sample complexity guarantees. Novoseller et al. (2020) proposes a double posterior sampling algorithm with an asymptotic regret sublinear in the horizon \(H\). Xu et al. (2020) proposes a PAC RL algorithm but relies on potentially strong assumptions such as Strong Stochastic Transitivity. Note both of Novoseller et al. (2020); Xu et al. (2020) are limited to the tabular setting.
Our algorithm shares a similar concept with reward-free RL which focuses on exploration in the state-action space without using explicit rewards. Reward-free RL has been studied in many MDPs such as tabular MDPs (Jin et al., 2020), linear MDPs (Wang et al., 2020), low-rank MDPs (Agarwal et al., 2020, 2023) and several other models (Chen et al., 2022; Zanette et al., 2020; Qiu et al., 2021). The goal of reward-free RL is to gather exploratory state-action data to address the challenge of unknown _transitions_ before _observing rewards_. In contrast, our approach aims to design a single exploration distribution from
which we can draw trajectory pairs to solicit human feedback for learning reward functions. Our setting can be considered as an experimental design for RLHF.
## 2 Preliminaries
We introduce our formulation of Markov decision processes (MDPs) and RLHF.
### MDPs with Linear Reward Parametrization
We consider a finite-horizon MDP \(\mathcal{M}=(\mathcal{S},\mathcal{A},P^{*},r^{*},H)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(P^{*}=\{P_{h}^{*}\}_{h=1}^{H}\) is the ground-truth transition dynamics, \(r^{*}=\{r_{h}^{*}\}_{h=1}^{H}\) is the ground-truth reward function, and \(H\) is the horizon. Specifically, for each \(h\in[H]\left([H]:=(1,\cdots,H)\right)\), \(P_{h}^{*}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\) and \(r_{h}^{*}:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) represent the transition and reward function at step \(h\), respectively. Moreover, we use \(P_{1}(\cdot)\) to denote the initial state distribution. Here, both \(r^{*},P^{*}\) are unknown to the learner. In this work, we assume that the cumulative reward of any trajectory \(\tau=(s_{h},a_{h})_{h=1}^{H}\) does not exceed \(r_{\max}\), i.e., \(\sum_{h=1}^{H}r_{h}(s_{h},a_{h})\leq r_{\max}\).
Policies and value functions. A policy \(\pi=\{\pi_{h}\}_{h=1}^{H}\) where \(\pi_{h}:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) for each \(h\in[H]\) characterizes the action selection probability for every state at each step. In this paper, we assume the policy belongs to a policy class \(\Pi\), which can be infinite. Given a reward function \(r\) and policy \(\pi\), the associated value function and Q function at time step \(h\) are defined as follows: \(V_{h}^{r,\pi}(s)=\mathbb{E}_{\pi,P^{*}}[\sum_{h^{\prime}=h}^{H}r_{h^{\prime}}(s_{h^{\prime}},a_{h^{\prime}})|s_{h}=s],\ Q_{h}^{r,\pi}(s,a)=\mathbb{E}_{\pi,P^{*}}[\sum_{h^{\prime}=h}^{H}r_{h^{\prime}}(s_{h^{\prime}},a_{h^{\prime}})|s_{h}=s,a_{h}=a]\). Here, \(\mathbb{E}_{\pi,P^{*}}[\cdot]\) represents the expectation over the distribution of the trajectory induced by a policy \(\pi\) and the transition \(P^{*}\). We use \(V^{r,\pi}\) to denote the expected cumulative rewards of policy \(\pi\) with respect to reward function \(r\) under \(P^{*}\), i.e., \(V^{r,\pi}:=\mathbb{E}_{s\sim P_{1}^{*}}V_{1}^{r,\pi}(s)\), and use \(V^{r,*}\) to denote the maximal expected cumulative rewards with respect to reward function \(r\) under \(P^{*}\), i.e., \(V^{r,*}:=\max_{\pi\in\Pi}V^{r,\pi}\). In particular, let \(\pi^{*}\) denote the best policy in \(\Pi\) with respect to \(r^{*}\), i.e., \(\arg\max_{\pi\in\Pi}V^{r^{*},\pi}\). In contrast, we denote the globally optimal policy by \(\pi_{\text{g}}:=\arg\max_{\pi\in\Pi_{\text{Mar}}}V^{r^{*},\pi}\) where \(\Pi_{\text{Mar}}\) is the set of all Markovian policies. Note that when \(\Pi\neq\Pi_{\text{Mar}}\), \(\pi^{*}\) might not be optimal compared to \(\pi_{\text{g}}\).
Linear reward parametrization. To learn the unknown reward function, it is necessary to make structural assumptions about the reward. We consider a setting where the true reward function possesses a linear structure:
**Assumption 1** (Linear Reward Parametrization).: _We assume MDP has a linear reward parametrization with respect to (w.r.t.) known feature vectors \(\phi_{h}(s,a)\in\mathbb{R}^{d}\). Specifically, for each \(h\in[H]\), there exists an unknown vector \(\theta_{h}^{*}\in\mathbb{R}^{d}\) such that \(r_{h}^{*}(s,a)=\phi_{h}(s,a)^{\top}\theta_{h}^{*}\) for all \((s,a)\in\mathcal{S}\times\mathcal{A}\). For technical purposes, we suppose for all \(s\in\mathcal{S},a\in\mathcal{A},h\in[H]\), we have \(\|\phi_{h}(s,a)\|\leq R,\|\theta_{h}^{*}\|\leq B\)._
Note when \(d=|\mathcal{S}||\mathcal{A}|\) and setting \(\phi_{h}(s,a)\) as one-hot encoding vectors, we can encompass the tabular setting. Linear reward parametrization is commonly used in the literature of preference-based RL with statistical guarantees (Pacchiano et al., 2021; Zhu et al., 2023).
Notation. We use \(r^{*}(\tau):=\sum_{h=1}^{H}r_{h}^{*}(s_{h},a_{h})\) to denote the ground-truth cumulative reward of trajectory \(\tau\). In particular, using the linearity of the rewards, we can write \(r^{*}(\tau)=\langle\phi(\tau),\theta^{*}\rangle\) where we denote \(\phi(\tau):=[\phi_{1}(s_{1},a_{1})^{\top},\cdots,\phi_{H}(s_{H},a_{H})^{\top}]^{\top},\theta^{*}:=[\theta_{1}^{*\top},\cdots,\theta_{H}^{*\top}]^{\top}\). We use \(\phi(\pi)\) to denote \(\mathbb{E}_{\tau\sim(\pi,P^{*})}[\phi(\tau)]\) for simplicity. We also use \(\Theta(B)\) to denote the set \(\{\theta\in\mathbb{R}^{d}:\|\theta\|\leq B\}\) and \(\Theta(B,H)\) to denote the set \(\{\theta\in\mathbb{R}^{Hd}:\theta=[\theta_{1}^{\top},\cdots,\theta_{H}^{\top}]^{\top},\theta_{h}\in\Theta(B),\forall h\in[H]\}\cap\{\theta\in\mathbb{R}^{Hd}:\langle\phi(\tau),\theta\rangle\leq r_{\max},\forall\tau\}\). We use the notation \(f=O(g)\) when there exists a universal constant \(C>0\) such that \(f\leq Cg\) and \(\widetilde{O}(g):=O(g\log g)\).
### Reinforcement Learning with Human Feedback
In this paper, we consider a framework for RLHF that mainly consists of the following four steps:
* **Step 1**: Collect a dataset of trajectory pairs \(\mathcal{D}_{\mathrm{reward}}=(\tau^{n,0},\tau^{n,1})_{n=1}^{N}\) in a reward-agnostic fashion, where \(\tau^{n,i}=\{s_{h}^{n,i},a_{h}^{n,i},s_{h+1}^{n,i}\}_{h=1}^{H}\) for \(n\in[N]\) and \(i\in\{0,1\}\).
* **Step 2**: Obtain preference feedback from human experts for each pair of trajectories in \(\mathcal{D}_{\mathrm{reward}}\). Namely, if trajectory \(\tau^{n,1}\) is preferred over \(\tau^{n,0}\), then assign \(o^{n}=1\), otherwise assign \(o^{n}=0\).
* **Step 3**: Estimate the ground truth reward using the dataset \(\mathcal{D}_{\mathrm{reward}}\) and preference labels \(\{o^{n}\}_{n=1}^{N}\).
* **Step 4**: Run RL algorithms (either online or offline) using the learned reward function and obtain a policy \(\widehat{\pi}\) that maximizes the cumulative learned reward.
The above framework has been applied in practical applications, such as PEBBLE (Lee et al., 2021). However, these algorithms lack provable sample efficiency guarantees. In particular, it remains unclear in the literature how to collect the trajectories in **Step 1** to enable accurate estimation of the ground truth reward in step 3, possibly for many different reward functions simultaneously. In our work, we strive to develop a concrete algorithm that adheres to the above framework with strong theoretical guarantees. We also emphasize that in our setting, step 1 is reward-agnostic, and the collected dataset can be re-used for learning many different rewards as long as they are linear in the same feature \(\phi\).
Preference model. In this work, we assume the preference label follows the Bradley-Terry-Luce (BTL) model (Bradley and Terry, 1952) in Step 2, i.e., we have the following assumption:
**Assumption 2**.: _Suppose for any pair of trajectory \((\tau^{0},\tau^{1})\), we have_
\[\mathbb{P}(o=1)=\mathbb{P}(\tau^{1}\succ\tau^{0})=\sigma(r^{*}(\tau^{1})-r^{*}( \tau^{0}))=\frac{\exp(r^{*}(\tau^{1}))}{\exp(r^{*}(\tau^{0}))+\exp(r^{*}(\tau^{1 }))},\]
_where \(o\) is the human preference over \((\tau^{0},\tau^{1})\) and \(\sigma(\cdot)\) is the sigmoid function._
Our analysis will leverage the quantity \(\kappa:=\sup_{|x|\leq r_{\max}}|1/\sigma^{\prime}(x)|=2+\exp(2r_{\max})+\exp(-2r _{\max})\) to measure the difficulty of estimating the true reward from the BTL preference model.
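For concreteness, a small NumPy sketch of how preference labels could be simulated under the BTL model of Assumption 2 is shown below; the feature dimension, the bound \(r_{\max}\), and the parameter vector are arbitrary placeholders rather than quantities from the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d_total, r_max = 8, 1.0                 # placeholder feature dimension and bound
theta_star = rng.normal(size=d_total)   # hidden reward parameter (toy choice)

def btl_label(phi_tau0, phi_tau1):
    """Sample o ~ Bernoulli(sigma(r*(tau^1) - r*(tau^0))) as in Assumption 2."""
    p = sigmoid((phi_tau1 - phi_tau0) @ theta_star)
    return int(rng.random() < p)

# kappa as defined in the text, measuring how flat the sigmoid can get.
kappa = 2 + np.exp(2 * r_max) + np.exp(-2 * r_max)

o = btl_label(rng.normal(size=d_total), rng.normal(size=d_total))
print(o, kappa)
```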
## 3 Algorithm: Regime
We propose an algorithm specifically designed for the RLHF setting when the transitions are unknown. In order to handle unknown transitions, we use the following mild oracle:
**Definition 1** (Reward-free RL oracle).: _A reward-free learning oracle \(\mathcal{P}(\Pi,\epsilon^{\prime},\delta)\) can return an estimated model \(\widehat{P}\) such that with probability at least \(1-\delta\), for all policies \(\pi\in\Pi\) and all \(h\in[H],s\in\mathcal{S},a\in\mathcal{A}\), we have \(\|\widehat{P}_{1}(\cdot)-P_{1}^{*}(\cdot)\|_{1}\leq\epsilon^{\prime}\) and \(\mathbb{E}_{\pi,P^{*}}[\|\widehat{P}_{h}(\cdot|s,a)-P_{h}^{*}(\cdot|s,a)\|_{1}]\leq\epsilon^{\prime}\), where \(\|\cdot\|_{1}\) denotes the total variation distance (i.e., \(\ell_{1}\)-norm)._
This oracle necessitates accurate model learning through interactions with the environment. The required guarantee is relatively mild since we do not require a point-wise error guarantee, but rather an expectation-based guarantee under the ground truth transition. This oracle holds true not only in tabular MDPs (Jin et al., 2020), but also in low-rank MDPs (Agarwal et al., 2020, 2023), where the only assumption is the low-rank property of the transition dynamics, and features could be _unknown_ to the learner. Low-rank MDPs find wide application in practical scenarios, including blocked MDPs (Du et al., 2019; Zhang et al., 2022, 2020, 2020, 2021, 2022).
### Algorithm
The algorithm is described in Algorithm 1. Given a learned model \(\hat{P}\), we use \(\hat{\phi}(\pi)=\mathbb{E}_{\tau\sim(\pi,\hat{P})}[\phi(\tau)]\) to estimate \(\phi(\pi):=\mathbb{E}_{\tau\sim(\pi,P^{*})}[\phi(\tau)]\). The algorithm mainly consists of four steps as follows.
Step 1: Collection of state-action trajectories by interacting with the environment (Line 4-11).To learn the ground truth reward function, we collect exploratory state-action trajectories that cover the space spanned by \(\phi(\cdot)\) before collecting any human feedback. To achieve this, at each iteration, we identify a set of explorative policy pairs that are not covered by existing data. We measure the extent to which the trajectory generated by \((\pi_{0},\pi_{1})\) can be covered by computing the norm of \(\widehat{\phi}(\pi_{0})-\widehat{\phi}(\pi_{1})\) on the metric induced by the inverse covariance matrix \(\Sigma_{n}^{-1}\) at time step \(n\). After iterating this procedure \(N\) times
and obtaining sets of policies \(\{(\pi^{n,0},\pi^{n,1})\}_{n=1}^{N}\), we sample \(N\) exploratory trajectory pairs by executing the policy pairs \((\pi^{n,0},\pi^{n,1})\) for \(n\in[N]\). Notably, this trajectory-collection process is reward-agnostic and thus the collected samples can be used to learn multiple rewards in multi-task RL.
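A minimal NumPy sketch of this selection loop is given below, assuming the estimated feature vectors \(\widehat{\phi}(\pi)\) have already been computed for a finite candidate set of policies (represented simply as rows of a matrix); the candidate set, the regularizer, and the dimensions are illustrative placeholders, whereas the actual algorithm optimizes over the whole policy class \(\Pi\).

```python
import numpy as np

def select_exploratory_pairs(phi_hat, N, lam=1.0):
    """Greedily pick policy pairs whose feature difference is least covered.

    phi_hat : (num_policies, D) array whose i-th row approximates phi(pi_i).
    Returns the list of selected index pairs and the final covariance matrix.
    """
    num_pi, D = phi_hat.shape
    Sigma = lam * np.eye(D)
    pairs = []
    for _ in range(N):
        Sigma_inv = np.linalg.inv(Sigma)
        best, best_val = (0, 0), -np.inf
        for i in range(num_pi):
            for j in range(num_pi):
                diff = phi_hat[i] - phi_hat[j]
                val = diff @ Sigma_inv @ diff      # squared Sigma^{-1}-norm of the difference
                if val > best_val:
                    best, best_val = (i, j), val
        i, j = best
        diff = phi_hat[i] - phi_hat[j]
        Sigma = Sigma + np.outer(diff, diff)       # rank-one covariance update
        pairs.append((i, j))
    return pairs, Sigma

# Toy usage with random estimated features for 5 candidate policies.
rng = np.random.default_rng(0)
print(select_exploratory_pairs(rng.normal(size=(5, 4)), N=3)[0])
```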
Step 2: Collection of preference feedback by interacting with human experts (Line 12). If trajectory \(\tau^{n,1}\) is preferred over \(\tau^{n,0}\), then assign \(o^{n}=1\), otherwise assign \(o^{n}=0\).
Step 3: Reward learning via MLE (Line 13).We adopt the widely-used maximum likelihood estimation (MLE) approach to learn the reward function, which has also been employed in other works Ouyang et al. (2022); Christiano et al. (2017); Brown et al. (2019); Shin et al. (2023); Zhu et al. (2023). Specifically, we learn the reward model by maximizing the log-likelihood \(L(\theta,\mathcal{D}_{\mathrm{reward}},\{o^{n}\}_{n=1}^{N})\):
\[\sum_{n=1}^{N}\log\Big{(}o^{n}\cdot\sigma(\langle\theta,\phi(\tau^{n,1})-\phi (\tau^{n,0})\rangle)+(1-o^{n})\cdot\sigma(\langle\theta,\phi(\tau^{n,0})-\phi( \tau^{n,1})\rangle)\Big{)}. \tag{1}\]
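Since \(o^{n}\in\{0,1\}\), maximizing Eq. (1) is ordinary logistic regression on trajectory feature differences; a compact sketch with SciPy is given below. For brevity, the norm constraint \(\theta\in\Theta(B,H)\) is replaced here by a small ridge penalty, which differs from the constrained MLE analyzed in the text.

```python
import numpy as np
from scipy.optimize import minimize

def fit_reward_mle(phi_diff, labels, ridge=1e-3):
    """Maximize the log-likelihood in Eq. (1).

    phi_diff : (N, D) array with rows phi(tau^{n,1}) - phi(tau^{n,0}).
    labels   : (N,) array of preferences o^n in {0, 1}.
    """
    y = 2.0 * np.asarray(labels) - 1.0          # map {0,1} -> {-1,+1}

    def neg_log_lik(theta):
        z = y * (phi_diff @ theta)
        # -log sigma(z) in a numerically stable form, plus a ridge term
        return np.sum(np.logaddexp(0.0, -z)) + ridge * theta @ theta

    res = minimize(neg_log_lik, x0=np.zeros(phi_diff.shape[1]), method="L-BFGS-B")
    return res.x

# Toy usage: recover a random parameter from synthetic BTL preferences.
rng = np.random.default_rng(1)
theta_true = rng.normal(size=4)
X = rng.normal(size=(500, 4))
o = (rng.random(500) < 1.0 / (1.0 + np.exp(-(X @ theta_true)))).astype(int)
print(fit_reward_mle(X, o).round(2), theta_true.round(2))
```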
Step 4: RL with respect to learned rewards (Line 14). We obtain the near-optimal policy that maximizes the cumulative learned rewards.
Our algorithm differs significantly from the algorithms proposed in Pacchiano et al. (2021); Chen et al. (2022b). In their algorithms, they repeat the following steps: (a) collect new trajectories from the environment using policies based on the current learned reward
and transition models, (b) collect human feedback for the obtained trajectories, (c) update the reward and transition models. A potential issue with this approach is that every time human feedback is collected, agents need to interact with the environment, causing a wait time for humans. In contrast, our algorithm first collects exploratory trajectories without collecting any human feedback in Step 1. Then, we query human feedback and learn the reward model in Step 2-3. As a result, we decouple the step of collecting exploratory data from that of collecting human feedback. Hence, in our algorithm, we can efficiently query human feedback in parallel, mirroring common practice done in InstructGPT. Moreover, our algorithm's design leads to lower sample complexity for both trajectory pairs and human feedback than Pacchiano et al. (2021); Chen et al. (2022b), as demonstrated in the following discussion.
**Remark 1**.: _In Step 4 (Line 14), it is not necessary to plan under \(\widehat{P}\) learned in Line 3. Instead, any sample-efficient RL algorithm can be employed w.r.t. the learned reward._
### Analysis
Now we provide the sample complexity of Algorithm 1 as shown in the following theorem.
**Theorem 1**.: _Let_
\[\epsilon^{\prime}\leq\frac{\epsilon}{6BRH^{2}},\quad\lambda\geq 4HR^{2},\quad N \geq\widetilde{\mathcal{O}}\Big{(}\frac{\lambda\kappa^{2}B^{2}R^{2}H^{4}d^{2} \log(1/\delta)}{\epsilon^{2}}\Big{)},\]
_Then under Assumption 1 and 2, with probability at least \(1-\delta\), we have_
\[V^{r^{*},\widehat{\pi}}\geq V^{r^{*},*}-\epsilon.\]
_Note the sample complexity in Theorem 1 does not depend on the complexity of \(\Pi\) and thus we can learn arbitrary policy classes. When \(\Pi=\Pi_{\mathrm{Mar}}\), we have \(\pi^{*}=\pi_{\mathrm{g}}\) and thus we can compete against the global optimal policy._
Since the sample complexity of human feedback, denoted by \(N_{\mathrm{hum}}\), is equal to \(N\), Theorem 1 shows that the sample complexity of human feedback required to learn an \(\epsilon\)-optimal policy scales with \(\tilde{O}(1/\epsilon^{2})\) and is polynomial in the norm bounds \(B,R\), the horizon \(H\), and the dimension of the feature space \(d\). Notably, the _sample complexity of human feedback_\(N_{\mathrm{hum}}\)_only depends on the structural complexity of the reward function, regardless of the underlying transition model_. This is because while our theorem requires that the learned transition model is accurate enough (\(\epsilon^{\prime}\leq\frac{\epsilon}{6BRH^{2}}\)), we do not need human feedback to learn the transition model for this purpose. This property of our algorithm is particularly desirable when collecting human feedback is much more expensive than collecting trajectories from the environment. Existing works with sample-efficient guarantees, such as Pacchiano et al. (2021); Chen et al. (2022b), do not have this property (e.g., they query human feedback on all collected trajectories including those that are used for learning the transition dynamics). Our algorithm's favorable property can be attributed to the careful
design of the algorithm, where the step of collecting trajectories and learning transitions is reward-agnostic and thus separated from the step of collecting human feedback and learning rewards.
As the most relevant work, we compare our results with Pacchiano et al. (2021), which considers online learning in RLHF with unknown tabular transition models and linear reward parameterization. Let \(N_{\mathrm{tra}}\) and \(N_{\mathrm{hum}}\) denote the number of required trajectory pairs and human feedback, respectively. Then, to obtain an \(\epsilon\)-optimal policy, the algorithm in Pacchiano et al. (2021, Theorem 2) requires:
\[N_{\mathrm{tra}}=N_{\mathrm{hum}}=\widetilde{\mathcal{O}}\bigg{(}\frac{| \mathcal{S}|^{2}|\mathcal{A}|d+\kappa^{2}d^{2}}{\epsilon^{2}}\bigg{)}.\]
Here we omit the dependence on \(B,R,H\) to facilitate the comparison. In contrast, under the setting considered in Pacchiano et al. (2021), by leveraging the reward-free learning oracle from Jin et al. (2020) for tabular MDPs, our algorithm achieves the following sample complexity:
\[N_{\mathrm{tra}}=\widetilde{\mathcal{O}}\bigg{(}\frac{|\mathcal{S}|^{2}| \mathcal{A}|}{\epsilon^{2}}+\frac{\kappa^{2}d^{2}}{\epsilon^{2}}\bigg{)},N_{ \mathrm{hum}}=\widetilde{\mathcal{O}}\bigg{(}\frac{\kappa^{2}d^{2}}{\epsilon^ {2}}\bigg{)},\]
where the number of required trajectory-pairs comes from Jin et al. (2020)[Lemma 3.6]. We observe that our algorithm achieves a better sample complexity for both trajectory pairs and human feedbacks than the previous work. In particular, our algorithm has the advantage that \(N_{\mathrm{hum}}\) depends only on the feature dimension \(d\) and not on \(|\mathcal{S}|\) or \(|\mathcal{A}|\) which is the structural complexity of the underlying transition. This improvement is significant since obtaining human feedback is often costly. Lastly, we note that a similar comparison can be made to the work of Chen et al. (2022), which considers reward and transition models with bounded Eluder dimension. In summary, our key proposal for achieving improved sample complexity here is to _only query expensive human feedback after exploration and transition dynamics learning_.
Proof sketch. The proof of Theorem 1 consists of three steps. We first prove that the estimated feature vector \(\widehat{\phi}(\pi)\) is close to \(\phi(\pi)\) for all policies \(\pi\). Then we can show that the computed explorative policies \(\{(\pi^{n,0},\pi^{n,1})\}_{n=1}^{N}\) can cover the feature space sufficiently via the Elliptical Potential Lemma (Lemma 1). In the end we bound the estimation error of \(\widehat{\theta}\) and the suboptimality of \(\widehat{\pi}\) using the guarantee of MLE. The complete proof is deferred to Appendix A and B.1.
## 4 Regime in Linear MDPs
So far, we have considered RLHF given reward-free RL oracle satisfying Definition 1. Existing works have shown the existence of such a model-based reward-free RL oracle in low-rank MDPs (Agarwal et al., 2020, 2023). However, these results have not been extended to
linear MDPs (Jin et al., 2020) where model-free techniques are necessary. Linear MDPs are relevant to our setting because linear reward parametrization naturally holds in linear MDPs. Unfortunately, a direct reduction from linear MDPs to low-rank MDPs may introduce a dependence on the cardinality of \(\mathcal{S}\) without assuming strong inductive bias in the function class. In this section, we propose a model-free algorithm that can overcome this dependence by making slight modifications to Algorithm 1. We begin by providing the definition of linear MDPs.
**Assumption 3** (Linear MDPs (Jin et al., 2020)).: _We suppose MDP is linear with respect to some known feature vectors \(\phi_{h}(s,a)\in\mathbb{R}^{d}(h\in[H],s\in\mathcal{S},a\in\mathcal{A})\). More specifically, if for each \(h\in[H]\), there exist \(d\) unknown signed measures \(\mu^{*}_{h}=(\psi^{(1)}_{h},\cdots,\psi^{(d)}_{h})\) over \(\mathcal{S}\) and an unknown vector \(\theta^{*}_{h}\in\mathbb{R}^{d}\) such that \(P^{*}_{h}(\cdot|s,a)=\phi_{h}(s,a)^{\top}\mu^{*}_{h}(\cdot)\) and \(r^{*}_{h}(s,a)=\phi_{h}(s,a)^{\top}\theta^{*}_{h}\) for all \((s,a)\in\mathcal{S}\times\mathcal{A}\). For technical purposes, we suppose the norm bound \(\|\mu^{*}_{h}(s)\|_{2}\leq\sqrt{d}\) for any \(s\in\mathcal{S}\)._
In addition, we use \(\mathcal{N}_{\Pi}(\epsilon)\) to denote the covering number of \(\Pi\), which is defined as follows:
**Definition 2** (\(\epsilon\)-covering number).: _The \(\epsilon\)-covering number of the policy class \(\Pi\), denoted by \(\mathcal{N}_{\Pi}(\epsilon)\), is the minimum integer \(n\) such that there exists a subset \(\Pi^{\prime}\subset\Pi\) with \(|\Pi^{\prime}|=n\) and for any \(\pi\in\Pi\) there exists \(\pi^{\prime}\in\Pi^{\prime}\) such that \(\max_{s\in\mathcal{S},h\in[H]}\|\pi_{h}(\cdot|s)-\pi^{\prime}_{h}(\cdot|s)\|_ {1}\leq\epsilon\)._
### Algorithm
The reward-free RL oracle that satisfies Definition 1 for learning accurate transitions may be excessively strong for linear MDPs. Upon closer examination of Algorithm 1, it becomes apparent that the learned transition model is solely used for estimating \(\phi(\pi)\). Therefore, our approach focuses on achieving a precise estimation of \(\phi(\pi)\) which can be done via model-free policy evaluation.
Our main algorithm is described in Algorithm 2 with subroutines for estimating \(\widehat{\phi}(\pi)\). The overall structure of the primary algorithm resembles that of Algorithm 1. The key distinction lies in the part to accurately estimate \(\widehat{\phi}(\pi)\) within the subroutines, without relying on the abstract reward-free RL oracle (Definition 1). In the following, we provide a brief explanation of these subroutines. The detailed descriptions of these subroutines is shown in Algorithm 3 and 4.
Collecting exploratory data to learn transitions.Being inspired by the approach in Jin et al. (2020); Wang et al. (2020), we construct an exploratory dataset by running LSVI-UCB (Jin et al., 2020) with rewards equivalent to the bonus. Specifically, in the \(k\)-th iteration, we recursively apply the least square value iteration with a bonus term \(\{b^{k}_{h}(s,a)\}_{h=1}^{H}\), which is introduced to induce exploration. This process yields an exploratory policy \(\pi^{k}\) based on exploratory rewards \(\{r^{k}_{h}\}_{h=1}^{H}\), where \(r^{k}_{h}=b^{k}_{h}/H\). We then collect a trajectory by
executing policy \(\pi^{k}\). By repeating this procedure for \(K\) iterations, we accumulate an exploratory dataset. Notably, since the exploratory reward needs to be bounded, we clip both the bonus and the estimated Q function. Here \(\mathrm{Clip}_{[a,b]}(x)\) means \(\min\{\max\{a,x\},b\}\). The detailed algorithm is provided in Algorithm 3. It is important to note that this step involves generating \(K\) trajectories through interactions with the environment.
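The construction in Algorithm 3 is only summarized here; assuming the bonus takes the usual elliptical form \(\beta\|\phi_{h}(s,a)\|_{\Lambda_{h}^{-1}}\) of LSVI-UCB (Jin et al., 2020), a small sketch of how such a clipped bonus is computed from logged features might look as follows (the values of \(\beta\), \(\lambda\), and the clipping level are placeholders).

```python
import numpy as np

def clipped_elliptical_bonus(phi_sa, Phi_logged, beta=1.0, lam=1.0, cap=1.0):
    """b(s,a) = Clip_[0,cap]( beta * ||phi(s,a)||_{Lambda^{-1}} ),
    with Lambda = lam * I + sum of outer products of logged features."""
    Lambda = lam * np.eye(Phi_logged.shape[1]) + Phi_logged.T @ Phi_logged
    bonus = beta * np.sqrt(phi_sa @ np.linalg.solve(Lambda, phi_sa))
    return min(bonus, cap)

# Toy usage: the bonus shrinks for directions already covered by the data.
rng = np.random.default_rng(2)
print(clipped_elliptical_bonus(rng.normal(size=3), rng.normal(size=(50, 3))))
```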
Estimating \(\phi(\pi)\) using the exploratory data.Let \((\phi(\pi))_{h,j}\) denote the \(j\)-th entry of \(\phi_{h}(\pi):=\mathbb{E}_{\pi}[\phi_{h}(s_{h},a_{h})]\). Then to estimate \(\phi(\pi)\), we only need to estimate \((\phi(\pi))_{h,j}\) for all \(h\in[H],j\in[d]\). Note that for all \(\pi\in\Pi\), we have \(\phi(\pi)=\left[\mathbb{E}_{\pi,P^{*}}[\phi_{1}(s_{1},a_{1})^{\top}],\cdots, \mathbb{E}_{\pi,P^{*}}[\phi_{H}(s_{H},a_{H})^{\top}]\right]^{\top}\). Here, the key observation is that \((\phi(\pi))_{h,j}/R\) is exactly the expected cumulative rewards with respect to the following reward function \(r^{h,j}_{h^{\prime}}(s,a)=\phi_{h^{\prime}}(s,a)^{\top}\theta^{h,j}_{h^{\prime}}\) for all \(h^{\prime}\in[H]\) (up to an \(R\) factor) where \(\theta^{h,j}_{h^{\prime}}=\frac{1}{R}\cdot e_{j}\) for \(h^{\prime}=h\) and \(\theta^{h,j}_{h^{\prime}}=0\), otherwise (\(h^{\prime}\neq h\)). Here \(e_{j}\) is the one-hot encoding vector whose \(j\)-th entry is \(1\). Therefore, with the collected dataset, we can run the least square policy evaluation with the reward function \(r^{h,j}\) and let the estimation \((\widehat{\phi}(\pi))_{h,j}\) be \(R\widehat{V}^{\pi}(r^{h,j})\). The detail is in Algorithm 4.
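A simplified sketch of this least-squares policy evaluation for a single coordinate \((h,j)\) is given below; the dataset layout, the policy and feature interfaces, and the omission of clipping are illustrative assumptions rather than the exact Algorithm 4.

```python
import numpy as np

def estimate_phi_pi_hj(data, init_states, phi, pi, h, j, H, d, lam=1.0, R=1.0):
    """Estimate (phi(pi))_{h,j} = E_pi[ phi_h(s_h, a_h)_j ].

    Uses the reward r^{h,j} that equals the j-th feature coordinate at step h
    (scaled by 1/R) and zero elsewhere, evaluated by backward regression.

    data[hp]    : (Phi, S_next), logged step-hp features (K, d) and next states
    init_states : a sample of initial states s_1
    phi(hp,s,a) : feature map;  pi(hp,s) : action chosen by the evaluated policy
    """
    V_next = lambda s: 0.0                                     # V_{H+1} = 0
    for hp in range(H, 0, -1):
        Phi, S_next = data[hp]
        r = Phi[:, j] / R if hp == h else np.zeros(len(Phi))   # reward r^{h,j}
        y = r + np.array([V_next(s) for s in S_next])
        w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)
        V_next = lambda s, w=w, hp=hp: phi(hp, s, pi(hp, s)) @ w
    return R * np.mean([V_next(s) for s in init_states])
```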
```
Input: Regularization parameter \(\lambda\), feature estimation sample complexity \(K\).
Call Algorithm 3 to generate \(K\) trajectories by interacting with the environment.
Call Algorithm 4 with reward function \((r^{h,j}_{h^{\prime}})_{h^{\prime}\in[H]}\) to estimate \((\widehat{\phi}(\pi))_{h,j}\) for all \(\pi\in\Pi,h\in[H],j\in[d]\) using the \(K\) trajectories.
Let \(\widehat{\phi}(\pi)=[\widehat{\phi}_{1}(\pi),\cdots,\widehat{\phi}_{H}(\pi)]\) where the \(j\)-th entry of \(\widehat{\phi}_{h}(\pi)\) is \((\widehat{\phi}(\pi))_{h,j}\).
for \(n=1,\cdots,N\) do
    Compute \((\pi^{n,0},\pi^{n,1})\leftarrow\arg\max_{\pi^{0},\pi^{1}\in\Pi}\|\widehat{\phi}(\pi^{0})-\widehat{\phi}(\pi^{1})\|_{\widehat{\Sigma}^{-1}_{n}}\).
    Update \(\widehat{\Sigma}_{n+1}=\widehat{\Sigma}_{n}+(\widehat{\phi}(\pi^{n,0})-\widehat{\phi}(\pi^{n,1}))(\widehat{\phi}(\pi^{n,0})-\widehat{\phi}(\pi^{n,1}))^{\top}\).
end for
for \(n=1,\cdots,N\) do
    Collect a pair of trajectories \(\tau^{n,0},\tau^{n,1}\) from the environment by executing \(\pi^{n,0},\pi^{n,1}\), respectively.
    Add \((\tau^{n,0},\tau^{n,1})\) to \(\mathcal{D}_{\mathrm{reward}}\).
end for
Obtain the preference labels \(\{o^{(n)}\}_{n=1}^{N}\) from human experts.
Run MLE: \(\widehat{\theta}\leftarrow\arg\min_{\theta\in\Theta(B,H)}L_{\lambda}(\theta,\mathcal{D}_{\mathrm{reward}},\{o^{(n)}\}_{n=1}^{N})\).
Return \(\widehat{\pi}=\arg\max_{\pi\in\Pi}\widehat{V}^{\pi}(\widehat{r})\), where \(\widehat{V}^{\pi}(\widehat{r})\) is obtained by calling Algorithm 4 with reward function \(\widehat{r}=\{\widehat{r}_{h}\}_{h=1}^{H}\) for all \(\pi\), where \(\widehat{r}_{h}(s,a)=\langle\phi_{h}(s,a),\widehat{\theta}\rangle\).
```
**Algorithm 2** REGIME-lin
### Analysis
Now we present the sample complexity of Algorithm 2. The proof is deferred to Appendix C.1.
**Theorem 2**.: _Let_
\[\lambda_{\rm ex}=\lambda_{\rm pl}=R^{2},\] \[\beta_{\rm ex}=C_{\beta}dHR\sqrt{\log(dKHR/\delta)},\beta_{\rm pl}=C _{\beta}dHR\sqrt{\log(dKHR\mathcal{N}_{\Pi}(\epsilon^{\prime})/\delta)}\] \[\lambda\geq 4HR^{2},K\geq\widetilde{\mathcal{O}}\Big{(}\frac{H^{8}B^{2 }R^{4}d^{4}\log(\mathcal{N}_{\Pi}(\epsilon^{\prime})/\delta)}{\epsilon^{2}} \Big{)},N\geq\widetilde{\mathcal{O}}\Big{(}\frac{\lambda\kappa^{2}B^{2}R^{2}H^{ 4}d^{2}\log(1/\delta)}{\epsilon^{2}}\Big{)},\]
_where \(\epsilon^{\prime}=\frac{\epsilon}{72BR^{2}H\sqrt{d^{H}K^{H}-1}}\), \(C_{\beta}>0\) is a universal constant and \(\kappa=2+\exp(2r_{\rm max})+\exp(-2r_{\rm max})\). Then under Assumption 1 and 3, with probability at least \(1-\delta\), we have_
\[V^{r^{*},\hat{\pi}}\geq V^{r^{*},*}-\epsilon.\]
_Furthermore, by selecting a policy class \(\Pi\) properly, we have_
\[V^{r^{*},\hat{\pi}}\geq V^{r^{*},\pi_{g}}-2\epsilon.\]
_by replacing \(\log(\mathcal{N}_{\Pi}(\epsilon^{\prime})/\delta)=Hd\log\Big{(}\frac{12WR}{ \epsilon^{\prime}}\Big{)}\) where \(W=\frac{\big{(}B+(H+\epsilon)\sqrt{d}\big{)}H\log|\mathcal{A}|}{\epsilon}\)._
The first statement says Algorithm 2 can learn an \(\epsilon\)-optimal policy with the number of trajectory-pairs and human feedbacks as follows:
\[N_{\rm tra}=K+N=\widetilde{\mathcal{O}}\bigg{(}\frac{d^{4}\log\mathcal{N}_{\Pi}( \epsilon^{\prime})+\kappa^{2}d^{2}}{\epsilon^{2}}\bigg{)},N_{\rm hum}= \widetilde{\mathcal{O}}\bigg{(}\frac{\kappa^{2}d^{2}}{\epsilon^{2}}\bigg{)}.\]
Since the sample complexity depends on the covering number of \(\Pi\), we need to carefully choose the policy class. When we choose \(\Pi\) to be the log-linear policy class:
\[\Pi=\Big{\{}\pi=\{\pi_{h}^{\zeta}\}_{h=1}^{H}:\pi_{h}^{\zeta}(a|s)=\frac{\exp( \zeta_{h}^{\top}\phi_{h}(s,a))}{\sum_{a^{\prime}\in\mathcal{A}}\exp(\zeta_{h}^ {\top}\phi_{h}(s,a^{\prime}))},\zeta_{h}\in\mathbb{B}(d,W),\forall s\in \mathcal{S},a\in\mathcal{A},h\in[H]\Big{\}},\]
where \(\mathbb{B}(d,W)\) is the \(d\)-dimensional ball centered at the origin with radius \(W\), although \(\pi^{*}\neq\pi_{\rm g}\), we can show that the value of \(\pi^{*}\) is close to the value of \(\pi_{\rm g}\) up to \(\epsilon\) by setting sufficiently large \(W\). More specifically, we have the following proposition:
**Proposition 1**.: _Let \(W=\frac{\big{(}B+(H+\epsilon)\sqrt{d}\big{)}H\log|\mathcal{A}|}{\epsilon}\), then under Assumption 1 and 3, we have_
\[V^{r^{*},\pi_{\rm g}}-\max_{\pi\in\Pi}V^{r^{*},\pi}\leq\epsilon,\]
_where \(\pi_{\rm g}\) is the global optimal policy._
At the same time, we can bound the covering number of the log-linear policy class:
**Proposition 2**.: _Let \(\Pi\) be the log-linear policy class. Then under Assumption 1, for any \(\epsilon\leq 1\), we have \(\log\mathcal{N}_{\Pi}(\epsilon)\leq Hd\log\left(\frac{12WR}{\epsilon}\right)\)._
This immediately leads to the second statement in Theorem 2. Consequently, we conclude that, in order to learn an \(\epsilon\)-global-optimal policy, the number of required trajectory pairs and human feedback for Algorithm 2 does not depend on \(|\mathcal{S}|\) at all.
Finally, we compare our work to Chen et al. (2022), as it is the only existing work that addresses provable RLHF with non-tabular transition models. Their model-based algorithm exhibits sample complexities that depend on the Eluder dimension associated with the transition models and particularly take linear mixture models as the example. However, we focus on linear MDPs in this work. Although these two models do not capture each other, naively applying model-based approach in linear MDP will cause \(|\mathcal{S}|\) dependence. Consequently, our Algorithm 2 is the first provable RLHF algorithm capable of achieving polynomial sample complexity that is independent of \(|\mathcal{S}|\) in linear MDPs.
## 5 Regime with Action-Based Comparison
The drawback of the current results is that the sample complexity is dependent on \(\kappa\), which can exhibit exponential growth in \(r_{\max}\) under the BTL model. This is due to the fact that \(\sup_{|x|\leq r_{\max}}|1/\sigma^{\prime}(x)|=O(\exp(r_{\max}))\). Such dependence on \(r_{\max}\) is undesirable, especially when rewards are dense and \(r_{\max}\) scales linearly with \(H\). Similar limitations are present in existing works, such as Pacchiano et al. (2021); Chen et al. (2022). To address this challenge, we consider the action-based comparison model (Zhu et al., 2023) in this section. Here, we assume that humans compare two actions based on their optimal Q-values. Given a tuple \((s,a^{0},a^{1},h)\), the human provides feedback \(o\) following
\[\mathbb{P}(o=1|s,a^{0},a^{1},h)=\mathbb{P}(a^{1}\succ a^{0}|s,h)= \sigma(A^{*}_{h}(s,a^{1})-A^{*}_{h}(s,a^{0})), \tag{2}\]
where \(A^{*}_{h}\) is the advantage function of the optimal policy. Similar to trajectory-based comparisons with linear reward parametrization, we assume linearly parameterized advantage functions:
**Assumption 4** (Linear Advantage Parametrization).: _An MDP has linear advantage functions with respect to some known feature vectors \(\phi_{h}(s,a)\in\mathbb{R}^{d}(h\in[H],s\in\mathcal{S},a\in\mathcal{A})\). More specifically, if for each \(h\in[H]\), there exists an unknown vector \(\xi^{*}_{h}\in\mathbb{R}^{d}\) such that \(A^{*}_{h}(s,a)=\phi_{h}(s,a)^{\top}\xi^{*}_{h}\) for all \((s,a)\in\mathcal{S}\times\mathcal{A}\). For technical purposes, we assume for all \(s\in\mathcal{S},a\in\mathcal{A},h\in[H]\), we have \(\|\phi_{h}(s,a)\|\leq R,\|\xi^{*}_{h}\|\leq B.\)_
Generally, the value of \(|A^{*}_{h}(s,a)|\) tends to be much smaller than \(H\) since a large value of \(|A^{*}_{h}(s,a)|\) implies that it may be difficult to recover from a previous incorrect action even under the best policy \(\pi^{*}\)(Ross et al., 2011; Agarwal et al., 2019). Therefore, by defining \(B_{\mathrm{adv}}=\sup_{(s,a)}|A^{*}_{h}(s,a)|\), we expect that \(B_{\mathrm{adv}}\) will be much smaller than \(H\), even in scenarios with dense rewards.
In the following discussion, we will use \(Z(B,h)\) to denote the convex set \(\{\zeta\in\mathbb{R}^{d}:\|\zeta\|\leq B,\langle\phi_{h}(s,a),\zeta\rangle\leq B _{\mathrm{adv}},\forall s\in\mathcal{S},a\in\mathcal{A}\}\). We consider the setting where \(\Pi=\Pi_{\mathrm{Mar}}\) and assume the transition model is known for brevity. In the case of unknown transition models, we can employ the same approach as described in Section 3 with reward-free RL oracles.
### Algorithm
We present our algorithm for action-based comparison models in Algorithm 5. The overall construction is similar to that of Algorithm 1, but with modifications to accommodate the changes in the preference model. We provide a detailed description of each step of our algorithm as follows.
Step 1: Collection of exploratory trajectories (Line 2-17). Similar to Algorithm 1, we generate a set of exploratory policy pairs. Our sampling procedure is designed for action-based comparisons.
Step 2: Collection of preference feedback (Line 18). If \(a^{1}\) is preferred over \(a^{0}\), the algorithm assigns \(o^{n}=1\); otherwise, it assigns \(o^{n}=0\) according to the model in Eq. 2.
Step 3: Advantage function learning via MLE (Line 19). Similar to Algorithm 1, we use MLE to learn the advantage function. More specifically, we learn it by maximizing the log-likelihood:
\[L(\xi,\mathcal{D}^{h}_{\mathrm{adv}},\{o^{h,n}\}_{n=1}^{N})= \sum_{n=1}^{N}\log\Big{(}o^{h,n}\cdot\sigma(\langle\xi,\phi_{h}(s^{h,n},a^{h, n,1})-\phi(s^{h,n},a^{h,n,0})\rangle)\\ +(1-o^{h,n})\cdot\sigma(\langle\xi,\phi_{h}(s^{h,n},a^{h,n,0})- \phi(s^{h,n},a^{h,n,1})\rangle)\Big{)},\]
where \(\mathcal{D}^{h}_{\mathrm{adv}}=\{s^{h,n},a^{h,n,0},a^{h,n,1}\}_{n=1}^{N}\).
Step 4: Policy output (Line 22). We select the action with the highest learned advantage for each state, i.e., output the greedy policy based on the learned advantage function.
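Each per-step parameter \(\xi_{h}\) can be fit exactly as in the trajectory-level MLE sketch above, now using action-feature differences \(\phi_{h}(s^{h,n},a^{h,n,1})-\phi_{h}(s^{h,n},a^{h,n,0})\); a small sketch of the final greedy policy output, assuming a finite action set, is shown below.

```python
import numpy as np

def greedy_policy_from_advantage(xi_hat, phi, actions):
    """Return pi_hat with pi_hat(h, s) = argmax_a <phi_h(s, a), xi_hat[h]>.

    xi_hat  : learned advantage parameters, indexed by step h
    phi     : callable phi(h, s, a) giving the feature vector
    actions : finite action set (an assumption made for this sketch)
    """
    def pi_hat(h, s):
        scores = [phi(h, s, a) @ xi_hat[h] for a in actions]
        return actions[int(np.argmax(scores))]
    return pi_hat
```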
### Analysis
**Theorem 3**.: _Let_
\[\lambda\geq 4R^{2},\qquad N\geq\widetilde{\mathcal{O}}\Big{(}\frac{\lambda \kappa_{\mathrm{adv}}^{2}B^{2}R^{2}H^{2}d^{2}\log(1/\delta)}{\epsilon^{2}} \Big{)},\]
_where \(\kappa_{\mathrm{adv}}=\sup_{|x|\leq B_{\mathrm{adv}}}|1/\sigma^{\prime}(x)|\) in REGIME-action. Then under Assumption 4, with probability at least \(1-\delta\), we have \(V^{r^{*},\tilde{\pi}}\geq V^{r^{*},*}-\epsilon\)._
Theorem 3 demonstrates that for the action-based comparison model, the number of required human feedbacks scales with \(\kappa_{\mathrm{adv}}\) instead of \(\kappa\). This implies that when \(\sigma\) is a commonly used sigmoid function, the sample complexity is exponential in \(B_{\mathrm{adv}}\) rather than \(r_{\mathrm{max}}\). Crucially, \(B_{\mathrm{adv}}\) is always less than or equal to \(r_{\mathrm{max}}\), and as mentioned earlier, \(B_{\mathrm{adv}}\) can be \(o(H)\) even in dense reward settings where \(r_{\mathrm{max}}=\Theta(H)\). Consequently, we achieve superior sample complexity compared to the trajectory-based comparison setting.
## 6 Summary
We consider the problem of how to query human feedback efficiently in RLHF, i.e., the experimental design problem in RLHF. In particular, we design a reward-agnostic trajectory collection algorithm for human feedback querying when the transition dynamics is unknown. Our algorithm provably requires less human feedback to learn the true reward and optimal policy than the existing literature. Our results also go beyond the tabular case and cover common MDP models including linear MDPs and low-rank MDPs. Further, we consider the action-based comparison setting and propose corresponding algorithms to circumvent the exponential scaling with \(r_{\max}\) of the trajectory-based comparison setting. |
2305.07425 | Largest hyperbolic actions of 3--manifold groups | The set of equivalence classes of cobounded actions of a group G on different
hyperbolic metric spaces carries a natural partial order. Following
Abbott--Balasubramanya--Osin, the group G is H--accessible if the resulting
poset has a largest element. In this paper, we prove that every non-geometric
3--manifold has a finite cover with H--inaccessible fundamental group and give
conditions under which the fundamental group of the original manifold is
H--inaccessible. We also prove that every Croke--Kleiner admissible group (a
class of graphs of groups that generalizes fundamental groups of 3--dimensional
graph manifolds) has a finite index subgroup that is H--inaccessible. | Carolyn Abbott, Hoang Thanh Nguyen, Alexander J. Rasmussen | 2023-05-12T12:48:22Z | http://arxiv.org/abs/2305.07425v1 | # Largest hyperbolic actions of 3-manifold groups
###### Abstract.
The set of equivalence classes of cobounded actions of a group \(G\) on different hyperbolic metric spaces carries a natural partial order. Following Abbott-Balasubramanya-Osin, the group \(G\) is \(\mathcal{H}\)_-accessible_ if the resulting poset has a largest element. In this paper, we prove that every non-geometric 3-manifold has a finite cover with \(\mathcal{H}\)-inaccessible fundamental group and give conditions under which the fundamental group of the original manifold is \(\mathcal{H}\)-inaccessible. We also prove that every Croke-Kleiner admissible group (a class of graphs of groups that generalizes fundamental groups of 3-dimensional graph manifolds) has a finite index subgroup that is \(\mathcal{H}\)-inaccessible.
2010 Mathematics Subject Classification: 57M50, 20F65, 20F67
## 1. Introduction
A fixed group \(G\) will admit many different cobounded actions on different hyperbolic metric spaces. Abbott, Balasubramanya, and Osin in [1] show that the set of equivalence classes of cobounded hyperbolic actions of a group \(G\) carries a natural partial order; see Section 2.1. The resulting poset is called the _poset of hyperbolic structures_ on \(G\), denoted \(\mathcal{H}(G)\). Roughly speaking, one action is larger than another if the smaller space can be formed by equivariantly collapsing some subspaces of the larger. The motivation is that the larger an action is in this partial order, the more information about the geometry of the group it should provide.
The posets \(\mathcal{H}(G)\) remain mysterious, especially for groups with features of non-positive curvature, which tend to have uncountable posets of hyperbolic structures [1]. First steps towards understanding these posets were made in [1], [2], and [2]. While [1] and [2] give complete descriptions of \(\mathcal{H}(G)\) for the groups in question, it appears essentially impossible to do this, given current technology, for groups with strong features of non-positive curvature. Nonetheless, one aspect of the poset that can be understood in many instances is the (non-)existence of a _largest_ element of \(\mathcal{H}(G)\), that is, an element that is greater than or equal to every other element of the poset. When a largest element exists, we say that the group \(G\) is \(\mathcal{H}\)_-accessible_; otherwise, the group is \(\mathcal{H}\)_-inaccessible_. The first and third authors show that many groups with features of non-positive curvature are \(\mathcal{H}\)-inaccessible [2]. In this paper, we follow this direction by extending these results to a large class of 3-manifold groups.
We first consider fundamental groups of non-geometric 3-manifolds with empty or toroidal boundary. If \(M\) is a compact, orientable, irreducible non-geometric 3-manifold, then there exists a non-empty minimal union \(\mathcal{T}\) of disjoint essential tori in \(M\) such that each connected component of \(M\setminus\mathcal{T}\) is Seifert fibered or hyperbolic. This is called the _torus decomposition of \(M\)_, and the connected components of \(M\setminus\mathcal{T}\) are called _pieces_. Our first result shows that if \(M\) contains certain types of pieces, then \(\pi_{1}(M)\) is \(\mathcal{H}\)-inaccessible. To describe these pieces, we introduce the class of _non-elementary_ Seifert fibered manifolds, which are those whose base orbifolds are orientable and hyperbolic. Non-elementary graph manifolds (respectively, non-elementary mixed manifolds) are those all of whose Seifert fibered pieces are non-elementary.
A non-geometric \(3\)-manifold \(M\) always has a double cover in which all Seifert fibered pieces are non-elementary, and hence passing to a further finite cover if necessary, we obtain a finite cover \(M^{\prime}\to M\) such that \(M^{\prime}\) is either a non-elementary graph manifold or a non-elementary mixed manifold. Combining Theorems 1.1 and 1.3 yields the following corollary.
**Corollary 1.5**.: _If \(M\) is a non-geometric 3-manifold then \(\pi_{1}(M)\) has a finite index subgroup \(H\) such that every finite index subgroup \(K\leq H\) is \(\mathcal{H}\)-inaccessible._
So far, we have only discussed non-geometric 3-manifolds. In some cases, we can also understand the \(\mathcal{H}\)-accessibility of (finite-index subgroups) of geometric 3-manifold groups.
**Proposition 1.6**.: _Every \(3\)-manifold with Nil or Sol geometry has a finite cover whose fundamental group is \(\mathcal{H}\)-inaccessible. The fundamental group of a closed hyperbolic \(3\)-manifold is \(\mathcal{H}\)-accessible, while the fundamental group of a finite-volume cusped hyperbolic \(3\)-manifold is \(\mathcal{H}\)-inaccessible._
Proof.: If a 3-manifold \(M\) has the geometry of Sol, then \(M\) is a torus bundle over a 1-dimensional orbifold (an interval with reflection boundary points or a circle) and thus \(M\) has a double cover that is a torus bundle with Anosov monodromy. The \(\mathcal{H}\)-inaccessibility of the fundamental group of this bundle then follows from work of the first and third authors [1].
If the geometry of \(M\) is Nil, then \(M\) is a Seifert fibered 3-manifold which is finitely covered by a torus bundle \(M^{\prime}\) with unipotent monodromy. Since \(\pi_{1}(M^{\prime})\) is nilpotent, its only possible hyperbolic actions are lineal and elliptic. On the other hand, the abelianization of \(\pi_{1}(M^{\prime})\) is virtually \(\mathbb{Z}^{2}\), so this yields infinitely many homomorphisms \(\mathbb{Z}^{2}\to\mathbb{R}\) modulo scaling, and infinitely many inequivalent actions on \(\mathbb{R}\) by translations. Since such lineal actions are incomparable by [1, Theorem 2.3], \(\pi_{1}(M^{\prime})\) is \(\mathcal{H}\)-inaccessible.
The fundamental group of a closed hyperbolic 3-manifold is a hyperbolic group, and so is \(\mathcal{H}\)-accessible. The result for a finite-volume cusped hyperbolic 3-manifold is Corollary 4.4.
In Section 4.4, we consider general finitely generated 3-manifold groups. This includes fundamental groups of reducible 3-manifolds, certain geometric 3-manifolds, and 3-manifolds with non-toroidal boundary. In Proposition 4.10, we characterize the \(\mathcal{H}\)-accessibility of many such 3-manifold groups. In particular, any finitely generated fundamental group of a hyperbolic 3-manifold without rank-1 cusps is \(\mathcal{H}\)-accessible.
Many basic questions about posets of hyperbolic structures are still open. Surprisingly, it is still unknown whether \(\mathcal{H}\)-inaccessibility of a finite index normal subgroup of \(G\) passes to \(\mathcal{H}\)-inaccessibility of the ambient group \(G\). In the setting of non-geometric 3-manifolds, the only cases in which we are unable to determine the \(\mathcal{H}\)-(in)accessibility of the fundamental group are a manifold all of whose Seifert fibered pieces are elementary and a graph manifold whose underlying graph contains a single vertex. We suspect that these manifolds are also \(\mathcal{H}\)-inaccessible, but the techniques in this paper do not apply. We thus ask the following question.
**Question 1.7**.: Let \(M\) be a graph manifold with underlying graph containing only one vertex. Is \(\pi_{1}(M)\)\(\mathcal{H}\)-inaccessible?
### Acknowledgments
Abbott was partially supported by NSF grants DMS-1803368 and DMS-2106906. Nguyen was partially supported by Project ICRTM04_2021.07 of the International Centre for Research and Postgraduate Training in Mathematics, VietNam. Rasmussen was partially supported by NSF grants DMS-1840190 and DMS-2202986.
## 2. Preliminaries
### \(\mathcal{H}\)-accessibility
In this section, we review the partial order on cobounded group actions introduced in [1]. Fix a group \(G\). If \(G\) acts coboundedly on two metric spaces \(X\) and \(Y\), we say \(G\curvearrowright X\)_is dominated by \(G\curvearrowright Y\)_, written \(G\curvearrowright X\preceq G\curvearrowright Y\), if there exists a coarsely \(G\)-equivariant coarsely Lipschitz map \(Y\to X\). The preorder \(\preceq\) induces an equivalence relation \(G\curvearrowright X\sim G\curvearrowright Y\) if and only if \(G\curvearrowright X\preceq G\curvearrowright Y\) and \(G\curvearrowright Y\preceq G\curvearrowright X\). It descends to a partial order \(\preccurlyeq\) on the set of equivalence classes. We denote the equivalence class of an action by \([G\curvearrowright X]\).
**Definition 2.1**.: Given a group \(G\), the _poset of hyperbolic structures on \(G\)_ is defined to be
\[\mathcal{H}(G):=\{[G\curvearrowright X]\mid G\curvearrowright X\text{ is cobounded and }X\text{ is hyperbolic}\},\]
equipped with the partial order \(\preccurlyeq\).
By [1, Proposition 3.12], this is equivalent to the original definition of \(\mathcal{H}(G)\) in terms of generating sets. We say that an element of a poset is _largest_ when it is greater than or equal to every other element of the poset. Such an element is unique.
**Definition 2.2**.: A group \(G\) is _\(\mathcal{H}\)-accessible_ if the poset \(\mathcal{H}(G)\) has a largest element. Otherwise, it is _\(\mathcal{H}\)-inaccessible_.
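For example, if \(G\) is a hyperbolic group, then the action of \(G\) on a Cayley graph with respect to a finite generating set gives the largest element of \(\mathcal{H}(G)\) [1]; in particular, hyperbolic groups are \(\mathcal{H}\)-accessible, a fact already used for closed hyperbolic 3-manifolds in the introduction.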
The following lemma gives a simple criterion to check if a group is \(\mathcal{H}\)-inaccessible.
**Lemma 2.3** ([1, Lemma 1.4]).: _Let \(G\) be a group. Suppose that there are commuting elements \(a,b\in G\) and hyperbolic actions \(G\curvearrowright X\) and \(G\curvearrowright Y\) such that_
1. \(a\) _acts loxodromically and_ \(b\) _acts elliptically in the action_ \(G\curvearrowright X\)_; and_
2. \(b\) _acts loxodromically in the action_ \(G\curvearrowright Y\)_._
_Then there does not exist a hyperbolic action \(G\curvearrowright Z\) such that \(G\curvearrowright X\preceq G\curvearrowright Z\) and \(G\curvearrowright Y\preceq G\curvearrowright Z\)._
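A model example, which reappears in the proof of Theorem 1.1, is \(G=\mathbb{Z}^{2}\) with generators \(a\) and \(b\): projecting onto the two coordinate factors gives two actions on \(\mathbb{R}\), in the first of which \(a\) acts loxodromically and \(b\) acts elliptically, while in the second \(b\) acts loxodromically. Lemma 2.3 then shows that no hyperbolic action of \(\mathbb{Z}^{2}\) dominates both of these, so \(\mathbb{Z}^{2}\) is \(\mathcal{H}\)-inaccessible.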
We will typically apply this lemma to spaces \(X\) and \(Y\) that are quasi-isometric to lines. These actions on quasi-lines will be constructed using quasimorphisms, which, in turn, will be constructed using hyperbolic actions of \(G\) with very weak proper discontinuity properties (see [1]).
**Definition 2.4**.: Let \(G\curvearrowright X\) be a hyperbolic action and \(g\in G\) be loxodromic with fixed points \(\{g^{\pm}\}\subset\partial X\). The element \(g\) is _WWPD_ if the orbit of the pair \((g^{+},g^{-})\) is discrete in the space \(\partial X\times\partial X\setminus\Delta\), where \(\Delta=\{(x,x)\mid x\in\partial X\}\) is the diagonal subset. A WWPD element \(g\) is _WWPD\({}^{+}\)_ if any \(h\in G\) that stabilizes \(\{g^{\pm}\}\) as a _set_ also fixes \(\{g^{\pm}\}\) pointwise.
The following proposition follows from [1, Corollary 3.2].
**Proposition 2.5**.: _Suppose that \(G\curvearrowright X\) is a hyperbolic action with a WWPD\({}^{+}\) element \(g\in G\). There is a homogeneous quasimorphism \(q\colon G\to\mathbb{R}\) such that \(q(g)\neq 0\) and \(q(h)=0\) for any element \(h\in G\) that acts elliptically on \(X\)._
Homogeneous quasimorphisms, in turn, give rise to actions on quasi-lines.
**Proposition 2.6** ([1, Lemma 4.15]).: _Let \(q\colon G\to\mathbb{R}\) be a non-zero homogeneous quasimorphism. Then there is an action of \(G\) on a quasi-line \(\mathcal{L}\) such that \(q(g)\neq 0\) if and only if \(g\) acts loxodromically on \(\mathcal{L}\)._
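For instance, any non-zero homomorphism \(q\colon G\to\mathbb{R}\) is in particular a homogeneous quasimorphism; applied to \(G=\mathbb{Z}^{2}\), the resulting actions on quasi-lines recover the translation actions on \(\mathbb{R}\) discussed in the introduction, with \(g\) loxodromic exactly when \(q(g)\neq 0\).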
### Projection axioms
We will use the Bestvina-Bromberg-Fujiwara projection complex machinery developed in [1] to obtain actions on quasi-trees. In this section, we review this machinery.
**Definition 2.7**.: Let \(\mathbb{Y}\) be a collection of geodesic spaces equipped with projection maps
\[\{\pi_{Y}\colon\mathbb{Y}\setminus\{Y\}\to Y\}_{Y\in\mathbb{Y}}.\]
Let \(d_{Y}(X,Z)=\operatorname{diam}(\pi_{Y}(X)\cup\pi_{Y}(Z))\) for \(X\neq Y\neq Z\in\mathbb{Y}\). The pair \((\mathbb{Y},\{\pi_{Y}\}_{Y\in\mathbb{Y}})\) satisfies the _projection axioms_ for a _projection constant_\(\xi\geq 0\) if
1. \(\operatorname{diam}(\pi_{Y}(X))\leq\xi\) whenever \(X\neq Y\);
2. if \(X,Y,Z\) are distinct and \(d_{Y}(X,Z)>\xi\), then \(d_{X}(Y,Z)\leq\xi\); and
3. for \(X\neq Z\), the set \(\{Y\in\mathbb{Y}\ \mid\ d_{Y}(X,Z)>\xi\}\) is finite.
For a fixed \(K>0\) and a pair \((\mathbb{Y},\pi_{Y})\) satisfying the projection axioms for some constant \(\xi\), Bestvina, Bromberg, and Fujiwara construct a _quasi-tree of spaces_\(\mathcal{C}_{K}(\mathbb{Y})\) in [1]. If \(\mathbb{Y}\) admits an action of the group \(G\) so that \(\pi_{gY}(gX)=g\pi_{Y}(X)\) for any \(g\in G\) and \(X,Y\in\mathbb{Y}\), then \(G\) acts by isometries on \(\mathcal{C}_{K}(\mathbb{Y})\); see Section 4.4 in [1]. Moreover, they show that if \(K>4\xi\) and \(\mathbb{Y}\) is a collection of metric spaces uniformly quasi-isometric to \(\mathbb{R}\), then \(\mathcal{C}_{K}(\mathbb{Y})\) is an unbounded quasi-tree [1, Theorem 4.14].
The following is a useful example to keep in mind throughout the paper and is discussed in detail in the introduction of [1].
**Example 2.8**.: Let \(G\) be a discrete group of isometries of \(\mathbb{H}^{2}\) and \(g_{1},\ldots,g_{k}\in G\) a finite collection of loxodromic elements with axes \(\gamma_{1},\ldots,\gamma_{k}\), respectively. Let \(\mathbb{Y}\) be the set of all \(G\)-translates of \(\gamma_{1},\ldots,\gamma_{k}\), and given \(Y\in\mathbb{Y}\), define \(\pi_{Y}\) to be closest point projection in \(\mathbb{H}^{2}\). Since all translates of each \(\gamma_{i}\) are convex, this is a well-defined \(1\)-Lipschitz map, and it follows from hyperbolicity that the projection of one translate of an axis onto another has uniformly bounded diameter. In [1] it is verified that the pair \((\mathbb{Y},\pi_{Y})\) satisfies the projection axioms for some constant \(\xi\).
Given a pair \((\mathbb{Y},\{\pi_{Y}\}_{Y\in\mathbb{Y}})\) satisfying the projection axioms and three domains \(X,Y,Z\in\mathbb{Y}\), there is a notion of distance between a point of \(X\) and a point of \(Z\) from the point of view of \(Y\), which we now describe. Let \(x\in X\) and \(z\in Z\). If \(X,Y,Z\) are all distinct, then define \(d_{Y}(x,z):=d_{Y}(X,Z)\). If \(Y=X\) and \(Y\neq Z\), then define \(d_{Y}(x,z):=\operatorname{diam}(\{x\}\cup\pi_{Y}(z))\), where the diameter is measured in \(Y\). Finally, if \(X=Y=Z\), then let \(d_{Y}(x,z)\) be the distance in \(Y\) between \(x\) and \(z\). The spaces \(Y\in\mathbb{Y}\) naturally embed into \(\mathcal{C}_{K}(\mathbb{Y})\), so the distance \(d_{\mathcal{C}_{K}(\mathbb{Y})}(x,z)\) is also defined.
We have the following upper bound on distance in \(\mathcal{C}_{K}(\mathbb{Y})\). Set \([t]_{K}=t\) if \(t\geq K\) and \([t]_{K}=0\) if \(t<K\).
**Proposition 2.9** ([1, Lemma 4.4]).: _Let \((\mathbb{Y},\{\pi_{Y}\}_{Y\in\mathbb{Y}})\) satisfy the projection axioms with constant \(\xi\). For any \(K\) sufficiently large,_
\[d_{\mathcal{C}_{K}(\mathbb{Y})}(x,z)\leq 6K+4\sum_{Y\in\mathbb{Y}}[d_{Y}(x,z)]_{K}.\]
This distance formula was originally stated with a modified distance function. In [1] the distance defined above was denoted \(d_{Y}^{\pi}\), and the modified distance was denoted \(d_{Y}\). However, since \(d_{Y}(x,z)\leq d_{Y}^{\pi}(x,z)\) for all points \(x,z\), the inequality holds with the distance \(d_{Y}^{\pi}\) as well. As we will not need the modified distance function in this paper, we use the simpler notation \(d_{Y}\) for \(d_{Y}^{\pi}\) and choose to state the proposition with this distance.
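In practice, we will use Proposition 2.9 to verify that certain elements act elliptically on \(\mathcal{C}_{K}(\mathbb{Y})\): if \(x\) is a point such that \(d_{Y}(x,g^{n}x)<K\) for every \(Y\in\mathbb{Y}\) and every \(n\in\mathbb{Z}\), then every term \([d_{Y}(x,g^{n}x)]_{K}\) vanishes, the \(\langle g\rangle\)-orbit of \(x\) has diameter at most \(6K\), and \(g\) acts elliptically. This is exactly how ellipticity is established in the proof of Lemma 4.9 below.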
### Croke-Kleiner admissible groups
In this section, we review Croke-Kleiner admissible groups [10] and the associated Bass-Serre space, a notion defined in [11].
**Definition 2.10**.: A _graph of groups_\(\mathcal{G}=(\Gamma,\{G_{\mu}\},\{G_{\alpha}\},\{\tau_{\alpha}\})\) is a connected graph \(\Gamma\) together with a group \(G_{\sigma}\) for each \(\sigma\in V(\Gamma)\cup E(\Gamma)\) (here \(V(\Gamma)\) and \(E(\Gamma)\) denote vertices and edges), and an injective homomorphism \(\tau_{\alpha}\colon G_{\alpha}\to G_{\mu}\) for each oriented edge \(\alpha\), where \(\mu\) denotes the terminal vertex of \(\alpha\).
**Definition 2.11**.: A graph of groups \(\mathcal{G}=(\Gamma,\{G_{\mu}\},\{G_{\alpha}\},\{\tau_{\alpha}\})\) is called _admissible_ if the following hold.
1. \(\mathcal{G}\) is a finite graph with at least one edge.
2. Each vertex group \(G_{\mu}\) has center \(Z(G_{\mu})\cong\mathbb{Z}\), \(H_{\mu}:=G_{\mu}/Z(G_{\mu})\) is a non-elementary hyperbolic group, and every edge group \(G_{\alpha}\) is isomorphic to \(\mathbb{Z}^{2}\).
3. Let \(\alpha_{1}\) and \(\alpha_{2}\) be distinct edges oriented towards a vertex \(\mu\), and for \(i=1,2\) let \(K_{i}\subset G_{\mu}\) be the image of the edge homomorphism \(G_{\alpha_{i}}\to G_{\mu}\). Then for every \(g\in G_{\mu}\), \(gK_{1}g^{-1}\) is not commensurable with \(K_{2}\), and for every \(g\in G_{\mu}\setminus K_{i}\), \(gK_{i}g^{-1}\) is not commensurable with \(K_{i}\).
4. For every edge group \(G_{\alpha}\) with \(\alpha=[\alpha^{-},\alpha^{+}]\) (oriented from \(\alpha^{-}\) to \(\alpha^{+}\)), the subgroup of \(G_{\alpha}\) generated by \(\tau_{\alpha}^{-1}(Z(G_{\alpha^{+}}))\) and \(\tau_{\overline{\alpha}}^{-1}(Z(G_{\alpha^{-}}))\) has finite index in \(G_{\alpha}\).
A group \(G\) is _admissible_ if it is the fundamental group of an admissible graph of groups. Such groups are often called _Croke-Kleiner admissible groups_.
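The motivating examples for us are fundamental groups of non-elementary graph manifolds: as described at the beginning of Section 4.2, the torus decomposition of such a manifold induces a graph-of-groups structure on its fundamental group satisfying the conditions above.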
**Lemma 2.12** ([11, Lemma 4.2]).: _Let \(\mathcal{G}=(\Gamma,\{G_{\mu}\},\{G_{\alpha}\},\{\tau_{\alpha}\})\) be a Croke-Kleiner admissible group. For each edge \(\alpha=[\alpha^{-},\alpha^{+}]\) of \(\mathcal{G}\), denote_
\[C_{\alpha}=\tau_{\alpha}(\tau_{\bar{\alpha}}^{-1}(Z_{\alpha^{-}})),\]
_which is a subgroup of \(G_{\alpha^{+}}\). Each vertex group \(G_{\mu}\) has an infinite generating set \(S_{\mu}\) so that the following holds._
1. \(\operatorname{Cay}(G_{\mu},S_{\mu})\) _is quasi-isometric to a line._
2. _The inclusion map_ \(Z_{\mu}\to\operatorname{Cay}(G_{\mu},S_{\mu})\) _is a_ \(Z_{\mu}\)_-equivariant quasi-isometry._
3. _For each edge_ \(\alpha\) _with_ \(\alpha^{+}=\mu\) _we have that_ \(C_{\alpha}\) _is uniformly bounded in_ \(\operatorname{Cay}(G_{\mu},S_{\mu})\)_._
**Remark 2.13**.: The quasi-line \(\operatorname{Cay}(G_{\mu},S_{\mu})\) satisfies:
* The center \(Z_{\mu}\) of \(G_{\mu}\) acts loxodromically on \(\operatorname{Cay}(G_{\mu},S_{\mu})\).
* If \(\omega\) is an adjacent vertex to \(\mu\) in \(\Gamma\), then each cyclic subgroup of \(G_{\mu}\) conjugate to \(Z_{\omega}\) acts elliptically on \(\operatorname{Cay}(G_{\mu},S_{\mu})\).
Let \(\mathcal{G}\) be a graph of finitely generated groups, and let \(G\curvearrowright T\) be the action of \(G=\pi_{1}(\mathcal{G})\) on the associated Bass-Serre tree of \(\mathcal{G}\) (we refer the reader to Section 2.5 in [10] for a brief discussion). For each vertex \(v\) of the Bass-Serre tree \(T\), let \(\check{v}\) denote the vertex \(\mu\) of \(\Gamma\) so that \(v\) represents \(gG_{\mu}\) for some \(g\) in \(G\). For each vertex group \(G_{\mu}\) and edge group \(G_{\alpha}\), fix once and for all finite symmetric generating sets \(J_{\mu}\) and \(J_{\alpha}\) respectively, such that \(J_{\alpha}=J_{\bar{\alpha}}\) and \(\tau_{\alpha}\left(J_{\alpha}\right)\subseteq J_{\alpha^{+}}\).
We briefly sketch the description of the Bass-Serre space \(X\) for the graph of groups \(\mathcal{G}\) and refer the reader to [11] for a full description of the space. Given a vertex \(v\) of \(T\), the associated vertex space \(X_{v}\) of \(X\) is a graph isometric to \(\operatorname{Cay}(G_{\check{v}},J_{\check{v}})\). If \(e\) is a (directed) edge in \(T\), then the associated edge space \(X_{e}\) is isometric to \(\operatorname{Cay}(G_{\check{e}},J_{\check{e}})\). Edges are added between the vertex and edge spaces so that the maps \(\tau_{\check{e}}\) induce isometric embeddings of the edge spaces into the vertex spaces, which we denote by \(\tau_{e}\colon X_{e}\to X_{e^{+}}\) and \(\tau_{\bar{e}}\colon X_{e}\to X_{e^{-}}\).
Suppose \(\mathcal{G}\) is an admissible graph of groups with Bass-Serre tree \(T\) and Bass-Serre space \(X\). For each vertex \(\mu\) of \(\Gamma\), let \(S_{\mu}\) be given by Lemma 2.12. Without loss of generality, we can assume that \(J_{\mu}\) is contained in \(S_{\mu}\), where \(J_{\mu}\) is the fixed generating set of \(G_{\mu}\).
**Definition 2.14** (Subspaces \(L_{v}\) and \(\mathcal{H}_{v}\)).: Suppose the vertex \(v\in T\) represents \(gG_{\check{v}}\). Let \(L_{v}\) be the graph with vertex set \(gG_{\check{v}}\) and with an edge connecting \(x,y\in gG_{\check{v}}\) if \(x^{-1}y\in S_{\check{v}}\). In particular, \(L_{v}\) is isometric to \(\operatorname{Cay}(G_{\check{v}},S_{\check{v}})\), which is a quasi-line by Lemma 2.12.

Let \(\mathcal{H}_{v}\) be the graph with vertex set \(gG_{\check{v}}\) and with an edge connecting \(x,y\in gG_{\check{v}}\) if \(x^{-1}y\in J_{\check{v}}\cup Z_{\check{v}}\). It is isometric to \(\operatorname{Cay}(G_{\check{v}},J_{\check{v}}\cup Z_{\check{v}})\).

Since \(L_{v}\) and \(\mathcal{H}_{v}\) are each obtained from \(X_{v}\) by adding extra edges, there are distance non-increasing maps \(p_{v}\colon X_{v}\to L_{v}\) and \(i_{v}\colon X_{v}\to\mathcal{H}_{v}\) that are the identity on vertices. The space \(\mathcal{H}_{v}\) is constructed to represent the geometry of \(H_{\check{v}}=G_{\check{v}}/Z_{\check{v}}\) and is relatively hyperbolic:
**Lemma 2.15** ([12, Lemma 2.15]).: \(\mathcal{H}_{v}\) _is hyperbolic relative to the collection_
\[\mathcal{P}_{v}=\{\ell_{e}:=i_{v}(\tau_{e}(X_{e}))\,|\,e\in E(T)\text{ such that }e^{+}=v\}.\]
It follows from [16] that there is a coarse closest point projection map
\[\operatorname{proj}_{\ell_{e}}\colon\mathcal{H}_{v}\to\ell_{e}\]
that is coarsely Lipschitz with constants independent of \(e\) and \(v\).
**Remark 2.16**.: As peripheral subsets in a relatively hyperbolic space, the sets \(\{\ell_{f}\,|\,f\in E(T)\text{ and }f^{+}=v\}\) together with the maps \(\operatorname{proj}_{\ell_{f}}\) satisfy the projection axioms for a constant \(\xi_{0}\).
We now show that if \(e\) is an edge from \(u\) to \(v\), the various maps defined above can be composed to form a quasi-isometry between the quasi-line \(\ell_{\bar{e}}\in\mathcal{P}_{u}\) and the quasi-line \(L_{v}\). Let \(\psi_{e}\colon\ell_{\bar{e}}\to L_{v}\) be the map from [12], which is defined as the restriction to \(\ell_{\bar{e}}\) of the composition
\[p_{v}\circ\tau_{e}\circ\tau_{\bar{e}}^{-1}\circ i_{u}^{-1}. \tag{1}\]
In [12], the authors prove that \(\psi_{e}\) is coarsely Lipschitz and note that \(\psi_{e}\) is, in fact, a quasi-isometry. We prove a more general result:
**Lemma 2.17**.: _Let \(\ell_{1},\ell_{2}\) be two quasi-lines, and suppose a group \(G\) acts coboundedly on both \(\ell_{1}\) and \(\ell_{2}\). Any \(G\)-equivariant coarsely Lipschitz map \(\psi\colon\ell_{1}\to\ell_{2}\) is a quasi-isometry._
Proof.: Since there is a \(G\)-equivariant coarsely Lipschitz map from \(\ell_{1}\) to \(\ell_{2}\), we have \([G\curvearrowright\ell_{1}]\succcurlyeq[G\curvearrowright\ell_{2}]\) in the poset \(\mathcal{H}(G)\). However, since \(G\curvearrowright\ell_{1}\) and \(G\curvearrowright\ell_{2}\) are both lineal, [1, Theorem 4.22] implies that these actions must be equivalent. Thus there is a coarsely \(G\)-equivariant quasi-isometry \(\Phi\colon\ell_{1}\to\ell_{2}\). We will show that \(\Phi\) and \(\psi\) differ by a uniformly bounded amount, which will then show that \(\psi\) is also a quasi-isometry.
Fix a basepoint \(x_{0}\in\ell_{1}\). Since \(G\curvearrowright\ell_{1}\) is cobounded, there is a constant \(B\) such that for any \(x\in\ell_{1}\), there is some \(g\in G\) such that \(d_{\ell_{1}}(x,gx_{0})\leq B\). Since \(\psi\) is coarsely Lipschitz and \(\Phi\) is a quasi-isometry, there is a constant \(A\), depending on \(B\) and the coarse Lipschitz constants for \(\Phi\) and \(\psi\), such that \(d_{\ell_{2}}(\Phi(x),\Phi(gx_{0}))\leq A\) and \(d_{\ell_{2}}(\psi(x),\psi(gx_{0}))\leq A\). Moreover, since \(\Phi\) is coarsely \(G\)-equivariant, there is a constant \(C\) such that \(d_{\ell_{2}}(\Phi(gx_{0}),g\Phi(x_{0}))\leq C\). Let \(D=d_{\ell_{2}}(\Phi(x_{0}),\psi(x_{0}))\).
By the triangle inequality and \(G\)-equivariance of \(\psi\), we have
\[d_{\ell_{2}}(\Phi(x),\psi(x)) \leq d_{\ell_{2}}(\Phi(x),g\Phi(x_{0}))+d_{\ell_{2}}(g\Phi(x_{0}),g\psi(x_{0}))+d_{\ell_{2}}(\psi(gx_{0}),\psi(x))\] \[\leq(A+C)+D+A,\]
completing the proof.
We now complete the proof that \(\psi_{e}\) is a quasi-isometry.
**Lemma 2.18**.: _There are constants \(\lambda\geq 1\) and \(c\geq 0\) depending only on \(\mathcal{G}\) such that the following holds. For any oriented edge \(e\) in the Bass-Serre tree \(T\) of \(\mathcal{G}\), the map \(\psi_{e}\colon\ell_{\bar{e}}\to L_{v}\) is a \((\lambda,c)\)-quasi-isometry._
Proof.: In Lemma 6.16 of [10], the authors prove that the map \(\psi_{e}\) is coarsely Lipschitz. Moreover, from the definitions of \(\ell_{\bar{e}}\) and \(L_{v}\) as Cayley graphs with respect to infinite generating sets, \(G_{\bar{e}}\) acts by isometries on both, and \(\psi_{e}\) is \(G_{\bar{e}}\)-equivariant. Therefore, \(\psi_{e}\) is a quasi-isometry by Lemma 2.17. As there are only finitely many \(G\)-orbits of edges in \(T\), we can choose the constants of these quasi-isometries to be independent of the edge \(e\).
## 3. Croke-Kleiner admissible groups and \(\mathcal{H}\)-inaccessibility
In this section, we prove Theorem 1.4: every Croke-Kleiner admissible group has a finite index subgroup that is \(\mathcal{H}\)-inaccessible.
Fix a Croke-Kleiner admissible group \(\mathcal{G}=(\Gamma,\{G_{\mu}\},\{G_{\alpha}\},\{\tau_{\alpha}\})\). We partition the vertex set \(T^{0}\) of the Bass-Serre tree into two disjoint collections of vertices \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) such that if \(v\) and \(v^{\prime}\) are in \(\mathcal{V}_{i}\) then \(d_{T}(v,v^{\prime})\) is even. Since any automorphism of \(T\) either preserves \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) setwise or interchanges them, we have:
**Lemma 3.1** ([11, Lemma 4.6]).: _Let \(\mathcal{G}=(\Gamma,\{G_{\mu}\},\{G_{\alpha}\},\{\tau_{\alpha}\})\) be a Croke-Kleiner admissible group. There exists a subgroup \(G^{\prime}\leq G=\pi_{1}(\mathcal{G})\) of index at most \(2\) in \(G\) so that \(G^{\prime}\) preserves \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) and \(G^{\prime}\) is also a Croke-Kleiner admissible group._
Let \(G^{\prime}\) be the finite index subgroup of \(G\) given by Lemma 3.1. In light of Lemma 2.3, to show that \(G^{\prime}\) is \(\mathcal{H}\)-inaccessible, it suffices to construct commuting \(a_{i}\in G^{\prime}\) and actions \(G^{\prime}\curvearrowright X_{i}\) for \(i=1,2\) such that \(a_{i}\) is elliptic with respect to the action \(G^{\prime}\curvearrowright X_{3-i}\) and loxodromic with respect to the action \(G^{\prime}\curvearrowright X_{i}\). Our spaces \(X_{i}\) will be quasi-trees of metric spaces.
### Construction of group actions
For notational simplicity, we replace \(G\) by its index \(\leq 2\) subgroup \(G^{\prime}\). For each vertex \(v\) in the Bass-Serre tree \(T\), let \(L_{v}\) be the quasi-line from Definition 2.14. Recall that \(gL_{v}=L_{gv}\) for any group element \(g\) in \(G\).
Let \(\mathbb{L}_{1}\) be the collection of quasi-lines \(\{L_{v}\}_{v\in\mathcal{V}_{1}}\) and \(\mathbb{L}_{2}\) be the collection of quasi-lines \(\{L_{v}\}_{v\in\mathcal{V}_{2}}\). We define a projection of \(L_{v^{\prime}}\) to \(L_{v}\) for distinct elements of \(\mathbb{L}_{i}\) as follows.
**Definition 3.2** (Projection maps in \(\mathbb{L}_{i}\)).: For any two distinct vertices \(v,v^{\prime}\in\mathcal{V}_{i}\), let \(e^{\prime}=[w,u]\) and \(e=[u,v]\) denote the last two (oriented) edges in \([v^{\prime},v]\). The _projection_ from \(L_{v^{\prime}}\) into \(L_{v}\) is
\[\Pi_{L_{v}}(L_{v^{\prime}}):=\psi_{e}(\operatorname{proj}_{\ell_{\bar{e}}}( \ell_{e^{\prime}})),\]
where \(\psi_{e}\colon\ell_{\bar{e}}\to L_{v}\) and \(\operatorname{proj}_{\ell_{\bar{e}}}\colon\mathcal{H}_{u}\to\ell_{\bar{e}}\) are the maps introduced in Section 2.
The fact that \(d(v,v^{\prime})\) is even is not necessary for Definition 3.2, only that \(d(v,v^{\prime})\geq 2\).
We will verify that the \(\mathbb{L}_{i}\) with these projection maps satisfy the projection axioms (see Definition 2.7) for \(i=1,2\). Let \(d_{L_{a}}(L_{b},L_{c})\) be the projection distance \(\operatorname{diam}\bigl{(}\Pi_{L_{a}}(L_{b})\cup\Pi_{L_{a}}(L_{c})\bigr{)}\).
**Lemma 3.3**.: _There exists a constant \(\lambda>0\) such that \(\operatorname{diam}(\Pi_{L_{v}}(L_{v^{\prime}}))\leq\lambda\) for any distinct \(v,v^{\prime}\in\mathcal{V}_{i}\) for \(i=1,2\). Moreover, if \(a,b,c\in\mathcal{V}_{i}\) are distinct vertices with \(d_{T}(a,[b,c])\geq 2\), then \(\Pi_{L_{a}}(L_{c})=\Pi_{L_{a}}(L_{b})\)._
Proof.: By Remark 2.16, there is a uniform bound on the diameter of \(\operatorname{proj}_{\ell_{\bar{e}}}(\ell_{e^{\prime}})\). Combined with the fact that \(\psi_{e}\) is uniformly coarsely Lipschitz, this gives the constant \(\lambda\). Considering the convex hull of \(\{a\}\cup[b,c]\), we see that, orienting \([c,a]\) and \([b,a]\) towards \(a\), the last two edges of \([c,a]\) are the same as the last two edges of \([b,a]\). Hence by definition, \(\Pi_{L_{a}}(L_{c})=\Pi_{L_{a}}(L_{b})\).
Let \(v\) be a vertex of the Bass-Serre tree \(T\). By Remark 2.16, the collection \(\{\ell_{f}=i_{v}(\tau_{f}(X_{f}))\,|\,f\in E(T)\text{ such that}\,f^{+}=v\}\) satisfies the projection axioms with a constant \(\xi_{0}\). Let \(d_{\ell}\) denote the projection distances with respect to \(\operatorname{proj}_{\ell}\). The following lemma follows immediately from Lemma 2.18 and the definitions of \(d_{\ell_{e}}\) and \(d_{L_{v}}\).
**Lemma 3.4**.: _There exists a constant \(\lambda>0\) such that the following holds. Let \(u,v,w\) be distinct vertices in \(\mathcal{V}_{1}\) contained in \(\operatorname{Lk}(o)\) for some vertex \(o\) in \(\mathcal{V}_{2}\). Let \(e=[w,o]\), \(e_{1}=[u,o]\), and \(e_{2}=[v,o]\). Then_
\[\frac{1}{\lambda}d_{\ell_{e}}(\ell_{e_{1}},\ell_{e_{2}})-\lambda\leq d_{L_{w} }(L_{u},L_{v})\leq\lambda d_{\ell_{e}}(\ell_{e_{1}},\ell_{e_{2}})+\lambda.\]
We are now ready to verify the projection axioms.
**Proposition 3.5**.: _There exists \(\xi>0\) such that for each \(i\in\{1,2\}\), the collection \(\mathbb{L}_{i}\) together with the projection maps of Definition 3.2 satisfies the projection axioms with constant \(\xi\)._
Proof.: We verify the projection axioms for \(\mathbb{L}_{1}\). The case for \(\mathbb{L}_{2}\) is identical. The constant \(\xi\) will be defined explicitly during the proof.
**Axiom 1:** This follows from Lemma 3.3.
**Axiom 2:** Let \(u,v,w\) be distinct vertices in \(\mathcal{V}_{1}\). In the course of the proof, we will compute a constant \(\xi>0\) such that if \(d_{L_{w}}(L_{u},L_{v})>\xi\), then \(d_{L_{u}}(L_{w},L_{v})\leq\xi\).
Since \(d_{L_{w}}(L_{u},L_{v})>0\), it follows from Lemma 3.3 that either \(w\) lies on \([u,v]\) or \(d_{T}(w,[u,v])=1\). If \(w\) lies on \([u,v]\), then since \(u,w,v\in\mathcal{V}_{1}\), we have \(d_{T}(u,[w,v])\geq 2\) and \(d_{T}(v,[u,w])\geq 2\). Axiom 2 thus follows from Lemma 3.3.
On the other hand, suppose that \(d(w,[u,v])=1\). Let \(o\in[u,v]\) be adjacent to \(w\) and consider the vertices \(u^{\prime},v^{\prime}\in\operatorname{Lk}(o)\cap[u,v]\) which lie in \([u,o]\) and \([o,v]\), respectively. If \(u\neq u^{\prime}\), then \(d_{L_{u}}(L_{w},L_{v})=0\), and so we may assume without loss of generality that \(u=u^{\prime}\). Furthermore, \(\Pi_{L_{u}}(L_{v})=\Pi_{L_{u}}(L_{v^{\prime}})\) by definition. Thus to prove the upper bound on \(d_{L_{u}}(L_{w},L_{v})\), it suffices to assume that \(v=v^{\prime}\), in which case \(u,v,w\) all lie in \(\operatorname{Lk}(o)\), where \(o\in\mathcal{V}_{2}\).
Let \(e=[w,o]\), \(e_{1}=[u,o]\), and \(e_{2}=[v,o]\). It follows from Lemma 3.4 that
\[\frac{1}{\lambda}d_{\ell_{e}}(\ell_{e_{1}},\ell_{e_{2}})-\lambda\leq d_{L_{w} }(L_{u},L_{v})\leq\lambda d_{\ell_{e}}(\ell_{e_{1}},\ell_{e_{2}})+\lambda.\]
Again applying Lemma 3.4 with the roles of \(u,v,w\) exchanged, we have that
\[\frac{1}{\lambda}d_{\ell_{e_{1}}}(\ell_{e},\ell_{e_{2}})-\lambda\leq d_{L_{u} }(L_{w},L_{v})\leq\lambda d_{\ell_{e_{1}}}(\ell_{e},\ell_{e_{2}})+\lambda.\]
Since \(\{\ell_{f}\,|\,f\in E(T)\text{ and }f^{+}=o\}\) satisfies the projection axioms with constant \(\xi_{0}\), it follows that \(d_{\ell_{e}}(\ell_{e_{1}},\ell_{e_{2}})>\xi_{0}\) implies that \(d_{\ell_{e_{1}}}(\ell_{e},\ell_{e_{2}})\leq\xi_{0}\). Since there are finitely many choices for \(o\) up to the action of \(G\), the constant \(\xi_{0}\) may be chosen independently of \(o\). Thus, setting \(\xi=\lambda\xi_{0}+\lambda\), the above inequalities show that \(d_{L_{w}}(L_{u},L_{v})>\xi\) implies that \(d_{L_{u}}(L_{w},L_{v})\leq\xi\). This verifies Axiom 2.
**Axiom 3:** For distinct \(u,v\in\mathcal{V}_{1}\), we will prove the set
\[\{w\in\mathcal{V}_{1}\mid d_{L_{w}}(L_{u},L_{v})>\xi\}\]
is finite. By Lemma 3.3, any such vertex \(w\) is either contained in the interior of \([u,v]\) or satisfies \(d(w,[u,v])=1\). The first case yields at most \(d(u,v)-1\) choices for \(w\).
Suppose \(d(w,[u,v])=1\). As in the proof of Axiom 2, we can assume that \(u,v,w\) lie in \(\operatorname{Lk}(o)\) for some vertex \(o\) in \(\mathcal{V}_{2}\). Let \(e_{1}=[u,o],e_{2}=[v,o]\), and \(e=[w,o]\). By Lemma 3.4, we have \(d_{L_{w}}(L_{u},L_{v})\leq\lambda d_{\ell_{e}}(\ell_{e_{1}},\ell_{e_{2}})+\lambda\). Since \(\xi=\lambda\xi_{0}+\lambda\), it follows that
\[\{w\in\operatorname{Lk}(o)\,\big{|}\,d_{L_{w}}(L_{u},L_{v})>\xi\}\subset\{w \in\operatorname{Lk}(o)\,\big{|}\,d_{\ell_{e}}(\ell_{e_{1}},\ell_{e_{2}})> \xi_{0}\}.\]
The projection axioms for \(\{\ell_{f}\,|\,f\in E(T)\text{ and }f^{+}=o\}\) imply that the latter set is finite, and so the former set must also be finite. Since there are finitely many possibilities for \(o\), this verifies Axiom 3.
**Lemma 3.6**.: _For each \(i=1,2\), the action of \(G=\pi_{1}(\mathcal{G})\) on the collection \(\mathbb{L}_{i}=\{L_{v}\mid v\in\mathcal{V}_{i}\}\) satisfies_
\[\Pi_{gL_{v}}(gL_{u})=g\Pi_{L_{v}}(L_{u})\]
_for any \(v\in\mathcal{V}_{i}\) and any \(g\in G\)._
Proof.: This follows immediately from the definition of \(\Pi\) and the fact that the maps \(\operatorname{proj}\) and \(\psi\) are \(G\)-equivariant in the sense that \(\operatorname{proj}_{\ell_{g\bar{e}}}(\ell_{gf})=g\cdot\operatorname{proj}_{\ell_{\bar{e}}}(\ell_{f})\) and \(\psi_{ge}(gx)=g\psi_{e}(x)\).
We are now ready to prove Theorem 1.4.
Proof of Theorem 1.4.: Let \(G^{\prime}\) be the finite index subgroup of \(G\) given by Lemma 3.1, which is also a Croke-Kleiner admissible group. Without loss of generality, we replace \(G\) by \(G^{\prime}\) for the rest of the proof.
By Proposition 3.5, the collection of quasi-lines \(\mathbb{L}_{i}=\{L_{v}\mid v\in\mathcal{V}_{i}\}\) satisfies the projection axioms with a constant \(\xi\) for \(i=1,2\). Fix \(K>4\xi\). The unbounded quasi-trees of metric spaces \(\mathcal{C}_{K}(\mathbb{L}_{1})\) and \(\mathcal{C}_{K}(\mathbb{L}_{2})\) are themselves quasi-trees, and they admit unbounded isometric actions \(G\curvearrowright\mathcal{C}_{K}(\mathbb{L}_{1})\) and \(G\curvearrowright\mathcal{C}_{K}(\mathbb{L}_{2})\).
Since the underlying graph of \(G\) is bipartite, we can choose an edge \(\tilde{e}\) in \(\Gamma\) which is not a loop. Choosing the orientation of \(\tilde{e}\) correctly, \(\mu=\tilde{e}^{-}\) and \(\omega=\tilde{e}^{+}\) have lifts in \(T\) belonging to \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) respectively. By construction, elements of \(Z_{\mu}\) and \(Z_{\omega}\) are loxodromic and elliptic in the action on \(\mathcal{C}_{K}(\mathbb{L}_{1})\), respectively, and elliptic and loxodromic in the action on \(\mathcal{C}_{K}(\mathbb{L}_{2})\), respectively. By Lemma 2.3, we conclude that the group \(G\) is \(\mathcal{H}\)-inaccessible.
Every graph manifold has a finite cover that is a graph manifold \(N\) containing at least two Seifert fibered spaces such that each Seifert fibered piece has orientable, hyperbolic base orbifold. We call such a graph manifold _non-elementary_ in Section 4. Since \(\pi_{1}(N)\) is a Croke-Kleiner admissible group, the following corollary is immediate from Theorem 1.4.
**Corollary 3.7**.: _Every graph manifold has a finite cover whose fundamental group is \(\mathcal{H}\)-inaccessible._
It is still unknown whether \(\mathcal{H}\)-inaccessibility of a finite index normal subgroup of \(G\) passes to \(\mathcal{H}\)-inaccessibility of the ambient group \(G\). Thus it is natural to ask whether the "finite cover" condition in Corollary 3.7 can be removed. We will address this question in the following section.
## 4. \(\mathcal{H}\)-accessibility of 3-manifold groups
The goal of this section is to prove Theorem 1.1, which gives conditions under which the fundamental group of a non-geometric 3-manifold is \(\mathcal{H}\)-inaccessible.
We begin by recalling some definitions and facts about 3-manifolds. Let \(M\) be a compact, connected, orientable, irreducible 3-manifold with empty or toroidal boundary. By the geometrization theorem for 3-manifolds of Perelman ([Perel1], [Perel2], [Perel3]) and Thurston, either
1. the manifold \(M\) is _geometric_, in the sense that its interior admits one of the following geometries: \(S^{3}\), \(\mathbb{E}^{3}\), \(\mathbb{H}^{3}\), \(S^{2}\times\mathbb{R}\), \(\mathbb{H}^{2}\times\mathbb{R}\), \(\widetilde{SL(2,\mathbb{R})}\), Nil, and Sol; or
2. the manifold \(M\) is _non-geometric_. In this case, the torus decomposition of 3-manifolds yields a nonempty minimal union \(\mathcal{T}\subset M\) of disjoint essential tori, unique up to isotopy, such that each component of \(M\backslash\mathcal{T}\) is either a Seifert fibered piece or a hyperbolic piece.
We refer the reader to [21] for background on geometric structures on 3-manifolds. A Seifert fibered piece is called _non-elementary_ if its base orbifold is orientable and hyperbolic, and it is called _isolated_ if it is not glued to any other Seifert fibered piece.
The manifold \(M\) is called a _graph manifold_ if all the pieces of \(M\backslash\mathcal{T}\) are Seifert fibered. A graph manifold is _non-elementary_ if it contains at least two pieces and all pieces are non-elementary. In other words, a non-elementary graph manifold is obtained by gluing at least two and at most finitely many non-elementary Seifert fibered manifolds, where the gluing maps between the Seifert components do not identify (unoriented) Seifert fibers up to homotopy.
We will call a non-geometric manifold \(M\) a _mixed manifold_ if it is not a graph manifold. If there is a subcollection \(\mathcal{T}^{\prime}\) of \(\mathcal{T}\) and a connected component of \(M\backslash\mathcal{T}^{\prime}\) that is a graph manifold, then this connected component is called a _graph manifold component_ of the mixed manifold \(M\). A graph manifold component is _maximal_ if it is not properly contained in another graph manifold component. A mixed manifold is _non-elementary_ if all maximal graph manifold components and Seifert fibered pieces are non-elementary.
**Remark 4.1**.: Every graph (respectively, mixed) manifold is finitely covered by a non-elementary graph (respectively, mixed) manifold (see, for example, [14, Lemma 3.1], [11, Lemma 2.1]).
Our starting point for proving Theorem 1.1 is the following lemma, which describes when \(\pi_{1}(M)\) is relatively hyperbolic.
**Lemma 4.2** ([1, 15]).: _Let \(M_{1},\dots,M_{k}\) be the maximal graph manifold components and Seifert fibered pieces of the torus decomposition of \(M\). Let \(S_{1},\dots,S_{\ell}\) be the tori in the boundary of \(M\) that bound a hyperbolic piece, and let \(T_{1},\dots,T_{m}\) be the tori in the torus decomposition of \(M\) that separate two hyperbolic components. Then \(\pi_{1}(M)\) is hyperbolic relative to_
\[\mathbb{P}=\{\pi_{1}(M_{p})\}_{p=1}^{k}\cup\{\pi_{1}(S_{q})\}_{q=1}^{\ell} \cup\{\pi_{1}(T_{r})\}_{r=1}^{m}.\]
This relatively hyperbolic structure on \(\pi_{1}(M)\) is useful because of the following result, which gives a criterion for relatively hyperbolic groups to be \(\mathcal{H}\)-inaccessible.
**Lemma 4.3**.: _Let \((G,\mathbb{P})\) be a relatively hyperbolic group. If there is a peripheral subgroup \(P\in\mathbb{P}\) that satisfies the hypotheses of Lemma 2.3, then \(G\) is \(\mathcal{H}\)-inaccessible._
Before proving the lemma, we state an immediate corollary, which gives a different proof of [1, Theorem 6.2].
**Corollary 4.4** ([1, Theorem 6.2]).: _The fundamental group of a finite-volume cusped hyperbolic 3-manifold is \(\mathcal{H}\)-inaccessible._
We now turn to the proof of Lemma 4.3.
Proof of Lemma 4.3.: To see that \(\mathcal{H}(G)\) does not contain a largest element, we will construct two actions of \(G\) on hyperbolic spaces with commuting elements \(a,b\in G\) that satisfy the hypotheses of Lemma 2.3. To do this, we will apply the machinery of _induced actions_ from [15].
Since \(P\) satisfies the hypotheses of Lemma 2.3 by assumption, there are commuting elements \(a,b\in P\) and isometric actions \(P\curvearrowright X\) and \(P\curvearrowright Y\) on hyperbolic spaces such that \(a\) and \(b\) act loxodromically and elliptically, respectively, in the action \(P\curvearrowright X\), and \(b\) acts loxodromically in the action \(P\curvearrowright Y\). For all \(Q\in\mathbb{P}\setminus\{P\}\), fix the trivial action of \(Q\) on a point. By [1], there exist hyperbolic spaces \(Z_{X},Z_{Y}\) on which \(G\) acts by isometries, associated to the collection of actions \(\{Q\curvearrowright*|\;Q\in\mathbb{P}\setminus\{P\}\}\cup\{P\curvearrowright X\}\) and the collection of actions \(\{Q\curvearrowright*|\;Q\in\mathbb{P}\setminus\{P\}\}\cup\{P\curvearrowright Y\}\), respectively. Moreover, there are coarsely \(P\)-equivariant quasi-isometric embeddings \(X\to Z_{X}\) and \(Y\to Z_{Y}\). Therefore, \(a\) acts loxodromically and \(b\) acts elliptically in the action \(G\curvearrowright Z_{X}\), while \(b\) acts loxodromically in the action \(G\curvearrowright Z_{Y}\). This completes the proof.
In light of Lemmas 4.2 and 4.3, to prove the \(\mathcal{H}\)-inaccessibility of \(\pi_{1}(M)\), it suffices to understand its peripheral subgroups. In Section 4.1, we analyze the fundamental groups of the Seifert fibered pieces. The more difficult subgroups to understand are the fundamental groups of the maximal graph manifold components. We consider these in Section 4.2 and give conditions under which they satisfy Lemma 2.3; see Proposition 4.8. In Section 4.3, we put these results together and prove Theorem 1.1. Up to this point, we have been assuming that \(M\) has empty or toroidal boundary. Finally, in Section 4.4, we consider \(3\)-manifolds with higher genus boundary components.
### Seifert fibered manifolds
In this section, we analyze Seifert fibered pieces.
**Lemma 4.5**.: _Let \(1\to\mathbb{Z}\xrightarrow{i}G\xrightarrow{\pi}H\to 1\) be a short exact sequence where \(\mathbb{Z}\) is central in \(G\) and \(H\) is a non-elementary hyperbolic group. Then \(G\) is \(\mathcal{H}\)-inaccessible._
Proof.: Choose a finite generating set \(J\) of \(H\) and consider the hyperbolic action \(G\curvearrowright\operatorname{Cay}(H,J)\). Let \(a\) be a generator of the group \(\mathbb{Z}\), and let \(b\) be an element of \(G\) such that \(\pi(b)\) is loxodromic in \(H\curvearrowright\operatorname{Cay}(H,J)\). The element \(b\) is thus loxodromic in the action \(G\curvearrowright\operatorname{Cay}(H,J)\), as well, while the element \(a\) is elliptic (in fact, trivial) in this action.
Since every integral cohomology class of a hyperbolic group is bounded (see [11]), the central extension \(\mathbb{Z}\to G\to H\) corresponds to a bounded element of \(H^{2}(H,\mathbb{Z})\). Hence [11, Lemma 4.1] provides a quasi-morphism \(\phi\colon G\to\mathbb{Z}\) which is unbounded on \(i(\mathbb{Z})\). By [1, Lemma 4.15], there exists a generating set \(S\) for \(G\) such that \(L:=\operatorname{Cay}(G,S)\) is a quasi-line and the inclusion \(\mathbb{Z}\to L\) induced by \(i\) is a \(\mathbb{Z}\)-equivariant quasi-isometry. We thus obtain a hyperbolic action \(G\curvearrowright L\) for which \(a\) is loxodromic. Since \(a\in Z(G)\), the elements \(a\) and \(b\) commute. By Lemma 2.3, \(G\) is \(\mathcal{H}\)-inaccessible.
**Corollary 4.6**.: _Let \(M\) be a non-elementary Seifert fibered manifold. Then \(\pi_{1}(M)\) is \(\mathcal{H}\)-inaccessible._
Proof.: Let \(\varphi\colon M\to\Sigma\) be a Seifert fibration. Since \(S^{1}\to M\to\Sigma\) is a circle bundle over \(\Sigma\), there is a short exact sequence
\[1\to\mathbb{Z}\to\pi_{1}(M)\to\pi_{1}(\Sigma)\to 1,\]
where \(\mathbb{Z}\) is the normal cyclic subgroup of \(\pi_{1}(M)\) generated by a fiber. The group \(\mathbb{Z}\) is central in \(\pi_{1}(M)\) since \(\Sigma\) is orientable (see, e.g., [10, Proposition 10.4.4]). By Lemma 4.5, the group \(\pi_{1}(M)\) is \(\mathcal{H}\)-inaccessible.
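For example, the unit tangent bundle of a closed orientable hyperbolic surface \(\Sigma\) is a non-elementary Seifert fibered manifold; its fundamental group is a central extension of the non-elementary hyperbolic group \(\pi_{1}(\Sigma)\) by the copy of \(\mathbb{Z}\) generated by the fiber, so Lemma 4.5, and hence Corollary 4.6, applies directly.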
### \(\mathcal{H}\)-accessibility of non-elementary graph manifolds
Let \(M\) be a \(3\)-dimensional non-elementary graph manifold with Seifert fibered pieces \(M_{1},\dots,M_{k}\) in its torus decomposition. There is an induced graph-of-groups structure \(\mathcal{G}\) on \(\pi_{1}(M)\) with underlying graph \(\Gamma\) as follows. There is a vertex of \(\Gamma\) for each \(M_{i}\), with vertex group \(\pi_{1}(M_{i})\). Each edge group is \(\mathbb{Z}^{2}\), the fundamental group of a torus in the decomposition, and the edge monomorphisms come from the two different gluings of the torus into the two adjacent Seifert fibered components. With this graph of groups structure, \(\pi_{1}(M)\) is a Croke-Kleiner admissible group.
The universal cover \(\widetilde{M}\) of \(M\) is tiled by a countable collection of copies of the universal covers \(\widetilde{M}_{1},\ldots,\widetilde{M}_{k}\). We call these subsets _vertex spaces_. We refer to boundary components of vertex spaces as _edge spaces_. Two vertex spaces are either disjoint or intersect along an edge space. Let \(T\) be the Bass-Serre tree of \(\mathcal{G}\).
Applying Theorem 1.4 to the Croke-Kleiner admissible group \(\pi_{1}(M)\), we obtain a cover \(M^{\prime}\to M\) of degree 2 such that \(\pi_{1}(M^{\prime})\) is not \(\mathcal{H}\)-accessible. However, this is not enough to conclude \(\mathcal{H}\)-inaccessibility of \(\pi_{1}(M)\), as it is unknown whether \(\mathcal{H}\)-inaccessibility of a finite index subgroup passes to the ambient group. In this section, we will show that \(\pi_{1}(M)\) itself is \(\mathcal{H}\)-inaccessible; see Proposition 4.8.
We begin with a lemma. Let \(\pi_{\alpha}(\beta)\) be the closest point projection of a line \(\beta\) to a line \(\alpha\) in a hyperbolic space, and let \(d_{\alpha}(\cdot,\cdot)\) denote the resulting projection distance.
**Lemma 4.7**.: _Let \(F\) be a 2-dimensional hyperbolic orbifold with nonempty boundary and universal cover \(\widetilde{F}\), and let \(\mathbb{L}\) be the collection of boundary lines of \(\widetilde{F}\). For any \(\alpha\in\mathbb{L}\) and any loxodromic \(\gamma\in\pi_{1}(F)\) whose axis in \(\widetilde{F}\) is also a line in \(\mathbb{L}\), the following holds. There exists a constant \(\lambda>0\) such that for any \(n\in\mathbb{Z}\) and any line \(\beta\in\mathbb{L}\setminus\{\alpha,\gamma^{n}(\alpha)\}\), we have_
\[d_{\beta}(\alpha,\gamma^{n}(\alpha))\leq\lambda.\]
The proof of this lemma is very similar to that of [1, Lemma 5.6]. We refer the reader to that paper for some figures that may be helpful; see in particular [1, Figure 8].
Proof of Lemma 4.7.: Since \(\mathbb{L}\) is a \(\pi_{1}(F)\)-invariant collection of axes in the hyperbolic plane \(\mathbb{H}^{2}\), it follows from Example 2.8 that \((\mathbb{L},\pi_{\ell})\) satisfies the projection axioms for some constant \(\xi\). In particular, there exists a constant \(\xi>1\) such that \(\operatorname{diam}(\pi_{\ell}(\ell^{\prime}))\leq\xi\) for distinct elements \(\ell\) and \(\ell^{\prime}\) in \(\mathbb{L}\). Let
\[\lambda=\max\{\xi,d(\alpha,\gamma(\alpha))+2\xi,d(\alpha,\gamma^{2}(\alpha))+2 \xi\}.\]
Let \(\ell\in\mathbb{L}\) denote the axis of \(\gamma\) in \(\widetilde{F}\). If \(\ell=\alpha\), then
\[d_{\beta}(\alpha,\gamma^{n}(\alpha))=d_{\beta}(\alpha,\alpha)\leq\xi\leq\lambda,\]
and the result holds. For the remainder of the proof, we assume \(\ell\neq\alpha\) and consider two cases.

_Case 1_: \(\beta\notin\{\gamma^{k}(\alpha)\mid k\in\mathbb{Z}\}\).
In this case, there exists a unique \(k_{0}\in\mathbb{Z}\) such that \(\beta\) lies between \(\gamma^{k_{0}}(\alpha)\) and \(\gamma^{k_{0}+1}(\alpha)\). That is, \(\partial\mathbb{H}^{2}\) minus the endpoints of \(\gamma^{k_{0}}(\alpha)\) and \(\gamma^{k_{0}+1}(\alpha)\) consists of four intervals, one containing the endpoints of \(\beta\), one containing the endpoints of all \(\gamma^{i}(\alpha)\) for \(i\notin\{k_{0},k_{0}+1\}\), and the other two disjoint from all endpoints of lines in \(\mathbb{L}\). Fixing an appropriate orientation on \(\beta\), we partially order sub-intervals \(I=[x,y]\) and \(J=[z,w]\) of \(\beta\) (with \(x\leq y\) and \(z\leq w\) in the orientation) by \(I\leq J\) if \(x\leq z\) and \(y\leq w\). Then the projections of the lines \(\gamma^{k}(\alpha)\) onto \(\beta\) occur in the following order
\[\pi_{\beta}(\gamma^{k_{0}}(\alpha))<\pi_{\beta}(\gamma^{k_{0}-1}(\alpha))<\pi_ {\beta}(\gamma^{k_{0}-2}(\alpha))<\ldots<\pi_{\beta}(\ell)\]
and
\[\pi_{\beta}(\ell)<\ldots<\pi_{\beta}(\gamma^{k_{0}+3}(\alpha))<\pi_{\beta}( \gamma^{k_{0}+2}(\alpha))<\pi_{\beta}(\gamma^{k_{0}+1}(\alpha)).\]
Thus
\[d_{\beta}(\alpha,\gamma^{n}(\alpha))\leq d_{\beta}(\gamma^{k_{0}}(\alpha), \gamma^{k_{0}+1}(\alpha))\leq d(\gamma^{k_{0}}(\alpha),\gamma^{k_{0}+1}( \alpha))+2\xi,\]
where \(d(\gamma^{k_{0}}(\alpha),\gamma^{k_{0}+1}(\alpha))\) denotes the distance between \(\gamma^{k_{0}}(\alpha)\) and \(\gamma^{k_{0}+1}(\alpha)\) in the hyperbolic plane. The final inequality follows from the fact that the nearest point projection is a 1-Lipschitz map and that \(\pi_{\ell}(\ell^{\prime})\) has diameter at most \(\xi\) for any distinct lines \(\ell,\ell^{\prime}\in\mathbb{L}\). Since \(d(\gamma^{k_{0}}(\alpha),\gamma^{k_{0}+1}(\alpha))=d(\alpha,\gamma(\alpha))\), it follows that
\[d_{\beta}(\alpha,\gamma^{n}(\alpha))\leq d(\alpha,\gamma(\alpha))+2\xi\leq\lambda.\]
_Case 2_: \(\beta=\gamma^{k}(\alpha)\) for some integer \(k\neq 0,n\). Using an analogous argument to Case 1, we see that \(d_{\beta}(\alpha,\gamma^{n}(\alpha))\) is bounded above by
\[d_{\beta}(\gamma^{k-1}(\alpha),\gamma^{k+1}(\alpha))\leq d(\alpha,\gamma^{2}( \alpha))+2\xi\leq\lambda.\qed\]
**Proposition 4.8**.: _The fundamental group of a non-elementary graph manifold is \(\mathcal{H}\)-inaccessible._
Proof.: Let \(\mathcal{G}\) be the graph-of-groups structure on \(\pi_{1}(M)\) with underlying graph \(\Gamma\) described at the beginning of this section. The assumption that the graph manifold \(M\) is non-elementary ensures that there are at least two vertices in the graph \(\Gamma\). We divide the proof into two cases, depending on the location of loops in \(\Gamma\).
Fix an edge \(\alpha\) in \(\Gamma\) that is not a loop, and label the vertex \(\alpha^{-}\) by \(\mu\) and the vertex \(\alpha^{+}\) by \(\omega\). Let \(T_{\alpha}\) be the torus in \(M\) associated to the edge \(\alpha\). Let \(v\) and \(w\) be two adjacent vertices in the tree \(T\) such that \(\widetilde{M}_{v}\) and \(\widetilde{M}_{w}\) are the universal covers of the Seifert pieces \(M_{\mu}\) and \(M_{\omega}\), respectively. Let \(z_{\mu}\) and \(z_{\omega}\) be the generators of \(Z_{\mu}\) and \(Z_{\omega}\), respectively.
_Case 1:_ Suppose there is no loop in \(\Gamma\) based at the vertex \(\mu\). Let
\[\mathbb{W}_{\mu}:=\left\{L_{v}\,\big{|}\,\widetilde{M}_{v}\text{ is a lift of the Seifert fibered piece }M_{\mu}\right\}\]
If \(L_{v}\) and \(L_{v^{\prime}}\) are two distinct elements in \(\mathbb{W}_{\mu}\), then \(d(v,v^{\prime})\geq 2\) (though they are not necessarily an even distance apart). In this case, the techniques in Section 3 apply to show that there is a cobounded action \(\pi_{1}(M)\curvearrowright\mathcal{C}_{K}(\mathbb{W}_{\mu})\) such that \(z_{\mu}\) is loxodromic and \(z_{\omega}\) is elliptic.
_Case 2:_ Suppose there is a loop in \(\Gamma\) based at the vertex \(\mu\).
As in Section 3, we partition the vertex set \(T^{0}\) into two disjoint collections of vertices \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\) such that if \(z\) and \(z^{\prime}\) both lie in \(\mathcal{V}_{i}\) then \(d(z,z^{\prime})\) is even. Applying Theorem 1.4 to the Croke-Kleiner admissible group \(\pi_{1}(M)\), we obtain a degree 2 cover \(M^{\prime}\to M\) such that \(\pi_{1}(M^{\prime})\) is \(\mathcal{H}\)-inaccessible and \(\pi_{1}(M^{\prime})\) preserves \(\mathcal{V}_{1}\) and \(\mathcal{V}_{2}\).
Assume without loss of generality that \(v\) is in \(\mathcal{V}_{1}\) and \(w\) is in \(\mathcal{V}_{2}\). Let
\[\mathbb{Q}_{\mu}:=\left\{L_{u}\,\big{|}\,u\in\mathcal{V}_{1}\text{ and } \widetilde{M}_{u}\text{ is a lift of }M_{\mu}\right\}\]
The results in Section 3 still hold for \(\mathbb{Q}_{\mu}\), and thus we obtain quasi-trees of spaces \(\mathcal{C}_{K}(\mathbb{Q}_{\mu})\) for sufficiently large \(K\). Since \(\pi_{1}(M^{\prime})\) preserves \(\mathbb{Q}_{\mu}\), we obtain an action \(\pi_{1}(M^{\prime})\curvearrowright\mathcal{C}_{K}(\mathbb{Q}_{\mu})\) as in Section 3.
Replacing \(z_{\mu}\) and \(z_{\omega}\) by their squares if necessary, we may assume that \(z_{\mu},z_{\omega}\in\pi_{1}(M^{\prime})\). As shown in the proof of Theorem 1.4, the element \(z_{\mu}\) acts loxodromically on \(\mathcal{C}_{K}(\mathbb{Q}_{\mu})\), while \(z_{\omega}\) acts elliptically on \(\mathcal{C}_{K}(\mathbb{Q}_{\mu})\). By the construction of \(\mathcal{C}_{K}(\mathbb{Q}_{\mu})\), and since the vertex groups are central extensions of non-elementary hyperbolic groups by \(\mathbb{Z}\), the element \(z_{\mu}\) is a WWPD\({}^{+}\) element in the action \(\pi_{1}(M^{\prime})\curvearrowright\mathcal{C}_{K}(\mathbb{Q}_{\mu})\). Hence Proposition 2.5 provides a homogeneous quasimorphism \(q_{K}\colon\pi_{1}(M^{\prime})\to\mathbb{R}\) satisfying \(q_{K}(z_{\omega})=0\) and \(q_{K}(z_{\mu})\neq 0\).
Our goal is to extend \(q_{K}\) to a homogeneous quasimorphism \(\pi_{1}(M)\to\mathbb{R}\) while ensuring that \(z_{\omega}\) and \(z_{\mu}\) still have trivial and non-trivial image, respectively. Let \(h\in\pi_{1}(M)\) be a representative of the non-trivial coset of \(\pi_{1}(M^{\prime})\) in \(\pi_{1}(M)\). Define a function \(q_{K}^{\prime}\colon\pi_{1}(M^{\prime})\to\mathbb{R}\) by
\[q_{K}^{\prime}(x):=q_{K}(x)+q_{K}(hxh^{-1}).\]
Note that \(q^{\prime}_{K}\) is constant on conjugacy classes of \(\pi_{1}(M)\), i.e., \(q^{\prime}_{K}(yxy^{-1})=q^{\prime}_{K}(x)\) for any \(y\in\pi_{1}(M)\) and \(x\in\pi_{1}(M^{\prime})\). Hence it follows from the proof of [1, Lemma 7.2] that \(q^{\prime}_{K}\) extends to a homogeneous quasimorphism \(\rho_{K}\colon\pi_{1}(M)\to\mathbb{R}\) defined by \(\rho_{K}(x):=q^{\prime}_{K}(x^{2})/2\) for each \(x\in\pi_{1}(M)\).
**Lemma 4.9**.: _Suppose there is a loop in \(\Gamma\) based at \(\mu\). For \(K\) large enough, we have \(\rho_{K}(z_{\omega})=0\) and \(\rho_{K}(z_{\mu})\neq 0\)._
We defer the proof of the lemma for the moment and assume this result to complete the proof of Proposition 4.8. Since \(\rho_{K}\colon\pi_{1}(M)\to\mathbb{R}\) is a nonzero homogeneous quasimorphism, we obtain from Proposition 2.6 an action \(\pi_{1}(M)\curvearrowright\mathcal{L}\) on a quasi-line. Moreover, since \(\rho_{K}(z_{\mu})\neq 0\) and \(\rho_{K}(z_{\omega})=0\), the element \(z_{\mu}\) is loxodromic while \(z_{\omega}\) is elliptic in this action.
Now, consider the other endpoint \(\omega\) of \(\alpha\). Suppose first there is not a loop in \(\Gamma\) based at \(\omega\). Interchanging the roles of \(\mu\) and \(\omega\) in Case 1 above produces an action \(\pi_{1}(M)\curvearrowright\mathcal{C}_{K}(\mathbb{W}_{\omega})\) such that \(z_{\mu}\) is elliptic and \(z_{\omega}\) is loxodromic. On the other hand, if there is a loop in \(\Gamma\) based at \(\omega\), then interchanging the roles of \(\mu\) and \(\omega\) in Case 2 above produces an action \(\pi_{1}(M)\curvearrowright\mathcal{L}^{\prime}\) on a quasi-line in which (after possibly passing to a power of 2) \(z_{\mu}\) is elliptic and \(z_{\omega}\) is loxodromic.
Regardless of which combination of cases holds for the vertices \(\mu\) and \(\omega\), we have produced two actions on hyperbolic spaces and two commuting elements \(z_{\mu}\) and \(z_{\omega}\) which satisfy the conditions of Lemma 2.3, which concludes the proof.
We now prove Lemma 4.9.
Proof of Lemma 4.9.: Recall that \(v\in\mathcal{V}_{1}\). As \(h\in\pi_{1}(M)\) is a representative of the non-trivial coset of \(\pi_{1}(M^{\prime})\) in \(\pi_{1}(M)\), we have \(hv\in\mathcal{V}_{2}\). Note that \(\widetilde{M}_{hv}\) is also a lift of \(M_{\mu}\), even though \(hv\) is not in \(\mathcal{V}_{1}\). Fix a vertex \(v_{0}\) adjacent to \(hv\) such that \(\widetilde{M}_{v_{0}}\) is a lift of \(M_{\mu}\) in \(\widetilde{M}\). This ensures that \(L_{v_{0}}\) is in \(\mathbb{Q}_{\mu}\). Let \(l\in\mathbb{L}_{hv}\) be the boundary line of \(\widetilde{F}_{hv}\) corresponding to the edge \([v_{0},hv]\).
We will first show that \(\rho_{K}(z_{\omega})=0\). Since \(q_{K}\) is a homogeneous quasimorphism and \(z_{\omega}\in\pi_{1}(M^{\prime})\), we have that
\[\rho_{K}(z_{\omega})=q^{\prime}_{K}(z_{\omega})=q_{K}(z_{\omega})+q_{K}(hz_{ \omega}h^{-1})=0+q_{K}(hz_{\omega}h^{-1}).\]
By Proposition 2.5, to show \(\rho_{K}(z_{\omega})=q_{K}(hz_{\omega}h^{-1})=0\), it suffices to show that \(hz_{\omega}h^{-1}\) is elliptic in the action \(\pi_{1}(M^{\prime})\curvearrowright\mathcal{C}_{K}(\mathbb{Q}_{\mu})\). Let \(\xi>0\) be the projection constant for the collection \(\mathbb{Q}_{\mu}\). Since \(M_{hv}\) is a Seifert fibered piece, we have \(\widetilde{M}_{hv}=\widetilde{F}_{hv}\times\mathbb{R}\), where \(F_{hv}\) is the base orbifold of \(M_{hv}\). Applying Lemma 4.7 to the space \(F_{hv}\), the collection of boundary lines of \(\widetilde{F}_{hv}\), the fixed boundary line \(l\), and the chosen element \(\gamma=hz_{\omega}h^{-1}\), we obtain a constant \(\lambda>0\). We further enlarge \(\lambda\) so that it satisfies Lemma 3.4.
Choose \(K>4\xi+4+2\lambda+\lambda^{2}\) large enough to apply Proposition 2.9, and let \(y_{0}\) be a point in the projection \(\Pi_{L_{v_{0}}}(L_{\gamma(v_{0})})\). We will show that \(d_{\mathcal{C}_{K}(\mathbb{Q}_{\mu})}(y_{0},\gamma^{n}(y_{0}))\leq 6K\) for all \(n\in\mathbb{Z}\), which will imply that \(\gamma\) is elliptic in the action \(\pi_{1}(M^{\prime})\curvearrowright\mathcal{C}_{K}(\mathbb{Q}_{\mu})\), as desired.
Fix \(n\in\mathbb{Z}\). By Proposition 2.9, we have
\[d_{\mathcal{C}_{K}(\mathbb{Q}_{\mu})}(y_{0},\gamma^{n}(y_{0}))\leq 4\sum_{\begin{subarray}{c}u\in\mathcal{V}_{1}\\ L_{u}\in\mathbb{Q}_{\mu}\end{subarray}}[d_{L_{u}}(y_{0},\gamma^{n}(y_{0}))]_{K}+6K. \tag{2}\]
Thus it suffices to show that \(d_{L_{u}}(y_{0},\gamma^{n}(y_{0}))<K\) for all \(u\in\mathcal{V}_{1}\) such that \(L_{u}\in\mathbb{Q}_{\mu}\). Since \(L_{u}\in\mathbb{Q}_{\mu}\), \(\widetilde{M}_{u}\) is a lift of \(M_{\mu}\).
We divide the proof into several cases, depending on the location of the vertex \(u\).
_Case 1: \(u\in\{v_{0},\gamma^{n}(v_{0})\}\)._ We assume that \(u=v_{0}\) as the case \(u=\gamma^{n}(v_{0})\) is proved similarly.
By assumption, \(y_{0}\in L_{v_{0}}\), and so \(\gamma^{n}(y_{0})\in L_{\gamma^{n}(v_{0})}\). By definition,
\[d_{L_{v_{0}}}(y_{0},\gamma^{n}(y_{0}))=\operatorname{diam}(\{y_{0}\}\cup\Pi_{L_{ v_{0}}}(L_{\gamma^{n}(v_{0})}))\]
and
\[d_{L_{v_{0}}}(L_{\gamma(v_{0})},L_{\gamma^{n}(v_{0})})=\operatorname{diam} \bigl{(}\Pi_{L_{v_{0}}}(L_{\gamma(v_{0})})\cup\Pi_{L_{v_{0}}}(L_{\gamma^{n}(v_{ 0})})\bigr{)}.\]
As \(y_{0}\in\Pi_{L_{v_{0}}}(L_{\gamma(v_{0})})\) and the diameter of \(\Pi_{L_{v_{0}}}(L_{\gamma(v_{0})})\) is no more than \(\xi\), it follows that
\[\bigl{|}d_{L_{v_{0}}}(y_{0},\gamma^{n}(y_{0}))-d_{L_{v_{0}}}(L_{\gamma(v_{0})},L_{\gamma^{n}(v_{0})})\bigr{|}\leq 2\xi. \tag{3}\]
The line \(l\) is the boundary line of \(\widetilde{F}_{hv}\) associated to the edge \([v_{0},hv]\). Recall that \(z_{\omega}\) is an element of the edge group \(G_{[v,w]}\), and so it fixes the vertex \(v\). Thus \(\gamma(hv)=hz_{\omega}h^{-1}(hv)=hz_{\omega}(v)=hv\), and so the lines \(\gamma(l)\) and \(\gamma^{n}(l)\) are the boundary lines in \(\widetilde{F}_{hv}\) associated to the edges \([hv,\gamma(v_{0})]\) and \([hv,\gamma^{n}(v_{0})]\), respectively.
Combining (3) with Lemmas 3.4 and 4.7 implies that
\[d_{L_{v_{0}}}(y_{0},\gamma^{n}(y_{0})) \leq d_{L_{v_{0}}}(L_{\gamma(v_{0})},L_{\gamma^{n}(v_{0})})+2\xi\] \[\leq\lambda d_{l}(\gamma(l),\gamma^{n}(l))+\lambda+2\xi\] \[=\lambda\,d_{\gamma^{-1}(l)}(l,\gamma^{n-1}(l))+\lambda+2\xi\leq \lambda^{2}+\lambda+2\xi<K.\]
_Case 2: \(u\in\operatorname{Lk}(hv)\) but \(u\not\in\{v_{0},\gamma^{n}(v_{0})\}\)._ Let \(b\) be the boundary line of \(\widetilde{F}_{hv}\) corresponding to the edge \([u,hv]\), so that \(b\notin\{l,\gamma^{n}(l)\}\). By Lemma 4.7, we have that \(d_{b}(l,\gamma^{n}(l))\leq\lambda\). It follows from Lemma 3.4 that
\[d_{L_{u}}(y_{0},\gamma^{n}(y_{0})) =d_{L_{u}}(L_{v_{0}},L_{\gamma^{n}(v_{0})})\] \[\leq\lambda\,d_{b}(l,\gamma^{n}(l))+\lambda\] \[\leq\lambda^{2}+\lambda<K.\]
_Case 3: \(u\not\in\operatorname{Lk}(hv)\)._ In this case, \(d(u,[v_{0},\gamma^{n}(v_{0})])\geq 2\), and so
\[d_{L_{u}}(y_{0},\gamma^{n}y_{0})=d_{L_{u}}(L_{v_{0}},L_{\gamma^{n}(v_{0})}) \leq\lambda<K.\]
We have shown that \(d_{L_{u}}(y_{0},\gamma^{n}(y_{0}))<K\) for all \(u\in\mathcal{V}_{1}\) such that \(L_{u}\in\mathbb{Q}_{\mu}\). Therefore (2) shows that \(d_{\mathcal{C}_{K}(\mathbb{Q}_{\mu})}(y_{0},\gamma^{n}(y_{0}))\leq 6K\) for all \(n\). It follows that \(\gamma\) is elliptic in the action \(\pi_{1}(M^{\prime})\curvearrowright\mathcal{C}_{K}(\mathbb{Q}_{\mu})\), and so \(q_{K}(hz_{\omega}h^{-1})=0\), whence \(\rho_{K}(z_{\omega})=0\).
To complete the proof, we need to verify that
\[\rho_{K}(z_{\mu})=q^{\prime}_{K}(z_{\mu})=q_{K}(z_{\mu})+q_{K}(hz_{\mu}h^{-1}) \neq 0.\]
Since \(hz_{\mu}h^{-1}\) is a central element in \(G_{hv}=\operatorname{Stab}_{G}(hv)\), it follows from Remark 2.13 that \(hz_{\mu}h^{-1}\) acts elliptically on \(L_{v_{0}}\), and thus also on \(\mathcal{C}_{K}(\mathbb{Q}_{\mu})\). By Proposition 2.5, we have \(q_{K}(hz_{\mu}h^{-1})=0\). Since \(q_{K}(z_{\mu})\neq 0\), it follows that \(\rho_{K}(z_{\mu})\neq 0\).
Theorem 1.3 now follows immediately from Corollary 4.6 and Proposition 4.8.
### Theorem 1.1
In this section, we put together the above results and prove Theorem 1.1, whose statement we recall for the convenience of the reader.
**Theorem 1.1**.: _Let \(M\) be a non-geometric 3-manifold with empty or toroidal boundary. If the torus decomposition of \(M\) contains any of the following, then \(\pi_{1}(M)\) is \(\mathcal{H}\)-inaccessible:_
1. _a hyperbolic piece which contains a boundary torus of_ \(M\)_;_
2. _two hyperbolic pieces glued along a torus;_
3. _an isolated non-elementary Seifert fibered piece; or_
4. _a non-elementary maximal graph manifold component._
Proof.: Let \(M_{1},\ldots,M_{k}\) be the maximal graph manifold components and isolated Seifert fibered pieces of the torus decomposition of \(M\). Let \(S_{1},\ldots,S_{\ell}\) be the tori in the boundary of \(M\) that bound a hyperbolic piece, and let \(T_{1},\ldots,T_{m}\) be the tori in the torus decomposition of \(M\) that separate two hyperbolic components of the torus decomposition. By Lemma 4.2, \(\pi_{1}(M)\) is hyperbolic relative to
\[\mathbb{P}=\{\pi_{1}(M_{p})\}_{p=1}^{k}\cup\{\pi_{1}(S_{q})\}_{q=1}^{\ell}\cup \{\pi_{1}(T_{r})\}_{r=1}^{m}.\]
In all of the cases (1)-(4), the collection \(\mathbb{P}\) is non-empty.
In case (1), the collection \(\{S_{1},\ldots,S_{\ell}\}\neq\emptyset\), while in case (2), \(\{T_{1},\ldots,T_{m}\}\neq\emptyset\). Both of these collections consist of tori. Note that \(\mathbb{Z}^{2}\) is \(\mathcal{H}\)-inaccessible: the projections of \(\mathbb{Z}^{2}\) onto each factor yield two actions on lines to which Lemma 2.3 applies. Thus, if \(\{\pi_{1}(S_{q})\}\cup\{\pi_{1}(T_{r})\}\) is nonempty, then \(\pi_{1}(M)\) is \(\mathcal{H}\)-inaccessible by Lemma 4.3, proving the theorem in cases (1) and (2).
Next suppose that (3) holds, so that there is an isolated non-elementary Seifert fibered piece \(M_{p}\). By the proof of Corollary 4.6 we see that \(\pi_{1}(M_{p})\) has two actions to which Lemma 2.3 applies. By Lemma 4.3, \(\pi_{1}(M)\) is \(\mathcal{H}\)-inaccessible.
Finally, suppose that (4) holds, so that there is a non-elementary maximal graph manifold component \(M_{p}\). By the proof of Proposition 4.8, there are two commuting elements \(a,b\in\pi_{1}(M_{p})\) and two actions on hyperbolic spaces (in fact, quasi-trees) \(\pi_{1}(M_{p})\curvearrowright X\) and \(\pi_{1}(M_{p})\curvearrowright Y\) such that \(a\) and \(b\) are elliptic and loxodromic, respectively, in \(\pi_{1}(M_{p})\curvearrowright X\) and \(a\) is loxodromic in \(\pi_{1}(M_{p})\curvearrowright Y\). Applying Lemma 4.3 to \(P=\pi_{1}(M_{p})\), we conclude that \(\mathcal{H}(\pi_{1}(M))\) contains no largest element.
### \(\mathcal{H}\)-accessibility of finitely generated 3-manifold groups
In this section, we explain how one might reduce the study of \(\mathcal{H}\)-accessibility of all finitely generated 3-manifold groups to the case of compact, orientable, irreducible, \(\partial\)-irreducible 3-manifold groups. In particular, we show that for any hyperbolic 3-manifold \(M\) without rank-1 cusps, if \(\pi_{1}(M)\) is finitely generated then it is \(\mathcal{H}\)-accessible.
Let \(M\) be an orientable 3-manifold with finitely generated fundamental group. It follows from Scott's Core Theorem that \(M\) contains a compact codimension zero submanifold whose inclusion map is a homotopy equivalence [10], and thus also an isomorphism on fundamental groups. We thus can assume our 3-manifolds are compact.
The sphere-disk decomposition provides a decomposition of a compact, orientable 3-manifold \(M\) into irreducible, \(\partial\)-irreducible pieces \(M_{1},\ldots,M_{k}\). In particular, \(\pi_{1}(M)\) is a free product \(\pi_{1}(M_{1})\ast\pi_{1}(M_{2})\ast\cdots\ast\pi_{1}(M_{k})\). Let \(G_{i}:=\pi_{1}(M_{i})\). Note that \(\pi_{1}(M)\) is hyperbolic relative to the collection \(\mathbb{P}=\{G_{1},\ldots,G_{k}\}\). In light of Lemma 4.3, the \(\mathcal{H}\)-inaccessibility of \(\pi_{1}(M)\) follows whenever some \(G_{i}\) satisfies the conditions of Lemma 2.3. Hence, it suffices to investigate the \(\mathcal{H}\)-accessibility of the groups \(G_{i}\).
If \(M\) has empty or toroidal boundary, then \(\mathcal{H}\)-accessibility of \(\pi_{1}(M)\) is understood, except for a few sporadic cases, by Theorem 1.1. The following proposition addresses certain manifolds with higher genus boundary.
**Proposition 4.10**.: _Let \(M\) be a compact, orientable, irreducible, \(\partial\)-irreducible 3-manifold which has at least one boundary component of genus at least \(2\). Then \(\pi_{1}(M)\) is \(\mathcal{H}\)-inaccessible under either of the following hypotheses:_
1. \(M\) _has trivial torus decomposition and at least one torus boundary component; or_
2. \(M\) _has non-trivial torus decomposition._
_On the other hand, if \(M\) has trivial torus decomposition and all boundary components have genus at least \(2\), then \(\pi_{1}(M)\) is \(\mathcal{H}\)-accessible._
Proof.: As in [10, Section 6.3], we can paste compact hyperbolic \(3\)-manifolds with totally geodesic boundaries to the higher genus boundary components of \(M\) to obtain a finite volume hyperbolic manifold \(N\) (in case \(M\) has trivial torus decomposition) or a mixed \(3\)-manifold (in case \(M\) has non-trivial torus decomposition).
If (1) holds, then the manifold \(N\) has toroidal boundary, and, by assumption, there is a boundary torus \(T\) for \(N\) which is also a boundary torus of \(M\).
The subgroup \(P:=\pi_{1}(T)\simeq\mathbb{Z}^{2}\) satisfies Lemma 2.3 and is a peripheral subgroup in the relatively hyperbolic structure on \(\pi_{1}(N)\). The proof of Lemma 4.3 shows that there are commuting elements \(a,b\in P\) and hyperbolic actions \(\pi_{1}(N)\curvearrowright Z_{X}\) and \(\pi_{1}(N)\curvearrowright Z_{Y}\) such that \(a\) and \(b\) act loxodromically and elliptically, respectively, in the action \(\pi_{1}(N)\curvearrowright Z_{X}\), and \(b\) acts loxodromically in the action \(\pi_{1}(N)\curvearrowright Z_{Y}\). As \(\pi_{1}(M)\) is a subgroup of \(\pi_{1}(N)\), we obtain induced actions \(\pi_{1}(M)\curvearrowright Z_{X}\) and \(\pi_{1}(M)\curvearrowright Z_{Y}\). Since \(a,b\in\pi_{1}(M)\), we see that \(\pi_{1}(M)\) is \(\mathcal{H}\)-inaccessible by Lemma 2.3.
If (2) holds, then \(N\) has either empty or toroidal boundary and has the following properties:
1. \(M\) is a submanifold of \(N\) with incompressible toroidal boundary;
2. cutting \(N\) along the tori in the torus decomposition of \(M\) yields the torus decomposition of \(N\); and
3. each piece of \(M\) with a boundary component of genus at least \(2\) is contained in a hyperbolic piece of \(N\).
In particular, it follows from (ii) and (iii) that \(N\) is a mixed \(3\)-manifold, and hence \(\pi_{1}(N)\) is \(\mathcal{H}\)-inaccessible by Theorem 1.1.
In the proof of Theorem 1.1, we prove the \(\mathcal{H}\)-inaccessibility of \(\pi_{1}(N)\) by showing there are two commuting elements \(a,b\in\pi_{1}(T)\) for some torus \(T\) in the torus decomposition of \(N\) and isometric actions \(\pi_{1}(N)\curvearrowright Z_{X}\) and \(\pi_{1}(N)\curvearrowright Z_{Y}\) on hyperbolic spaces, and then applying Lemma 2.3.
By (ii), \(T\) is also a torus in the torus decomposition of \(M\). Thus the induced actions \(\pi_{1}(M)\curvearrowright Z_{X}\) and \(\pi_{1}(M)\curvearrowright Z_{Y}\) satisfy the hypotheses of Lemma 2.3, and so \(\pi_{1}(M)\) is \(\mathcal{H}\)-inaccessible.
We now turn our attention to the final statement of the proposition. In this case, the manifold \(N\) is closed. A finitely generated subgroup \(H\) of \(\pi_{1}(N)\) is a _virtual surface fiber subgroup_ if \(N\) admits a finite cover \(N^{\prime}\to N\) such that \(H\) is a subgroup of \(\pi_{1}(N^{\prime})\) and \(H\) is a surface fiber subgroup of \(\pi_{1}(N^{\prime})\). Any finitely generated subgroup \(H\) of \(\pi_{1}(N)\) is either a geometrically finite Kleinian group or a virtual surface fiber subgroup in \(\pi_{1}(N)\) by the Covering Theorem (see [1]) and the Subgroup Tameness Theorem (see [1, 1] or [1, Theorem 4.1.2] for a statement). In particular, \(\pi_{1}(M)\) is either a virtual surface fiber subgroup, in which case it is hyperbolic, or it is geometrically finite in \(\pi_{1}(N)\). In the latter case, \(\pi_{1}(M)\) is undistorted in \(\pi_{1}(N)\)[1, Corollary 1.6], and we again conclude that \(\pi_{1}(M)\) is hyperbolic, since undistorted subgroups of hyperbolic groups are hyperbolic. As a result, in either case, \(\pi_{1}(M)\) is \(\mathcal{H}\)-accessible.
|
2307.09271 | Cosmology from Large Populations of Galaxy-Galaxy Strong Gravitational
Lenses | We present a forecast analysis on the feasibility of measuring the
cosmological parameters with a large number of galaxy-galaxy scale strong
gravitational lensing systems. Future wide area surveys are expected to
discover and measure the properties of more than 10 000 strong lensing systems.
We develop a hierarchical model that can simultaneously constrain the lens
population and cosmological parameters by combining Einstein radius
measurements with stellar dynamical mass estimates for every lens.
Marginalizing over the lens density profiles and stellar orbital anisotropies,
we find that $w$ can be constrained to a precision of $0.11$ with 10 000
galaxy-galaxy lens systems, which would be better than any existing
single-probe constraint. We test our method on 161 existing lenses, finding
$w=-0.96\pm0.46$. We also show how to mitigate against the potential systematic
of redshift evolution in the mean lens density profile of the population. | Tian Li, Thomas E. Collett, Coleman M. Krawczyk, Wolfgang Enzi | 2023-07-18T14:03:30Z | http://arxiv.org/abs/2307.09271v1 | # Cosmology from Large Populations of Galaxy-Galaxy Strong Gravitational Lenses
###### Abstract
We present a forecast analysis on the feasibility of measuring the cosmological parameters with a large number of galaxy-galaxy scale strong gravitational lensing systems. Future wide area surveys are expected to discover and measure the properties of more than 10 000 strong lensing systems. We develop a hierarchical model that can simultaneously constrain the lens population and cosmological parameters by combining Einstein radius measurements with stellar dynamical mass estimates for every lens. Marginalizing over the lens density profiles and stellar orbital anisotropies, we find that \(w\) can be constrained to a precision of \(0.11\) with 10 000 galaxy-galaxy lens systems, which would be better than any existing single-probe constraint. We test our method on 161 existing lenses, finding \(w=-0.96\pm 0.46\). We also show how to mitigate against the potential systematic of redshift evolution in the mean lens density profile of the population.
keywords: (cosmology:) cosmological parameters - cosmology: observations - gravitational lensing: strong - galaxies: structure
## 1 Introduction
The acceleration of the universe has been discovered through Type-Ia supernovae (Perlmutter et al., 1999; Riess et al., 1998), and has been measured by several observational probes, including Cosmic Microwave Background (CMB) Anisotropies, Baryon Acoustic Oscillations, Weak Gravitational Lensing, Galaxy Clustering, and Redshift Space Distortion (Mortonson et al., 2013). These observations concluded that under the flat \(\Lambda\)CDM model, the dark energy density makes up \(70\%\) of the universe today, and has an equation of state of \(w\approx-1\). However, the exact nature of dark energy and dark matter still remains unknown. The 5\(\sigma\) discrepancy in the Hubble constant between Planck's CMB measurement (Planck Collaboration et al., 2014, 2016, 2020) and local measurements of supernovae (Riess et al., 2011; Freedman et al., 2012; Riess et al., 2016) also suggests potential new physics beyond the \(\Lambda\)CDM model.
In addition to the above methods, galaxy scale strong gravitational lensing provides an independent probe to constrain the cosmological parameters. Strong gravitational lensing occurs when two galaxies align perfectly to our line of sight, such that the light from the background source galaxy will be distorted and magnified by the foreground lens galaxy, resulting in multiple sources or an Einstein ring in the observed image (Einstein, 1936; Zwicky, 1937a, b). The radius of the Einstein ring (Einstein radius), relative positions, flux ratios, and time delays between multiple images depend both on the gravitational potential of the lens galaxy and angular diameter distances between the observer, lens galaxy, and source galaxy. It is the dependence on the angular diameter distances that makes strong lensing sensitive to cosmological parameters. Figure 1 illustrates the sensitivity of the Einstein radius to the equation of state of dark energy. The most well-studied method of constraining cosmology with strong lensing is time-delay cosmography, which uses the temporal variation of a gravitationally lensed quasar or supernova to constrain the Hubble constant (Refsdal, 1964; Birrer et al., 2022; Treu & Marshall, 2016; Treu et al., 2022). Alternatively, the equation of state of dark energy, \(w\), can be measured from systems with sources at multiple redshifts (Gavazzi et al., 2008): Collett & Auger (2014) used a single double source plane lens to infer \(w=-0.99^{+0.19}_{-0.22}\). Aside from single lens analyses, statistical analyses of the ensemble of lens systems can also place bounds on cosmological parameters. Oguri et al. (2008), Chae et al. (2002), and Chae (2007) studied lenses from Cosmic Lens All-sky Survey (CLASS, Myers et al., 2003) and The Sloan Digital Sky Survey Quasar Lens Search (SQLS, Inada et al., 2012). They measured \(w\) through comparing the empirical distribution of image separations in observed samples of lenses with theoretical models. Since the number of lens systems is low, the constraint on \(w\) using this method is weak (e.g., \(w=-1.1\pm 0.6^{+0.3}_{-0.5}\), Oguri et al. (2008)).
The current cosmology analyses with strong lensing systems are limited by several factors. The primary issue is that the sample of known galaxy-scale lenses is only a few hundred systems, discovered in several surveys with heterogeneous selection functions. Once a sample of time-delay or compound lenses has been selected only a handful of suitable systems have adequate data for precision cosmography, e.g the latest TDCOSMO sample has only 6 time-delay lenses (Birrer et al., 2020). Accurate and efficient lens modelling is also a challenge. The mass distribution in the lens must be inferred to convert lensing observables into cosmological constraints. This challenge is compounded by the mass-sheet degeneracy, where different mass models can produce identical strong lensing observables but imply different cosmological parameters (Schneider & Sluse, 2013).
Future optical imaging surveys, including Euclid and LSST, are predicted to discover more than \(10^{6}\) galaxy-galaxy strong lensing systems (Collett, 2015). Euclid will provide high-resolution images, such that lens modelling could plausibly be performed for every lens, without the need for additional imaging data. However, almost all strong lensing science requires accurate lens and source redshifts, and the 4MOST Strong Lensing Spectroscopic Legacy Survey (4SLSLS) is expected to obtain the spectrum of \(\approx\) 10 000 strong lensing systems (Collett et al., 2023).
Combining lensing and stellar dynamics opens up a new statistical method to constrain cosmology. Gravitational lensing determines the mass inside Einstein Radius, and stellar kinematics determines the gravitational potential within which the stars are moving (Koopmans, 2006). Assuming general relativity, the dynamic mass enclosed with the Einstein radius must equal the gravitational mass measured using the Einstein radius.
Thus the combination of lensing and dynamical observables are sensitive to the mass profile of the lens, the orbital properties of its stars, and the cosmological distances (Futamase and Yoshida, 2001; Grillo et al., 2008). The challenge inherent to this method is that the mass profile of lenses and the orbital profiles of their stars are not well known. To produce cosmological constraints, either assumptions must be made about the lenses, or the cosmological parameters and lens properties must be inferred simultaneously. Biesiada et al. (2010) applied the lensing plus dynamics method on 20 lens systems. By assuming a SIS mass profile for all lens galaxies, they found \(\Omega_{M}=0.27\pm 0.28\), and \(w=-0.63\pm 0.45\). Cao et al. (2015) and Chen et al. (2019) improved on this by fitting \(\sim 100\) lenses with powerlaw density profiles.
In this paper, we investigate how well the combination of Euclid and 4MOST data for 10 000 lenses can constrain the cosmological parameters. We employ a Bayesian hierarchical model and simultaneously fit for the cosmological parameters and the ensemble properties of the lens galaxies, including the density profile slope and the stellar orbital anisotropy. It is hard to perform detailed modelling for 10 000 lens systems, so we assume that we can only use catalog-level data in our analysis.
The rest of the paper is organized as follows. In Section 2, we present the mass model of the lens galaxy and the equation that relates the galaxy's velocity dispersion to cosmological parameters. We then introduce the properties of the mock data and discuss potential future surveys for data acquisition. In Section 3, we describe the hierarchical model used to simultaneously fit the lens galaxy properties and cosmology. In Section 4, we present the results obtained under different cosmological models and data measurement accuracies. The final section summarizes the main conclusions. In this paper, the fiducial cosmology for the mock data set is as follows: \(\Omega_{\rm M}=0.3\), \(\Omega_{\Lambda}=0.7\), \(\Omega_{k}=0\), and \(w=-1\).
## 2 Theory
The mass enclosed within the Einstein ring can be related to Einstein radius through:
\[\theta_{\rm E}=\sqrt{\frac{4GM\left(\theta_{\rm E}\right)}{c^{2}}\frac{D_{\rm ls}}{D_{\rm l}D_{\rm s}}} \tag{1}\]
where \(D_{\rm l}\) is the angular diameter distance of the lens galaxy, \(D_{ls}\) is the angular diameter distance between lens and source, and \(D_{s}\) is that between observer and source. \(\theta_{E}\) is the angular Einstein radius. \(M(\theta_{\rm E})\) is the galaxy mass enclosed within the Einstein radius, and the angular diameter distance is:
\[D_{\rm ij}=\frac{c/H_{0}}{(1+z_{\rm j})}\frac{{\rm sinn}\left(\sqrt{|\Omega_{k}|}\int_{z_{\rm i}}^{z_{\rm j}}\frac{{\rm d}z}{E(z)}\right)}{\sqrt{|\Omega_{k}|}} \tag{2}\]
where sinn(x) = sinh(x), x, or sin(x) for open (\(\Omega_{k}>0\)), flat (\(\Omega_{k}=0\)), or closed (\(\Omega_{k}<0\)) universes respectively, and E(z) is the normalised Hubble parameter:
\[E(z)=\sqrt{\Omega_{\rm M}(1+z)^{3}+\Omega_{k}(1+z)^{2}+\Omega_{\Lambda}(1+z)^ {3(1+w)}} \tag{3}\]
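As a concrete illustration of equations (1)-(3), the following minimal Python sketch (not the authors' code) evaluates the angular diameter distances and the Einstein radius numerically; for brevity it handles only the flat (\(\Omega_{k}=0\)) case, so sinn(x) reduces to x, and all parameter values are illustrative.

```python
# Minimal sketch of equations (1)-(3): E(z), angular diameter distances, and
# the Einstein radius of a lensing mass M, assuming a flat universe.
import numpy as np
from scipy.integrate import quad
from scipy.constants import c, G          # SI units

H0 = 70.0 * 1e3 / 3.0857e22               # 70 km/s/Mpc expressed in 1/s
Om, OL, w = 0.3, 0.7, -1.0                # illustrative flat-wCDM parameters

def E(z):
    return np.sqrt(Om * (1 + z)**3 + OL * (1 + z)**(3 * (1 + w)))

def D_ij(zi, zj):
    """Angular diameter distance (metres) from redshift zi to zj (flat case)."""
    chi, _ = quad(lambda z: 1.0 / E(z), zi, zj)   # dimensionless comoving integral
    return (c / H0) * chi / (1 + zj)              # sinn(x) reduces to x when Omega_k = 0

def theta_E(M_kg, zl, zs):
    """Einstein radius (radians) for a lens of mass M at zl with a source at zs."""
    Dl, Ds, Dls = D_ij(0, zl), D_ij(0, zs), D_ij(zl, zs)
    return np.sqrt(4 * G * M_kg / c**2 * Dls / (Dl * Ds))

# Example: a 1e11 solar-mass Einstein mass at z_l = 0.2 with a source at z_s = 1.0
M_sun = 1.989e30
print(np.degrees(theta_E(1e11 * M_sun, 0.2, 1.0)) * 3600, "arcsec")   # ~1 arcsec
```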
When combining lensing and dynamics, the cosmological model is not directly probed by the measurement of a single distance, but instead through a ratio of distances \(\frac{D_{s}}{D_{ls}}\). However, to make this measurement we need a model that can connect the dynamical data with the lensing mass. Since lenses are typically early-type galaxies (ETGs) with E/S0 morphologies (Oguri and Marshall, 2010), we use power-law profiles for both the total mass density and the stellar luminosity density (Koopmans, 2006):
\[\begin{split}\rho(r)&=\rho_{0}\left(\frac{r}{r_{0}}\right)^{-\gamma}\\ \nu(r)&=\nu_{0}\left(\frac{r}{r_{0}}\right)^{-\delta}\\ \beta(r)&=1-\frac{\sigma_{\theta}^{2}}{\sigma_{r}^{2}}\end{split} \tag{4}\]
where \(\rho(r)\) is the mass density (including dark matter) distribution function. \(\nu(r)\) is the luminosity density of stars. \(\beta(r)\) is the anisotropy of the stellar velocity dispersion (stellar orbital anisotropy), where \(\sigma_{\theta}\) and \(\sigma_{r}\) are the tangential and radial velocity dispersion, respectively. \(\beta\) ranges from +1 to \(-\infty\), where \(\beta=0\) corresponds to the "isotropic" case, \(\beta=1\) corresponds to a galaxy with purely radial stellar motion, and \(\beta=-\infty\) means that the stars in the galaxy only have circular (tangential) motion. In the scenario where \(\gamma=\delta=2\) and \(\beta=0\), the mass model reduces to the Singular Isothermal Sphere (SIS) model, which is a commonly used approximation for the mass profiles of elliptical galaxies (e.g. Auger et al. (2010)).
After solving the spherical Jeans equation and substituting the dynamical mass into equation (1) we get (Koopmans, 2006):
\[\sigma_{\parallel}^{2}(R_{A})=\frac{c^{2}}{2\sqrt{\pi}}\frac{D_{s}}{D_{ls}}\theta_{E}\times f\left(\gamma,\delta,\beta\right)\left(\frac{\theta_{A}}{\theta_{E}}\right)^{2-\gamma}, \tag{5}\]
Figure 1: The Einstein radius as a function of the source redshift, for different values of the equation of state of dark energy. All other parameters are fixed. We assume that the lens galaxy has a redshift of 0.2 and an Einstein mass of \(10^{11}\,\rm M_{\odot}\).
where
\[\begin{split} f\left(\gamma,\delta,\beta\right)=&\ \frac{3-\delta}{(\xi-2\beta)(3-\xi)}\left[\frac{\Gamma[(\xi-1)/2]}{\Gamma(\xi/2)}-\beta\frac{\Gamma[(\xi+1)/2]}{\Gamma[(\xi+2)/2]}\right]\\ &\times\frac{\Gamma(\gamma/2)\Gamma(\delta/2)}{\Gamma[(\gamma-1)/2]\Gamma[(\delta-1)/2]}\end{split} \tag{6}\]
\(\sigma_{\parallel}^{2}(R_{A})\) is the luminosity-averaged line-of-sight velocity dispersion (LOSVD) measured in a circular fibre of radius \(\theta_{A}\)1. The \(\Gamma\)s are Gamma functions, and \(\xi=\gamma+\delta-2\).
Footnote 1: For 4MOST the fibre diameter is 1.45 arcseconds (de Jong et al., 2012).
Solving Equation 5, we find that for a fixed galaxy mass and surface brightness profile, a steeper density profile (larger \(\gamma\)) and a higher velocity anisotropy (larger \(\beta\)) lead to a higher stellar velocity dispersion. Figure 2 shows the value of Equation 6 as a function of \(\gamma\) and \(\beta\), assuming \(\delta=2.173\), which is the typical value of SLACS lenses.
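A small numerical sketch of equations (5)-(6) (illustrative only; the function names are not from the paper) makes the mapping from lens properties to the aperture velocity dispersion explicit. Angles are in radians, so the predicted dispersion comes out in km/s.

```python
# Sketch of equations (5)-(6): the dimensionless factor f(gamma, delta, beta)
# and the predicted luminosity-weighted LOS velocity dispersion in an aperture.
import numpy as np
from scipy.special import gamma as Gam     # Euler Gamma function

C_KMS = 299792.458                         # speed of light in km/s

def f_factor(g, d, b):
    xi = g + d - 2.0
    t1 = (3.0 - d) / ((xi - 2.0 * b) * (3.0 - xi))
    t2 = Gam((xi - 1) / 2) / Gam(xi / 2) - b * Gam((xi + 1) / 2) / Gam((xi + 2) / 2)
    t3 = Gam(g / 2) * Gam(d / 2) / (Gam((g - 1) / 2) * Gam((d - 1) / 2))
    return t1 * t2 * t3

def sigma_ap(theta_E, theta_A, Ds_over_Dls, g, d, b):
    """Aperture velocity dispersion (km/s); theta_E and theta_A in radians."""
    s2 = C_KMS**2 / (2 * np.sqrt(np.pi)) * Ds_over_Dls * theta_E \
         * f_factor(g, d, b) * (theta_A / theta_E)**(2 - g)
    return np.sqrt(s2)

# Sanity check: gamma = delta = 2, beta = 0 recovers the SIS result
# sigma^2 = c^2 theta_E (D_s/D_ls) / (4 pi), roughly 210 km/s for a 1 arcsec
# Einstein radius with D_s/D_ls = 1.3.
arcsec = np.pi / (180 * 3600)
print(sigma_ap(1.0 * arcsec, 0.725 * arcsec, 1.3, 2.0, 2.0, 0.0))
```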
### Mock data
To build our mock sample of lenses, we use the simulated strong lensing population forecast to be observed by Euclid from Collett (2015). We remove all systems with a source redshift greater than 1.5 since the [OII] doublet is redshifted out of the 4MOST wavelength range at this redshift. The sample is built assuming lenses are uniformly distributed in co-moving volume, follow the observed velocity dispersion function of SDSS (Choi et al., 2007), and have an SIS density profile. The source properties use the LSST sky simulations, with redshifts and number counts matched to observations (Connolly et al., 2010). These assumptions produce a realistic lens population, but lack the complexities of a non-SIS density profile, or of stellar anisotropy. Therefore, we take only the lens galaxy redshift, source galaxy redshift, and the Einstein radius in the data set. We assign \(\gamma\), \(\beta\), and \(\delta\) values to each lens galaxy to produce new velocity dispersions.
Observations show that \(\gamma\) has a distribution of \(2.078\pm 0.16\) (Auger et al., 2010), \(\delta\) has a distribution of \(2.173\pm 0.085\) (Chen et al., 2019), and \(\beta=0.18\pm 0.13\) (Schwab et al., 2010; Bolton et al., 2006). We generate mock galaxies using these distributions. The true values of velocity dispersion are generated through equation (5). Then, we add random noise to these values to simulate measurement error. The measurement error of the Einstein radius is set to 0.01 arcsec, and the error on the velocity dispersion of 4MOST is 10 km/s. We neglect errors on \(\delta\) and the redshifts since they can be measured with high accuracy. In our fiducial model, we assume that \(\beta\) and \(\gamma\) for each lens are unknown. The full setup of mock data is shown in Table 1.
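A hedged sketch of this mock-generation step is given below, reusing the `sigma_ap` helper defined above; the Einstein radii, aperture sizes, and distance ratios shown are placeholders standing in for the Collett (2015) catalogue values.

```python
# Sketch of the mock catalogue: draw per-lens nuisance parameters from the
# quoted population distributions and add observational noise (assumed Gaussian).
import numpy as np

rng = np.random.default_rng(0)
n_lens = 10_000
arcsec = np.pi / (180 * 3600)

gamma_i = rng.normal(2.0, 0.16, n_lens)        # total density-profile slopes
delta_i = rng.normal(2.173, 0.085, n_lens)     # luminosity-profile slopes
beta_i  = rng.normal(0.18, 0.13, n_lens)       # stellar orbital anisotropies

# Placeholders for catalogue quantities (illustrative only):
theta_E_true = rng.uniform(0.5, 2.0, n_lens) * arcsec   # Einstein radii
theta_A      = np.full(n_lens, 0.725 * arcsec)          # 1.45" fibre -> 0.725" radius
Ds_over_Dls  = rng.uniform(1.1, 2.0, n_lens)            # distance ratios

sigma_true = sigma_ap(theta_E_true, theta_A, Ds_over_Dls, gamma_i, delta_i, beta_i)

sigma_obs   = sigma_true + rng.normal(0.0, 10.0, n_lens)             # 10 km/s errors
theta_E_obs = theta_E_true + rng.normal(0.0, 0.01 * arcsec, n_lens)  # 0.01 arcsec errors
```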
## 3 Hierarchical model
Since we do not know the mass profile or orbital anisotropy of individual lenses a priori, and they are not easily measured without detailed lens modelling and integral field unit kinematics, we instead require a hierarchical model to connect the measured Einstein radius and the velocity dispersion of every lens with the underlying cosmological parameters of the Universe. In fact, even the population properties for lens density profiles and stellar anisotropies are not well known once the cosmology is allowed to be a free parameter. For our population model, we assume that density profile slopes and stellar anisotropies follow a Gaussian distribution. Our hierarchical model must therefore fit for the ensemble mean density profile slope, \(\langle\gamma\rangle\), and scatter, \(\sigma_{\gamma}\), the ensemble mean anisotropy \(\langle\beta\rangle\), and scatter, \(\sigma_{\beta}\), the individual slopes, \(\gamma_{i}\), and anisotropies, \(\beta_{i}\), of each lens, and the underlying cosmological model parameters. Figure 3 illustrates the structure of our hierarchical model, with the observed Einstein radii and velocity dispersions linked to the model parameters through equation 5.
### Sampling the parameters of our hierarchical model
Our hierarchical model has a large number of free parameters (\(\sim\) 3 per observed lens \(\times\) 10 000 lenses), so traditional MCMC methods are not sufficient for exploring such a posterior distribution. To that end, we make use of the computational frameworks initially developed for large machine learning models, specifically the NumPyro (Phan et al., 2019; Bingham et al., 2019) probabilistic programming language (PPL).
| Property name | Distribution | Measurement error |
| --- | --- | --- |
| \(\gamma\) | \(2\pm 0.16\) | 0.02* |
| \(\delta\) | \(2.173\pm 0.085\) | None |
| \(\beta\) | \(0.18\pm 0.13\) | – |
| \(\sigma_{v}\) | Equation 5 | 10 km/s |
| \(\theta_{\rm E}\) | Collett (2015) | 0.01 arcsec |
| \(z_{l},z_{s}\) | Collett (2015) | None |

*\(\gamma\) is only treated as an observable in Section 5

Table 1: The parameters of the mock strong lensing systems used in this work. Einstein radius and redshifts are taken from Collett (2015), with source redshifts below \(z_{s}<1.5\). The velocity dispersions are computed from the other parameters using equation 5. Among the above properties, \(\delta\), redshift, Einstein radius, and velocity dispersion are treated as observables. The measurement errors of \(\delta\) and redshift are negligible. \(\gamma\) can also be measured through detailed lens modelling, but we treat it as a free parameter except at the end of Section 5.

Figure 2: The value of the function that links Einstein radius and velocity dispersion as a function of the orbital anisotropy and lens density profile slope. It is given in equation 6 as a function of \(\gamma\) and \(\beta\). We have fixed \(\delta=2.173\). The white regions are unphysical as the luminosity-weighted mass diverges here.

NumPyro is an extension of the Pyro (Bingham et al., 2019) framework
that uses JAX\({}^{2}\) for automatic differentiation and adds various Hamiltonian Monte-Carlo (HMC) (Duane et al., 1987; Brooks et al., 2011) sampling methods. Automatic differentiation is a technique that efficiently computes the partial derivatives of a function without the need to write any new code (Baydin et al., 2018), allowing the gradients of the likelihood function to be evaluated for all free parameters in the model.
We estimate the posterior distribution through NumPyro's No-U-Turn-Sampler (NUTS). This is an HMC sampler that uses gradient information to find new draw proposals from the likelihood. Its main advantages over traditional MCMC methods are that it produces chains with small auto-correlation values (e.g. each draw is independent), requires relatively few warm-up steps, and can draw samples from very high-dimensional distributions without issue.
The priors of the model parameters are as follows:
\[\text{Cosmology}:\ \ldots\]
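To make the structure of Figure 3 concrete, a minimal NumPyro version of such a hierarchical model is sketched below. This is not the authors' code: the prior ranges are illustrative placeholders (they do not reproduce the priors used in the paper), the distance ratio is computed for a flat \(w\)CDM universe only, and the per-lens slopes and anisotropies sit inside a `plate` so that NUTS can sample the tens of thousands of per-lens nuisance parameters jointly.

```python
# Hedged sketch of the hierarchical lensing+dynamics model in NumPyro.
import jax
import jax.numpy as jnp
from jax.scipy.special import gammaln
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

C_KMS = 299792.458

def gam(x):                                   # Gamma function via log-Gamma
    return jnp.exp(gammaln(x))

def f_factor(g, d, b):                        # equation (6)
    xi = g + d - 2.0
    t1 = (3.0 - d) / ((xi - 2.0 * b) * (3.0 - xi))
    t2 = gam((xi - 1) / 2) / gam(xi / 2) - b * gam((xi + 1) / 2) / gam((xi + 2) / 2)
    t3 = gam(g / 2) * gam(d / 2) / (gam((g - 1) / 2) * gam((d - 1) / 2))
    return t1 * t2 * t3

def ds_over_dls(zl, zs, Om, w, zmax=2.0, n=512):
    """D_s/D_ls for a flat wCDM universe (H0 cancels in the ratio)."""
    zg = jnp.linspace(0.0, zmax, n)
    invE = 1.0 / jnp.sqrt(Om * (1 + zg)**3 + (1 - Om) * (1 + zg)**(3 * (1 + w)))
    chi = jnp.concatenate([jnp.zeros(1),
                           jnp.cumsum(0.5 * (invE[1:] + invE[:-1]) * jnp.diff(zg))])
    chi_l, chi_s = jnp.interp(zl, zg, chi), jnp.interp(zs, zg, chi)
    return chi_s / (chi_s - chi_l)

def model(zl, zs, theta_E, theta_A, delta, sigma_obs, sigma_err=10.0):
    # Population-level parameters (illustrative priors, not the paper's):
    Om   = numpyro.sample("Omega_M",  dist.Uniform(0.05, 0.6))
    w    = numpyro.sample("w",        dist.Uniform(-2.5, -0.3))
    mu_g = numpyro.sample("mu_gamma", dist.Uniform(1.5, 2.5))
    sd_g = numpyro.sample("sd_gamma", dist.Uniform(0.01, 0.5))
    mu_b = numpyro.sample("mu_beta",  dist.Uniform(-0.6, 0.6))
    sd_b = numpyro.sample("sd_beta",  dist.Uniform(0.01, 0.5))
    ratio = ds_over_dls(zl, zs, Om, w)
    with numpyro.plate("lenses", theta_E.shape[0]):
        g_i = numpyro.sample("gamma_i", dist.Normal(mu_g, sd_g))
        b_i = numpyro.sample("beta_i",  dist.Normal(mu_b, sd_b))
        s2 = C_KMS**2 / (2 * jnp.sqrt(jnp.pi)) * ratio * theta_E \
             * f_factor(g_i, delta, b_i) * (theta_A / theta_E)**(2 - g_i)
        numpyro.sample("sigma", dist.Normal(jnp.sqrt(s2), sigma_err), obs=sigma_obs)

# mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=1000)
# mcmc.run(jax.random.PRNGKey(0), zl, zs, theta_E, theta_A, delta, sigma_obs)
```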
## 4 Results
We apply our hierarchical approach to model up to 10 000 lens systems and fit three different cosmological models. Aside from the \(\Lambda\)CDM model, we fit the \(w\)CDM model, where the equation of state of dark energy is not fixed at -1. We also fit the \(ow\)CDM model, which allows the universe to have positive or negative curvature.
Figure 4 shows the posterior distributions of the \(\Lambda\)CDM, \(w\)CDM, and \(ow\)CDM models. The marginalized results are also shown in the figure. The parameters of the lens populations are accurately recovered within the 68\(\%\) confidence level. In particular, the mean of \(\gamma\) is constrained with high precision (\(\pm\) 0.03 level). Figure 5 shows the offset between the recovered and input values of the nuisance parameters in our model, i.e. the \(\gamma_{i}\) and \(\beta_{i}\); the standard deviations of the differences are both 0.12. This number is smaller than the intrinsic scatter of \(\gamma\) and \(\beta\) (0.16 and 0.13), and demonstrates that the model is accurately recovering these nuisance parameters as well as the population and cosmological parameters that we are primarily interested in.
Every model successfully recovers the input cosmological parameters. For \(w\)CDM, we find \(w=-0.98\pm 0.11\). This is more precise than current individual cosmological probes: the Pantheon Type Ia supernovae data find \(w=-0.90\pm 0.14\)(Brout et al., 2022); baryon acoustic oscillations (BAO) from eBOSS give \(w=-0.69\pm 0.15\)(Alam et al., 2021). However, our results are less constraining on \(w\) than the combination of both datasets and cosmic microwave background data from Planck, which yields \(w\) = -1.03 \(\pm\) 0.03 (Planck Collaboration et al., 2020).
We find some strong covariances between the cosmological parameters and the lens population parameters of our hierarchical model. The most noticeable are covariances between \(\langle\beta\rangle\) and \(\Omega_{M}\), \(\Omega_{k}\), and \(w\). This degeneracy arises from the fact that when mass is held constant, varying \(\langle\beta\rangle\) leads to different velocity dispersions and, consequently, requires different cosmological parameters. Similarly, the mean density profile slope, \(\langle\gamma\rangle\), is covariant with cosmology, since it governs the mapping from velocity dispersion to Einstein radius.
### The impact of data quality on the cosmological constraining power of lens samples
The previous section was based on the assumption that 4MOST delivers 10 000 strong lenses with a velocity dispersion measured to a precision level of 10 km/s. In practice, it is difficult to accurately constrain the velocity dispersion of the lens galaxy in a spectroscopic survey. First, not all lens galaxies will be bright enough for their spectra to have sufficient signal-to-noise ratio (SNR) to measure with 4MOST (Iovino et al., 2023). Second, measuring velocity dispersions from spectra is intrinsically hard, since we do not perfectly know the correct choice of stellar templates for the spectral energy distributions of lens galaxies (Collett et al., 2018; Newman et al., 2017). Thirdly, continuum contamination from the source galaxy may be challenging to remove (but see Turner et al, 2023, submitted)
In this section, we explore how the constraining power for the \(w\)CDM cosmological model varies when we have different sample sizes or velocity dispersion measurement errors. The results are shown in the top panel of figure 6. As expected, the constraining power on \(w\) improves as we increase the number of samples or improve the velocity dispersion measurement. The \(w\) constraint versus sample size roughly follows \(\sigma_{w}=10\times N^{-0.5}\) as shown in the bottom panel of figure 6. For 4MOST observations, there will be a trade-off between observing more lens systems with relatively low SNR (resulting in larger LOSVD errors) and obtaining longer exposure time for each target (resulting in higher SNR but lower sample size). We find that generally, decreasing the LOSVD error by a factor of two (improving \(w\) by a factor of \(\approx\) 1.3-1.7) is comparably helpful for \(w\) as increasing the sample size by a factor of two.
As shown in figure 5, the individual \(\beta_{i}\) are constrained comparably well to direct measurements of the circular velocity curve using HST (0.12 vs 0.05-0.4, Gerhard et al., 2001). The individual \(\gamma_{i}\) error (0.12) is about 5 times larger than the typical result from lens modeling (e.g., 2.11 \(\pm\) 0.04 by Dye and Warren (2005), 1.96 \(\pm\) 0.02 by Dye et al. (2008), and 2.08 \(\pm\) 0.03 by Suyu et al. (2010)). This indicates that adding constraints on individual \(\gamma_{i}\) through lens modeling, instead of leaving them as nuisance parameters would substantially improve the cosmological constraining power.
### Evolving Equation of State of Dark Energy
The equation of state of dark energy (\(w\)) is defined as the ratio of pressure to energy density. In many theoretical cosmological models, \(w\) evolves with redshift (e.g. Peebles and Ratra (1988); Caldwell (2002); Feng et al. (2005)). We test our method on an evolving \(w\) where \(w(z)\) = \(w_{0}+w_{a}\frac{z}{1+z}\) (Caldwell et al., 1998). Since our mock data has fixed \(w=-1\), the fiducial value for this model is \(w_{0}=-1\), \(w_{a}=0\) (Chevallier and Polarski, 2001; Linder, 2003).
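For reference (a standard result, not stated explicitly above), the constant-\(w\) dark-energy term in equation (3) generalises under this parameterisation to

\[\frac{\rho_{\rm DE}(z)}{\rho_{\rm DE,0}}=(1+z)^{3(1+w_{0}+w_{a})}\exp\left(-\frac{3w_{a}z}{1+z}\right),\]

so that \(w_{a}\) controls how quickly the dark-energy density departs from the \(w_{0}\)-only behaviour at higher redshift.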
Our forecast constraints for 10 000 lenses are shown in Figure 7. The 1D marginalized posteriors give \(w_{a}=0.0^{+0.8}_{-1.4}\), \(w_{0}=-1.0^{+0.5}_{-0.4}\). Since these parameters are strongly covariant, we use the figure of merit (FoM) from equation 6 in Mortonson et al. (2010) to quantify the overall dark energy constraining power:
\[\mathrm{FoM}=\frac{6.17\pi}{A_{95}}, \tag{7}\]
where \(A_{95}\) is the area enclosed within the 95\(\%\) confidence contour in the \(w_{0}-w_{a}\) plane. Larger FoMs therefore imply better dark energy constraining power. The FoM of our 10 000 lens result is 15. As a comparison, the 10-year LSST forecast for the combination of weak lensing and Large Scale Structure has a FoM of 49 (The LSST Dark Energy Science Collaboration et al., 2018), while for LSST 10-year supernovae the FoM is expected to be 32. We find that the galaxy-galaxy strong lensing constraints have a very similar \(w_{0}-w_{a}\) covariance to that forecast for weak lensing. This is to be expected since both methods constrain the distance ratio \(D_{s}/D_{ls}\).
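In the Gaussian-posterior limit the area inside the \(\Delta\chi^{2}=6.17\) contour is \(6.17\pi\sqrt{\det C}\), so equation (7) reduces to approximately \(1/\sqrt{\det C}\), with \(C\) the \(w_{0}\)-\(w_{a}\) covariance. A small sketch of this approximation (not the exact contour-area calculation) is:

```python
# Approximate dark-energy figure of merit from MCMC samples, assuming the
# w0-wa posterior is close to Gaussian so that FoM ~ 1/sqrt(det Cov).
import numpy as np

def dark_energy_fom(w0_samples, wa_samples):
    cov = np.cov(np.vstack([w0_samples, wa_samples]))
    return 1.0 / np.sqrt(np.linalg.det(cov))

# Example with fake, uncorrelated draws (sigma_w0 = 0.45, sigma_wa = 1.1):
rng = np.random.default_rng(1)
print(dark_energy_fom(rng.normal(-1.0, 0.45, 20_000),
                      rng.normal(0.0, 1.1, 20_000)))   # ~2 for these widths
```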
## 5 The impact of the evolution of lens galaxy population
The results of the previous section were based on mock data where \(\gamma\) and \(\beta\) follow Gaussian distributions that do not evolve with redshift. However, this assumption may not be true of the real Universe. Lens galaxies might have evolving density slopes for various reasons. At high redshift, dissipative processes like gas-rich mergers dominate, leading to steeper total density slopes (large \(\gamma\), see Remus et al. (2013); Sonnenfeld et al. (2014)). At redshifts lower than \(z\sim 2\) (which includes all the lenses in our sample), dissipationless processes like gas-poor mergers can flatten the density slopes (e.g., Hilz et al. (2012, 2013)).
Hydrodynamic simulations show that the mass-density slope of early-type galaxies (ETGs) becomes flatter at lower redshift (Johansson et al., 2012; Remus et al., 2013, 2017; Wang et al., 2020). However, most strong lensing studies find no evidence for evolving density slopes with redshift (Koopmans, 2006; Auger et al., 2010; Ruff et al., 2011; Bolton et al., 2012; Barnabe et al., 2011; Cui et al., 2013).
Figure 4: Posterior distributions and 1D marginalized posterior distributions of the cosmological parameters and the lens population parameters for 10 000 lens systems. Grey assumes an \(ow\)CDM universe, red is for \(w\)CDM, and blue is for \(\Lambda\)CDM universe. \(\gamma\) and \(\sigma_{\gamma}\) are the population density profile slope mean and standard deviation respectively. \(\beta\) and \(\sigma_{\beta}\) are the mean and standard deviation values of the population’s velocity anisotropy. The contour shows the 68\(\%\) and 95\(\%\) confidence levels, while the grey dashed line represents the fiducial value which was used to generate our mock data.
2017), or a slight decrease in density slopes with redshift (Sonnenfeld et al., 2013; Cao et al., 2015, 2016; Holanda et al., 2017). This might suggest that strong lensing surveys are biased towards certain lens populations, or that there is no evolution at all at low redshift. Since the possibility of an evolving mass profile has not been ruled out, we explore the robustness of our method when applied to an evolving lens population.
According to the simulations of Remus et al. (2017), the evolving total mass profile can be approximated by a linear relation: \(\gamma_{z}=0.21z+2.03\). This equation gives unreasonably steep profiles compared to observed lenses, so for our new mock data we use the evolution of Remus et al. (2017), but fix the mean and scatter to give a total population that has the same average slope and scatter as in Section 4. Thus we draw our density slopes from \(\langle\gamma_{z}\rangle=0.21z+1.89\) with a scatter of 0.155.
We first try to fit the redshift-evolving lens dataset with the non-evolving hierarchical model used in Section 4. The resulting cosmology posterior distributions for 10 000 lens systems are shown in blue in Figure 8 (full result in appendix A). The population mean \(\langle\gamma\rangle\) is recovered, but the cosmology parameters and the stellar anisotropies are systematically incorrect. Thus, a more complex model is needed to deal with this scenario.
We can generalize our hierarchical model to account for population redshift evolution by fitting additional parameters that describe the evolution of the population. In order to keep our parameters as similar as possible to those in section 4, we parameterise our density profile redshift evolution as follows:
\[\gamma_{z}=\langle\gamma\rangle+\Delta_{\gamma}\times(z-0.47)\pm\sigma_{\gamma}. \tag{8}\]
where 0.47 is the mean lens redshift of our population. We set \(\langle\gamma\rangle=2\) as the ensemble mean density profile slope and \(\Delta_{\gamma}=0.21\) as its linear evolution with redshift.
Unlike the model where we ignore the evolution of the population, we find that fitting for the evolution does reproduce the input lens galaxy population parameters (Figure 10). We recover the correct redshift density evolution: \(\Delta_{\gamma}=0.195^{+0.015}_{-0.017}\). Using this method, we are able to constrain \(w\) to -1.1 \(\pm\) 0.07 level (Figure 8), which is better than the \(\pm\) 0.11 error that we found for \(w\)CDM without \(\gamma\) evolution. For a flat \(w_{0}w_{a}\)CDM cosmology, the figure of merit improves to 17 as shown in the top panel of the figure 11. The modest improvements are due to the slightly reduced intrinsic scatter of our mock population.
Alternatively, the population parameters of describing density profiles can be constrained on a lens-by-lens basis using detailed lens modelling of the arcs observed in each system, without the need to know the cosmological model. This is because the slope of the mass density determines the radial derivative of the deflection angles, and thus the radial width of the arc is sensitive to the density profile slope (Dye & Warren, 2005; Suyu et al., 2010; Collett et al., 2018). With image quality comparable to that of HST and Euclid, one can constrain \(\gamma\) to a precision level of \(\sigma_{\gamma}\approx 0.02\)(Meng et al., 2015), although this neglects the impact of the mass-sheet degeneracy (Gorenstein et al., 1988; Saha, 2000; Wucknitz, 2002; Liesenborgs & De Rijcke, 2012; Schneider & Sluse, 2013). Whilst lens modelling at scale is currently challenging, adding a precise prior on \(\gamma_{i}\) for each lens should significantly improve the constraints on cosmological parameters. If we
Figure 5: Histograms of the difference between input and recovered individual nuisance parameters. \(\gamma_{i}^{\rm fit}\) and \(\beta_{i}^{\rm fit}\) are the means of the posteriors for each individual lens. The standard deviations of both fitting errors are comparable to the intrinsic scatters of the population.
Figure 6: Top: 1\(\sigma\) uncertainty on the equation of state of dark energy as a function of the number of lenses and the observational velocity dispersion measurement uncertainty.
Bottom: At a fixed LOSVD error of 10 km/s, the standard deviation of \(w\) as a function of the number of lenses. The black dots represent the model results, compared against the solid curve showing \(10/\sqrt{N}\).
assume that we are able to pre-determine the value of \(\gamma\) for each lens system with a precision level of 0.02, we can treat \(\gamma\) as an observable, similar to the Einstein radius. The measurement of "\(w\)" is greatly improved in this scenario: \(w=-1.01\pm 0.06\) for a 10 000 lenses with no evolution of the population, or \(w=-1.08\pm 0.07\) for the population where \(\Delta_{\gamma}=0.21\). Respectively, the figure of merit improves to 28 and 64 for 10 000 lenses assuming a flat \(w_{0}w_{a}\)CDM cosmology (see the bottom two panels of the figure A1).
## 6 Larger Scatter on Lens Population
In this study, our mock lenses are created, assuming the distribution of \(\sigma_{\gamma}\) and \(\sigma_{\beta}\) match those inferred from SLACS data (Auger et al., 2010) and a set of nearby elliptical galaxies (Gerhard et al., 2001). This assumption is critical to our forecasts since \(\sigma_{\gamma}\) and \(\sigma_{\beta}\) quantify how standardized each lens is. If \(\sigma_{\gamma}\) or \(\sigma_{\beta}\) are significantly larger in the real Universe then the power of this method to constrain cosmography will be greatly diminished. It should be noted that the lens systems were selected from ground-based sky surveys, which may introduce biases compared to space-based surveys like Euclid. Additionally, the nearby elliptical galaxies might not be a perfect representation of lens elliptical galaxies. For instance, Xu et al. (2017) analyzed elliptical galaxies in the Illustris simulation and found that the standard deviation of velocity anisotropy can reach 0.3, while in our work, we employed a value of 0.13.
We generate mock galaxies with a wider range of intrinsic scatters \(\sigma_{\gamma}\) (0.1 - 0.3) and \(\sigma_{\beta}\) (0.1 - 0.3) to test the effectiveness of our method. Table 2 presents the constraints on \(w\) (\(\sigma_{w}\)) with 1,000 lens systems in a flat universe under different distributions of \(\gamma\) and \(\beta\). The constraints deteriorate as the scatter in either \(\gamma\) or \(\beta\) increases, but even in the most pessimistic scenario the constraints degrade by a factor of 2.17. On the other hand, our forecasts may be pessimistic if lenses can be further standardized by understanding the physical properties that drive the scatter of \(\gamma_{i}\) and \(\beta_{i}\) away from the population mean.
## 7 Application on Existing Data
As a simple example of our method, we apply it to existing data to evaluate how well our power-law mass model can describe real lens galaxies. Additionally, we can compare our results with previous research that used a similar method and the \(w\)CDM model (Cao et al., 2015; Chen et al., 2019). We use a dataset of 161 lens systems selected by Chen et al. (2019) (see Table A1), which were obtained from surveys including LSD, SL2S, SLACS, and S4TM. They measured the luminosity density slope and calculated the equivalent fiber radius of the spectrum for each galaxy. As we lack the specific \(\delta\) value for each individual galaxy, we treat \(\delta\) as an unknown value and draw from its measured distribution: \(\delta=\mathcal{N}(2.173,0.085)\). The average velocity dispersion error for this dataset is 22 km/s.
The resulting posterior distribution is shown in Figure 9. In our results, we find that \(w\) = -0.90 \(\pm\) 0.45, which is even better than the theoretical result shown in Figure 6. Regarding galaxy population properties, we obtain \(\langle\gamma\rangle=1.89\pm 0.05\), \(\sigma_{\gamma}=0.18\pm 0.03\), \(\langle\beta\rangle=-0.17\pm 0.16\), and \(\sigma_{\beta}=0.21\pm 0.09\) at a 68\(\%\) confidence level. The predicted \(\gamma\) and \(\beta\) populations are both smaller than those reported in other studies. This is likely due to the fact that we lack accurate \(\delta\) values. Additionally, these 161 lens systems were obtained from four different surveys, and
|  | \(\sigma_{\gamma}=0.1\) | \(\sigma_{\gamma}=0.2\) | \(\sigma_{\gamma}=0.3\) |
| --- | --- | --- | --- |
| \(\sigma_{\beta}=0.1\) | 0.90 | 1.03 | 1.33 |
| \(\sigma_{\beta}=0.2\) | 1.15 | 1.26 | 1.47 |
| \(\sigma_{\beta}=0.3\) | 1.52 | 1.79 | 2.17 |

Table 2: The relative change of the 68 percent confidence interval of \(w\) as a function of how standardizable lenses are. We assume 10 000 lens systems, but change the intrinsic scatters of the lens population in \(\gamma\) and \(\beta\).
Figure 8: The impact on the inferred \(w\) and \(\Omega_{\rm M}\) of redshift evolution in the mean lens density profile. The blue contours represent a model that does not allow for redshift evolution in \(\gamma\). The red contours represent the result for a model which fits for linear evolution of the mean population \(\gamma\). The grey contour results when individual \(\gamma_{i}\) are measured through detailed lens modelling of every lens.
Figure 7: Constraints on a flat cosmological model with evolving \(w\) where \(w\) = \(w_{0}\) + \(w_{a}\)(1-a). The grey dashed line is the fiducial value of our mocks. The blue contour is the posteriors distribution of \(w_{a},w_{0}\) with 10 000 strong lensing systems. The green and red lines show current results from Type Ia supernova and CMB+BAO, respectively (Brout et al., 2022; Planck Collaboration et al., 2020).
the population (selection function) of lens systems from different surveys can vary in which case fitting the ensemble with a single lens population would be incorrect. Understanding the selection function between different surveys is crucial for accurately measuring the lens population because the future strong lensing sample is likely to be the combination of LSST and Euclid discoveries.
## 8 Discussion and Conclusion
Figure 9: Posterior distribution and 1D marginalized distribution of the cosmological and lens population parameters assuming a \(w\)CDM universe and 161 real lenses from Cao et al. (2015) and Chen et al. (2019).

In this work, we have constructed a hierarchical model of galaxy-galaxy lenses and the underlying cosmological parameters. We have employed Hamiltonian Monte Carlo through JAX-based NumPyro modelling to efficiently perform the analysis. Our findings indicate that we can simultaneously constrain cosmological parameters and lens galaxy properties under different cosmological models, including \(\Lambda\)CDM, \(w\)CDM, and \(ow\)CDM. With a sample size of 10 000 lens systems, our method should achieve a 68\(\%\) confidence interval of \(\pm\)0.11 on \(w\). These levels of constraint are comparable to other cosmic probes such as the Cosmic Microwave Background (CMB), standard candles, weak lensing, and galaxy clustering. Furthermore, we have shown that the evolution of the lens population can also be simultaneously constrained. We also tested the ability of galaxy-galaxy lenses to constrain evolving dark energy. Our forecast Figure of Merit is 15, which is within a factor of 3 of each individual probe
forecast for the LSST 10-year cosmological constraints (Mortonson et al., 2010). These results rely only on measurements of the Einstein radius, redshifts, velocity dispersion, and luminosity profile of each lens. Additional constraints on the lens density profile from detailed lens modelling, would improve the cosmological constraining power by a factor of \(\sim\)2, however, our method can still work without this huge investment in detailed modelling.
Additionally, we applied our model to 161 real lenses in Section 7, finding \(w=-0.90\pm 0.45\). As we discuss below, this result is likely systematics dominated but it illustrates that the method does work on real data.
One of the major simplifications made in this study is that the mass and light models used may not accurately represent real galaxies. The power law light profile we employ is a simplification of the more general Sersic profile (Sersic, 1963), which can describe the light profile of most elliptical galaxies. The effect of the PSF is also not considered in our model, which will bring extra complexity to the velocity dispersion measurement. Furthermore, galaxies are made of dark and luminous matter, and whilst the 'bulge-halo conspiracy' gives total density profiles that are close to powerlaw, the absolute truth is undoubtedly more complex than we have assumed. Our powerlaw assumptions, make the mathematics of our problem analytic (Equation 5), but should not hugely change the final constraints, since the model is only relevant in mapping the aperture dynamical mass onto the mass within the Einstein radius.
In this study, we made the assumption that all galaxy properties follow Gaussian distributions, this might not be true in reality. Additionally, due to a lack of observational evidence, we assumed that there are no correlations between \(\gamma\), \(\beta\), and \(\delta\). However, in actuality, these properties are likely to have initial scaling relations. For example, Auger et al. (2010) found that a steeper mass density profile implies a higher central surface mass density. Furthermore, Cappellari et al. (2007) discovered that elliptical galaxies can be categorized as slow and fast rotators, and they exhibit different \(\beta\) populations. There are also some outliers that have very low \(\beta\) found by (Gerhard et al., 2001). In our results, the orbital velocity anisotropy of lens galaxies exhibits a strong degeneracy with cosmological parameters. Therefore, understanding these correlations will aid in better fitting the mass models of individual galaxies. Another potential observational bias arises from our assumption that elliptical galaxies have a constant \(\beta\), whereas observations suggest that elliptical galaxies have varying \(\beta\) with radius. Specifically, the central region of typical massive elliptical galaxies tends to be isotropic or mildly radially anisotropic (Gerhard et al., 2001; Cappellari et al., 2007). In spectroscopic surveys, the fiber has a fixed radius, which means that the region from which we measure the velocity dispersion will have a different size depending on the half-light radius. Consequently, any observed evolution in the properties may simply be a side effect resulting from the evolution of the lens galaxy's angular size. Similarly, we have ignored the potential for redshift evolution in the population anisotropy, which may be expected from simulations (Xu et al., 2017).
The simulated lens population in this study was limited to a redshift cutoff of 1.5. In real observations, the lens population is subject to additional limitations. For instance, observing the lens galaxy and the source galaxy simultaneously in a spectroscopic survey might not be feasible for lens systems with a large Einstein radius. As a result, some low-redshift lens galaxies may be ruled out from the analysis. Additionally, faint lens galaxies can lead to large velocity dispersion measurement errors, making them less likely to be included in the analysis. This can exclude low-mass galaxies and consequently affect the \(\gamma\) population estimation. Conversely, luminous galaxies beyond the redshift cutoff might be included in the analysis, which can introduce further complexities. These observational limitations and selection effects should be taken into account when interpreting the results and considering the true lens galaxy population.
The method used in this study can also be valuable for the analysis of time-delay cosmography, such as gravitational lens quasars and supernovae. Similar to our approach, galaxy density profiles can be simultaneously inferred with the Hubble constant, as demonstrated in previous work (e.g., Birrer et al. (2020)). But their mass models are heavily prior dominated, the bias in \(\gamma\) will introduce a significant bias in \(\rm H_{0}\) in LSST era where the number of lensed quasar/SNe are large (Collett & Cunnington, 2016). The lens galaxy population parameters we obtained can serve as priors when modeling gravitational lens quasars or supernovae. However, it is important to note that normal lens systems may have different population parameters compared to lensed quasar or supernova systems. Conducting a detailed investigation into the selection function between these systems will be necessary to mitigate any biases.
Whilst this work is a simplified investigation of how to simultaneously constrain the astrophysics of lenses and the underlying cosmological parameters, it has shown that the potential constraining power is competitive to more established cosmological probes. These encouraging results motivate further work on modelling the selection function, fitting more general density and dynamical profiles, and gathering the required data.
## Acknowledgements
We are grateful to Andy Lundgren, Giovanni Granata, Hannah Turner, Simon Birrer, Sydney Erickson, Shuo Cao, Shude Mao, Shawn Knabel, Phil Marshall, Russell Smith, and Leon Koopmans for helpful conversations that have enriched this work.
Numerical computations were done on the Sciama High Performance Compute (HPC) cluster which is supported by the ICG, SEP-Net, and the University of Portsmouth.
This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (LensEra: grant agreement No 945536). TC is funded by the Royal Society through a University Research Fellowship. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.
## 9 Data Availability
Forecast lens population data are available at [https://github.com/collett/LensPop](https://github.com/collett/LensPop). Model posterior chains are available from the corresponding author on request. The parameters of 161 Lens systems are available at Chen et al. (2019) in [https://doi.org/10.1093/mnras/stz1902](https://doi.org/10.1093/mnras/stz1902).
|
2309.01381 | Classic algorithms are fair learners: Classification Analysis of natural
weather and wildfire occurrences | Classic machine learning algorithms have been reviewed and studied
mathematically on its performance and properties in detail. This paper intends
to review the empirical functioning of widely used classical supervised
learning algorithms such as Decision Trees, Boosting, Support Vector Machines,
k-nearest Neighbors and a shallow Artificial Neural Network. The paper
evaluates these algorithms on a sparse tabular data for classification task and
observes the effect on specific hyperparameters on these algorithms when the
data is synthetically modified for higher noise. These perturbations were
introduced to observe these algorithms on their efficiency in generalizing for
sparse data and their utility of different parameters to improve classification
accuracy. The paper intends to show that these classic algorithms are fair
learners even for such limited data due to their inherent properties even for
noisy and sparse datasets. | Senthilkumar Gopal | 2023-09-04T06:11:55Z | http://arxiv.org/abs/2309.01381v1 | Classic algorithms are fair learners: Classification Analysis of natural weather and wildfire occurrences
###### Abstract
Classic machine learning algorithms have been reviewed and studied mathematically on its performance and properties in detail. This paper intends to review the empirical functioning of widely used classical supervised learning algorithms such as Decision Trees, Boosting, Support Vector Machines, k-nearest Neighbors and a shallow Artificial Neural Network. The paper evaluates these algorithms on a sparse tabular data for classification task and observes the effect on specific hyperparameters on these algorithms when the data is synthetically modified for higher noise. These perturbations were introduced to observe these algorithms on their efficiency in generalizing for sparse data and their utility of different parameters to improve classification accuracy. The paper intends to show that these classic algorithms are fair learners even for such limited data due to their inherent properties even for noisy and sparse datasets.
**Keywords:** Decision Trees, Support Vector Machines, k-nearest Neighbors, Hyperparameter tuning
## 1 Introduction
Though classical machine learning algorithms such as decision trees, SVMs, and k-nearest neighbors have been studied for their theoretical efficiency, practitioners and researchers tend to perform large-scale hyperparameter searches or experimentation to identify the appropriate model and parameters every time. There seems to be a need for a systematic study of these algorithms and their tuning parameters to quickly reduce the hypothesis space when identifying the best set of parameters to extract optimal performance.
**Decision Trees:** Decision trees utilize a tree structure to codify their learned strategy for classifying the provided input. The ID3 algorithm [13] uses entropy and information gain to identify the nodes to split on and arrive at the full tree during the training step. There are further improvements such as boosting and pruning which can be tuned and investigated further to improve the accuracy of the tree.
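As a quick illustration of the splitting criterion (a sketch, not the paper's implementation), entropy and information gain for a single categorical feature can be computed as follows.

```python
# Shannon entropy of a label vector and the information gain of one split,
# the quantities ID3 uses to choose which node to split on.
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, y):
    gain = entropy(y)
    for value in np.unique(feature):
        mask = feature == value
        gain -= mask.mean() * entropy(y[mask])
    return gain

# Toy example: a perfectly informative split yields a gain of 1 bit.
humidity = np.array(["high", "high", "low", "low", "high", "low"])
rain     = np.array([1, 1, 0, 0, 1, 0])
print(information_gain(humidity, rain))   # 1.0
```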
**Support Vector Machines:** SVMs attempt to learn a decision boundary for linearly separable and non-linearly separable data (_using kernel tricks_). The algorithm identifies the largest-margin hyperplane that divides the data to perform classification and is effective even for higher-dimensional data.
**k-nearest Neighbors:** kNN learns the representation of the input data in the feature space using the nearest \(K\) neighbors. This algorithm uses a lazy approach where there is no training time and the inference uses the nearest neighbors to determine the classification of new data points. However, this gets slower with a large number of samples or independent variables.
**Artificial Neural Network**: A Multi-layer perceptron classifier [10] is used to perform the classification task with varying hidden layer sizes as one of its hyperparameters. The MLP learns the layer weights as part of the training process to reduce the error using backpropagation.
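The five classifiers above map naturally onto scikit-learn estimators; the sketch below shows one plausible instantiation with the hyperparameters this paper varies. AdaBoost is shown as one common boosting choice, and the exact settings are illustrative, not the paper's final configuration.

```python
# Illustrative scikit-learn setup for the five classifiers compared in this paper.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

models = {
    "decision_tree": DecisionTreeClassifier(max_depth=None, ccp_alpha=0.0),
    "boosting":      AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                        n_estimators=50),
    "svm":           SVC(kernel="rbf", C=1.0),
    "knn":           KNeighborsClassifier(n_neighbors=5),
    "mlp":           MLPClassifier(hidden_layer_sizes=(100,), max_iter=500),
}

# for name, clf in models.items():          # X_train/X_valid come from an 80/20 split
#     clf.fit(X_train, y_train)
#     print(name, clf.score(X_valid, y_valid))
```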
All the code used to perform the experiments and results are published for reference 1
## 2 Related Work
There have been multiple earlier works analyzing classification ML algorithms on 112 real-life binary datasets [17] to observe the functionalities of the classic algorithms. There have been previous in-depth studies of tree-based methods on the well-documented UCI datasets [18] or non-tabular datasets [2] to analyze the performance of these algorithms on naturally occurring datasets, with the intent of observing their effectiveness on those specific datasets. However, these explorations performed their analyses with well-rounded and natural datasets without any perturbations or synthesis for effective hyperparameter analysis, similar to [19]. Their findings around the effectiveness of gradient boosted trees over SVMs and decision trees were based on their documented datasets without any enquiry into how the hyperparameters would change their individual behavior. There has been some earlier work [7] to understand how the levels of decision trees help, but this was only for decision trees and did not perform any active data perturbations nor any extensive model parameter analysis.
## 3 Methodology
### Datasets
The paper refrains from using the common UCI and other well-established datasets to avoid running into "_statistical accidents_" as discussed in [7]. To understand the algorithms and the effects the hyperparameters have on them, the datasets need to produce relatively low accuracy scores with the default algorithm implementations while still being responsive to the various adjustments performed. The other criterion was to include both a binary and a multi-class classification problem, to help expose the inherent behavior of the algorithms and evaluate them effectively using various comparison metrics.
The first dataset - _Rattle_ [14] - contains daily weather observations from Australian weather stations. It has around 56k samples and 65 features, posing a non-trivial binary classification problem with tomorrow's rain as the prediction target. The second dataset - Wildfire [1] - contains data on wildfires that occurred in the US from 1992 to 2015. This dataset has a relatively sparse feature set, offering a completely different facet for investigating the algorithms.
Choosing these widely different datasets helps us explore the various supervised learning algorithms, their underpinnings, the effects of their hyperparameters and how to tune them for effective performance. The two datasets were chosen particularly for their sparsity and imbalance in order to study the effectiveness of these algorithms. The data has also been synthetically modified to help analyze the algorithms, gauge their potential for classification problems where such data deficiencies exist, and identify hyperparameter tuning strategies that can be applied to other imbalanced and noisy data to optimize training for classification tasks.
### Preparation
To understand the underpinnings of the algorithms and further analyze their model complexity, the datasets were reduced in size with certain filters. These reductions were chosen carefully to prevent learner bias, in particular by keeping only features with little to no correlation between them.
The Rattle dataset has 65 features, and only the top 30 were selected using the chi-squared statistic to identify the most relevant ones. These features have the most impact on the outcome variable, and hence the paper performs all analyses on this reduced dataset. The distributions of some features are shown in Figure 1 and the dataset characteristics are summarized in Table 1.
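The feature-selection step can be sketched with scikit-learn as below; the `rattle` DataFrame and the `RainTomorrow` target column are illustrative names, and chi-squared scoring requires non-negative inputs, which is why min-max scaling is applied first.

```python
# Sketch of the chi-squared selection used to reduce Rattle from 65 to 30 features.
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

X = rattle.drop(columns=["RainTomorrow"])   # assumed feature frame / target name
y = rattle["RainTomorrow"]

# chi2 requires non-negative feature values, so scale to [0, 1] first.
X_scaled = MinMaxScaler().fit_transform(X)

selector = SelectKBest(score_func=chi2, k=30).fit(X_scaled, y)
top_features = X.columns[selector.get_support()]
X_reduced = X[top_features]
```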
Figure 1: Various features of the datasets - Rattle and Wildfire
The Wildfire dataset is filtered to include only CA wildfires, and this selection was based on domain knowledge rather than any statistical method. Feature data such as _discovery_date_ and _cont_date_ have been modeled as categorical features by extracting the month and weekday of the year, and the time taken for containment. **This dataset varies drastically from the previous one, with a larger proportion of categorical vs. continuous features.** The dataset has also been modified to use only four outcome classes, creating a more balanced dataset, since some of the classes were very sparse and would have required class balancing, which is beyond the scope of this paper.
### Intuition and Default results
The results of running the algorithms with their default parameters are available in Table 2. Along with a random classifier, they form a baseline for tuning the hyperparameters. These are simple accuracy metrics with no cross-validation and an 80/20 split for training and validation. The Rattle dataset has a large sample size and predictably performs well across all algorithms; due to the presence of more continuous variables, it performs slightly better with the ANN and SVM than with Decision Trees. The Wildfire dataset, with mostly categorical features, fares better with Decision Trees and Boosting than with the NN and SVM. As noted by [11], the lower-dimensionality Wildfire dataset fared better with decision trees, while the higher-dimensionality Rattle dataset performed better with neural networks and SVM.
### Experiment Setup
The following methodology and analysis discuss the effects of various parameters on all five algorithms for these datasets and the corresponding changes in accuracy metrics.
For the Rattle dataset, only the irrelevant data columns were removed; the remaining columns were label encoded and min-max scaled for easier convergence, and samples with unknown values were dropped. For the Wildfire dataset, the query constrained the dataset to samples with fewer unknown values, and the multi-class outcomes were merged and filtered down to 4 classes. A few categorical features were also extracted from _discovery_time_. These were likewise label encoded and min-max scaled as preparation.
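A minimal sketch of this shared preparation, assuming a pandas DataFrame `df` with an outcome column named `target` (both names are illustrative); `OrdinalEncoder` is used here as the per-column equivalent of label encoding the features.

```python
# Shared preparation: drop unknowns, label-encode categoricals, min-max scale everything.
from sklearn.preprocessing import OrdinalEncoder, MinMaxScaler

df = df.dropna()                                    # drop samples with unknown values
categorical_cols = df.select_dtypes(include="object").columns
df[categorical_cols] = OrdinalEncoder().fit_transform(df[categorical_cols])

y = df["target"]
X = MinMaxScaler().fit_transform(df.drop(columns=["target"]))
```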
| **Name** | **Rattle** | **Wildfire** |
|---|---|---|
| Class | Binary | Multi-class |
| Features | 30 | 7 |
| Sample size | 56420 | 11825 |
| Balanced | No | Yes |

Table 1: Dataset characteristics
Table 2: Accuracy results of the algorithms using their default parameter values (the default values also appear in Table 4)
### Model validation process flow
The dataset is initially broken into an 80/20 split, and the 20% split is kept as the final hold-out test data to verify the performance of the final tuned algorithms. All training and cross-validation happen on the 80% split, which is further divided into train/validation sets for grid search, model complexity and learning curve analysis, etc., using **3-fold** cross-validation. **Parameters that are not learned from the dataset as part of the model itself, but that influence the functioning of the model, are termed hyperparameters**, and each algorithm is executed with a range of values for selected parameters. Grid search is employed to score the various combinations of hyperparameter values and identify the optimal one.
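The flow above can be sketched with scikit-learn as follows; the decision-tree grid is used purely as an example, and the parameter ranges are illustrative rather than the exact grids used in the paper.

```python
# 80/20 hold-out split, then a 3-fold grid search on the training portion only.
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

param_grid = {"max_depth": list(range(2, 15)),
              "min_samples_split": [2, 10, 50, 100]}
search = GridSearchCV(DecisionTreeClassifier(criterion="entropy"),
                      param_grid, cv=3, scoring="accuracy")
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)

# The untouched 20% split is scored only once, on the final tuned model.
print(search.best_estimator_.score(X_test, y_test))
```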
A learning curve analysis is then performed to determine various aspects of the built model, such as bias, variance and how well it generalizes, along with timing curves. A final test of the optimized model is performed on the initial hold-out test data to measure the impact of the improvements made during the analysis. All these steps are performed in sequence as illustrated in Figure 2.
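The learning-curve step can be reproduced with scikit-learn's `learning_curve` helper, reusing the tuned estimator and the 80% training split from the sketch above; the train-size grid is illustrative.

```python
# Learning-curve computation used for the bias/variance and timing analyses.
import numpy as np
from sklearn.model_selection import learning_curve

sizes, train_scores, val_scores = learning_curve(
    search.best_estimator_, X_train, y_train,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=3, scoring="accuracy")

# A persistent gap between the mean curves suggests variance; low, flat curves suggest bias.
print(train_scores.mean(axis=1), val_scores.mean(axis=1))
```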
## 4 Evaluation
### Decision Trees
A Decision Tree uses a tree-like model of decisions and consequences as a representation of the training data, and predicts outcomes for newer instances based on the tree model.
#### 4.1.1 Hyper parameter tuning
**The _max_depth_ parameter** determines the maximum depth of the tree. By their algorithmic nature, decision trees are susceptible to overfitting, leading to high variance with deeper trees, which is very evident from the _max_depth_ graphs for both datasets. At low values of _max_depth_, both datasets show high bias due to insufficient model complexity, but around values of 5-7, as per Figure 3, the **training and validation curves start to diverge**, indicating **overfitting and high variance at higher values of _max_depth_**.
**The _min_samples_split_ parameter** is the minimum number of samples required to split an internal node. The curves show that the model suffers from **high bias and low variance, evident from the high accuracy at low _min_samples_split_ values where overfitting occurs**. Since the validation score holds steady, the model exhibits low variance and needs tuning mainly to balance the high bias. Using the plots in Figure 4, ranges of values for _max_depth_ and _min_samples_split_ were determined, and the optimal combination was identified with a grid search.
As Balakrishnan [3] states, entropy is defined "as the average or expected uncertainty associated with a set of events" and is computed using the log base 2 of the probabilities, whereas the Gini index relies on simple probabilities and usually works well for continuous feature variables. The split criterion was set to **entropy** instead of the Gini index because of the larger number of categorical attributes, for which entropy usually performs better. This choice was verified using grid search as well.
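The sweeps behind Figures 3 and 4 can be reproduced with scikit-learn's `validation_curve`, shown here for `max_depth` on the training split defined earlier; the depth range is illustrative.

```python
# Validation-curve sweep for max_depth (the min_samples_split sweep is analogous).
import numpy as np
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeClassifier

depths = np.arange(1, 21)
train_scores, val_scores = validation_curve(
    DecisionTreeClassifier(criterion="entropy"), X_train, y_train,
    param_name="max_depth", param_range=depths, cv=3, scoring="accuracy")

# Divergence of the mean curves beyond depth ~5-7 is the overfitting signal discussed above.
print(train_scores.mean(axis=1) - val_scores.mean(axis=1))
```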
Figure 3: Effects of max_depth parameter on training/validation accuracy using validation_curve
#### 4.1.2 Model Complexity Analysis
Once the optimal hyperparameters are identified, a model complexity analysis can be performed to identify the bias-variance trade-offs present in the model. The learning curve in Figure 5 indicates that for the Rattle dataset the curves have converged, indicating low variance. However, the training score starts high and decreases with training size, indicating the model has trouble capturing additional variation in the training data, i.e., **high bias**. Since the curves have converged and the problem is high bias, rather than adding more sample data, **additional features are required to improve the model's complexity**, as the model appears to be slightly underfit.
For the Wildfire dataset a similar outcome is observed: the model has **higher bias and lower variance**. As the curves have not converged, the model can definitely gain from more sample data to help reduce variance, and it also needs more features to improve complexity and reduce bias. With increasing training sample size, the training time increases roughly linearly as the tree is built during training, while the prediction time is essentially constant since it only involves a lookup in the already-built tree.
#### 4.1.3 Pruning and its effects
| | **Rattle (Current)** | **Rattle (Pruned)** | **Wildfire (Current)** | **Wildfire (Pruned)** |
|---|---|---|---|---|
| Accuracy | 0.83 | 0.84 | 0.55 | 0.53 |
| Precision | 0.82 | 0.82 | 0.52 | 0.48 |
| Recall | 0.83 | 0.84 | 0.55 | 0.53 |
| Branches | 13 | 6 | 31 | 15 |
| Nodes | 27 | 13 | 63 | 31 |

Table 3: Results with and without pre-pruning
Figure 4: Effects of min_samples_split parameter on training/validation accuracy
Figure 5: Learning Curve Model Complexity Analysis for Decision Trees
Pruning of decision trees reduces the tree size by eliminating parts of the tree that contribute very little to the classification problem. Pruning addresses the overfitting problem by eliminating unnecessary complexity. It can be achieved using either early stopping/pre-pruning or post-pruning techniques. The following pre-pruning techniques were applied successfully with minimal to no loss of accuracy while achieving better generalization: reducing the depth of the tree (via the _max_depth_ parameter) and using _min_samples_leaf_ to allow splits only when a minimum number of samples is available in a node. The results are available in Table 3 and the corresponding confusion matrices in Figure 6.
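The pre-pruned tree of Table 3 can be sketched as below; the specific `max_depth` and `min_samples_leaf` values are illustrative cut-offs, not the exact ones found by the grid search.

```python
# Current vs. pre-pruned tree: compare accuracy and tree size as in Table 3.
from sklearn.tree import DecisionTreeClassifier

full_tree = DecisionTreeClassifier(criterion="entropy").fit(X_train, y_train)
pruned_tree = DecisionTreeClassifier(criterion="entropy", max_depth=5,
                                     min_samples_leaf=50).fit(X_train, y_train)

for name, tree in [("current", full_tree), ("pruned", pruned_tree)]:
    print(name,
          round(tree.score(X_test, y_test), 2),   # hold-out accuracy
          tree.get_n_leaves(),                    # leaf count
          tree.tree_.node_count)                  # total node count
```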
Another interesting aspect of this analysis is the contrast and trade-off between information gain (computed using Gini/entropy), which drives the growth of the tree, and the accuracy metric, which drives its pruning.
Plotting the learning curves with and without pruning, we can see that pruning improves generalization and achieves better accuracy on the validation set, as evident in Figure 7. We also notice **reduced bias** (_lower training scores at lower sample sizes_) and **reduced variance**. The effect is more prominent in the Wildfire dataset, where the training scores drop significantly for lower training sizes, indicating lower bias.
### Boosting
Boosting is an ensemble algorithm aimed at reducing bias and variance by combining a family of weak learners into a strong classifier. A weak learner is a classifier that is only weakly correlated with the true classification, while a strong learner is well correlated with it, such as the decision tree classifier built earlier.
#### 4.2.1 Hyper parameter tuning
Along with tuning the parameters, the analysis also observes the effectiveness of boosting while using the pruned and default decision trees.
**The _n_estimators_ parameter** indicates the maximum number of estimators to use for boosting. The effect of boosting on accuracy using the unpruned and pruned trees shows an interesting behavior: the unpruned tree, being a strong classifier, starts to overfit right from the beginning even with a low number of estimators. The well-generalized pruned tree, however, keeps improving in accuracy as more estimators are added before overfitting begins. Theoretically, boosting helps generalization, but the chosen datasets seem resistant to it, which might be due to noise/misclassification or, in the case of the Wildfire dataset, a lack of features, leading to **low bias and high variance** as illustrated in Figure 8.

Figure 6: Confusion matrices before and after pruning for Wildfire

Figure 7: Learning curves of Rattle and Wildfire with scores for current and pruned Decision Trees
Boosting with multiple classifiers helps address the high bias problem observed earlier. However, due to the lack of features and of sufficient sample data, the high variance can be addressed only to a certain extent.
**The _learning_rate_ parameter** determines the contribution of each classifier and is usually used to offset a large number of estimators with a lower learning rate to help generalization. The accuracy vs. _learning_rate_ plots for both the unpruned and pruned trees paint an interesting picture. With higher _learning_rate_ values each classifier's decision gets more weight, and with fewer classifiers contributing more, overfitting is observed as the learning rate increases, suggesting that **high bias cannot be addressed using _learning_rate_, though low variance** is obtained, as seen in Figure 9.
In both datasets, the interesting aspect is the resistance offered by the pruned tree to overfitting up to larger values of _n_estimators_ and _learning_rate_, signifying the importance of pruning when boosting decision trees.
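The boosting setup can be sketched as below. The paper does not name the exact boosting implementation, so scikit-learn's `AdaBoostClassifier` is assumed here, with the pruned tree from the previous section as base estimator and illustrative values for `n_estimators` and `learning_rate`.

```python
# Boosting the pruned decision tree (AdaBoost assumed; values are illustrative).
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

base = DecisionTreeClassifier(criterion="entropy", max_depth=5, min_samples_leaf=50)
booster = AdaBoostClassifier(estimator=base,   # `base_estimator` in older scikit-learn releases
                             n_estimators=100, learning_rate=0.5)
booster.fit(X_train, y_train)
print(booster.score(X_test, y_test))
```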
#### 4.2.2 Model Complexity Analysis
Plotting the learning curve with the optimal hyperparameter values, we observe that boosting has not completely addressed the high bias, as shown in Figure 10. The training error is still high for lower sample sizes, indicating the **fallacy of using Boosting with a relatively strong learner**. The boosted models achieve better accuracy values, but still show **high bias and similar variance compared to the decision tree algorithm**. We also observe that the testing accuracy improves faster than in the decision tree learning curve, allowing the training and testing accuracies to converge, indicating low variance and suggesting that adding more sample data would help reduce variance further. The effect of boosting is observed in the Wildfire dataset as well, with a **higher bias** due to its lack of features, which gets amplified by Boosting.

Figure 8: Effects of the n_estimators parameter on accuracy for both non-pruned and pruned trees as estimators

Figure 9: Effects of the learning_rate parameter on accuracy for both non-pruned and pruned trees as estimators
A key observation concerns the effectiveness of Boosting when a relatively strong learner is used as the base estimator. As the underlying learner is not a weak learner (_close to random results_), the gain from Boosting over the base learner is modest, a mere 3-4% increase in accuracy. It is worth noting, however, that the underlying classifier already has high accuracy on the Rattle dataset and Boosting was still able to increase it further. Also, the boosting hyperparameters cannot completely eradicate overfitting if the underlying learner is already strong and overfits to some extent. As expected, Boosting helps tighten the error boundaries but fails to address the bias in the model. As with Decision Trees, the training time increases linearly with sample size, while the prediction time is constant, as described in Figure 11.
_Note: Boosting rarely overfits on low-variance datasets and can be tuned with very few hyperparameters, in contrast to XGBoost [4], which is fine-tuned for speed and performance with a multitude of hyperparameters and is beyond the scope of this experiment._
### k-Nearest Neighbors
k-Nearest Neighbors classification performs instance-based learning: instead of deriving an internal model, it simply stores the training samples. Prediction is computed by a majority vote of the nearest neighbors, assigning the class with the most representation among the identified neighbors.
#### 4.3.1 Hyper parameter tuning
**The n_neighbors parameter** is the critical parameter for kNN, as it determines the number of neighbors considered when classifying a query sample. An interesting observation was the **high bias** (large training accuracy and small testing accuracy) seen when the neighbors were weighted by "_distance_" instead of "_uniform_" weights. Intuition suggests this might be due to the presence of many samples within small ranges, whose closeness gives them a disproportionate influence on the result. To avoid this overfitting, only "_uniform_" weights were used.
The complexity analysis shows the expected phenomenon that with k=1 or other low values, **the training data is overfit with high bias**. With an increasing number of neighbors we observe convergence with the testing accuracy curve. Surprisingly, in comparison to the earlier algorithms, the testing score remains fairly steady for Rattle and changes only slightly for Wildfire, indicating a **low variance model** with a larger neighbor count, while reinforcing the **high bias hypothesis**.

Figure 10: Learning curves for the Boosting algorithm in comparison with the Decision Tree

Figure 11: Learning curves for the Boosting algorithm capturing time for training and test
**The p (distance metric) parameter** is another parameter that was analyzed. _Chomboon et al._ mention that the Euclidean, Manhattan and Minkowski distances yield similar accuracy metrics, and the complexity analysis produces an interesting data point where the Euclidean distance outperforms the rest by a minor margin. All \(p\) values exhibited **high variance with low bias**, leading to the conclusion that they do not help reduce model overfitting but rather provide incremental improvements in accuracy.
The effects of various values for both of these parameters are illustrated in Figure 12.
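A compact grid over the two kNN parameters discussed above can be written as follows; the neighbor counts are illustrative, and `weights` is pinned to "uniform" for the reason given earlier.

```python
# Grid over n_neighbors and the Minkowski power p for kNN.
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

param_grid = {"n_neighbors": [1, 5, 15, 31, 51],
              "weights": ["uniform"],   # "distance" weighting overfit in these runs
              "p": [1, 2, 3]}           # Manhattan, Euclidean, Minkowski(3)
knn_search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=3, scoring="accuracy")
knn_search.fit(X_train, y_train)
print(knn_search.best_params_)
```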
#### 4.3.2 Model Complexity Analysis
Plotting the learning curve with the optimal parameters, as in Figure 13, produces some telling results. Using an appropriate number of neighbors helps the model generalize even with few training samples. The accuracy is slightly lower for smaller training samples, but the model displays **low bias** and **low variance** and continues this trend across different sample sizes. Increasing the number of neighbors would probably lower the variance at some cost in accuracy, or more training samples could be added to help improve accuracy and reduce variance.
The same phenomenon was observed in both datasets, indicating that kNN is a good meta-classification algorithm for these types of datasets. Because it merely stores the samples instead of generating an internal representation, the training time is low and constant, while the prediction time decreases as the training sample set grows.
### Artificial Neural Networks (ANN)
Artificial Neural Networks (ANN) are loosely inspired by the brain, processing data through input, hidden and output layers of neuron-like units to produce learned outputs. They use multiple layers of weighted perceptrons to "learn" a representation of a particular problem dataset. A multilayer perceptron (MLP) is one such ANN in which each node uses a nonlinear activation function.
#### 4.4.1 Hyper parameter tuning
Due to the wide variety of hyperparameters available for tuning, a few of them were first checked for usefulness and discarded based on their accuracy results. For instance, the _'adam'_ solver was chosen following the sklearn recommendation for large datasets and a brief accuracy comparison with the other solvers. The _momentum_ parameter is used only with the _'sgd'_ solver, did not contribute any accuracy improvement, and was disregarded.
Figure 12: Effects of \(n\_neighbors\) and the \(p\) parameter on training/validation accuracy
Figure 13: Learning curves for kNN for complexity analysis along with time for training and test
**The _learning_rate_init_ parameter** yields an interesting curve with **low bias and high variance** for the Rattle dataset where there are a large number of features and a larger dataset, while offering a **high bias and low variance** model for the sparsely featured Wildfire dataset, as shown in Figure 14
**The _alpha_ parameter** does not seem to help in addressing bias; it indicates **low bias**, since the training score does not decrease with increasing sample size. However, the gap between the curves indicates **high variance**, which can be addressed only by adding more data, possibly increasing the learning rate, or re-sampling the data to generate new samples. A similar phenomenon is observed with the Wildfire dataset as well.
**The _hidden_layer_sizes_ parameter** is not easily compared on a single graph. Nevertheless, graphs were generated for hidden-layer configurations of increasing complexity to analyze their performance. As expected, the accuracy scores increase with more hidden layers. There is little to no indication of bias, which makes the ANN an attractive model for these datasets. There are slight indications of overfitting in the Rattle dataset when using 5 layers of 30 nodes, whereas the Wildfire dataset seems stable with **low bias and low variance**. Some more sample data might help address the slight variance, but the model requires more features to improve its complexity and accuracy scores.
The effects of various values for both parameters are shown in detail in Figure 15.
#### 4.4.2 Model Complexity Analysis
Both datasets have mostly categorical features, and Rattle has 30 features, lending itself to the sigmoid activation function. Though sigmoid converges slowly, the rich feature set and the shallow (30, 30) network make **sigmoid** the best-suited activation for the Rattle dataset. For the Wildfire dataset, given its sparse features and to mitigate the vanishing gradient problem of the sigmoid activation, the **tanh** activation was used instead.
ReLU activation was also considered, but grid search showed sigmoid and tanh to be better for these particular datasets, and intuitively they suit these classification problems better than ReLU. The hidden-layer network was sized intuitively with one node per feature (or multiples thereof), following the _sequential orthogonal approach_ [12] in which layers are added one after another to minimize error.
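The two MLP configurations described above can be sketched as follows; scikit-learn names the sigmoid activation "logistic", and the `alpha`, `learning_rate_init` and `max_iter` values are illustrative.

```python
# MLP configurations for the two datasets (layer sizes follow the one-node-per-feature heuristic).
from sklearn.neural_network import MLPClassifier

mlp_rattle = MLPClassifier(hidden_layer_sizes=(30, 30), activation="logistic",
                           solver="adam", alpha=1e-4, learning_rate_init=1e-3,
                           max_iter=500)
mlp_wildfire = MLPClassifier(hidden_layer_sizes=(7, 7), activation="tanh",
                             solver="adam", max_iter=500)
mlp_rattle.fit(X_train, y_train)
print(mlp_rattle.score(X_test, y_test))
```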
Figure 14: Effects of _learning_rate_init_ parameter on training/validation accuracy
Figure 15: Effects of _alpha and hidden_layer_sizes_ parameters on training/validation accuracy
Plotting the error rate against the number of iterations produces an interesting observation: the time for training and testing remains constant across iteration counts. However, as the iteration count increases, the feature-rich Rattle dataset starts to overfit slightly (**high bias and low variance**), while the sparsely featured Wildfire dataset overfits quite quickly. It is evident that the model for Wildfire suffers from **high bias**, **high variance** and overfitting, which might be due to the larger number of nodes in each hidden layer, as shown in Figure 16 and Figure 17.
The Rattle dataset might benefit from slight adjustments to the number of nodes, adding more layers to reduce variance, and regularization methods such as **early stopping** and **L1 regularization** to reduce the number of features. For the Wildfire dataset, the number of nodes in each layer needs to be reduced, for example via **drop-out regularization**, and more features should be added to improve the model's complexity.
### Support Vector Machines
SVM trains a model that functions as a non-probabilistic linear classifier by representing the samples as points in space, positioned so as to achieve clear separability across classes. For this analysis, the C-Support Vector Classifier (SVC) with different kernels is used instead of a Stochastic Gradient Descent classifier, which is usually recommended for larger datasets.
#### 4.5.1 Hyper parameter tuning
**The \(C\) parameter** represents the penalty on the error term and acts as a regularizing component: with larger values a tight decision boundary is preferred for better classification (overfitting), while lower values favor simple decision boundaries (underfitting). This is evident from the plot, where larger values lead to overfitting, with the validation accuracy decreasing as C increases. The low validation accuracy affirms the **high bias** with overfitting, and the divergence of the curves confirms a **large variance**.
**The _gamma_ parameter** represents the kernel coefficient and determines the radius of influence of each sample. With small values a larger range of samples is used, and with higher values only a small set of closer samples is used, leading to overfitting. This is evident from the plots: for lower values of gamma the accuracy suffers with **low bias**, while larger values tend to overfit, with the curves diverging and displaying **high variance**. It is imperative to choose appropriate C and gamma values to ensure the SVM performs optimally with low bias and variance.
Figure 18 and Figure 19 illustrate the effects of the two hyperparameters, C and _gamma_, on training and validation accuracies.
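The joint C/gamma search for the RBF kernel can be sketched as below; the logarithmic grids are illustrative ranges, not the exact values swept in Figures 18 and 19.

```python
# Joint C/gamma grid for the RBF kernel.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": np.logspace(-2, 3, 6), "gamma": np.logspace(-4, 1, 6)}
svc_search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3, scoring="accuracy")
svc_search.fit(X_train, y_train)
print(svc_search.best_params_)
```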
Figure 16: Learning curves for ANN for complexity analysis across training sample size
Figure 17: Learning curves for ANN for varied number of iterations/epochs
_Note: This learning curve is plotted with error metric in y-axis while others are plotted with accuracy_
#### 4.5.2 Model Complexity Analysis with Kernel Comparison
Using the optimal values, the model complexity is analyzed over varying training sample sizes with different kernels, which determine the type of hyperplane used to separate the data.
Due to the richness of the data and feature set in Rattle, the data was linearly separable, allowing the linear kernel to achieve slightly higher accuracy than the RBF kernel. As [8] states, the linear kernel is a degenerate version of RBF and should not exceed the accuracy of a well-tuned RBF kernel, suggesting that the RBF kernel may not have the most appropriate C and gamma values here. Owing to its ability to handle non-linear boundaries, the SVM exhibits high accuracy and, as evident from Figure 20, has **low bias and very low variance** for both kernels. The training and validation scores do not vary much with training sample size and the curves almost converge, displaying minimal variance. Due to its complexity, the SVM's training time grows with the training sample size, as evident from the plots.

Figure 18: Effects of the \(C\) parameter on training/validation accuracy

Figure 19: Effects of the _gamma_ parameter on training/validation accuracy

Figure 20: Learning curve analysis using _linear_ and _rbf_ kernels on the Rattle dataset
An interesting observation for the Wildfire dataset is that its data is not linearly separable, which might be due to the lack of features or insufficient data; as a result, the linear kernel does not converge even after 1000k iterations. The RBF kernel, however, maps the nonlinear data into a higher-dimensional space, allowing the SVM to separate the classes for better classification.
The iterations (epochs) vs. error plot in Figure 22 shows that the **SVM exhibits good resistance to overfitting** even for large numbers of iterations, maintaining its error rates at high epochs. The training and prediction times are linear in the number of training epochs and, after reaching saturation, remain constant, as plotted for Wildfire. Both plots exhibit **low bias and very low variance**, making the SVM one of the best models for these datasets.
## 5 Conclusion
We presented a simple evaluation of the classic algorithms on sparse tabular data for classification tasks and observed the effect of the hyperparameters when the data is synthetically modified to contain higher noise. We observed how efficiently these algorithms generalize on sparse data and how their different parameters can be used to improve classification accuracy. We demonstrated that, owing to their inherent properties, these classic algorithms are fair learners even on such limited, noisy and sparse datasets, as shown in Table 4.
Using the optimally tuned models, accuracy was measured on the hold-out test dataset; the results are reported below alongside the default accuracy and the improved tuned accuracy.
For the Rattle dataset, due to its large feature set and sample size, almost all algorithms functioned well with some hyperparameter tuning, while for the Wildfire dataset, with its mostly categorical features and sparse data, kNN and Decision Trees worked out well with the highest accuracy values. This aligns with the understanding that large feature sets and higher dimensionality fare well with ANNs and SVMs, while lower-dimensional datasets perform better with Decision Trees and even more so with Boosting.
| **Algorithm** | **Decision Trees** | **Boosting** | **k-Nearest Neighbors** | **Neural Network** | **SVM** |
|---|---|---|---|---|---|
| | Default / Tuned | Default / Tuned | Default / Tuned | Default / Tuned | Default / Tuned |
| **Rattle** | 0.79 / 0.84 | 0.80 / 0.85 | 0.83 / 0.83 | 0.85 / 0.86 | 0.85 / 0.85 |
| **Wildfire** | 0.47 / 0.55 | 0.55 / 0.59 | 0.51 / 0.49 | 0.44 / 0.51 | 0.50 / 0.46 |

Table 4: Accuracy results of the algorithms using the default and tuned parameter values
Figure 21: Learning curve analysis using _rbf_ and _sigmoid_ kernels on Wildfire dataset
Figure 22: Learning curves for SVM for varied number of iterations/epochs
_Note: This learning curve is plotted with error metric in y-axis while others are plotted with accuracy_
Though the Rattle dataset has a large feature set and ample training data, its accuracy still suffers from the curse of dimensionality, and PCA or feature reduction would be required to give more weight to high-variance features.
A random classifier achieves accuracies of 0.66 and 0.27 on the Rattle and Wildfire datasets respectively, showing them to be good datasets to experiment on with these algorithms. All the algorithms performed better than the random classifier, though the Wildfire dataset was more resistant to tuning due to its lack of features and missing data. There are indications of better performance with decision trees and with Boosting, which reaches almost 0.59 accuracy. Further analysis with more dimensions and data could definitely improve the models' performance.
### Further improvements and experiments
Further analyses could change the distance metric for kNN and use dimensionality reduction such as PCA to help optimize that metric. Rebalancing some of the classes in the Wildfire dataset using undersampling, oversampling or class weighting could be attempted to analyze the impact on accuracy. Experimenting with more advanced neural networks, SVM variants and deep learning models could also yield more interesting results.
|
2302.12931 | CATNIPS: Collision Avoidance Through Neural Implicit Probabilistic
Scenes | We introduce a transformation of a Neural Radiance Field (NeRF) to an
equivalent Poisson Point Process (PPP). This PPP transformation allows for
rigorous quantification of uncertainty in NeRFs, in particular, for computing
collision probabilities for a robot navigating through a NeRF environment. The
PPP is a generalization of a probabilistic occupancy grid to the continuous
volume and is fundamental to the volumetric ray-tracing model underlying
radiance fields. Building upon this PPP representation, we present a
chance-constrained trajectory optimization method for safe robot navigation in
NeRFs. Our method relies on a voxel representation called the Probabilistic
Unsafe Robot Region (PURR) that spatially fuses the chance constraint with the
NeRF model to facilitate fast trajectory optimization. We then combine a
graph-based search with a spline-based trajectory optimization to yield robot
trajectories through the NeRF that are guaranteed to satisfy a user-specific
collision probability. We validate our chance constrained planning method
through simulations and hardware experiments, showing superior performance
compared to prior works on trajectory planning in NeRF environments. Our
codebase can be found at https://github.com/chengine/catnips, and videos can be
found on our project page (https://chengine.github.io/catnips). | Timothy Chen, Preston Culbertson, Mac Schwager | 2023-02-24T23:15:50Z | http://arxiv.org/abs/2302.12931v3 | # CATHUPS: Collision Avoidance Through Neural Implicit Probabilistic Scenes
###### Abstract
We formalize a novel interpretation of Neural Radiance Fields (NeRFs) as giving rise to a Poisson Point Process (PPP). This PPP interpretation allows for rigorous quantification of uncertainty in NeRFs, in particular, for computing collision probabilities for a robot navigating through a NeRF environment model. The PPP is a generalization of a probabilistic occupancy grid to the continuous volume and is fundamental to the volumetric ray-tracing model underlying radiance fields. Building upon this PPP model, we present a chance-constrained trajectory optimization method for safe robot navigation in NeRFs. Our method relies on a voxel representation called the Probabilistic Unsafe Robot Region (PURR) that spatially fuses the chance constraint with the NeRF model to facilitate fast trajectory optimization. We then combine a graph-based search with a spline-based trajectory optimization to yield robot trajectories through the NeRF that are guaranteed to satisfy a user-specific collision probability. We validate our chance constrained planning method through simulations, showing superior performance compared with two other methods for trajectory planning in NeRF environment models.
Collision Avoidance, Robot Safety, Visual-Based Navigation, NeRFs
## I Introduction
Constructing an environment model from onboard sensors, such as RGB(-D) cameras, lidar, or touch sensors, is a fundamental challenge for any autonomous system. Recently, Neural Radiance Fields (NeRFs) [1] have emerged as a promising 3D scene representation with potential applications in a variety of robotics domains including SLAM [2], pose estimation [3, 4], reinforcement learning [5], and grasping [6]. NeRFs offer several potential benefits over traditional scene representations: they can be trained using only monocular RGB images, they provide a continuous representation of obstacle geometry, and they represent scene geometry accurately even when specular or transparent materials are present, where sensors such as depth cameras and lidar often fail [4, 6]. Using current implementations [7, 8], NeRFs can be trained in seconds using only RGB images captured from monocular cameras, making onboard, online NeRF training a viable option for robotic systems.
However, NeRFs do not directly give information about spatial occupancy, which poses a challenge in using NeRF models for safe robot navigation. In other 3D scene representations, such as (watertight) triangle meshes [9], occupancy grids [10], or Signed Distance Fields (SDFs) [11], occupancy is well-defined and simple to query. NeRFs, however, do not admit simple point-wise occupancy queries, since they represent the scene geometry implicitly through a continuous volumetric density field. For this reason, integrating NeRF models into robotic planners with mathematical safety guarantees remains an open problem.
In this work we develop a framework for robot trajectory planning that can generate trajectories through a NeRF scene with probabilistic safety guarantees. To do this, we propose a mathematical interpretation of a NeRF as a Poisson Point Process (PPP), which allows for the rigorous computation of collision probabilities for a robot moving through a NeRF scene. We further introduce a novel scene representation, a Probabilistically Unsafe Robot Region (PURR), that convolves the robot geometry with the NeRF to yield a 3D map of all robot positions with collision probabilities less than a user-specified threshold. Finally, we propose a fast, chance-constrained trajectory planner that uses the PURR to ensure
Fig. 1: (a) Ground-truth of the Stonehenge scene, (b) Poisson Point Process (PPP) of the scene represented as a point cloud, (c) Probabilistically Unsafe Robot Region (PURR) of scene, (d) Generated safe paths from our method. |
2306.07175 | Easy-plane multi-$\mathbf{q}$ magnetic ground state of
Na$_3$Co$_2$SbO$_6$ | Na$_3$Co$_2$SbO$_6$ is a potential Kitaev magnet with a monoclinic layered
crystal structure. Recent investigations of the $C_3$-symmetric sister compound
Na$_2$Co$_2$TeO$_6$ have uncovered a unique triple-$\mathbf{q}$ magnetic ground
state, as opposed to a single-$\mathbf{q}$ (zigzag) one, prompting us to
examine the influence of the reduced structural symmetry of Na$_3$Co$_2$SbO$_6$
on its ground state. Neutron diffraction data obtained on a twin-free crystal
reveal that the ground state remains a multi-$\mathbf{q}$ state, despite the
system's strong in-plane anisotropy. This robustness of multi-$\mathbf{q}$
orders suggests that they are driven by a common mechanism in the honeycomb
cobaltates, such as higher-order magnetic interactions. Spin-polarized neutron
diffraction results show that the ordered moments are entirely in-plane, with
each staggered component orthogonal to the propagating wave vector. The
inferred ground state favors a so-called XXZ easy-plane anisotropic starting
point for the microscopic model over a Kitaev one, and features unequal ordered
moments reduced by strong quantum fluctuations. | Yuchen Gu, Xintong Li, Yue Chen, Kazuki Iida, Akiko Nakao, Koji Munakata, V. Ovidiu Garlea, Yangmu Li, Guochu Deng, I. A. Zaliznyak, J. M. Tranquada, Yuan Li | 2023-06-12T15:14:20Z | http://arxiv.org/abs/2306.07175v1 | # Easy-plane multi-q magnetic ground state of Na\({}_{3}\)Co\({}_{2}\)SbO\({}_{6}\)
###### Abstract
Na\({}_{3}\)Co\({}_{2}\)SbO\({}_{6}\) is a potential Kitaev magnet with a monoclinic layered crystal structure. Recent investigations of the \(C_{3}\)-symmetric sister compound Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\) have uncovered a unique triple-**q** magnetic ground state, as opposed to a single-**q** (zigzag) one, prompting us to examine the influence of the reduced structural symmetry of Na\({}_{3}\)Co\({}_{2}\)SbO\({}_{6}\) on its ground state. Neutron diffraction data obtained on a twin-free crystal reveal that the ground state remains a multi-**q** state, despite the system's strong in-plane anisotropy. This robustness of multi-**q** orders suggests that they are driven by a common mechanism in the honeycomb cobaltates, such as higher-order magnetic interactions. Spin-polarized neutron diffraction results show that the ordered moments are entirely in-plane, with each staggered component orthogonal to the propagating wave vector. The inferred ground state favors a so-called XXZ easy-plane anisotropic starting point for the microscopic model over a Kitaev one, and features unequal ordered moments reduced by strong quantum fluctuations.
Magnetic frustration arises from competing interactions between localized magnetic moments, or spins, leading to a vast degeneracy of classical ground states and suppressed order formation in quantum systems [1; 2; 3]. Acquiring precise knowledge of the order parameter can provide valuable insights when a frustrated magnet attains order. However, obtaining such information can be challenging. The Kitaev honeycomb model [4] has garnered interest due to its unique magnetic frustration properties, exact quantum solvability, and potential applications in topological quantum computation [5]. Significant research progress in materializing the Kitaev model has been made [5; 6; 7; 8; 9; 10], with \(3d\) cobaltates recently emerging as promising materials [11; 12; 13; 14; 15].
Two key factors driving research interest in honeycomb cobaltates are the appealing theoretical expectation of weak non-Kitaev interactions [11; 12] and the growth of large, high-quality single crystals [16; 17; 18; 19]. However, some cobaltates have recently been argued to be better described as easy-plane anisotropic (XXZ) rather than Kitaev magnets [20; 21; 22; 23]. The compound Na\({}_{3}\)Co\({}_{2}\)SbO\({}_{6}\) (NCSO) has nevertheless been considered to exhibit significant Kitaev interactions [23] and potential spin liquid behavior [13; 24]. Furthermore, NCSO is a structurally well-defined and clean material [25], which are traits making it valuable for in-depth studies aiming to avoid the structural complications recently found in \(\alpha\)-RuCl\({}_{3}\)[26] and Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\)[27].
In candidate Kitaev materials, including cobaltates, the antiferromagnetic order of zigzag ferromagnetic chains, dubbed the zigzag order, is widely regarded as the predominant form of magnetic ground state [17; 29; 30; 31; 32]. The zigzag order is characterized by a single propagating wave vector (**q**) at one of the \(M\)-points of the hexagonal Brillouin zone (BZ). However, recent research on Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\) has unveiled a surprising triple-**q** ordered state [33; 34; 35; 36; 37; 38]. Despite ongoing debate about its relevance
Figure 1: Neutron diffraction on a twin-free crystal measured on SENJU [28] at 2 K and in zero field. Data are averaged from \((H,K,2.9)\) to \((H,K,3.1)\) in reciprocal lattice units (r.l.u.). Hexagon indicates the 2D Brillouin zone (BZ), which is elongated along \((1,0,0)\) due to the monoclinic inclination of the \(c\) axis (\(\beta=108.6^{\circ}\)). Magnetic peaks are observed at \((H,K)=(1/2,\pm 1/2)\) but not at \((0,-1)\). Washboard-like texture is due to small gaps between detectors.
[39; 40; 41], the triple-**q** state can be identified as a superposition of three single-**q** zigzag components rotated by 120 degrees from one another [33; 42]. Recent theoretical studies suggest that a multi-**q** state can become energetically favorable over a zigzag state when higher-order spin interactions are considered [34; 44; 34]. While the multi-**q** state in Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\) preserves the lattice \(C_{3}\) rotational symmetry about the \(c\)-axis, it remains unclear whether the state is necessarily \(C_{3}\)-symmetric or can be stable even in a lower-symmetry setting, potentially due to the prominence of higher-order spin interactions.
In this Letter, our investigation of NCSO addresses two crucial questions: whether the system is better characterized as an XXZ rather than a Kitaev magnet, and whether the lack of \(C_{3}\) symmetry is compatible with the formation of multi-**q** order. Using neutron diffraction on a twin-free crystal, we reveal the presence of two, rather than one, or three, zigzag-like antiferromagnetic components in zero field. We show that the two components belong to the same multi-**q** (double-**q**) order parameter, the critical evidence being that their signal ratio remains unchanged after the system is trained by strong in-plane magnetic fields along a low-symmetry direction. Spin-polarized neutron diffraction further demonstrates that the staggered spins in each zigzag component lie entirely in-plane and perpendicular to the staggered wave vector, which is more compatible with the XXZ model than the Kitaev model. Superimposing the components as revealed by the spin-polarized diffraction data yields a 2D non-collinear spin pattern with unequal moment sizes. Since the reduction of classically ordered moments is a hallmark of quantum fluctuations, our results render NCSO as a promising system for exploring spin-liquid physics.
The space-group symmetry of NCSO is \(C2/m\) (No. 12) [17; 25], the same as that of the high-temperature structure of \(\alpha\)-RuCl\({}_{3}\)[26; 29; 45]. Figure 1 displays our neutron diffraction data obtained on a white-beam diffraction instrument [28] from a 6 mg twin-free crystal [25], covering the \((H,K,3)\) reciprocal plane. In zero field, we observe magnetic Bragg peaks at only two of the three \(M\)-points of the pseudo-hexagonal BZ: at \((1/2,1/2,3)\) and \((1/2,-1/2,3)\), but not at \((0,-1,3)\). This finding is consistent with previous reports using twinned crystals [17; 25]. Our complete dataset verifies the absence of magnetic peaks at \((0,\pm 1,L)\) or other symmetry-related positions in higher-index 2D BZs over a wide range of \(L\) values.
Figure 2 demonstrates that the above result is in principle consistent with both a zigzag and a multi-**q** ordered state. In the zigzag scenario, magnetic Bragg peaks at \((H,K)=\pm(1/2,1/2)\) and \(\pm(1/2,-1/2)\) originate from two types of domains (excluding time-reversal). They are related by a 180-degree rotation about the \(b\) axis (\(C_{2,b}\)), which is a crystallographic symmetry, and are thus expected to coexist in a macroscopic sample. In the multi-**q** scenario, the ordering pattern can be regarded as a superposition of the two zigzag patterns just considered, with all magnetic Bragg peaks emerging simultaneously. The non-zero structure factors at only two
Figure 2: Left half: Schematic spin patterns of two types of zigzag domains and of the multi-**q** order. The spin orientations are constrained by our spin-polarized diffraction data in Fig. 4. Dashed lines indicate a magnetic primitive cell. Lower-left insets show the applied-field (**H**) direction in the crystallographic coordinate system, and lower-right the 2D structural (grey hexagon) and magnetic (filled polygon) Brillouin zones and locations of the magnetic Bragg peaks. Right half: Expected field-training results observable by magnetic neutron diffraction, under the zigzag and multi-**q** scenarios. See text for detailed explanation.
\(M\)-points are consistent with the system's monoclinic symmetry, where the two \(M\)-points form a symmetry-enforced wave vector star. The lack of a diffraction peak at the third \(M\)-point marks the absence of higher harmonics of the magnetization modulations. It makes the spin pattern (Fig. 2) deviate from the \(C_{3}\)-symmetric one proposed in [33], likely owing to NCSO's strong in-plane anisotropy [25].
To distinguish between the two-domain zigzag and the double-\(\mathbf{q}\) scenarios, we study the impact of training the sample in an in-plane magnetic field applied in a direction rotated 30 degrees from the \(b\) axis. Magnetic diffraction peaks are monitored in a momentum plane indicated by the cyan plane in Fig. 3(a). Peaks will be identified by their in-plane components \((Q_{a},Q_{b})\). Before we discuss the data, it is useful to see why the magnetic field should affect the two types of zigzag domains differently. For the domains illustrated in Fig. 2, the difference arises from the fact that one type of domain can slightly lower its energy in the field by spin canting towards the field direction, whereas the other type cannot. The locking between the spin and the wave vector directions, enforced by spin-orbit effects [11; 12; 13], plays an important role here. Although we will later show that the specific spin orientations in Fig. 2 are corroborated by spin-polarized neutron diffraction data, we emphasize that the difference in the field's influence is generically enforced by (the lack of) symmetry: with the field applied, the \(C_{2,b}\) symmetry connecting the two types of domains becomes broken, so there is no longer a symmetry to protect the domains' energy degeneracy. As an aside, while the zigzag domains proposed in Ref. [17], with all spins lying parallel to the \(b\) axis regardless of the zigzag-chain orientation, might appear degenerate in the field, the degeneracy is at best coincidental and not symmetry-enforced (as shown below, the magnetic structure in Ref. [17] is inconsistent with our data in Fig. 4).
Figure 3 presents the result of our field-training experiment. We stress that the key observation here is not about unequal impacts on the two pairs of magnetic diffraction peaks when the field is on, but about the remnant effect of a sufficiently large field applied and then removed. With the locking between the spin and the wave vector directions, the zigzag scenario is expected to have one type of domain noticeably depopulated after training, whereas the multi-\(\mathbf{q}\) scenario should definitely return to its original state. We have selected measurement field strengths matching the known phase boundaries at 0.53, 0.73, and 0.91 Tesla for our field direction [25]. Fields above 0.91 T drive the system into a ferro
Figure 3: (a) Schematic of diffraction peaks (filled spheres) in reciprocal space. An orthogonal coordinate system is chosen to include the \(ab\) plane and its normal direction, such that magnetic diffraction peaks are seen to project onto the \(M\)-points (blue) and the “\(2/3\,M\)-points” (red) of the 2D BZ [25]. Magenta hexagons indicate 3D monoclinic BZ boundaries at \(L=1\) and 2. Cyan plane indicates the momentum slice displayed in (b-i), where the data are obtained on SENJU [28] with the field applied and removed in the displayed order. The field direction is in the \(ab\) plane and at 30 degrees from \(b\), same as in Fig. 2. The coordinate system next to (i) indicates the cyan plane’s coordinates projected into the \(Q_{a}\)-\(Q_{b}\) plane at the bottom of (a). Colored spheres in the inset of (b-d) indicate 2D magnetic peak positions. The observed magnetic peaks, upon their first appearance, have the following \((H,K,L)\) indices: (b-c) \((1/2,1/2,1)\) and \((1/2,-1/2,1)\); (d) \((1/3,-1/3,4/3)\); (e) \((1/3,1/3,4/3)\).
magnetic state, eliminating antiferromagnetic diffraction peaks. The data in Fig. 3 confirm this understanding; in fact, all the previously identified phases and peak indexing from [25] are corroborated in our experiment. For details, see the caption of Fig. 3.
In a nutshell, our data reveal that the field-training has minimal impact. The crucial observation, comparing Fig. 3(b) and (i), is that both of the \(M\)-point diffraction peaks remain present after training. It must be noted that the two magnetic peaks in Fig. 3(b-e, g-i) consistently displayed unequal intensities due to a technical reason: the lower peak was in the horizontal scattering plane, while the upper peak was outside the scattering plane. Using a non-monochromated beam for diffraction, the differing scattering geometries led to neutrons of varying kinetic energies, or wavelengths \(\lambda\), contributing to the peaks. The data shown in Fig. 3 are not corrected for the Lorentz-factor (\(\propto d^{2}\lambda^{2}\)[46], where the \(d\)-spacings of the two peaks are equal), which explains the stronger intensity of the lower peak in Fig. 3(i). Additionally, the upper peak in Fig. 3(b-d) [compared to (h-i)] appears particularly weak because it partially fell outside the detector boundary. This issue was identified and resolved (by slightly rotating the sample) in subsequent measurements. More information on our measurement conditions can be found in [47].
We emphasize that the absence of significant training effects contrasts with the distinct influence of the field on the two zigzag components. The latter is evident from the multi-step switching of the associated diffraction peaks in Figs. 3(c-e) and (g-i). As previously mentioned, this distinction is anticipated from a symmetry perspective, since the field breaks the \(C_{2,b}\) symmetry connecting the two components. It therefore manifests the significance of spin-orbit effects in NCSO. The divergent behaviors of the components upon the field application, along with their common and full recovery upon the field removal, confirm that the components form a multi-\(\mathbf{q}\) magnetic state together. Further diffraction and magnetization measurements on twin-free crystals support these findings (Figs. S5 and S6 in [47]).
Having obtained evidence of the multi-\(\mathbf{q}\) nature, our next goal is to determine the ordered spin configuration. We first note that the multi-\(\mathbf{q}\) order must comprise zigzag components that are collinear before their superposition, as non-collinearity would require a "stripe" component admixture, leading to non-zero structure factors at 2D wave vectors (\(\pm 1/2,\pm 3/2\)) and (\(\pm 1,0\)) [49]. However, such signals are absent in experiments [25]. Thus, we focus on identifying the staggered spin direction within a single zigzag component, optimally achieved using spin-polarized neutron diffraction.
In Fig. 4, we show diffraction measurements on an array of twinned crystals with a vertically spin-polarized incoming beam. The crystals' shared \(c^{*}\) axis lies in the horizontal scattering plane, making it the reciprocal plane (\(K,K,L\)), (\(K,-K,L\)), or (\(0,K,L\)), each for about one third of the crystals in the array. In this geometry, a zigzag component generates a series of diffraction peaks at the same 2D \(M\)-point in the scattering plane, such as (\(1/2,1/2,L\)), where \(L\) is an integer. Peaks from the first two aforementioned crystallographic twins are accessible, whereas the third has no magnetic reflections (Fig. 1) in the scattering plane. An illustration of the scattering geometry can be found in Fig. S7 [47]. Spin components in the scattering plane produce spin-flip diffraction signals, while those perpendicular to the plane produce non-spin-flip signals. Figure 4 shows that all magnetic diffractions are non-spin-flip, indicating that the spins in the zigzag components associated with measured \(M\)-points lie perfectly vertical in the laboratory frame, which is a direction in the honeycomb plane and perpendicular to the 2D \(M\)-point wave vectors, irrespective of the crystallographic twin origin of the signal. Consequently, we arrive at the zigzag components' spin orientations depicted in Fig. 2. The full ordered spin configuration is obtained by superimposing the two zigzag components.
The double-\(\mathbf{q}\) magnetic structure in Fig. 2 is non-collinear because of a particular choice of the _collinear_ spin orientations in the two constituent zigzag components. Importantly, as spins in the two components on the same sites are either 60 or 120 degrees apart, their vector superposition results in two distinct spin magnitudes (\(\sqrt{3}:1\), each occupying half of the sites) in the double-\(\mathbf{q}\) structure. An alternative way to view the magnetic structure is to decompose it into four sublattices made of third-nearest-neighbor bonds, which have recently been suggested to possess significant antiferromagnetic interactions [50; 35]. As shown by the dashed hexagon in Fig. 2, each of the sub-lattices forms collinear Neel order, and the non-collinear double-\(\mathbf{q}\) structure is a peculiar "anti-collinear" combination of the sublattices.
Figure 4: Momentum scans perpendicular to the \(ab\) plane, measured on HYSPEC [48] at 0.3 K with polarized neutrons on a twinned sample. Spin-flip (SF) and non-spin-flip (NSF) data have been corrected by the flipping ratio [47]. See Fig. 3(a) for the definition of \(Q_{\perp}\).
In this view, while the sum of bilinear interactions between the sublattices vanishes, just like in the classic example of antiferromagnetic \(J_{2}\)-dominated square-lattice model [51], a non-collinear arrangement can be favored by higher-order interactions [52]. Although conceptually useful, this four-sublattice picture cannot explain the different magnitudes of the spins, and there is no guarantee that the third-nearest-neighbor interactions in NCSO dominate over the nearer-neighbor ones.
The zero-field magnetic structure of NCSO holds importance for several reasons. First, the structure is double-\(\mathbf{q}\) (Fig. 2) instead of triple-\(\mathbf{q}\), likely due to the strong magnetic in-plane anisotropy of NCSO [25]. The surprising robustness of the multi-\(\mathbf{q}\) order, despite the system's seemingly unfavorable symmetry, suggests that the order is stabilized by favorable higher-order exchange interactions [34; 43]. Second, the ordered spins are parallel to the \(ab\)-plane, supporting an XXZ starting point for the interaction model. A Kitaev-type model, with principle axes pointing at an angle out-of-plane due to the local crystallographic environment [53], would need a significant coincidence to form purely in-plane ordering. Third, an XXZ starting point (as opposed to Kitaev) aligns NCSO with other honeycomb cobaltates [20; 21; 22; 23]. While this may initially appear disadvantageous for quantum spin liquid exploration, the proposed double-\(\mathbf{q}\) magnetic structure exhibits significantly reduced classical moments on half of the sites, suggesting strong quantum fluctuations [54]. These fluctuations likely stem from a close competition between ferro- and antiferromagnetic ordering tendencies [25; 13]. As antiferromagnetic order dominates at zero field, external fields could potentially drive the system to a tipping point, where stronger quantum fluctuations and spin-liquid behaviors might emerge.
In conclusion, we report experimental evidence of easy-plane multi-\(\mathbf{q}\) magnetic order in NCSO. Our findings highlight the importance of considering high-order spin interactions within an XXZ framework when modeling honeycomb cobaltates previously considered candidate Kitaev materials. Although our results may require revisiting existing theories, NCSO remains a promising system for investigating novel phenomena related to spin liquids.
We are grateful for experimental assistance by Zirong Ye, Qian Xiao, Xiquan Zheng and Yingying Peng, and for discussions with Wenpie Chen, Lukas Janssen, Wilhelm G. F. Kruger, Zhengxin Liu, Yuan Wan, Jiucai Wang, Weiliang Yao, and Yi Zhou. The work at Peking University was supported by the National Basic Research Program of China (Grant No. 2021YFA1401900) and the NSF of China (Grant Nos. 12061131004 and 11888101). The work at Brookhaven National Laboratory was supported by Office of Basic Energy Sciences (BES), Division of Materials Sciences and Engineering, U.S. Department of Energy (DOE), under contract DE-SC0012704. A portion of this research used resources at Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. One of the neutron scattering experiments was performed at the MLF, J-PARC, Japan, under a user program (No. 2021B0158).
|
2305.09408 | Numerical solution of Poisson partial differential equations in high
dimension using two-layer neural networks | The aim of this article is to analyze numerical schemes using two-layer
neural networks with infinite width for the resolution of the high-dimensional
Poisson partial differential equation (PDE) with Neumann boundary
conditions. Using Barron's representation of the solution with a probability
measure, the energy is minimized thanks to a gradient curve dynamic on the
2-Wasserstein space of parameters defining the neural network. Inspired by
the work from Bach and Chizat, we prove that if the gradient curve converges,
then the represented function is the solution of the elliptic equation
considered. Numerical experiments are given to show the potential of the
method. | Mathias Dus, Virginie Ehrlacher | 2023-05-16T12:52:55Z | http://arxiv.org/abs/2305.09408v2 | Numerical solution of Poisson partial differential equation in high dimension using two-layer neural networks
###### Abstract
The aim of this article is to analyze numerical schemes using two-layer neural networks with infinite width for the resolution of the high-dimensional Poisson partial differential equation (PDE) with Neumann boundary condition. Using Barron's representation of the solution [1] with a probability measure defined on the set of parameter values, the energy is minimized thanks to a gradient curve dynamic on the 2-Wasserstein space of the set of parameter values defining the neural network. Inspired by the work from Bach and Chizat [2, 3], we prove that if the gradient curve converges, then the represented function is the solution of the elliptic equation considered. In contrast to the works [2, 3], the activation function we use here is not assumed to be homogeneous to obtain global convergence of the flow. Numerical experiments are given to show the potential of the method.
## 1 Introduction
### Literature review
The motivation of our work is to contribute to the mathematical understanding of neural-network-based numerical schemes, typically Physics-Informed Neural Network (PINN) [4, 5, 6, 7, 8, 9] approaches, for the resolution of some high-dimensional Partial Differential Equations (PDEs). In this context, it is of tremendous importance to understand why neural networks work so well in some contexts, in order to improve their efficiency and to gain insight into why a particular neural network should be relevant to a specific task.
The first step towards a mathematical analysis theory of neural-network-based numerical methods is the identification of functional spaces suited for neural network approximation. The first important result in this direction is the celebrated approximation theorem due to Cybenko [10], proving that two-layer neural networks can approximate an arbitrary smooth function on a compact subset of \(\mathbb{R}^{d}\). However, this work does not give an estimation of the number of neurons needed, even though this is of utmost importance to hope for tractable numerical methods. To answer this question, Yarotsky [11] gave bounds on the number of neurons necessary to represent smooth functions. This theory mainly relies on classical techniques of Taylor expansions and does not give computable architectures in the high-dimensional regime. Another original point of view was given by Barron [1], who used Monte Carlo techniques from Maurey-Jones-Barron to prove that functions belonging to a certain metric space, i.e. the Barron space, can be approximated by a two-layer NN with precision \(O\left(\dfrac{1}{\sqrt{m}}\right)\), \(m\) being the width of the first layer. Initially, Barron's norm was characterized using Fourier analysis, restricting the theory to domains where a Fourier decomposition is available. Other Barron-type norms, which do not presuppose the existence of a harmonic decomposition [12], are now also available.
In order to give a global idea of how this works, one can say that a Barron function \(f_{\mu}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) can be represented by a measure \(\mu\) with finite second-order moments :
\[f_{\mu}(x):=\int a\sigma(w\cdot x+b)d\mu(a,w,b)\]
where \(\sigma\) is an activation function and the Barron norm \(\|f_{\mu}\|_{\mathcal{B}}\) is, roughly speaking, a mix of the second-order moments of \(\mu\). Intuitively, the law of large numbers says that the function \(f_{\mu}\) can be represented by a sum of Dirac masses, corresponding to a two-layer neural network whose width equals the number of Dirac masses. The architecture of a two-layer neural network is recalled in Figure 1. Having said that, some important questions arise :
* What is the size of the Barron space, and what is the influence of the activation function on this size? Some works have been done in this direction for the ReLU activation function. In [13], it is proven that \(H^{s}\) functions are Barron if \(s\geq\dfrac{d}{2}+2\) and that \(f_{\mu}\) can be decomposed as an infinite sum of \(f_{\mu_{i}}\) whose singularities are located on \(k\)-dimensional (\(k<d\)) affine subspaces of \(\mathbb{R}^{d}\). For the moment, no similar result seems to hold for more regular activation functions.
* One can add more and more layers and observe the influence on the corresponding space. In [14], tree-like spaces \(\mathcal{W}_{L}\) (where \(L\) is the number of hidden layers) are introduced using an iterative scheme starting from the Barron space. Of course, multi-layer neural networks naturally belong to these spaces. Nevertheless, for a function belonging to \(\mathcal{W}_{L}\), it is not clear that a multilayer neural network is more efficient than its two-layer counterpart for its approximation.
* Do solutions of classical PDEs belong to a Barron space? If so, there is a potential to solve such PDEs without suffering from the curse of dimensionality. Some important advances have been made in this direction in [15], where the authors considered the Poisson problem with Neumann boundary conditions on the \(d\)-dimensional cube. If the source term is Barron, then it is proved that the solution is also Barron, and there is hope for an approximation with a two-layer NN.
Using conclusions from [15], the object of this paper is to propose and analyze a neural-network-based numerical approach for the resolution of the Poisson equation in the high-dimensional regime with a Barron source. Inspired from [2], we immerse the problem in the space of probability measures with finite second-order moments defined on the parametric domain. This corresponds to finding a solution to the PDE thanks to an infinitely wide two-layer neural network. Then we interpret the learning phase of the network as a gradient curve in the space of probability measures. Finally, under some hypothesis on the initial support, we prove that if the curve converges, then it necessarily converges towards a measure corresponding to the solution of the PDE considered. Note that our argumentation differs from [2, 3] since the convergence proof is based neither on a topological degree argument nor on the topological properties of the sphere. We rather use a homology argument taken from algebraic topology and a clever choice of activation function to prove that the dynamics of the support of the gradient curve of measures behaves nicely. Numerical experiments are conducted to confirm the potential of the proposed method.
In Section 2, the problem is presented in a more precise way and the link between probability measures and Barron functions is made clear. In Section 3, the gradient curve is introduced and our main theorems on its well-posedness and convergence are presented and proved. Finally, numerical experiments are presented in Section 4.
**Notation** : For \(1\leq p\leq\infty\), the notation \(|\cdot|_{p}\) designates the \(\ell^{p}\) norm of a vector of arbitrary finite dimension with particular attention to \(p=2\) (euclidean norm) for which the notation \(|\cdot|\) is preferred.
## 2 Preliminaries
This section introduces the mathematical framework we consider in this paper to relate two-layer neural networks and high-dimensional Poisson equations.
### Problem setting
The following Poisson equation is considered on \(\Omega:=\left[0,1\right]^{d}\) (\(d\in\mathbb{N}\)) with Neumann boundary condition : find \(u^{*}\in H^{1}(\Omega)\) with \(\int_{\Omega}u^{*}=0\) solution to :
\[\begin{cases}-\Delta u^{*}=f\text{ on }\Omega,\\ \partial_{n}u^{*}=0\text{ on }\partial\Omega,\end{cases} \tag{1}\]
where \(f\in L^{2}(\Omega)\) with \(\int_{\Omega}f=0\). Here (1) has to be understood in the variational sense, in the sense that \(u^{*}\) is equivalently the unique minimizer to :
\[u^{*}=\operatorname*{argmin}_{u\in H^{1}(\Omega)}\mathcal{E}(u), \tag{2}\]
Figure 1: A two-layer neural network of width \(m\)
where
\[\forall u\in H^{1}(\Omega),\ \ \ \mathcal{E}(u):=\int_{\Omega}\Big{(}\frac{|\nabla u |^{2}}{2}-fu\Big{)}\,dx+\frac{1}{2}\Big{(}\int_{\Omega}udx\Big{)}^{2}.\]
This can indeed be easily checked by classic Lax-Milgram arguments. The functional \(\mathcal{E}\) is strongly convex and differentiable with derivative given by Lemma 1.
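As an aside, \(\mathcal{E}(u)\) is straightforward to estimate by Monte Carlo integration over \(\Omega=[0,1]^{d}\), which is one natural way to handle it numerically. The following sketch is our own illustration (plain NumPy); the callables `u`, `grad_u` and `f` are hypothetical placeholders, not objects defined in this paper.

```python
import numpy as np

def energy(u, grad_u, f, d, n_samples=100_000, seed=0):
    """Monte Carlo estimate of E(u) = int(|grad u|^2/2 - f*u) dx + (int u dx)^2 / 2 on [0,1]^d."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, d))                 # uniform samples; [0,1]^d has unit volume
    dirichlet = 0.5 * (grad_u(x) ** 2).sum(axis=1).mean()
    source = (f(x) * u(x)).mean()
    return dirichlet - source + 0.5 * u(x).mean() ** 2

# toy check in d = 2 with u(x) = cos(pi x_1) and f = -Laplacian(u) = pi^2 cos(pi x_1)
u = lambda x: np.cos(np.pi * x[:, 0])
grad_u = lambda x: np.stack([-np.pi * np.sin(np.pi * x[:, 0]), np.zeros(len(x))], axis=1)
f = lambda x: np.pi ** 2 * np.cos(np.pi * x[:, 0])
print(energy(u, grad_u, f, d=2))                   # close to -pi^2/4 for this exact solution
```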
**Lemma 1**.: _The functional \(\mathcal{E}:H^{1}(\Omega)\to\mathbb{R}\) is continuous, differentiable and for all \(u\in H^{1}(\Omega)\), it holds that_
\[\forall v\in H^{1}(\Omega),\ d\,\mathcal{E}\,|_{u}(v)=\int_{\Omega}\left( \nabla u\cdot\nabla v-fv\right)dx+\int_{\Omega}udx\int_{\Omega}vdx.\]
It can easily be seen that points \(u\) where the differential is identically zero are solutions to equation (1).
**Remark 1**.: _The coercive symmetric bilinear form \(\bar{a}\) involved in the definition of the energy writes :_
\[\bar{a}(u,v):=\int_{\Omega}\nabla u\cdot\nabla vdx+\int_{\Omega}udx\int_{ \Omega}vdx.\]
_The energy \(\mathcal{E}\) can then be equivalently rewritten thanks to the bilinear form \(\bar{a}\) :_
\[\mathcal{E}(u)=\frac{1}{2}\bar{a}(u-u^{\star},u-u^{\star})-\frac{1}{2}\int_{ \Omega}|\nabla u^{\star}|^{2}dx.\]
The aim of the present work is to analyze a numerical method based on the use of infinite-width two-layer neural networks for the resolution of (1) with a specific focus on the case when \(d\) is large.
### Activation function
We introduce here the particular choice of activation function we consider in this work.
Let \(\sigma:\mathbb{R}\to\mathbb{R}\) be the classical Rectified Linear Unit (ReLU) function where :
\[\forall y\in\mathbb{R},\ \sigma(y):=\max(y,0). \tag{3}\]
Let \(\rho:\mathbb{R}\to\mathbb{R}\) be defined by
\[\rho(y):=\left\{\begin{array}{cl}Z\exp\left(-\frac{\tan\left(\frac{\pi}{2}y\right)^{2}}{2}\right)&\mbox{if }|y|<1,\\ 0&\mbox{otherwise,}\end{array}\right. \tag{4}\]
where the constant \(Z\in\mathbb{R}\) is defined such that the integral of \(\rho\) is equal to one. For all \(\tau>0\), we then define \(\rho_{\tau}:=\tau\rho(\tau\cdot)\) and \(\sigma_{\tau}:\mathbb{R}\to\mathbb{R}\) the function defined by
\[\forall y\in\mathbb{R},\ \sigma_{\tau}(y):=(\rho_{\tau}\star\sigma)(y). \tag{5}\]
We then have the following lemma.
**Lemma 2**.: _For any \(\tau>0\), it holds that_
1. \(\sigma_{\tau}\in\mathcal{C}^{\infty}(\mathbb{R})\) _and_ \(\sigma_{\tau}^{\prime}\) _is uniformly bounded,_
2. _for all_ \(y<-1/\tau\)_,_ \(\sigma_{\tau}(y)=0\)_,_
3. _for all_ \(y>1/\tau\)_,_ \(\sigma_{\tau}(y)=y\)_,_
4. _there exists_ \(C>0\) _such that for all_ \(\tau>0\)_,_ \[\|\sigma-\sigma_{\tau}\|_{H^{1}(\mathbb{R})}\leq\frac{C}{\sqrt{\tau}}.\]
Proof.: The first item \((i)\) is classic and left to the reader. For \((ii)\), we have :
\[\sigma_{\tau}(x)=\int_{-1/\tau}^{1/\tau}\rho_{\tau}(y)\sigma(x-y)dy \tag{6}\]
and if \(x<-1/\tau\) then \(x-y<0\) for \(-1/\tau<y<1/\tau\) and \(\sigma(x-y)=0\). This naturally gives \(\sigma_{\tau}(x)=0\).
For \((iii)\), using again (6), if \(x>1/\tau\), then \(x-y>0\) for \(-1/\tau<y<1/\tau\) and \(\sigma(x-y)=x-y\). As a consequence,
\[\sigma_{\tau}(x)=\int_{-1/\tau}^{1/\tau}\rho_{\tau}(y)(x-y)dy=x,\]
where we have used the fact that \(\int_{\mathbb{R}}\rho_{\tau}(y)dy=1\) and \(\int_{\mathbb{R}}y\rho_{\tau}(y)dy=0\) by symmetry of \(\rho\).
For \((iv)\), we have by \((ii)-(iii)\):
\[\|\sigma-\sigma_{\tau}\|_{L^{2}(\mathbb{R})}^{2}=\int_{-1/\tau}^{1/\tau}( \sigma(x)-\sigma_{\tau}(x))^{2}dx\leq\frac{8}{\tau^{2}},\]
where we used the fact that \(|\sigma(x)|,|\sigma_{\tau}(x)|\leq 1/\tau\) on \([-1/\tau,1/\tau]\). In a similar way,
\[\|\sigma^{\prime}-\sigma_{\tau}^{\prime}\|_{L^{2}(\mathbb{R})}^{2}=\int_{-1/ \tau}^{1/\tau}(\sigma^{\prime}(x)-\sigma_{\tau}^{\prime}(x))^{2}dx\leq\frac{4} {\tau}.\]
The two last inequalities gives \((iv)\).
In this work, we will rather use a hat version of the regularized ReLU activation function. More precisely, we define:
\[\forall y\in\mathbb{R},\ \sigma_{H,\tau}(y):=\sigma_{\tau}(y+1)-\sigma_{\tau}(2y )+\sigma_{\tau}(y-1). \tag{7}\]
We call hereafter this activation function the regularized HReLU (Hat ReLU) activation. When \(\tau=+\infty\), the following notation is proposed :
\[\forall y\in\mathbb{R},\ \sigma_{H}(y):=\sigma(y+1)-\sigma(2y)+\sigma(y-1). \tag{8}\]
The reason why we use this activation is that it has compact support and can be used to generate an arbitrary piecewise constant function on \([0,1]\). Note however that neither \(\sigma_{H,\tau}\) nor \(\sigma_{H}\) is homogeneous (in contrast to the activation functions considered in [2, 3]). Notice also that a direct corollary of Lemma 2 is that there exists a constant \(C>0\) such that for all \(\tau>0\),
\[\|\sigma_{H}-\sigma_{H,\tau}\|_{H^{1}(\mathbb{R})}\leq\frac{C}{\sqrt{\tau}} \tag{9}\]
We will also use the fact that there exists a constant \(C>0\) such that for all \(\tau>0\),
\[\|\sigma_{H,\tau}\|_{L^{\infty}(\mathbb{R})}\leq C,\ \|\sigma_{H,\tau}^{ \prime}\|_{L^{\infty}(\mathbb{R})}\leq C,\ \|\sigma_{H,\tau}^{\prime\prime}\|_{L^{\infty}(\mathbb{R})}\leq C\tau\ \ \text{and}\ \|\sigma_{H,\tau}^{\prime\prime\prime}\|_{L^{\infty}(\mathbb{R})}\leq C \tau^{2}. \tag{10}\]
Figure 2: The hat activation function and its regularization (\(\tau=4\))
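For concreteness, the construction above can be reproduced numerically. The sketch below is our own illustration (plain NumPy, simple Riemann-sum quadrature, none of it taken from the authors' code); it computes \(\sigma_{\tau}=\rho_{\tau}\star\sigma\) and the regularized hat activation \(\sigma_{H,\tau}\) of (7), and can be used to produce a plot like Figure 2.

```python
import numpy as np

relu = lambda y: np.maximum(y, 0.0)

def rho(y):
    """Mollifier of (4), up to the normalizing constant Z."""
    out = np.zeros_like(y)
    inside = np.abs(y) < 1.0
    out[inside] = np.exp(-np.tan(np.pi * y[inside] / 2.0) ** 2 / 2.0)
    return out

# normalizing constant Z so that rho integrates to one (Riemann sum on a fine grid)
_y = np.linspace(-1.0, 1.0, 4001)
Z = 1.0 / (rho(_y).sum() * (_y[1] - _y[0]))

def sigma_tau(x, tau, n_quad=2001):
    """Regularized ReLU sigma_tau = rho_tau * sigma of (5), by quadrature of (6)."""
    y = np.linspace(-1.0 / tau, 1.0 / tau, n_quad)
    weights = tau * Z * rho(tau * y)               # rho_tau(y) = tau * rho(tau * y)
    return (weights[None, :] * relu(x[:, None] - y[None, :])).sum(axis=1) * (y[1] - y[0])

def sigma_H_tau(x, tau):
    """Regularized hat activation sigma_{H,tau} of (7)."""
    return sigma_tau(x + 1.0, tau) - sigma_tau(2.0 * x, tau) + sigma_tau(x - 1.0, tau)

# compare the hat activation sigma_H of (8) with its regularization for tau = 4
x = np.linspace(-1.5, 1.5, 7)
sigma_H = relu(x + 1.0) - relu(2.0 * x) + relu(x - 1.0)
print(np.round(sigma_H, 3))
print(np.round(sigma_H_tau(x, 4.0), 3))
```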
### Spectral Barron space
We introduce the orthonormal basis in \(L^{2}(\Omega)\) composed of the eigenfunctions \(\{\phi_{k}\}_{k\in\mathbb{N}^{d}}\) of the Laplacian operator with Neumann boundary conditions, where
\[\forall k=(k_{1},\ldots,k_{d})\in\mathbb{N}^{d},\ \forall x:=(x_{1},\cdots,x_{d}) \in\Omega,\quad\phi_{k}(x_{1},\ldots,x_{d}):=\prod_{i=1}^{d}\cos(\pi k_{i}x_{i}). \tag{11}\]
Notice that \(\{\phi_{k}\}_{k\in\mathbb{N}^{d}}\) is also an orthogonal basis of \(H^{1}(\Omega)\). Using this basis, we have the Fourier representation formula for any function \(u\in L^{2}(\Omega):\)
\[u=\sum_{k\in\mathbb{N}^{d}}\hat{u}(k)\phi_{k},\]
where for all \(k\in\mathbb{N}^{d}\), \(\hat{u}(k):=\langle\phi_{k},u\rangle_{L^{2}(\Omega)}\). This allows to define the (spectral) Barron space [15] as follows :
**Definition 1**.: _For all \(s>0\), the Barron space \(\mathcal{B}^{s}(\Omega)\) is defined as :_
\[\mathcal{B}^{s}(\Omega):=\Big{\{}u\in L^{1}(\Omega):\sum_{k\in\mathbb{N}^{d}}(1+\pi^{s}|k|_{1}^{s})|\hat{u}(k)|<+\infty\Big{\}} \tag{12}\]
_and the space \(\mathcal{B}^{2}(\Omega)\) is denoted \(\mathcal{B}(\Omega)\). Moreover, the space \(\mathcal{B}^{s}(\Omega)\) is embedded with the norm :_
\[\|u\|_{\mathcal{B}^{s}(\Omega)}:=\sum_{k\in\mathbb{N}^{d}}(1+\pi^{s}|k|_{1}^{s} )|\hat{u}(k)|. \tag{13}\]
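To make Definition 1 concrete, the sum in (13) can be truncated and evaluated numerically. The sketch below is our own illustration (only practical for small \(d\) and a small cutoff \(K\); the coefficients \(\hat{u}(k)=\langle\phi_{k},u\rangle_{L^{2}(\Omega)}\) are approximated by a simple midpoint rule); it is not meant as an efficient way of computing Barron norms.

```python
import itertools
import numpy as np

def spectral_barron_norm(u, d, s=2, K=4, n_grid=64):
    """Truncated estimate of ||u||_{B^s} = sum_{|k|_inf <= K} (1 + pi^s |k|_1^s) |u_hat(k)|."""
    pts = (np.arange(n_grid) + 0.5) / n_grid
    grid = np.stack(np.meshgrid(*([pts] * d), indexing="ij"), axis=-1).reshape(-1, d)
    ux = u(grid)
    total = 0.0
    for k in itertools.product(range(K + 1), repeat=d):
        k = np.array(k)
        phi_k = np.prod(np.cos(np.pi * grid * k), axis=1)   # phi_k(x) = prod_i cos(pi k_i x_i)
        u_hat = (ux * phi_k).mean()                          # midpoint rule on the unit cube
        total += (1.0 + np.pi ** s * k.sum() ** s) * abs(u_hat)
    return total

# example: u(x) = cos(pi x_1) cos(pi x_2) on [0,1]^2 has a single nonzero coefficient
print(spectral_barron_norm(lambda x: np.cos(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1]), d=2))
```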
By [15, Lemma 4.3], it is possible to relate the Barron space to traditional Sobolev spaces :
**Lemma 3**.: _The following continuous injections hold :_
* \(\mathcal{B}(\Omega)\hookrightarrow H^{1}(\Omega)\)_,_
* \(\mathcal{B}^{0}(\Omega)\hookrightarrow L^{\infty}(\Omega)\)_._
The space \(\mathcal{B}(\Omega)\) has interesting approximation properties related to neural network schemes. We introduce the following approximation space:
**Definition 2**.: _Let \(\chi:\mathbb{R}\rightarrow\mathbb{R}\) be measurable, \(m\in\mathbb{N}^{*}\) and \(B>0\). The space \(\mathcal{F}_{\chi,m}(B)\) is defined as:_
\[\mathcal{F}_{\chi,m}(B):=\left\{c+\sum_{i=1}^{m}a_{i}\chi(w_{i}\cdot x+b_{i}) :c,a_{i},b_{i}\in\mathbb{R},\,w_{i}\in\mathbb{R}^{d},\ |c|\leq 2B,|w_{i}|=1,|b_{i}| \leq 1,\sum_{i=1}^{m}|a_{i}|\leq 4B\right\} \tag{14}\]
Now, we are able to state the main approximation theorem.
**Theorem 1**.: _For any \(u\in\mathcal{B}(\Omega)\), \(m\in\mathbb{N}^{*}\) :_
1. _there exists_ \(u_{m}\in\mathcal{F}_{\sigma_{H},m}(\|u\|_{\mathcal{B}(\Omega)})\) _such that :_ \[\|u-u_{m}\|_{H^{1}(\Omega)}\leq\frac{C\|u\|_{\mathcal{B}(\Omega)}}{\sqrt{m}},\]
2. _there exists_ \(\tilde{u}_{m}\in\mathcal{F}_{\sigma_{H,m},m}(\|u\|_{\mathcal{B}(\Omega)})\) _such that :_ \[\|u-\tilde{u}_{m}\|_{H^{1}(\Omega)}\leq\frac{C\|u\|_{\mathcal{B}(\Omega)}}{ \sqrt{m}}.\] (15)
_where for both items, \(C\) is a universal constant which depends neither on \(d\) nor on \(u\)._
Proof.: Let \(B:=\|u\|_{\mathcal{B}(\Omega)}\). We just give a sketch of the proof of (ii), (i) being derived from similar arguments as in [15, Theorem 2.1]
By (i), there exists \(u_{m}\in\mathcal{F}_{\sigma_{H},m}(B)\) such that
\[\|u-u_{m}\|_{H^{1}(\Omega)}\leq\frac{CB}{\sqrt{m}}.\]
The function \(u_{m}\) can be written as :
\[u_{m}(x)=c+\sum_{i=1}^{m}a_{i}\sigma_{H}(w_{i}\cdot x+b_{i})\]
for some \(c,a_{i},b_{i}\in\mathbb{R}\), \(w_{i}\in\mathbb{R}^{d}\) for \(i=1,\ldots,m\) with \(|c|\leq 2B,|w_{i}|=1,|b_{i}|\leq 1,\sum_{i=1}^{m}|a_{i}|\leq 4B\).
Since, by Lemma 2 \((iv)\), there exists \(C>0\) such that for all \(\tau>0\), \(\|\sigma_{H}-\sigma_{H,\tau}\|_{H^{1}(\mathbb{R})}\leq\frac{C}{\sqrt{\tau}}\), it is easy to see that
\[\|\tilde{u}_{m}-u_{m}\|_{H^{1}(\Omega)}\leq\frac{CB}{\sqrt{m}}\]
where :
\[\tilde{u}_{m}(x)=c+\sum_{i=1}^{m}a_{i}\sigma_{H,m}(w_{i}\cdot x+b_{i}).\]
Consequently,
\[\|u-\tilde{u}_{m}\|_{H^{1}(\Omega)}\leq\frac{CB}{\sqrt{m}}\]
which yields the desired result.
**Remark 2**.: _In other words, a Barron function can be approximated in \(H^{1}(\Omega)\) by a two-layer neural network of width \(m\) with precision \(O\left(\frac{1}{\sqrt{m}}\right)\) when the activation function is the HReLU one._
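For concreteness, an element of the class \(\mathcal{F}_{\sigma_{H},m}(B)\) can be written down in a few lines. The sketch below is our own illustration with hypothetical, randomly drawn parameters respecting the constraints of (14); the regularized activation \(\sigma_{H,\tau}\) could be substituted for \(\sigma_{H}\).

```python
import numpy as np

def sigma_H(y):
    """Hat activation (8)."""
    relu = lambda z: np.maximum(z, 0.0)
    return relu(y + 1.0) - relu(2.0 * y) + relu(y - 1.0)

def sample_network(d, m, B, seed=0):
    """Draw (hypothetical) parameters satisfying the constraints defining F_{chi,m}(B) in (14)."""
    rng = np.random.default_rng(seed)
    c = rng.uniform(-2 * B, 2 * B)                        # |c| <= 2B
    w = rng.normal(size=(m, d))
    w /= np.linalg.norm(w, axis=1, keepdims=True)         # |w_i| = 1
    b = rng.uniform(-1.0, 1.0, size=m)                    # |b_i| <= 1
    a = rng.normal(size=m)
    a *= 4 * B / np.abs(a).sum()                          # sum_i |a_i| <= 4B (here with equality)
    return c, a, w, b

def evaluate(x, c, a, w, b):
    """u_m(x) = c + sum_i a_i sigma_H(w_i . x + b_i), for points x of shape (n, d)."""
    return c + sigma_H(x @ w.T + b) @ a

x = np.random.default_rng(1).random((5, 3))               # five points in [0,1]^3
print(evaluate(x, *sample_network(d=3, m=100, B=1.0)))
```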
In the sequel, we assume that any parameter vector \(\theta=(c,a,w,b)\) takes values in the neural network parameter set
\[\Theta:=\mathbb{R}\times\mathbb{R}\times S_{\mathbb{R}^{d}}(1)\times[-\sqrt{ d}-2,\sqrt{d}+2], \tag{16}\]
with \(S_{\mathbb{R}^{d}}(1)\) the unit sphere of \(\mathbb{R}^{d}\). In addition, for all \(r>0\), we denote by
\[K_{r}:=[-2r,2r]\times[-4r,4r]\times S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2, \sqrt{d}+2]. \tag{17}\]
The particular choice of the range of values of the parameter \(b\), namely \([-\sqrt{d}-2,\sqrt{d}+2]\), will be made clear in the following. Moreover, let \(\mathcal{P}_{2}(\Theta)\) (respectively \(\mathcal{P}_{2}(K_{r})\)) denote the set of probability measures on \(\Theta\) (respectively on \(K_{r}\)) with finite second-order moments.
Let us make the following remark.
**Remark 3**.: _Let \(m\in\mathbb{N}^{*}\), \(u_{m}\in\mathcal{F}_{\chi,m}(B)\) with \(B>0\) and \(\chi:\mathbb{R}\rightarrow\mathbb{R}\). Then, there exists \(c,a_{i},b_{i}\in\mathbb{R}\), \(w_{i}\in\mathbb{R}^{d}\) for \(i=1,\ldots,m\) with \(|c|\leq 2B,|w_{i}|=1,|b_{i}|\leq 1,\sum_{i=1}^{m}|a_{i}|\leq 4B\) such that for all \(x\in\Omega\),_
\[u_{m}(x) =c+\sum_{i=1}^{m}a_{i}\chi(w_{i}\cdot x+b_{i})\] \[=\sum_{i=1}^{m}\left(c+\sum_{j=1}^{m}|a_{j}|sign(a_{i})\chi(w_{i }\cdot x+b_{i})\right)\frac{|a_{i}|}{\sum_{j=1}^{m}|a_{j}|}\] \[=\int_{\Theta}[c+a\chi(w\cdot x+b)]d\mu_{m}(c,a,w,b),\]
_where the measure \(\mu_{m}\) is a probability measure on \(\Theta\) given by :_
\[\mu_{m}:=\sum_{i=1}^{m}\frac{|a_{i}|}{\sum_{j=1}^{m}|a_{j}|}\delta_{(c,\sum_{j=1} ^{m}|a_{j}|sign(a_{i}),w_{i},b_{i})}.\]
_Remark that \(\mu_{m}\) has support in \(K_{B}\). In addition, the sequence \((\mu_{m})_{m\in\mathbb{N}^{*}}\) is uniformly (with respect to \(m\)) bounded in \(\mathcal{P}_{2}(\Theta)\)._
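The reweighting used in Remark 3 is easy to implement; the sketch below (our own illustration, not code from the paper) builds the weights and atoms of \(\mu_{m}\) from given network parameters and checks that the integral of \(\Phi\) against \(\mu_{m}\) reproduces \(u_{m}\).

```python
import numpy as np

def lift_to_measure(c, a, w, b):
    """Atoms theta_i = (c, A*sign(a_i), w_i, b_i) and weights |a_i|/A with A = sum_j |a_j|,
    as in Remark 3 (assumes at least one a_i is nonzero)."""
    A = np.abs(a).sum()
    weights = np.abs(a) / A
    atoms = [(c, A * np.sign(ai), wi, bi) for ai, wi, bi in zip(a, w, b)]
    return weights, atoms

def integrate_Phi(x, weights, atoms, activation):
    """int_Theta Phi(theta; x) dmu_m(theta) = sum_i weights_i * (c + a_i * chi(w_i . x + b_i))."""
    out = np.zeros(len(x))
    for p, (c, a_i, w_i, b_i) in zip(weights, atoms):
        out += p * (c + a_i * activation(x @ w_i + b_i))
    return out

# consistency check against the direct evaluation c + sum_i a_i chi(w_i . x + b_i)
rng = np.random.default_rng(0)
d, m = 3, 7
c, a = 0.3, rng.normal(size=m)
w = rng.normal(size=(m, d)); w /= np.linalg.norm(w, axis=1, keepdims=True)
b = rng.uniform(-1, 1, size=m)
chi = lambda y: np.maximum(y, 0.0)
x = rng.random((4, d))
direct = c + chi(x @ w.T + b) @ a
weights, atoms = lift_to_measure(c, a, w, b)
print(np.allclose(direct, integrate_Phi(x, weights, atoms, chi)))   # True
```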
For a general domain \(\Omega\) which is not of the form \(\Omega=[0,1]^{d}\), the solution to equation (1) does not necessarily belong to the Barron space even if the source term has finite Barron norm. Nevertheless, in our case \(\left(\Omega=[0,1]^{d}\right)\), there is an explicit bound on the Barron norm of the solution in terms of that of the source. This gives hope for a neural network approximation of the solution.
**Theorem 2**.: _[_15_]_ _Let \(u^{*}\) be the solution of the equation (1) with \(f\in\mathcal{B}^{0}(\Omega)\), then \(u^{*}\in\mathcal{B}(\Omega)\). Moreover, the following estimate holds :_
\[\|u^{*}\|_{\mathcal{B}(\Omega)}\leq d\|f\|_{\mathcal{B}^{0}(\Omega)}.\]
### Infinite width two-layer neural networks
In order to ease the notation for future computations, for all \(\tau>0\), we introduce the function \(\Phi_{\tau}:\Theta\times\Omega\to\mathbb{R}\) defined by
\[\forall\theta:=(c,a,w,b)\in\Theta,\;\forall x\in\Omega,\quad\Phi_{\tau}( \theta;x):=c+a\sigma_{H,\tau}(w\cdot x+b) \tag{18}\]
and \(\Phi_{\infty}:\Theta\times\Omega\to\mathbb{R}\) defined such that:
\[\forall\theta:=(c,a,w,b)\in\Theta,\;\forall x\in\Omega,\quad\Phi_{\infty}( \theta;x):=c+a\sigma_{H}(w\cdot x+b). \tag{19}\]
The space \(\mathcal{P}_{2}(\Theta)\) is embedded with the 2-Wasserstein distance :
\[\forall\mu,\nu\in\mathcal{P}_{2}(\Theta),\quad W_{2}^{2}(\mu,\nu):=\inf_{\gamma\in\Gamma(\mu,\nu)}\int_{\Theta^{2}}d(\theta,\tilde{\theta})^{2}d\gamma(\theta,\tilde{\theta}),\]
where \(\Gamma(\mu,\nu)\) is the set of probability measures on \(\Theta^{2}\) with marginals given respectively by \(\mu\) and \(\nu\) and where \(d\) is the geodesic distance in \(\Theta\). For the interested reader, the geodesic distance between \(\theta,\tilde{\theta}\in\Theta\) can be computed as :
\[d(\theta,\tilde{\theta})=\sqrt{(c-\tilde{c})^{2}+(a-\tilde{a})^{2}+d_{S_{\mathbb{R}^{d}}(1)}(w,\tilde{w})^{2}+(b-\tilde{b})^{2}}.\]
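As an aside, this distance is straightforward to evaluate; the minimal sketch below is our own illustration, using the great-circle distance \(\arccos(w\cdot\tilde{w})\) for the spherical component.

```python
import numpy as np

def dist_Theta(theta1, theta2):
    """Geodesic distance on Theta = R x R x S^{d-1} x [-sqrt(d)-2, sqrt(d)+2].

    theta = (c, a, w, b) with w a unit vector; the sphere part uses the great-circle
    distance d_S(w, w~) = arccos(w . w~)."""
    c1, a1, w1, b1 = theta1
    c2, a2, w2, b2 = theta2
    dS = np.arccos(np.clip(np.dot(w1, w2), -1.0, 1.0))
    return np.sqrt((c1 - c2) ** 2 + (a1 - a2) ** 2 + dS ** 2 + (b1 - b2) ** 2)
```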
For all \(\tau,r>0\), we introduce the operator \(P_{\tau}\) and the functional \(\mathcal{E}_{\tau,r}\) defined as follows :
**Definition 3**.: _The operator \(P_{\tau}:\mathcal{P}_{2}(\Theta)\to H^{1}(\Omega)\) is defined for all \(\mu\in\mathcal{P}_{2}(\Theta)\) as :_
\[P_{\tau}(\mu):=\int_{\Theta}\Phi_{\tau}(\theta;x)d\mu(\theta).\]
_Additionally, we define the functional \(\mathcal{E}_{\tau,r}(\mu):\mathcal{P}_{2}(\Theta)\to\mathbb{R}\) as :_
\[\mathcal{E}_{\tau,r}(\mu):=\begin{cases}\mathcal{E}(P_{\tau}(\mu))\text{ if }\mu(K_{r})=1\\ \qquad+\infty\text{ otherwise}.\end{cases}\]
_._
**Proposition 1**.: _For all \(0<\tau,r<\infty\), the functional \(\mathcal{E}_{\tau,r}\) is weakly lower semicontinuous._
Proof.: Let \((\mu_{n})_{n\in\mathbb{N}^{*}}\) be a sequence of elements of \(\mathcal{P}_{2}(\Theta)\) which narrowly converges towards some \(\mu\in\mathcal{P}_{2}(\Theta)\). Without loss of generality, we can assume that \(\mu_{n}\) is supported in \(K_{r}\) for all \(n\in\mathbb{N}^{*}\). Then, it holds that :
* the limit \(\mu\) has support in \(K_{r}\) (by Portmanteau theorem);
* moreover, let \(u_{n}:\Omega\to\mathbb{R}\) be defined such that for all \(x\in\Omega\), \[u_{n}(x):=\int_{\Theta}\Phi_{\tau}(\theta;x)d\mu_{n}(\theta)=\int_{K_{r}}\Phi_{ \tau}(\theta;x)\,d\mu_{n}(\theta).\] Since for all \(x\in\Omega\), the function \(K_{r}\ni\theta\mapsto\Phi_{\tau}(\theta;x)\) is continuous and bounded, it then holds that, for all \(x\in\Omega\), \[u_{n}(x)\underset{n\to\infty}{\longrightarrow}u(x):=\int_{K_{r}}\Phi_{\tau}( \theta;x)d\mu(\theta)=\int_{\Theta}\Phi_{\tau}(\theta;x)d\mu(\theta),\] where the last equality comes from the fact that \(\mu\) is supported in \(K_{r}\).
* It actually holds that the sequence \((u_{n})_{n\in\mathbb{N}^{*}}\) is uniformly bounded in \(\mathcal{C}(\Omega)\). Indeed, there exists \(C_{\tau}>0\) such that for all \(x\in\Omega\) and \(n\in\mathbb{N}^{*}\), we have \[u_{n}(x)^{2} =\left(\int_{K_{r}}\Phi_{\tau}(\theta;x)d\mu_{n}(\theta)\right)^{2}\] \[\leq\int_{K_{r}}\Phi_{\tau}^{2}(\theta;x)d\mu_{n}(\theta)\] \[\leq Cr^{2},\] where the last inequality comes from (10).
As a consequence of the Lebesgue dominated convergence theorem, the sequence \((u_{n})_{n\in\mathbb{N}^{*}}\) strongly converges towards \(u\) in \(L^{2}(\Omega)\). Reproducing the same argument as above for the sequence \((\nabla u_{n})_{n\in\mathbb{N}^{*}}\), one easily proves that this strong convergence holds in fact in \(H^{1}(\Omega)\). The fact that the functional \(\mathcal{E}:H^{1}(\Omega)\to\mathbb{R}\) is continuous allows us to conclude.
**Remark 4**.: _In \(\mathcal{P}_{2}(K_{r})\), the weak convergence is metrized by the Wasserstein distance. Hence, \(\mathcal{E}_{\tau,r}\) is lower semicontinuous as a functional from \((\mathcal{P}_{2}(\Theta),W_{2})\) to \((\mathbb{R},|\cdot|)\)._
Finally, the lower semicontinuity of \(\mathcal{E}_{\tau,r}\) and the compactness of \(\mathcal{P}_{2}(K_{r})\) (as \(K_{r}\) is compact) allow us to prove the existence of at least one solution to the following minimization problem :
**Problem 1**.: _For \(0<\tau<\infty\) and \(0<r<+\infty\), let \(\mu_{\tau,r}^{\star}\in\mathcal{P}_{2}(\Theta)\) be solution to_
\[\mu_{\tau,r}^{\star}\in\operatorname*{argmin}_{\mu\in\mathcal{P}_{2}(\Theta) }\mathcal{E}_{\tau,r}(\mu). \tag{20}\]
For large values of \(\tau\) and \(r=d\|f\|_{\mathcal{B}^{0}(\Omega)}\), solutions of (20) yield accurate approximations of the solution of (1). This result is stated in Theorem 3.
**Theorem 3**.: _There exists \(C>0\) such that for all \(m\in\mathbb{N}^{*}\) and any solution \(\mu_{m,d\|f\|_{\mathcal{B}^{0}(\Omega)}}^{\star}\) to (20) with \(\tau=m\) and \(r=d\|f\|_{\mathcal{B}^{0}(\Omega)}\), it holds that:_
\[\left\|u^{\star}-\int_{\Theta}\Phi_{m}(\theta;\cdot)d\mu_{m,d\|f\|_{\mathcal{B }^{0}(\Omega)}}^{\star}(\theta)\right\|_{H^{1}(\Omega)}\leq Cd\frac{\|f\|_{ \mathcal{B}^{0}(\Omega)}}{\sqrt{m}}\]
_where \(u^{\star}\) is the solution of the equation (1)._
Proof.: For all \(m\in\mathbb{N}^{*}\), let \(\tilde{u}_{m}\in\mathcal{F}_{\sigma_{H,m},m}(\|u^{*}\|_{\mathcal{B}})\) satisfying (15) for \(u=u^{*}\) (using Theorem 1). Since \(\|u^{*}\|_{\mathcal{B}(\Omega)}\leq d\|f\|_{\mathcal{B}^{0}(\Omega)}\) thanks to Theorem 2 and by Remark 3, \(\tilde{u}_{m}\) can be rewritten using a probability measure \(\mu_{m}\) with support in \(K_{d\|f\|_{\mathcal{B}^{0}(\Omega)}}\) as :
\[\forall x\in\Omega,\quad\tilde{u}_{m}(x)=\int_{\Theta}\Phi_{m}(\theta;x)\,d \mu_{m}(\theta).\]
Let \(\mu_{m,d\|f\|_{\mathcal{B}^{0}(\Omega)}}^{\star}\) be a minimizer of (20) with \(\tau=m\) and \(r=d\|f\|_{\mathcal{B}^{0}(\Omega)}\). Then, it holds that:
\[\mathcal{E}_{m,d\|f\|_{\mathcal{B}^{0}(\Omega)}}\left(\mu_{m,d\|f\|_{\mathcal{ B}^{0}(\Omega)}}^{\star}\right)\leq\mathcal{E}_{m,d\|f\|_{\mathcal{B}^{0}( \Omega)}}(\mu_{m}),\]
which by Remark 1, is equivalent to :
\[\bar{a}(u^{\star}_{m}-u^{\star},u^{\star}_{m}-u^{\star})\leq\bar{a}(\tilde{u}_{m}- u^{\star},\tilde{u}_{m}-u^{\star}).\]
where for all \(x\in\Omega\),
\[u^{\star}_{m}(x):=\int_{\Theta}\Phi_{m}(\theta;x)\,d\mu^{\star}_{m,d\|f\|_{ \mathcal{B}^{0}(\Omega)}}(\theta).\]
Denoting by \(\alpha\) and \(L\) respectively the coercivity and continuity constants of \(\bar{a}\), we obtain that
\[\|u^{\star}_{m}-u^{\star}\|_{H^{1}(\Omega)}\leq\frac{L}{\alpha}\|\tilde{u}_{m} -u^{\star}\|_{H^{1}(\Omega)}\leq Cd\frac{\|f\|_{\mathcal{B}^{0}(\Omega)}}{ \sqrt{m}}.\]
### Main results
In this section, we find a solution to Problem 1 using gradient curve techniques. More precisely, we will define and prove the existence of a gradient descent curve such that, if it converges, then it necessarily converges towards a global minimizer. In all the sequel, we fix an a priori chosen value of \(\tau>0\).
#### 2.5.1 Well-posedness
First, we introduce the concept of gradient curve which formally writes for \(r>0\):
\[\forall t\geq 0,\,\,\,\frac{d}{dt}\mu^{r}(t)=-\nabla\,\mathcal{E}_{\tau,r}( \mu^{r}(t)). \tag{21}\]
Equation (21) has no mathematical sense since the space \(\mathcal{P}_{2}(\Theta)\) is not a Hilbert space and, consequently, the gradient of \(\mathcal{E}_{\tau,r}\) is not available in a classical sense. Nevertheless, \(\mathcal{P}_{2}(\Theta)\) being an Alexandrov space, it has a differential structure which allows one to define gradients properly. The careful reader wishing to understand this structure can find a complete recap of all useful definitions and properties of Alexandrov spaces in Appendix A.
Before stating our main well-posedness results, we recall the basic definition of the local slope [16]. In the sequel, we denote by \(\mathcal{P}_{2}(K_{r})\) the set of probability measures on \(\Theta\) with support included in \(K_{r}\).
**Definition 4**.: _At every \(\mu\in\mathcal{P}_{2}(K_{r})\), the local slope writes :_
\[|\nabla^{-}\,\mathcal{E}_{\tau,r}\,|(\mu):=\limsup_{\nu\to\mu}\frac{( \mathcal{E}_{\tau,r}(\mu)-\mathcal{E}_{\tau,r}(\nu))_{+}}{W_{2}(\mu,\nu)}\]
_which may be infinite._
In Section 3.1, we prove two theorems; the first one states the existence and the uniqueness of the gradient curve with respect to \(\mathcal{E}_{\tau,r}\) when \(r<\infty\).
**Theorem 4**.: _For all \(\mu_{0}\in\mathcal{P}_{2}(K_{r})\), there exists a unique locally Lipschitz gradient curve \(\mu^{r}:\mathbb{R}_{+}\to\mathcal{P}_{2}(K_{r})\) which is also a curve of maximal slope with respect to the upper gradient \(|\nabla^{-}\,\mathcal{E}_{\tau,r}\,|\). Moreover, for almost all \(t\geq 0\), there exists a vector field \(v^{r}_{t}\in L^{2}(\Theta;\,d\mu^{r}(t))^{d+3}\) such that_
\[\int_{\Theta}\|v^{r}_{t}\|^{2}\,d\mu^{r}(t)=\|v^{r}_{t}\|^{2}_{L^{2}(\Theta;\, d\mu^{r}(t))}<+\infty \tag{22}\]
_and :_
\[\left\{\begin{array}{rl}\partial_{t}\mu^{r}(t)+\operatorname{div}(v^{r}_{t} \mu^{r}(t))=&0\\ \mu^{r}(0)=&\mu_{0}\\ \mu^{r}(t)\in&\mathcal{P}_{2}(K_{r}).\end{array}\right. \tag{23}\]
In the second theorem, we focus on the case when \(r=+\infty\) for which we formally take the limit of gradient curves \((\mu^{r})_{r>0}\) as \(r\) goes to infinity. Introducing the following quantities, the definition of which will be made precise below :
\[\left\{\begin{array}{rl}\phi_{\mu}(\theta):=&d\,\mathcal{E}\,|_{P_{\tau}(\mu)}(\Phi_{\tau}(\theta;\cdot)),\\ v_{\mu}(\theta):=&\nabla_{\theta}\phi_{\mu}(\theta),\end{array}\right.\]
and \(\mathbf{P}\) the projection on the tangent bundle of \(\Theta\) the precise definition of which is given in Definition 5, the following theorem is proved.
**Theorem 5**.: _For all \(\mu_{0}\) compactly supported, there exists a curve \(\mu:\mathbb{R}_{+}\to\mathcal{P}_{2}(\Theta)\) such that :_
\[\begin{cases}\partial_{t}\mu(t)+\operatorname{div}((-\mathbf{P}v_{\mu(t)})\mu( t))=0\\ \mu(0)=\mu_{0}\end{cases} \tag{24}\]
_and for almost all \(t\geq 0\) :_
\[\int_{\Theta}|\mathbf{P}v_{\mu(t)}|^{2}\;d\mu(t)=\|\mathbf{P}v_{\mu(t)}\|_{L^ {2}(\Theta;d\mu(t))}^{2}<+\infty.\]
_Moreover, the solution satisfies :_
\[\forall t\geq 0,\mu(t)=\chi(t)\#\mu_{0}\]
_with \(\chi:\mathbb{R}_{+}\times\Theta\to\Theta\) solution to_
\[\begin{cases}\partial_{t}\chi(t;\theta)=-\mathbf{P}v_{\mu(t)}(\theta)\\ \chi(0;\theta)=\theta.\end{cases}\]
In Remark 6, we argue why proving the existence and uniqueness of a gradient curve for \(\mathcal{E}_{\tau,\infty}\) is not reachable. This is why \(\mu:\mathbb{R}_{+}\to\mathcal{P}_{2}(\Theta)\) is described as a limiting gradient curve and not a gradient curve itself in Theorem 5.
#### 2.5.2 Link with neural network
Our motivation for considering the analysis presented in the previous section is that we can link the learning phase of a neural network with the optimization procedure given by gradient curves defined above. Indeed, let \(m>0\) be an integer. A two-layer neural network \(u\) with \(\sigma_{H,\tau}\) as activation function can always be written as :
\[u=\frac{1}{m}\sum_{i=1}^{m}\Phi_{\tau}(\theta_{i},.) \tag{25}\]
with \(\theta_{i}\in\Theta\). Then, we differentiate the functional \(\mathcal{F}:(\theta_{1},\cdots,\theta_{m})\to\mathcal{E}\left(\frac{1}{m}\sum _{i=1}^{m}\Phi_{\tau}(\theta_{i},\cdot)\right)\) :
\[d\,\mathcal{F}\,|_{\theta_{1},\cdots,\theta_{m}}(d\theta_{1},\cdots,d\theta_{ m})=d\,\mathcal{E}\,|_{u}\left(\frac{1}{m}\sum_{i=1}^{m}\nabla_{\theta}\Phi_{ \tau}(\theta_{i},\cdot)\cdot d\theta_{i}\right).\]
Thus, the gradient of \(\mathcal{F}\) is given by :
\[\nabla_{\theta_{i}}\,\mathcal{F}(\theta_{1},\cdots,\theta_{m})=\frac{1}{m} \nabla_{\theta}\phi_{\mu}(\theta_{i})\]
where :
\[\mu:=\frac{1}{m}\sum_{i=1}^{m}\delta_{\theta_{i}}\in\mathcal{P}_{2}(\Theta). \tag{26}\]
As a consequence, a gradient descent of \(\mathcal{F}\), in the sense that, for all \(1\leq i\leq m\),
\[\begin{cases}\frac{d}{dt}\theta_{i}(t)=-m\mathbf{P}\nabla_{\theta_{i}}\, \mathcal{F}(\theta_{1}(t),\cdots,\theta_{m}(t))\\ \theta_{i}(0)=\theta_{i,0},\end{cases}\]
is equivalent to the gradient curve of \(\mathcal{E}_{\tau,+\infty}\) with initial condition given by
\[\mu_{0,m}:=\frac{1}{m}\sum_{i=1}^{m}\delta_{\theta_{i,0}}. \tag{27}\]
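In practice, this particle system can be simulated with an explicit Euler discretization, estimating \(\mathcal{E}\) by Monte Carlo on \(\Omega\) and computing the parameter gradients by automatic differentiation. The sketch below is a minimal illustration of ours and not the authors' code: it assumes PyTorch, uses the non-regularized activation \(\sigma_{H}\) instead of \(\sigma_{H,\tau}\), a hypothetical source term \(f\), and replaces the projection \(\mathbf{P}\) onto the tangent space of the sphere by a simple renormalization of \(w_{i}\) after each step.

```python
import math
import torch

torch.manual_seed(0)
d, m, n_mc, lr, steps = 5, 256, 4096, 1e-3, 2000       # hypothetical sizes for the sketch

def sigma_H(y):                                         # hat activation (8); sigma_{H,tau} could be used instead
    return torch.relu(y + 1) - torch.relu(2 * y) + torch.relu(y - 1)

def f(x):                                               # hypothetical mean-zero source term
    return math.pi ** 2 * torch.cos(math.pi * x[:, 0])

# particles theta_i = (c_i, a_i, w_i, b_i); the represented function is u = (1/m) sum_i Phi(theta_i; .)
c = torch.zeros(m, requires_grad=True)
a = torch.zeros(m, requires_grad=True)
w = torch.nn.functional.normalize(torch.randn(m, d), dim=1).requires_grad_(True)
b = ((2 * torch.rand(m) - 1) * (d ** 0.5 + 2)).requires_grad_(True)

for it in range(steps):
    x = torch.rand(n_mc, d, requires_grad=True)         # Monte Carlo points in Omega = [0,1]^d
    u = (c + a * sigma_H(x @ w.T + b)).mean(dim=1)      # u(x) = (1/m) sum_i Phi(theta_i; x)
    (grad_u,) = torch.autograd.grad(u.sum(), x, create_graph=True)
    energy = 0.5 * (grad_u ** 2).sum(dim=1).mean() - (f(x) * u).mean() + 0.5 * u.mean() ** 2
    for p in (c, a, w, b):
        p.grad = None
    energy.backward()
    with torch.no_grad():                               # explicit Euler step; the factor m mirrors the text
        for p in (c, a, w, b):
            p -= lr * m * p.grad
        w /= w.norm(dim=1, keepdim=True)                # crude substitute for the projection P on the sphere
        b.clamp_(-(d ** 0.5) - 2, d ** 0.5 + 2)
    if it % 500 == 0:
        print(it, float(energy))
```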
**Theorem 6**.: _Let \(\mu_{0}\in\mathcal{P}_{2}(\Theta)\) be compactly supported and let \((\mu_{0,m})_{m\in\mathbb{N}^{*}}\) be such that for all \(m\in\mathbb{N}^{*}\), \(\mu_{0,m}\) is of the form (27) for some \((\theta_{i,0})_{1\leq i\leq m}\subset\mathrm{Supp}(\mu_{0})\) and \(\lim\limits_{m\to+\infty}W_{2}(\mu_{0,m},\mu_{0})=0\)._
_Let \(\mu:\mathbb{R}_{+}\to\mathcal{P}_{2}(\Theta)\) and \(\mu_{m}:\mathbb{R}_{+}\to\mathcal{P}_{2}(\Theta)\) be the gradient curves constructed in Theorem 5 associated respectively to the initial conditions \(\mu(0)=\mu_{0}\) and \(\mu_{m}(0)=\mu_{0,m}\). Then for all \(T>0\), there exists a constant \(C_{T}>0\) such that_
\[\sup\limits_{0\leq t\leq T}W_{2}(\mu(t),\mu_{m}(t))\leq C_{T}W_{2}(\mu_{0},\mu _{0,m}).\]
This theorem is proved in Section 3.2.
#### 2.5.3 Convergence
Our convergence result towards a global optimum is based on the following hypothesis on the initial measure \(\mu_{0}\) :
**Hypothesis 1**.: _The support of the measure \(\mu_{0}\) verifies :_
\[\{0\}\times\{0\}\times S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2,\sqrt{d}+2] \subset\mathrm{Supp}(\mu_{0})\]
Under this hypothesis, one gets a convergence result in the spirit of a previous work by Bach and Chizat [2] :
**Theorem 7**.: _If \(\mu_{0}\) satisfies Hypothesis 1 and \(\mu(t)\) converges towards \(\mu^{\star}\in\mathcal{P}_{2}(\Theta)\) as \(t\) goes to infinity, then \(\mu^{\star}\) is optimal for Problem 1._
This theorem is proved in Section 3.3.
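A simple way to build atomic initial measures \(\mu_{0,m}\) whose supports lie in the set required by Hypothesis 1 is to set \(c=a=0\) and to sample \((w,b)\) uniformly; the short sketch below is our own illustration of such an initialization.

```python
import numpy as np

def init_particles(m, d, seed=0):
    """Particles theta_i = (c_i, a_i, w_i, b_i) with support in the set of Hypothesis 1:
    c = a = 0, w uniform on the unit sphere, b uniform in [-sqrt(d)-2, sqrt(d)+2]."""
    rng = np.random.default_rng(seed)
    c = np.zeros(m)
    a = np.zeros(m)
    w = rng.normal(size=(m, d))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    b = rng.uniform(-np.sqrt(d) - 2, np.sqrt(d) + 2, size=m)
    return c, a, w, b
```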
## 3 Gradient curve
This section is dedicated to the proof of the two theorems stated in Section 2.5.
### Well-posedness
#### 3.1.1 Proof of Theorem 4
Let us fix some value of \(r>0\) in this section. In the following, \(C>0\) will denote an arbitrary constant which does not depend on \(\tau\) and \(r\). Let \(\mathfrak{P}\) be the set of geodesics of \(\Theta\), i.e. the set of absolutely continuous curves \(\pi:[0,1]\to\Theta\) such that for all \(t_{1},t_{2}\in[0,1]\), \(d(\pi(t_{1}),\pi(t_{2}))=d(\pi(0),\pi(1))|t_{1}-t_{2}|\). Besides, for all \(0\leq t\leq 1\), it holds that \(|\dot{\pi}(t)|=d(\pi(0),\pi(1))\).
For all \(s\in[0,1]\), we define the application map \(e_{s}:\mathfrak{P}\to\Theta\) such that \(e_{s}(\pi):=\pi(s)\). Owing this, McCann interpolation gives the fundamental characterization of constant speed geodesics in \(\mathcal{P}_{2}(\Theta):\)
**Proposition 2**.: _[_17_, Proposition 2.10]_ _For all \(\mu,\nu\in\mathcal{P}_{2}(\Theta)\) and any geodesic \(\kappa:[0,1]\to\mathcal{P}_{2}(\Theta)\) between them (i.e. such that \(\kappa(0)=\mu\) and \(\kappa(1)=\nu\)) in the \(W_{2}\) sense, there exists \(\Pi\in\mathcal{P}_{2}(\mathfrak{P})\) such that :_
\[\forall t\in[0,1],\ \kappa(t)=e_{t}\#\Pi.\]
**Remark 5**.: _As \(e_{0}\#\Pi=\mu\) and \(e_{1}\#\Pi=\nu\), the support of \(\Pi\) is included in the set of geodesics \(\pi:[0,1]\to\Theta\) such that \(\pi(0)\) belongs to the support of \(\mu\) and \(\pi(1)\) belongs to the support of \(\nu\). In addition, it holds that \(\gamma:=(e_{0},e_{1})\#\Pi\) is then an optimal transport plan between \(\mu\) and \(\nu\) for the quadratic cost, i.e. \(W_{2}(\mu,\nu)^{2}=\int_{\Theta\times\Theta}|\theta-\widetilde{\theta}|^{2}\,d\gamma(\theta,\widetilde{\theta})\)._
The next result states smoothness properties of geodesics on \(\Theta\) which are direct consequences of the smoothness properties of geodesics on the unit sphere of \(\mathbb{R}^{d}\). It is a classical result and its proof is left to the reader.
**Lemma 4**.: _There exists \(C>0\) such that for all \((\theta,\tilde{\theta})\) in \(\Theta^{2}\), all geodesic \(\pi:[0,1]\to\Theta\) such that \(\pi(0)=\theta\) and \(\pi(1)=\widetilde{\theta}\) and all \(0\leq s\leq t\leq 1\),_
\[|\pi(t)-\pi(s)|\leq d(\pi(t),\pi(s))=(t-s)d(\theta,\tilde{\theta})\leq C(t-s)| \tilde{\theta}-\theta|\]
_and_
\[\left|\frac{d}{dt}\pi(t)\right|\leq d(\theta,\tilde{\theta})\leq C|\tilde{ \theta}-\theta|.\]
In order to prove the well-posedness, it is necessary to get information about the smoothness of \(\mathcal{E}_{\tau,r}\).
**Proposition 3**.: _The functional \(\mathcal{E}_{\tau,r}\) is proper, coercive, differentiable on \(\mathcal{P}_{2}(K_{r})\). Moreover, there exists a constant \(C_{r,\tau}>0\) such that for all \(\mu,\nu\in\mathcal{P}_{2}(K_{r})\), \(\gamma\in\Gamma(\mu,\nu)\) with support included in \(K_{r}\times K_{r}\):_
\[\left|\mathcal{E}_{\tau,r}(\nu)-\mathcal{E}_{\tau,r}(\mu)+\int_{\Theta^{2}}v_{\mu}(\theta)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})\right|\leq C_{r,\tau}c_{2}(\gamma) \tag{28}\]
_with_
\[c_{2}(\gamma):=\int_{\Theta^{2}}(\theta-\tilde{\theta})^{2}\,d\gamma(\theta, \tilde{\theta}),\]
_and_
\[v_{\mu}(\theta):=\nabla_{\theta}\phi_{\mu}(\theta) \tag{29}\]
_where for all \(\theta\in K_{r}\),_
\[\begin{split}\phi_{\mu}(\theta)&:=\langle\nabla_{x}P_{\tau}(\mu),\nabla_{x}\Phi_{\tau}(\theta;\cdot)\rangle_{L^{2}(\Omega)}-\langle f,\Phi_{\tau}(\theta;\cdot)\rangle_{L^{2}(\Omega)}+\int_{\Omega}P_{\tau}(\mu)(x)dx\times\int_{\Omega}\Phi_{\tau}(\theta;x)dx\\ &=d\,\mathcal{E}\,|_{P_{\tau}(\mu)}(\Phi_{\tau}(\theta;\cdot)).\end{split} \tag{30}\]
The properness and coercivity are easy to prove and left to the reader. Before proving the differentiability property of \(\mathcal{E}_{\tau,r}\), we will need the following auxiliary lemma.
**Lemma 5**.: _There exists a constant \(C>0\) such that for all \(\tau>0\) and all \(\theta\in\Theta\), we have_
\[\begin{split}\|\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}(\Omega)} &\leq C|\theta|,\\ \|\nabla_{x}\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}(\Omega)}& \leq C|\theta|,\\ \|\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}(\Omega)} &\leq C|\theta|,\\ \|H_{\theta}\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}(\Omega)}& \leq C|\theta|\tau,\\ \|\nabla_{x}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}( \Omega)}&\leq C|\theta|\tau,\\ \|\nabla_{x}H_{\theta}\Phi_{\tau}(\theta;\cdot)\|_{L^{\infty}( \Omega)}&\leq C|\theta|\tau^{2},\end{split}\]
_where for all \(\theta\in\Theta\) and \(x\in\Omega\), \(H_{\theta}\Phi_{\tau}(\theta;x)\) denotes the Hessian of \(\Phi_{\tau}\) with respect to the variable \(\theta\) at the point \((\theta,x)\in\Theta\times\Omega\)._
Proof.: Let \(\theta=(c,a,w,b)\in\Theta\). It then holds that, for all \(x\in\Omega\),
\[\begin{split}\begin{cases}\frac{\partial\Phi_{\tau}(\theta;x)}{ \partial c}&=1\\ \frac{\partial\Phi_{\tau}(\theta;x)}{\partial a}&=\sigma_{H,\tau}(w \cdot x+b)\\ \frac{\partial\Phi_{\tau}(\theta;x)}{\partial w}&=ax\sigma^{ \prime}_{H,\tau}(w\cdot x+b)\\ \frac{\partial\Phi_{\tau}(\theta;x)}{\partial b}&=a\sigma^{ \prime}_{H,\tau}(w\cdot x+b).\end{cases}\end{split} \tag{31}\]
This expression yields the first desired inequality. In addition, the nonzero terms of the Hessian matrix read as:
\[\left\{\begin{aligned} \frac{\partial^{2}\Phi_{\tau}(\theta;x)}{ \partial a\partial w}&=\sigma^{\prime}_{H,\tau}(w\cdot x+b)x\\ \frac{\partial^{2}\Phi_{\tau}(\theta;x)}{\partial a\partial b}& =\sigma^{\prime}_{H,\tau}(w\cdot x+b)\\ \frac{\partial^{2}\Phi_{\tau}(\theta;x)}{\partial^{2}w}& =a\sigma^{\prime\prime}_{H,\tau}(w\cdot x+b)xx^{T}\\ \frac{\partial^{2}\Phi_{\tau}(\theta;x)}{\partial w\partial b}& =a\sigma^{\prime\prime}_{H,\tau}(w\cdot x+b)x\\ \frac{\partial^{2}\Phi_{\tau}(\theta;x)}{\partial^{2}b}& =a\sigma^{\prime\prime}_{H,\tau}(w\cdot x+b).\end{aligned}\right. \tag{32}\]
From these expressions, together with (10), we easily get that, for all \(\theta\in K_{r}\),
\[\left\|H_{\theta}\Phi_{\tau}(\theta;\cdot)\right\|_{L^{\infty}(\Omega)}\leq Cr\tau,\]
for some constant \(C>0\) independent of \(\theta\), \(r\) and \(\tau\). Moreover, for all \(x\in\Omega\),
\[\nabla_{x}\Phi_{\tau}(\theta;x)=aw\sigma^{\prime}_{H,\tau}(w\cdot x+b), \tag{33}\]
which implies that
\[\left\{\begin{aligned} \frac{\partial\nabla_{x}\Phi_{\tau}( \theta;x)}{\partial c}&=0\\ \frac{\partial\nabla_{x}\Phi_{\tau}(\theta;x)}{\partial a}& =w\sigma^{\prime}_{H,\tau}(w\cdot x+b)\\ \frac{\partial\nabla_{x}\Phi_{\tau}(\theta;x)}{\partial w}& =a\sigma^{\prime}_{H,\tau}(w\cdot x+b)I_{d}+axw^{T}\sigma^{\prime \prime}_{H,\tau}(w\cdot x+b)\\ \frac{\partial\nabla_{x}\Phi_{\tau}(\theta;x)}{\partial b}& =aw\sigma^{\prime\prime}_{H,\tau}(w\cdot x+b).\end{aligned}\right. \tag{34}\]
This implies then that
\[\left\|\nabla_{\theta}\nabla_{x}\Phi_{\tau}(\theta;\cdot)\right\|_{L^{\infty} (\Omega)}\leq Cr\tau.\]
Moreover, it then holds, using again (10), that for all \(\theta\in K_{r}\),
\[\left\|H_{\theta}\nabla_{x}\Phi_{\tau}(\theta;\cdot)\right\|_{L^{\infty}( \Omega)}\leq Cr\tau^{2},\]
for some constant \(C>0\) independent of \(\theta\), \(r\) and \(\tau\).
The following corollary is also a prerequisite for the proof of Proposition 3.
**Corollary 1**.: _There exists a constant \(C_{\tau}>0\) and a constant \(C_{r,\tau}>0\) such that for all \(\mu,\nu\in\mathcal{P}_{2}(K_{r})\) :_
\[\left\|P_{\tau}(\mu)\right\|_{H^{1}(\Omega)}^{2}\leq C_{\tau}\int_{\Theta}| \theta|^{2}d\mu(\theta), \tag{35}\]
_and_
\[\left\|P_{\tau}(\mu)-P_{\tau}(\nu)\right\|_{H^{1}(\Omega)}^{2}\leq C_{r,\tau} W_{2}^{2}(\mu,\nu).\]
Proof.: From Lemma 5 we immediately obtain that, for all \(\tau,r>0\), there exists a constant \(C_{\tau,r}>0\) such that for all \(\theta_{1},\theta_{2}\in K_{r}\),
\[\left\{\begin{aligned} \left\|\nabla_{\theta}\Phi_{\tau}(\theta_{1};\cdot)\right\|_{H^{1}(\Omega)}&\leq& C_{r,\tau}|\theta_{1}|,\\ \left\|\nabla_{\theta}\Phi_{\tau}(\theta_{1};\cdot)-\nabla_{\theta}\Phi_{\tau}(\theta_{2};\cdot)\right\|_{H^{1}(\Omega)}&\leq& C_{r,\tau}|\theta_{1}-\theta_{2}|.\end{aligned}\right.\]
The corollary immediately follows from that fact.
Now we are able to prove Proposition 3.
Proof.: First, we focus on the proof of (28)-(30). As \(\Phi_{\tau}\) and \(\mathcal{E}\) are smooth, it holds that for all \(x\in\Omega\), \(\theta,\widetilde{\theta}\in\Theta\), \(u,\tilde{u}\in H^{1}(\Omega)\),
\[\begin{cases}\Phi_{\tau}(\tilde{\theta};x)=\Phi_{\tau}(\theta;x)+\nabla_{ \theta}\Phi_{\tau}(\theta;x)\cdot(\tilde{\theta}-\theta)+M_{\tau}(\theta, \tilde{\theta};x)\\ \mathcal{E}(\tilde{u})=\mathcal{E}(u)+d\,\mathcal{E}\,|_{u}(\tilde{u}-u)+N( \tilde{u}-u),\end{cases}\]
where \(N(u):=\dfrac{1}{2}\bar{a}(u,u)\) for all \(u\in H^{1}(\Omega)\) and \(M_{\tau}(\theta,\tilde{\theta};x):=\int_{0}^{1}(\tilde{\theta}-\theta)^{T}H_{ \theta}\Phi_{\tau}(\theta+t(\tilde{\theta}-\theta);x)(\tilde{\theta}-\theta) (1-t)dt.\) Using Lemma 5, there exists a constant \(C>0\) independent on \(r\) and \(\tau\) such that:
* \(\forall x\in\Omega,\ \forall\theta,\tilde{\theta}\in K_{r},\ |M_{\tau}(\theta,\tilde{\theta};x)|\leq Cr\tau|\theta-\tilde{\theta}|^{2},\)
* \(\forall x\in\Omega,\ \forall\theta,\tilde{\theta}\in K_{r},\ |\nabla_{x}M_{\tau}(\theta,\tilde{\theta};x)|\leq Cr\tau^{2}|\theta-\tilde{\theta}|^{2}.\)
Moreover, there exists a constant \(C>0\) such that for all \(u\in H^{1}(\Omega)\),
\[0\leq N(u)\leq C\|u\|_{H^{1}(\Omega)}^{2}. \tag{36}\]
Thus, for \(\mu,\nu\in\mathcal{P}_{2}(K_{r})\) and \(\gamma\in\Gamma(\mu,\nu)\) supported in \(K_{r}^{2}\), it holds that:
\[\mathcal{E}_{\tau,r}(\nu) =\mathcal{E}\,\Big{(}\int_{K_{r}}\Phi_{\tau}(\tilde{\theta}; \cdot)d\nu(\tilde{\theta})\Big{)}\] \[=\mathcal{E}\,\Big{(}\int_{K_{r}^{2}}\Phi_{\tau}(\tilde{\theta}; \cdot)d\gamma(\theta,\tilde{\theta})\Big{)}\] \[=\mathcal{E}\,\Big{(}\int_{K_{r}^{2}}\Big{[}\Phi_{\tau}(\theta; \cdot)+\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)+ M_{\tau}(\theta,\tilde{\theta};\cdot)\Big{]}\,d\gamma(\theta,\tilde{\theta})\Big{)}\] \[=\mathcal{E}_{\tau,r}(\mu)+d\,\mathcal{E}\,|_{P_{\tau}(\mu)} \Big{(}\int_{K_{r}^{2}}\Big{[}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot( \tilde{\theta}-\theta)+M_{\tau}(\theta,\tilde{\theta};\cdot)\Big{]}\,d\gamma (\theta,\tilde{\theta})\Big{)}\] \[+N\Big{(}\int_{K_{r}^{2}}M_{\tau}(\theta,\tilde{\theta};\cdot)d \gamma(\theta,\tilde{\theta})+\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}( \theta;\cdot)\cdot(\tilde{\theta}-\theta)\,d\gamma(\theta,\tilde{\theta}) \Big{)},\]
Using standard derivation integral theorems, a bound on \(M_{\tau}\) is available :
\[\Big{\|}\int_{K_{r}^{2}}M_{\tau}(\theta,\tilde{\theta};\cdot)d \gamma(\theta,\tilde{\theta})\Big{\|}_{H^{1}(\Omega)}^{2} =\Big{\|}\int_{K_{r}^{2}}M_{\tau}(\theta,\tilde{\theta};\cdot)d \gamma(\theta,\tilde{\theta})\Big{\|}_{L^{2}(\Omega)}^{2}+\Big{\|}\int_{K_{r}^ {2}}\nabla_{x}M_{\tau}(\theta,\tilde{\theta};\cdot)d\gamma(\theta,\tilde{ \theta})\Big{\|}_{L^{2}(\Omega)}^{2}\] \[\leq\int_{K_{r}^{2}}\|M_{\tau}(\theta,\tilde{\theta};\cdot)\|_{L^ {2}(\Omega)}^{2}d\gamma(\theta,\tilde{\theta})+\int_{K_{r}^{2}}\|\nabla_{x}M_ {\tau}(\theta,\tilde{\theta};\cdot)\|_{L^{2}(\Omega)}^{2}d\gamma(\theta, \tilde{\theta})\] \[\leq C(r^{2}\tau^{2}+r^{2}\tau^{4})\int_{\Theta^{2}}|\tilde{ \theta}-\theta|^{4}d\gamma(\theta,\tilde{\theta})\] \[\leq C(r^{4}\tau^{2}+r^{4}\tau^{4})\int_{\Theta^{2}}|\tilde{ \theta}-\theta|^{2}d\gamma(\theta,\tilde{\theta})\] \[=C(r^{4}\tau^{2}+r^{4}\tau^{4})c_{2}(\gamma),\]
where we used Jensen inequality to get the first inequality and Lemma 4 to get the last inequality. Using Corollary 1 and the uniform continuity of \(d\,\mathcal{E}\), it holds :
\[\left|d\,\mathcal{E}\,|_{P_{\tau}\mu}\left(\int_{K_{r}^{2}}M_{\tau}(\theta, \tilde{\theta};\cdot)d\gamma(\theta,\tilde{\theta})\right)\right|\leq C_{\tau, r}c_{2}(\gamma).\]
Moreover, using similar calculations, it holds that
\[\left\|\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\,d\gamma(\theta,\tilde{\theta})\right\|_{H^{1}(\Omega)}^{2} =\left\|\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\,d\gamma(\theta,\tilde{\theta})\right\|_{L^{2}(\Omega)}^{2}\] \[+\left\|\int_{K_{r}^{2}}\nabla_{x}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\,d\gamma(\theta,\tilde{\theta})\right\|_{L^{2}(\Omega)}^{2},\] \[\leq\int_{K_{r}^{2}}\left\|\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\right\|_{L^{2}(\Omega)}^{2}d\gamma(\theta,\tilde{\theta})\] \[+\int_{K_{r}^{2}}\left\|\nabla_{x}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\right\|_{L^{2}(\Omega)}^{2}\,d\gamma(\theta,\tilde{\theta}),\] \[\leq C(r^{2}+r^{2}\tau^{2})\int_{\Theta^{2}}|\tilde{\theta}-\theta|^{2}d\gamma(\theta,\tilde{\theta})\] \[\leq C(r^{2}+r^{2}\tau^{2})c_{2}(\gamma).\]
Hence, together with the previous bounds and (36), we easily obtain that there exists a constant \(C_{r,\tau}>0\) such that for all \(\mu,\nu\in\mathcal{P}_{2}(K_{r})\), it holds that
\[\left|\mathcal{E}_{\tau,r}(\nu)-\mathcal{E}_{\tau,r}(\mu)+d\,\mathcal{E}\,|_{P_{\tau}(\mu)}\left(\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})\right)\right|\leq C_{r,\tau}c_{2}(\gamma). \tag{37}\]
Now we focus on the first order term and by Fubini and standard integral derivation theorem, we obtain that:
\[d\,\mathcal{E}\,|_{P_{\tau}(\mu)}\left(\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)d\gamma\right) =\left\langle\nabla_{x}P_{\tau}(\mu),\nabla_{x}\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})\right\rangle_{L^{2}(\Omega)}\] \[-\left\langle f,\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})\right\rangle_{L^{2}(\Omega)}\] \[+\int_{\Omega}P_{\tau}(\mu)(x)\,dx\times\int_{\Omega}\int_{K_{r}^{2}}\nabla_{\theta}\Phi_{\tau}(\theta;x)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})dx\] \[=\int_{K_{r}^{2}}\langle\nabla_{x}P_{\tau}(\mu),\nabla_{x}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\rangle_{L^{2}(\Omega)}d\gamma(\theta,\tilde{\theta})\] \[-\int_{K_{r}^{2}}\nabla_{\theta}\langle f,\Phi_{\tau}(\theta;\cdot)\rangle_{L^{2}(\Omega)}\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta})\] \[+\int_{K_{r}^{2}}\int_{\Omega}P_{\tau}(\mu)(x)dx\times\int_{\Omega}\nabla_{\theta}\Phi_{\tau}(\theta;x)\cdot(\tilde{\theta}-\theta)dxd\gamma(\theta,\tilde{\theta})\] \[=\int_{K_{r}^{2}}v_{\mu}(\theta)\cdot(\tilde{\theta}-\theta)d\gamma(\theta,\tilde{\theta}),\]
where
\[v_{\mu}(\theta):=\nabla_{\theta}\phi_{\mu}(\theta)\quad\gamma-\text{almost everywhere}, \tag{38}\]
with
\[\phi_{\mu}(\theta):=\langle\nabla_{x}P_{\tau}(\mu),\nabla_{x}\Phi_{\tau}( \theta;\cdot)\rangle_{L^{2}(\Omega)}-\langle f,\Phi_{\tau}(\theta;\cdot) \rangle_{L^{2}(\Omega)}+\int_{\Omega}P_{\tau}(\mu)(x)dx\times\int_{\Omega} \Phi_{\tau}(\theta;x)dx.\]
Note that (38) is equivalent to
\[v_{\mu}(\theta):=\nabla_{\theta}\phi_{\mu}(\theta)\quad\mu-\text{almost everywhere},\]
as \(v_{\mu}\) only depends on \(\theta\).
To prove a well-posedness result, some convexity is needed. More precisely, one should check that \(\mathcal{E}_{\tau,r}\) is semi-convex along geodesics.
**Proposition 4**.: _For all \(\tau,r>0\), there exists \(\lambda_{\tau,r}>0\) such that for all \(\mu,\nu\in\mathcal{P}_{2}(K_{r})\) with associated geodesic \(\kappa(t):=e_{t}\#\Pi\) given by Proposition 2, the functional \([0,1]\ni t\mapsto\dfrac{d}{dt}\,\mathcal{E}_{\tau,r}(\kappa(t))\) is \(\lambda_{\tau,r}\)-Lipschitz._
Proof.: First of all, one has to check that for all \(t\in[0,1]\), \(\kappa(t)\in\mathcal{P}_{2}(K_{r})\). This is a direct consequence of the fact that \(\mu,\nu\) are supported in \(K_{r}\), Remark 5 and that \(K_{r}\) is convex (in the geodesic sense).
Let \(t,s\in[0,1]\) and define \(\alpha(t,s):=(e_{t},e_{s})\#\Pi\in\Gamma(\kappa(t),\kappa(s))\). By (37), it holds that
\[\bigg{|}\mathcal{E}_{\tau,r}(\kappa(s))-\mathcal{E}_{\tau,r}(\kappa(t))+\int_{\Theta^{2}}d\,\mathcal{E}\,|_{P_{\tau}(\kappa(t))}\Big{(}\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)\cdot(\tilde{\theta}-\theta)\Big{)}d\alpha(t,s)(\theta,\tilde{\theta})\bigg{|}\leq C_{r,\tau}c_{2}(\alpha(t,s)),\]
which reads equivalently as
\[\bigg{|}\frac{\mathcal{E}_{\tau,r}(\kappa(s))-\mathcal{E}_{\tau,r}(\kappa(t))}{s-t}-\int_{\mathfrak{P}}d\,\mathcal{E}\,|_{P_{\tau}(\kappa(t))}\Big{(}\nabla_{\theta}\Phi_{\tau}(\pi(t);\cdot)\cdot\Big{(}\frac{\pi(s)-\pi(t)}{s-t}\Big{)}\Big{)}d\Pi(\pi)\bigg{|}\] \[\leq C_{r,\tau}\frac{1}{|s-t|}\int_{\Theta^{2}}|\theta-\tilde{\theta}|^{2}\,d\alpha(t,s)(\theta,\tilde{\theta})\] \[= C_{r,\tau}\frac{1}{|s-t|}\int_{\mathfrak{P}}|\pi(t)-\pi(s)|^{2}\,d\Pi(\pi)\] \[= C_{r,\tau}|s-t|\int_{\mathfrak{P}}|\pi(1)-\pi(0)|^{2}\,d\Pi(\pi)\] \[\leq C_{r,\tau}|s-t|,\]
where the value of the constant \(C_{r,\tau}\) only depends on \(r\) and \(\tau\). Letting \(s\) go to \(t\) and using the dominated convergence theorem, one concludes that \([0,1]\ni t\mapsto\mathcal{E}_{\tau,r}(\kappa(t))\) is differentiable with derivative equal to :
\[h(t):=\dfrac{d}{dt}\,(\mathcal{E}_{\tau,r}(\kappa(t)))=\ \ \int_{\mathfrak{P}}d\,\mathcal{E}\,|_{P_{\tau}(\kappa(t))}\Big{(}\nabla_{\theta}\Phi_{\tau}(\pi(t);\cdot)\cdot\Big{(}\dfrac{d}{dt}\pi(t)\Big{)}\Big{)}d\Pi(\pi).\]
To conclude, one has the decomposition :
\[|h(t)-h(s)| \leq\Big{|}\int_{\mathfrak{P}}d\,\mathcal{E}\,|_{P_{\tau}(\kappa(t))}\Big{(}(\nabla_{\theta}\Phi_{\tau}(\pi(t);\cdot)-\nabla_{\theta}\Phi_{\tau}(\pi(s);\cdot))\cdot\Big{(}\dfrac{d}{dt}\pi(t)\Big{)}\Big{)}d\Pi(\pi)\Big{|} \tag{39}\] \[+\Big{|}\int_{\mathfrak{P}}d\,\mathcal{E}\,|_{P_{\tau}(\kappa(t))}\Big{(}\nabla_{\theta}\Phi_{\tau}(\pi(s);\cdot)\cdot\Big{(}\dfrac{d}{dt}\pi(t)-\dfrac{d}{dt}\pi(s)\Big{)}\Big{)}d\Pi(\pi)\Big{|}.\]
Recalling (39), denoting \(\alpha:=(e_{0},e_{1})\#\Pi\) and using the previous estimates, we obtain that, for all
\(t,s\in[0,1]\),
\[|h(t)-h(s)| \leq C_{r,\tau}\Big{(}\|P_{\tau}(\kappa(t))\|_{H^{1}(\Omega)}\int_{ \mathfrak{P}}|\pi(t)-\pi(s)|\Big{|}\frac{d}{dt}\pi(t)\Big{|}d\Pi(\pi)\] \[+\|P_{\tau}(\kappa(t))-P_{\tau}(\kappa(s))\|_{H^{1}(\Omega)}\int_ {\mathfrak{P}}|\pi(s)|\Big{|}\frac{d}{dt}\pi(t)\Big{|}d\Pi(\pi)\] \[+\|P_{\tau}(\kappa(s))\|_{H^{1}(\Omega)}\int_{\mathfrak{P}}|\pi( s)|\Big{|}\frac{d}{dt}\pi(t)-\frac{d}{dt}\pi(s)\Big{|}d\Pi(\pi)\Big{)}\] \[\leq C_{r,\tau}\Big{(}|t-s|\|P_{\tau}(\kappa(t))\|_{H^{1}(\Omega) }\int_{\mathfrak{P}}|\pi(1)-\pi(0)|^{2}d\Pi(\pi)\] \[+\|P_{\tau}(\kappa(t))-P_{\tau}(\kappa(s))\|_{H^{1}(\Omega)} \int_{\mathfrak{P}}\sup_{u\in[0,1]}|\pi(u)||\pi(1)-\pi(0)|d\Pi(\pi)\] \[+|t-s|\|P_{\tau}(\kappa(s))\|_{H^{1}(\Omega)}\int_{\mathfrak{P}} \sup_{u\in[0,1]}|\pi(u)|\sup_{u\in[0,1]}\Big{|}\frac{d^{2}\pi(u)}{dt^{2}}\Big{|} d\Pi(\pi)\Big{)}\] \[\leq C_{r,\tau}\left(|t-s|\left(\sqrt{\int_{\Theta^{2}}|\theta|^{2 }d\kappa(t)(\theta)}+\sqrt{\int_{\Theta^{2}}|\theta|^{2}d\kappa(s)(\theta)} \right)(1+c_{2}(\alpha))+W_{2}(\kappa(t),\kappa(s))c_{2}(\alpha)\right)\]
where we have used Lemma 4 to get the second inequality and the fact that \(\sup_{u\in[0,1]}\bigg{|}\frac{d^{2}\pi(u)}{dt^{2}}\bigg{|}\) is uniformly bounded (since the curvature of \(\Theta\) is bounded) to get the last one. We also have the following estimates:
* By Remark 5 and the convexity of \(K_{r}\) (in the geodesic sense), for all \(0\leq t\leq 1\) : \[\int_{\Theta}|\theta|^{2}d\kappa(t)(\theta)\leq C(1+r^{2}).\]
* Moreover, \[W_{2}^{2}(\kappa(t),\kappa(s)) \leq\int_{\Theta^{2}}d(\theta,\tilde{\theta})^{2}d\alpha(t,s)(\theta,\tilde{\theta})\] \[\leq\int_{\mathfrak{P}}d(\pi(t),\pi(s))^{2}d\Pi(\pi)\] \[=|t-s|^{2}\int_{\mathfrak{P}}d(\pi(1),\pi(0))^{2}d\Pi(\pi)\] \[=|t-s|^{2}\int_{\Theta^{2}}d(\theta,\tilde{\theta})^{2}d\alpha(\theta,\tilde{\theta}).\]
This allows us to conclude that :
\[|h(t)-h(s)|\leq C_{r,\tau}(1+c_{2}(\alpha))|t-s|.\]
As the measure \(\alpha\) is supported in \(K_{r}^{2}\), we get :
\[|h(t)-h(s)|\leq\lambda_{\tau,r}|t-s|.\]
for some \(\lambda_{\tau,r}>0\), which yields the desired result.
The characterization of the velocity field allows to get a bound on its amplitude. This is given by the next corollary which will be useful later in the paper.
**Corollary 2**.: _There exists a constant \(C_{\tau}>0\) such that for all \(r>0\), all \(\mu\in\mathcal{P}_{2}(K_{r})\) and \(\theta\in\Theta\):_
\[|v_{\mu}(\theta)|\leq C_{\tau}r|\theta|.\]
Proof.: This can be proved combining (35), (31) and (34). The rest is just basic computations and left to the reader.
An important consequence of Proposition 4 is that \(\mathcal{E}_{\tau,r}\) is \((-\lambda_{\tau,r})\)-convex along geodesics. Now we are able to prove Theorem 4.
Proof of Theorem 4.: The functional \(\mathcal{E}_{\tau,r}\) is lower semicontinuous by Remark 4 and it is \((-\lambda_{\tau,r})\)-convex along generalized geodesics. Moreover, the space \(\Theta\) has a curvature bounded from below which ensures that it is an Alexandrov space of curvature bounded from below. We apply [18, Theorem 5.9, 5.11] to get the existence and the uniqueness of a gradient curve \(\mu^{r}:\mathbb{R}_{+}\to\mathcal{P}_{2}(K_{r})\) in the sense of [18, Definition 5.8]. Being a gradient curve, it is also a curve of maximal slope in the sense of [16, Definition 1.3.2]. Note that in [18], the space on which the probability measures are defined (here this is \(\Theta\)) is supposed to be compact. This is not a problem here since the domain of the functional \(\mathcal{E}_{\tau,r}\) is reduced to probability measures whose support is included in \(K_{r}\) which is compact and geodesically convex.
The existence of the vector field \(v_{t}^{r}\) for almost all \(t\geq 0\) is given by the absolute continuity of the curve \([0,1]\ni t\mapsto\mu^{r}(t)\) (because it is a gradient curve) and by [19, Proposition 2.5].
The work is not finished here since we do not have any knowledge about the velocity field \(v_{t}^{r}\) and the well-posedness result is proved only for \(\mathcal{E}_{\tau,r}\) with \(r<\infty\). In the following sections, we prove that this velocity field can be related to \(v_{\mu^{r}(t)}\) and use a bootstrap argument to prove an existence result for the gradient curve of \(\mathcal{E}_{\tau,+\infty}\).
#### 3.1.2 Identification of the vector field \(v_{t}^{r}\)
In the following, we denote by \(T\Theta\) the tangent bundle of \(\Theta\), i.e.
\[T\Theta:=\bigcup_{\theta\in\Theta}\{\theta\}\times T_{\theta}\Theta,\]
where \(T_{\theta}\Theta\) is the tangent space to \(\Theta\) at \(\theta\). It is easy to check that for all \(\theta=(c,a,w,b)\in\Theta\), it holds that \(T_{\theta}\Theta=\mathbb{R}\times\mathbb{R}\times\mathrm{Span}\{w\}^{\perp} \times\mathbb{R}\), where \(\mathrm{Span}\{w\}^{\perp}\) is the subspace of \(\mathbb{R}^{d}\) containing all \(d\)-dimensional vectors orthogonal to \(w\).
We also introduce the operators \(G\) and \(S_{h}\) for \(0<h\leq 1\) as follows :
\[G:=\left\{\begin{array}{rcl}\mathfrak{P}&\to&T\Theta\\ \pi&\mapsto&(\pi(0),\dot{\pi}(0))\end{array}\right.\]
and
\[S_{h}:=\left\{\begin{array}{rcl}T\Theta&\to&T\Theta\\ (\theta,v)&\mapsto&\left(\theta,\frac{v}{h}\right).\end{array}\right.\]
The next lemma concerns the local behaviour of couplings along a curve of maximal slope \(\mu^{r}:\mathbb{R}_{+}\to\mathcal{P}_{2}(K_{r})\). In the following, for any \(\mu,\nu\in\mathcal{P}_{2}(\Theta)\), we denote by \(\Gamma_{o}(\mu,\nu)\) the set of optimal transport plans between \(\mu\) and \(\nu\) in the sense of the quadratic cost. In other words, for all \(\gamma\in\Gamma_{o}(\mu,\nu)\), it holds that \(W_{2}^{2}(\mu,\nu)=\int_{\Theta\times\Theta}|\theta-\widetilde{\theta}|^{2}\, d\gamma(\theta,\widetilde{\theta})\).
**Lemma 6**.: _Let \(\mu^{r}:\mathbb{R}_{+}\to\mathcal{P}_{2}(K_{r})\) be a solution to (23) and for all \(0<h\leq 1\), let \(\Pi_{h}\in\mathcal{P}_{2}(\mathfrak{P})\) such that \(\gamma_{h}:=(e_{0},e_{1})\#\Pi_{h}\in\Gamma_{o}(\mu^{r}(t),\mu^{r}(t+h))\) (i.e. satisfying the condition of Proposition 2 with \(\mu=\mu^{r}(t)\) and \(\nu=\mu^{r}(t+h)\)). Then, for almost all \(t\geq 0\), it holds that_
\[\lim_{h\to 0}(S_{h}\circ G)\#\Pi_{h}=(i\times v_{t}^{r})\#\mu^{r}(t)\text{ in }\,\mathcal{P}_{2}(T\Theta),\]
_where \((v_{t}^{r})_{t\geq 0}\) is given by Theorem 4, and \(i:\Theta\to\Theta\) is the identity map._
_Moreover,_
\[\lim_{h\to 0}\frac{W_{2}^{2}(\mu^{r}(t+h),\exp(hv_{t}^{r})\#\mu^{r}(t))}{h^{2}}=0,\]
_where \(\exp(hv_{t}^{r}):\Theta\ni\theta\mapsto\exp_{\theta}(hv_{t}^{r}(\theta))\)._
Proof.: Let \(\phi\) be in \(C_{c}^{\infty}(\Theta)\). The continuity equation gives :
\[\int_{\mathbb{R}_{+}}\eta^{\prime}(t)\int_{\Theta}\phi\,d\mu^{r}(t)dt=-\int_{ \mathbb{R}_{+}}\eta(t)\int_{\Theta}\nabla_{\theta}\phi\cdot v_{t}\,d\mu^{r}(t)dt\]
for \(\eta\) smooth compactly supported in \(\mathbb{R}_{+}\). Taking \(\eta\) as an approximation of the characteristic function of \([t,t+h]\), owing to the fact that \(\mu^{r}\) is locally Lipschitz and passing to the limit, one gets :
\[\int_{\Theta}\phi\,d\mu^{r}(t)-\int_{\Theta}\phi\,d\mu^{r}(t+h)=-\int_{t}^{t+h} \int_{\Theta}\nabla_{\theta}\phi\cdot v_{t}^{r}\,d\mu^{r}(t)dt.\]
Passing to the limit as \(h\) goes to \(0\), one gets the differentiability almost everywhere of \(\mathbb{R}_{+}\ni t\mapsto\int_{\Theta}\phi\,d\mu^{r}(t)\) and :
\[\lim_{h\to 0}\frac{\int_{\Theta}\phi\,d\mu^{r}(t+h)-\int_{\Theta}\phi\,d\mu^{r} (t)}{h}=\int_{\Theta}\nabla_{\theta}\phi\cdot v_{t}^{r}\,d\mu^{r}(t).\]
For all \(0<h\leq 1\), let us introduce \(\nu_{h}:=(S_{h}\circ G)\#\Pi_{h}\) and let \(\nu_{0}\) be an accumulation point of \((\nu_{h})_{0<h\leq 1}\) with respect to the narrow convergence on \(\mathcal{P}_{2}(T\Theta)\).
Then, it holds that
\[\frac{\int_{\Theta}\phi\,d\mu^{r}(t+h)-\int_{\Theta}\phi\,d\mu^{r}(t)}{h} =\frac{1}{h}\int_{\Theta^{2}}(\phi(\tilde{\theta})-\phi(\theta))\,d\gamma_{h}(\theta,\tilde{\theta})\] \[=\frac{1}{h}\int_{\mathfrak{P}}(\phi(\pi(1))-\phi(\pi(0)))\,d\Pi_{h}(\pi)\] \[=\frac{1}{h}\int_{T\Theta}(\phi(\exp_{\theta}(v))-\phi(\theta))\,dG\#\Pi_{h}(\theta,v)\] \[=\frac{1}{h}\int_{T\Theta}(\phi(\exp_{\theta}(hv))-\phi(\theta))\,d(S_{h}\circ G)\#\Pi_{h}(\theta,v)\] \[=\int_{T\Theta}\nabla_{\theta}\phi(\theta)\cdot v\,d\nu_{h}(\theta,v)\] \[+\int_{T\Theta}R_{h}(\theta,v)\,d\nu_{h}(\theta,v)\] \[\underset{h\to 0}{\longrightarrow}\int_{T\Theta}\nabla_{\theta}\phi(\theta)\cdot vd\nu_{0}(\theta,v),\]
where \(R_{h}(\theta,v):=\frac{\phi(\exp_{\theta}(hv))-\phi(\theta)}{h}-\nabla_{ \theta}\phi(\theta)\cdot v\) is bounded by \(C(\phi)|v|^{2}h\) (\(\phi\in C_{c}^{\infty}(\Theta)\) and the euclidean curvature in \(\Theta\) is uniformly bounded; see [20, Chapter 8] for the definition of euclidean curvature). Actually, to get the last limit, we need the following arguments detailed below :
* For the first term, \(\nabla\phi(\theta)\cdot v\) is quadratic in \((\theta,v)\) and consequently the passage to the limit is allowed.
* For the second one, \[\int_{T\Theta}|R_{h}(\theta,v)|d\nu_{h}(\theta,v) \leq C(\phi)h\int_{T\Theta}|v|^{2}d\nu_{h}(\theta,v)\] \[= C(\phi)h\frac{W_{2}^{2}(\mu^{r}(t),\mu^{r}(t+h))}{h^{2}}\] and using again the local Lipschitz property, we can pass to the limit which is zero.
As a consequence,
\[\int_{T\Theta}\nabla_{\theta}\phi(\theta)\cdot vd\nu_{0}(\theta,v)=\int_{ \Theta}\nabla_{\theta}\phi(\theta)\cdot v_{t}^{r}(\theta)\,d\mu^{r}(t)(\theta)\]
which is no more than (by disintegration) :
\[\int_{\Theta}\nabla_{\theta}\phi(\theta)\cdot\int_{T_{\Theta}\Theta}v\,d\nu_{0, \theta}(v)\,d\mu^{r}(t)(\theta)=\int_{\Theta}\nabla_{\theta}\phi(\theta)\cdot v _{t}^{r}(\theta)\,d\mu^{r}(t)(\theta).\]
Noting \(\tilde{v_{t}}(\theta):=\int_{T_{\Theta}\Theta}v\,d\nu_{0,\theta}(v)\), the last equation is equivalent to :
\[\operatorname{div}((\tilde{v_{t}}-v_{t}^{r})\mu^{r}(t))=0.\]
In addition, as \(T\Theta\ni(\theta,v)\mapsto|v|^{2}\) is positive and lower semicontinuous and as for almost all \(t\geq 0\) we have that \(\lim_{h\to 0}\dfrac{W_{2}(\mu^{r}(t),\mu^{r}(t+h))}{h}=|(\mu^{r})^{ \prime}|(t)\) (as \(\mu^{r}\) is locally Lipschitz):
\[\begin{split}\int_{\Theta}\int_{T_{\Theta}\Theta}|v|^{2}\,d\nu_ {0,\theta}(v)\,d\mu^{r}(t)(\theta)&\leq\liminf_{h\to 0}\,\int_{T \Theta}|v|^{2}\,d\nu_{h}(\theta,v)\\ &=\liminf_{h\to 0}\,\frac{1}{h^{2}}\int_{T\Theta}|v|^{2}\,dG\#\Pi_{h}( \theta,v)\\ &=\liminf_{h\to 0}\,\frac{1}{h^{2}}\int_{\mathfrak{P}}|\dot{ \pi}(0)|^{2}\,d\Pi_{h}(\pi)\\ &=\liminf_{h\to 0}\,\frac{1}{h^{2}}\int_{\Theta^{2}}d(\theta, \tilde{\theta})^{2}\,d\gamma_{h}(\theta,\tilde{\theta})\\ &=\liminf_{h\to 0}\,\frac{W_{2}^{2}(\mu^{r}(t),\mu^{r}(t+h))}{h^{ 2}}\\ &=|(\mu^{r})^{\prime}|^{2}(t).\end{split} \tag{40}\]
As a consequence and by Jensen inequality,
\[\|\tilde{v_{t}}\|_{L^{2}(\Theta;d\mu^{r}(t))}^{2}\leq\int_{\Theta}\int_{T_{ \Theta}\Theta}|v|^{2}\,d\nu_{0,\theta}(v)\,d\mu^{r}(t)(\theta)\leq|(\mu^{r}) ^{\prime}|^{2}(t)=\|v_{t}^{r}\|_{L^{2}(\Theta;d\mu^{r}(t))}^{2}. \tag{41}\]
By [19, Lemma 2.4], one gets \(\tilde{v}_{t}=v_{t}^{r}\). Reconsidering (41), one gets the equality case in Jensen's inequality, i.e. :
\[\int_{\Theta}|\tilde{v_{t}}(\theta)|^{2}\,d\mu^{r}(t)(\theta)=\int_{\Theta}\int_{T_{\Theta}\Theta}|v|^{2}d\nu_{0,\theta}(v)\,d\mu^{r}(t)(\theta),\]
and as a consequence \(\nu_{0,\theta}=\delta_{v_{t}^{r}(\theta)}\), \(\mu^{r}(t)\)-almost everywhere in \(\Theta\). In addition,
\[\lim_{h\to 0}(S_{h}\circ G)\#\Pi_{h}=(i\times v_{t}^{r})\#\mu^{r}(t),\]
in the sense of the narrow convergence. The convergence of the \(v\) moment is given by (40)-(41) where inequalities can be replaced by equalities (as \(\tilde{v}_{t}=v_{t}^{r}\)) and the \(\liminf\) can be replaced by a lim as \(\lim_{h\to 0}\dfrac{W_{2}(\mu^{r}(t),\mu^{r}(t+h))}{h}=|(\mu^{r})^{ \prime}|(t)\) exists :
\[\int_{\Theta}\int_{T_{\Theta}\Theta}|v|^{2}d\nu_{0,\theta}(v)\,d\mu^{r}(t)( \theta)=\lim_{h\to 0}\int_{T\Theta}|v|^{2}d\nu_{h}(\theta,v). \tag{42}\]
For the \(\theta\) moment, it is more obvious as for all \(0<h\leq 1\) :
\[\int_{T\Theta}|\theta|^{2}\,d\nu_{h}(\theta,v)=\int_{\Theta}|\theta|^{2}\,d \mu^{r}(t)(\theta)\]
and
\[\int_{T\Theta}|\theta|^{2}\,d\nu_{0}(\theta,v)=\int_{T\Theta}|\theta|^{2}d(i \times v_{t}^{r})\#\mu^{r}(t)(\theta)=\int_{\Theta}|\theta|^{2}\,d\mu^{r}(t)( \theta).\]
Consequently,
\[\int_{T\Theta}|\theta|^{2}d\nu_{0}(\theta,v)=\lim_{h\to 0}\int_{T\Theta}| \theta|^{2}d\nu_{h}(\theta,v). \tag{43}\]
With (42)-(43), the convergence of moments is asserted. The narrow convergence combined with the convergence of moments gives the convergence in \(\mathcal{P}_{2}(T\Theta)\), and the proof of the first part of the lemma is finished.
For the second part, it holds that \((\exp(hv_{t}^{r})\times i)\#\gamma_{h}\) belongs to \(\Gamma(\exp(hv_{t}^{r})\#\mu^{r}(t),\mu^{r}(t+h))\). Hence,
\[\frac{W_{2}^{2}(\mu^{r}(t+h),\exp(hv_{t}^{r})\#\mu^{r}(t))}{h^{2}} \leq\frac{1}{h^{2}}\int_{\Theta^{2}}\,d(\theta,\tilde{\theta})^{2 }d(\exp(hv_{t}^{r})\times i)\#\gamma_{h}(\theta,\tilde{\theta})\] \[\leq\frac{1}{h^{2}}\int_{\Theta^{2}}d(\exp_{\theta}(hv_{t}^{r}( \theta)),\tilde{\theta})^{2}\,d\gamma_{h}(\theta,\tilde{\theta})\] \[\leq\frac{1}{h^{2}}\int_{T\Theta}d(\exp_{\theta}(hv_{t}^{r}( \theta)),\exp_{\theta}(hv))^{2}\,d\nu_{h}(\theta,v)\] \[\leq C\int_{T\Theta}|v_{t}^{r}(\theta)-v|^{2}\,d\nu_{h}(\theta,v)\] \[\underset{h\to 0}{\longrightarrow}0,\]
where we have used the boundedness of the euclidean curvature of the manifold \(\Theta\) in the last inequality and the fact that \(\nu_{h}\to(i\times v_{t}^{r})\#\mu^{r}(t)\), which was proved earlier. Hence the desired result.
We now introduce the projection operator on the manifold \(\Theta\) :
**Definition 5**.: _For all \(\theta\) in \(\Theta\), the orthogonal projection on the tangent space of \(\Theta\) is given by the operator \(\mathbf{P}_{\theta}:\mathbb{R}^{d+3}\to T_{\Theta}\Theta\). The operator \(\mathbf{P}:L^{1}_{\mathrm{loc}}(\Theta;\mathbb{R}^{d+3})\to L^{1}_{\mathrm{ loc}}(\Theta;\mathbb{R}^{d+3})\) denotes the corresponding projection on vector fields, i.e. for all \(X\in L^{1}_{\mathrm{loc}}(\Theta;\mathbb{R}^{d+3})\), \((\mathbf{P}X)(\theta):=\mathbf{P}_{\theta}X(\theta)\) for almost all \(\theta\in\Theta\)._
Now we are able to identify the velocity field given in Theorem 4 under a support hypothesis.
**Proposition 5**.: _Let \(t\geq 0\). If there exists \(\delta>0\) such that \(\mathrm{Supp}(\mu^{r}(t))\subset K_{r-\delta}\), then the velocity field \(v_{t}^{r}\) in (23) is equal to \(-\mathbf{P}v_{\mu^{r}(t)}\), \(\mu^{r}(t)\)-almost everywhere._
Proof.: On the one hand, for \(\gamma_{h}:=(e_{0},e_{1})\#\Pi_{h}\in\Gamma_{o}(\mu^{r}(t),\mu^{r}(t+h))\), by Proposition 3 and the fact that for all \(t\geq 0\), \(\mu^{r}(t)\in\mathcal{P}_{2}(K_{r})\) :
\[\left|\mathcal{E}_{\tau,r}(\mu^{r}(t+h))-\mathcal{E}_{\tau,r}(\mu^{r}(t))-\int _{\Theta^{2}}v_{\mu^{r}(t)}(\theta)\cdot(\tilde{\theta}-\theta)\,d\gamma_{h}( \theta,\tilde{\theta})\right|\leq C_{r,\tau}W_{2}(\mu^{r}(t),\mu^{r}(t+h))^{2},\]
which is equivalent to
\[\left|\frac{\mathcal{E}_{\tau,r}(\mu^{r}_{t+h})-\mathcal{E}_{\tau,r}(\mu^{r}( t))}{h}-\int_{T\Theta}v_{\mu^{r}(t)}(\theta)\cdot\frac{\exp_{\theta}(hv)- \theta}{h}\,d(S_{h}\circ G)\#\Pi_{h}(\theta,v)\right|\leq C_{r,\tau}\frac{1}{h} W_{2}(\mu^{r}(t),\mu^{r}(t+h))^{2}.\]
Then, one can use the decomposition :
\[\int_{T\Theta}v_{\mu^{r}(t)}(\theta)\cdot\frac{\exp_{\theta}(hv)- \theta}{h}\,d(S_{h}\circ G)\#\Pi_{h}(\theta,v) =\int_{T\Theta}v_{\mu^{r}(t)}(\theta)\cdot v\,d(S_{h}\circ G)\# \Pi_{h}(\theta,v)\] \[+\int_{T\Theta}v_{\mu^{r}(t)}(\theta)\cdot R_{h}(\theta,v)\,d(S_{ h}\circ G)\#\Pi_{h}(\theta,v),\]
where \(R_{h}(\theta,v):=\frac{\exp_{\theta}(hv)-\theta}{h}-v\) is bounded by \(Ch|v|^{2}\) due to the uniform boundedness of euclidean curvature in \(\Theta\). Passing to the limit as \(h\) goes to zero and using Lemma 6, one gets the differentiability of \(\mathbb{R}_{+}\ni t\to\mathcal{E}_{\tau,r}(\mu^{r}(t))\) almost everywhere and for almost all \(t\geq 0\) :
\[\frac{d}{dt}\left[\mathcal{E}_{\tau,r}(\mu^{r}(t))\right]=\int_{\Theta}v_{\mu^{ r}(t)}(\theta)\cdot v_{t}^{r}(\theta)\,d\mu^{r}(t)(\theta).\]
Note that to pass to the limit to obtain the last equation, we need the two following points :
* First, \(v\cdot v_{\mu^{r}(t)}(\theta)\) is at most quadratic in \((\theta,v)\) which is given by Corollary 2.
* Second, it holds that \(|v_{\mu^{r}(t)}(\theta)\cdot R_{h}(\theta,v)|\leq Cr|\theta|h|v|^{2}\) by Corollary 2 and consequently : \[\left|\int_{T\Theta}v_{\mu^{r}(t)}(\theta)\cdot R_{h}(\theta,v)\,d(S _{h}\circ G)\#\Pi_{h}(\theta,v)\right|\leq C_{r}h\int_{T\Theta}|\theta||v|^{2}\,d(S_{h}\circ G) \#\Pi_{h}(\theta,v)\] \[\leq C_{r}h\int_{T\Theta}|v|^{2}\,d(S_{h}\circ G)\#\Pi_{h}(\theta,v)\] \[\leq C_{r}h\frac{W_{2}(\mu^{r}(t),\mu^{r}(t+h))^{2}}{h^{2}}\] where we used the fact that \(\Pi_{h}\) is supported in \(K_{r}\) in its first variable to get the second inequality. The last term converges to zero since \((\mu_{r}(t))_{t}\) is local Lipschitz.
Next as \(\mathbf{P}v_{t}^{r}=v_{t}^{r}\), it holds that:
\[\frac{d}{dt}\left[\mathcal{E}_{\tau,r}(\mu^{r}(t))\right]=\int_{\Theta}\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot v_{t}^{r}(\theta)\,d\mu^{r}(t)(\theta). \tag{44}\]
On the other hand, consider the curve \(\tilde{\mu_{h}}:\mathbb{R}_{+}\rightarrow\mathcal{P}_{2}(\Theta)\) satisfying :
\[\forall t\geq 0,\quad\tilde{\mu}_{h}(t):=\exp(-h\mathbf{P}v_{\mu^{r}(t)}) \#\mu^{r}(t).\]
As \(\text{Supp}(\mu^{r}(t))\subset K_{r-\delta}\), there exists a small time interval around zero such that \(\tilde{\mu}_{h}(t)\) is in \(\mathcal{P}_{2}(K_{r})\) for \(h>0\) small enough. So, with \(\gamma_{h}:=(i\times\exp(-h\mathbf{P}v_{\mu^{r}(t)}))\#\mu^{r}(t)\in\Gamma(\mu ^{r}(t),\tilde{\mu}_{h}(t))\),
\[\left|\mathcal{E}_{\tau,r}(\tilde{\mu}_{h}(t))-\mathcal{E}_{\tau,r}(\mu^{r}(t ))-\int_{\Theta^{2}}\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot(\tilde{\theta}- \theta)\,d\gamma_{h}(\theta,\tilde{\theta})\right|\leq C_{r,\tau}W_{2}^{2}( \mu^{r}(t),\tilde{\mu}_{h}(t))\]
and it holds that
\[\int_{\Theta^{2}}\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot(\tilde{\theta}-\theta) \,d\gamma_{h}(\theta,\tilde{\theta})=h\int_{\Theta^{2}}\mathbf{P}v_{\mu^{r}( t)}(\theta)\cdot\frac{\exp_{\theta}(-h\mathbf{P}v_{\mu^{r}(t)}(\theta))-\theta}{h} \,d\mu^{r}(t)(\theta).\]
Hence,
\[\frac{\mathcal{E}_{\tau,r}(\tilde{\mu}_{h}(t))-\mathcal{E}_{\tau,r}(\mu^{r}(t))}{W_{2}(\tilde{\mu}_{h}(t),\mu^{r}(t))}=\frac{h}{W_{2}(\tilde{\mu}_{h}(t),\mu^{r}(t))}\int_{\Theta}\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot\frac{\exp_{\theta}(-h\mathbf{P}v_{\mu^{r}(t)}(\theta))-\theta}{h}\,d\mu^{r}(t)(\theta)+o_{h}(1)\]
and taking the limsup as \(h\) goes to zero (proceeding in a similar way as above to get the limit of the first term on the right hand side) and owing to the fact that \(\limsup_{h\to 0}\frac{W_{2}(\tilde{\mu}_{h}(t),\mu^{r}(t))}{h}\leq\|\mathbf{P}v_{\mu^{r}(t)}\|_{L^{2}(\Theta;d\mu^{r}(t))}\), we obtain that
\[|\nabla^{-}\,\mathcal{E}_{\tau,r}\,|(\mu^{r}(t))\geq\|\mathbf{P}v_{\mu^{r}(t) }\|_{L^{2}(\Theta;d\mu^{r}(t))}. \tag{45}\]
As \(\mu^{r}\) is a curve of maximal slope with respect to the upper gradient \(|\nabla^{-}\,\mathcal{E}_{\tau,r}\,|\) of \(\mathcal{E}_{\tau,r}\), one has :
\[\frac{d}{dt}\left[\mathcal{E}_{\tau,r}(\mu^{r}(t))\right]=\int_{\Theta}\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot v_{t}^{r}(\theta)\,d\mu^{r}(t)(\theta)\leq-\frac{1}{2}\|v_{t}^{r}\|_{L^{2}(\Theta;d\mu^{r}(t))}^{2}-\frac{1}{2}|\nabla^{-}\,\mathcal{E}_{\tau,r}\,|^{2}(\mu^{r}(t))\] \[\leq-\frac{1}{2}\|v_{t}^{r}\|_{L^{2}(\Theta;d\mu^{r}(t))}^{2}-\frac{1}{2}\|\mathbf{P}v_{\mu^{r}(t)}\|_{L^{2}(\Theta;d\mu^{r}(t))}^{2}\]
where we have used (45). As a consequence,
\[\int_{\Theta}\left(\frac{1}{2}(\mathbf{P}v_{\mu^{r}(t)})^{2}(\theta)+\frac{1} {2}|v_{t}^{r}(\theta)|^{2}-\mathbf{P}v_{\mu^{r}(t)}(\theta)\cdot v_{t}^{r}( \theta)\right)\,d\mu^{r}(t)(\theta)\leq 0\]
and
\[v_{t}^{r}=-\mathbf{P}v_{\mu^{r}(t)}\quad\mu^{r}(t)\text{-a.e.}\]
The identification of the velocity field when the support condition is satisfied allows us to give an explicit formula for the gradient curve. It is given by the characteristics :
**Proposition 6**.: _Let \(\chi^{r}:\mathbb{R}_{+}\times\Theta\to\Theta\) be the flow associated to the velocity field \(-\mathbf{P}v_{\mu^{r}(t)}\) :_
\[\begin{cases}\partial_{t}\chi^{r}(t;\theta)=-\mathbf{P}v_{\mu^{r}(t)}(\chi^{r}(t;\theta))\\ \chi^{r}(0;\theta)=\theta.\end{cases}\]
_Then \(\chi^{r}\) is uniquely defined, continuous, and for all \(t\geq 0\), \(\chi^{r}(t)\) is Lipschitz on \(K_{r}\). Moreover, as long as \(\operatorname{Supp}(\mu^{r}(t))\subset K_{r-\delta}\) for some \(\delta>0\) :_
\[\mu^{r}(t)=\chi^{r}(t)\#\mu_{0}.\]
Proof.: This is a direct consequence of the fact that \(v_{t}^{r}=-\mathbf{P}v_{\mu^{r}(t)}=-\mathbf{P}\nabla_{\theta}\phi_{\mu^{r}(t)}\) is \(C^{\infty}\).
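In practice, this characteristics representation suggests a particle discretization of the gradient curve: an empirical measure \(\frac{1}{m}\sum_{j}\delta_{\theta_{j}(t)}\) is pushed forward by integrating the flow with an explicit Euler step, which is exactly the structure of the learning scheme (59) discussed in Section 4. The following generic sketch is our own illustration (not the paper's code); the callable `velocity` is a placeholder standing in for \(-\mathbf{P}v_{\mu^{r}(t)}\).

```python
import numpy as np

def evolve_particles(thetas, velocity, dt, n_steps):
    """Forward Euler along the characteristics.

    thetas:   (m, p) array of particles theta_j(t)
    velocity: callable mapping an (m, p) array to the drift of each particle
              (placeholder for -P v_{mu^r(t)}, which itself depends on the particles)
    """
    for _ in range(n_steps):
        thetas = thetas + dt * velocity(thetas)
    return thetas
```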
Next lemma relates the curve \([0,1]\ni h\mapsto\exp(hv_{t}^{r})\#\mu^{r}(t)\) with \(\nabla_{-}\operatorname{\mathcal{E}}_{\tau,r}(\mu^{r}(t))\). This will be useful later to prove that the velocity field characterizes the gradient curve.
**Lemma 7**.: _For all \(\mu\in\mathcal{P}_{2}(\Theta)\) with \(\operatorname{Supp}(\mu)\subset K_{r-\delta}\) for some \(\delta>0\), the map \(\nu:[0,1]\ni h\mapsto\exp(-h\mathbf{P}v_{\mu}/\|\mathbf{P}v_{\mu}\|_{L^{2}( \Theta;d\mu)})\#\mu\) is differentiable at \(h=0\). Moreover, it holds that_
\[\nu^{\prime}(0)=\nabla_{-}\operatorname{\mathcal{E}}_{\tau,r}(\mu)/|\nabla_{ -}\operatorname{\mathcal{E}}_{\tau,r}|(\mu).\]
Proof.: First, we claim that \(|\nabla_{-}\operatorname{\mathcal{E}}_{\tau,r}(\mu)|=\|\mathbf{P}v_{\mu}( \theta)\|_{L^{2}(\Theta;d\mu)}\). In order to prove it, take an arbitrary unit speed geodesic \([0,1]\ni s\mapsto(e_{s})\#\Pi\) starting at \(\mu\) for which there exists a time interval around zero such that \((e_{s})\#\Pi\) belongs to \(\mathcal{P}_{2}(K_{r})\). As a consequence, one can write for all \(s>0\) sufficiently small :
\[\left|\operatorname{\mathcal{E}}_{\tau,r}((e_{s})\#\Pi)-\operatorname{\mathcal{E}}_{\tau,r}(\mu)-\int_{\Theta^{2}}v_{\mu}(\theta)\cdot(\tilde{\theta}-\theta)\,d(e_{0},e_{s})\#\Pi(\theta,\tilde{\theta})\right|\leq C_{r,\tau}W_{2}^{2}(\mu,(e_{s})\#\Pi).\]
with
\[\int_{\Theta^{2}}v_{\mu}(\theta)\cdot(\tilde{\theta}-\theta)\,d(e_{0},e_{s}) \#\Pi(\theta,\tilde{\theta})=\int_{T\Theta}v_{\mu}(\theta)\cdot(\exp_{\theta} (sv)-\theta)\,dG\#\Pi(\theta,v).\]
Dividing by \(s\) and passing to the limit as \(s\) goes to zero, one obtains :
\[\frac{d}{ds}\left[\operatorname{\mathcal{E}}_{\tau,r}((e_{s})\#\Pi)\right]= \int_{T\Theta}v_{\mu}(\theta)\cdot v\,dG\#\Pi(\theta,v).\]
Note that, to get the last equation, we need to prove that for all \(s\) sufficiently small the function \(\eta(s):T\Theta\ni(\theta,v)\mapsto v_{\mu}(\theta)\cdot\frac{\exp_{\theta}( sv)-\theta}{s}\) is uniformly integrable with respect to \(G\#\Pi\). In fact, this is given by Corollary 2 and the uniform curvature bound on \(\Theta\) giving \(|\eta(s)|(\theta,v)\leq Csr|\theta||v|^{2}\). As the term \(Cr|\theta||v|^{2}\) is integrable with respect to the measure \(G\#\Pi\) (recall that it has finite second-order moments and is supported in \(K_{r}\) in the \(\theta\) variable), we have the desired uniform integrability property.
Moreover, by the Cauchy-Schwarz inequality:
\[\frac{d}{ds}\left[\operatorname{\mathcal{E}}_{\tau,r}((e_{s})\#\Pi)\right] \geq-\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}\sqrt{\int_{T \Theta}v^{2}dG\#\Pi(\theta,v)}\] \[=-\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)},\]
where the last equality comes from :
\[\int_{T\Theta}v^{2}\,dG\#\Pi(\theta,v) =\int_{\mathfrak{P}}\dot{\pi}(0)^{2}\,d\Pi(\pi)\] \[=\int_{\mathfrak{P}}d(\pi(0),\pi(1))^{2}\,d\Pi(\pi)\] \[=W_{2}^{2}((e_{0})\#\Pi,(e_{1})\#\Pi)\] \[=1.\]
The last equality is derived from the fact that \([0,1]\ni s\mapsto(e_{s})\#\Pi\) is a unit speed geodesic. To conclude, we have proved that for all unit speed geodesic \((\alpha,1)\in C_{\mu}(\mathcal{P}_{2}(K_{r}))\)
\[D_{\mu}\,\mathcal{E}_{\tau,r}((\alpha,1))\geq-\|\mathbf{P}v_{\mu}\|_{L^{2}( \Theta;d\mu)}\]
which by [18, Lemma 4.3], asserts that :
\[|\nabla_{-}\,\mathcal{E}_{\tau,r}\,|(\mu)\leq\|\mathbf{P}v_{\mu}\|_{L^{2}( \Theta;d\mu)}. \tag{46}\]
Aside that, let \(h>0\) :
\[W_{2}^{2}(\nu(h),\nu(0)) \leq\int_{\Theta}d^{2}\left(\exp_{\theta}\left(-h\mathbf{P}v_{ \mu}(\theta)/\|\mathbf{P}v_{\mu}^{\tau}\|_{L^{2}(\Theta;d\mu)}\right),\theta \right)\,d\mu(\theta)\] \[\leq h^{2}\int_{\Theta}d^{2}\left(\exp_{\theta}\left(-\mathbf{P} v_{\mu}(\theta)/\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}\right),\theta\right)\,d\mu(\theta)\] \[=h^{2},\]
and
\[\limsup_{h\to 0}\frac{W_{2}(\nu(h),\nu_{0})}{h}\leq 1. \tag{47}\]
Moreover as \(\mathrm{Supp}(\mu)\subset K_{r-\delta}\), \(v_{\mu}\) is bounded in \(L^{\infty}(K_{r})\) by Corollary 2 and for a small time interval around zero \(\nu(h)\in\mathcal{P}_{2}(K_{r})\). Consequently, as \(h\) goes to \(0\),
\[\mathcal{E}_{\tau,r}(\nu(h))-\mathcal{E}_{\tau,r}(\mu) =\int_{\Theta^{2}}v_{\mu}(\theta)\cdot(\tilde{\theta}-\theta)\,d( i\times\exp(-h\mathbf{P}v_{\mu}/\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}))\#\mu(\theta)\] \[+o\left(h\right)\] \[=\int_{\Theta}v_{\mu}(\theta)\cdot\left(\exp\left(-h\mathbf{P}v_ {\mu}(\theta)/\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}\right)-\theta\right) \,d\mu(\theta)+o(h).\]
Dividing by \(h\) and passing to the limit as \(h\) goes to zero (justifying the passage to the limit as above), it holds that:
\[\lim_{h\to 0}\frac{\mathcal{E}_{\tau,r}(\nu(h))-\mathcal{E}_{\tau,r}(\mu)}{h}=-\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}. \tag{48}\]
Additionally, with (47) :
\[\limsup_{h\to 0}\frac{\mathcal{E}_{\tau,r}(\nu(h))-\mathcal{E}_{\tau,r}(\mu)}{W _{2}(\nu(h),\nu(0))}\leq-\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta;d\mu)}. \tag{49}\]
To conclude :
* With (49) and (46), the claim is proved : \[|\nabla_{-}\,\mathcal{E}_{\tau,r}\,|(\mu)=\|\mathbf{P}v_{\mu}\|_{L^{2}(\Theta ;d\mu)}.\]
* Owing to this, (47) and (49) the curve \([0,1]\ni h\mapsto\nu(h)\) is differentiable at \(h=0\) by [18, Proof of (ii) Lemma 5.4] and : \[\nu^{\prime}(0)=\nabla_{-}\,\mathcal{E}_{\tau,r}(\mu)/|\nabla_{-}\, \mathcal{E}_{\tau,r}\,|(\mu).\]
This finishes the proof of the lemma.
#### 3.1.3 Existence without support limitation
Note that, for the moment, the domain of definition of \(\mathcal{E}_{\tau,r}\) is restricted to measures supported in \(K_{r}\). Using a bootstrapping argument, we will prove that this existence result can be extended to the energy \(\mathcal{E}_{\tau,+\infty}\), which is the content of Theorem 5.
Proof of Theorem 5.: Let :
* \(r_{0}>0\) be such that \(\operatorname{Supp}(\mu_{0})\subset K_{r_{0}}\),
* \(\mu^{r}:\mathbb{R}_{+}\ni t\mapsto\mu^{r}(t)\) the gradient curve associated to \(\mathcal{E}_{\tau,r}\) for \(r>r_{0}\).
By Corollary 2, it holds that \(|v_{\mu^{r}(t)}(\theta)|\leq Cr|\theta|\) for all \(t\geq 0\). Hence, for all \(\theta\in K_{r_{0}}\), \(|\chi^{r}(t;\theta)|\leq r_{0}e^{Crt}\) for all time \(t\in\left[0,T_{r}:=\frac{1}{Cr}\log\left(\frac{r+r_{0}}{2r_{0}}\right)\right]\) and \(\operatorname{Supp}(\mu^{r}(t))\subset K_{(r+r_{0})/2}\subset K_{r}\). By the definition of the gradient curve :
\[\forall t\in[0,T_{r}],\ (\mu^{r})^{\prime}(t)=\nabla_{-}\,\mathcal{E}_{\tau,r} (\mu^{r}(t))=g^{\prime}(0) \tag{50}\]
with \(g:[0,1]\ni h\mapsto\exp\left(-h\mathbf{P}v_{\mu^{r}(t)}\right)\#\mu^{r}(t)\), by Lemma 7. Note that the right hand side of the last equation does not depend explicitly on \(r\), but only on \(\mu^{r}(t)\).
We construct the curve \(\mu:[0,T_{r}]\to\mathcal{P}_{2}(\Theta)\) as follows:
\[\forall t\in[0,T_{r}],\ \forall r>r_{0},\quad\mu(t):=\mu^{r}(t).\]
This is well-defined since by uniqueness of the gradient curve with respect to \(\mathcal{E}_{\tau,r}\), \(\mu^{r_{1}}(t)=\mu^{r_{2}}(t)\) on \([0,\min(T_{r_{1}},T_{r_{2}})]\) for \(r_{0}<r_{1}\leq r_{2}\). Defining for all \(n\in\mathbb{N}^{*}\)
\[r_{n}:=(n+1)r_{0},\]
we can build inductively a gradient curve on \(\left[0,\frac{1}{Cr_{0}}\sum_{i=1}^{n}\frac{1}{i+1}\log\left(\frac{i+2}{2}\right)\right]\). As the length of this interval diverges with \(n\), it is possible to construct a gradient curve on \(\mathbb{R}^{+}\).
All the properties given by the theorem come from the properties of \(\mu^{r}\) derived in Theorem 4 and Proposition 6.
**Remark 6**.: _We make here two important remarks:_
* _We did not prove the existence of a gradient curve with respect to_ \(\mathcal{E}_{\tau,\infty}\) _because this functional is not proved to be convex along geodesics and it is impossible to define gradients without such an assumption._
* _The uniqueness of a solution to (_24_) is out of the scope of this article. To prove it, one should link (_24_) and the support condition to prove that locally in time, a solution to (_24_) coincides with the unique gradient curve of_ \(\mathcal{E}_{\tau,r}\) _for some_ \(r>0\) _large enough._
### Link with backpropagation in neural network
Here, we give a proof of Theorem 6.
Proof of Theorem 6.: Returning to the proof of Theorem 5, for any time \(T>0\), one can find \(r>0\) large enough such that \(\mu\), \(\mu_{m}\) coincide on \([0,T]\) with gradient curves with respect to \(\mathcal{E}_{\tau,r}\) starting from \(\mu_{0}\) and \(\mu_{0,m}\) respectively. As gradient curves with respect to \(\mathcal{E}_{\tau,r}\) verify the following semigroup property [18, Theorem 5.11]
\[\forall t\in[0,T],\ W_{2}(\mu(t),\mu_{m}(t))\leq e^{\lambda_{\tau,r}t}W_{2}( \mu_{0},\mu_{0,m}),\]
the expected convergence in \(C([0,T],\mathcal{P}_{2}(\Theta))\) holds by the convergence of the initial measures.
### Convergence of the measure towards the optimum
In the following, a LaSalle's principle argument is invoked in order to prove Theorem 7. For simplicity, we note \(\mathcal{E}_{\tau}:=\mathcal{E}_{\tau,\infty}\) for \(0<\tau<+\infty\).
#### 3.3.1 Characterization of optima
In this part, we focus on a characterization of global optima. For convenience, we extend the functional \(\mathcal{E}_{\tau}\) to the set of signed finite measures on \(\Theta\), denoted by \(\mathcal{M}(\Theta)\).
**Lemma 8**.: _For all \(\mu\in\mathcal{M}(\Theta)\), there exists a probability measure \(\mu_{p}\) such that \(\mathcal{E}_{\tau}(\mu)=\mathcal{E}_{\tau}(\mu_{p})\)._
Proof.: Let us first consider a non-negative measure \(\mu\in\mathcal{M}^{+}(\Theta)\). If \(\mu(\Theta)=0\), then \(\Phi(\theta,\cdot)=0\) \(\mu\)-almost everywhere and \(\mathcal{E}_{\tau}(\mu)=0\). Taking \(\mu_{p}:=\delta_{(0,0,w,b)}\) with \(w,b\) arbitrary is then sufficient to prove the desired result. Now, if \(\mu(\Theta)\neq 0\), consider \(\mu_{p}:=T\#\left(\frac{\mu}{\mu(\Theta)}\right)\) where \(T:(c,a,w,b)\to(c\mu(\Theta),a\mu(\Theta),w,b)\). In this case :
\[\int_{\Theta}\Phi(\theta;\cdot)d\mu =\int_{\Theta}\mu(\Theta)\Phi(\theta;\cdot)\frac{d\mu(\theta)}{ \mu(\Theta)}\] \[=\int_{\Theta}\Phi(T\theta;\cdot)\frac{d\mu(\theta)}{\mu(\Theta)}\] \[=\int_{\Theta}\Phi(\theta;\cdot)d\mu_{p}(\theta)\]
where we have used the form (18)-(19) of \(\Phi\) to get the second equality.
Now take an arbitrary signed measure \(\mu\in\mathcal{M}(\Theta)\). By the Hahn-Jordan decomposition theorem, there exist \(\mu\)-measurable sets \(P,N\) such that \(P\cup N=\Theta\) and \(\mu\) is non-negative (respectively non-positive) on \(P\) (respectively \(N\)). The signed measure \(\mu\) can be written as :
\[\mu=\mu_{P}-\mu_{N}\]
where \(\mu_{P},\mu_{N}\in\mathcal{M}^{+}(\Theta)\). Consider the following map :
\[G(c,a,w,b):=\left\{\begin{array}{rl}(-c,-a,w,b)&\text{if }(c,a,w,b)\in N\\ (c,a,w,b)&\text{if }(c,a,w,b)\in P\end{array}\right.\]
and the measure :
\[\mu_{G}:=G\#(\mu_{P}+\mu_{N})\in\mathcal{M}^{+}(\Theta).\]
By construction, we have \(P_{\tau}\left(T\#\left(\frac{\mu_{G}}{\mu_{G}(\Theta)}\right)\right)=P_{\tau} (\mu)\) and consequently, \(\mathcal{E}_{\tau}(\mu)=\mathcal{E}_{\tau}\left(T\#\left(\frac{\mu_{G}}{\mu_ {G}(\Theta)}\right)\right)\).
**Lemma 9**.: _The measure \(\mu\in\mathcal{P}_{2}(\Theta)\) is optimal for Problem 1 if and only if \(\phi_{\mu}(\theta)=0\) for all \(\theta\in\Theta\)._
Proof.: Suppose \(\mu\in\mathcal{P}_{2}(\Theta)\) optimal and let \(\zeta\in L^{1}(\Theta;\mu)\). Then, for all \(\nu:=\zeta\mu+\nu^{\perp}\in\mathcal{M}(\Theta)\) (Lebesgue decomposition of \(\nu\) with respect to \(\mu\) with \(\zeta\in L^{1}(\Theta;\mu)\)) and owing to Lemma 8, as \(t\) goes to \(0\),
\[\mathcal{E}_{\tau}(\mu+t\nu) =\mathcal{E}(P_{\tau}(\mu)+tP_{\tau}(\nu))\] \[=\mathcal{E}_{\tau}(\mu)+td\mathcal{E}\,|_{P_{\tau}(\mu)}(P_{\tau} (\nu))+o(t).\]
Hence as \(\mu\) is optimal
\[0=\frac{d}{dt}\left[\mathcal{E}_{\tau}(\mu+t\nu)\right]|_{t=0} =d\,\mathcal{E}\,|_{P_{\tau}(\mu)}(P_{\tau}(\nu))\] \[=\int_{\Theta}\,d\,\mathcal{E}\,|_{P_{\tau}(\mu)}(\Phi_{\tau}( \theta;\cdot))d\nu(\theta)\] \[=\int_{\Theta}\phi_{\mu}(\theta)d\nu(\theta)\] \[=\int_{\Theta}\phi_{\mu}(\theta)\zeta(\theta)d\mu(\theta)+\int_{ \Theta}\phi_{\mu}(\theta)d\nu^{\perp}(\theta).\]
As this is true for all \(\zeta\in L^{1}(\Theta,\mu)\), one gets:
\[\phi_{\mu}=0\ \mu\text{-almost everywhere},\quad\phi_{\mu}=0\ \nu^{\perp}\text{- almost everywhere} \tag{51}\]
for all \(\nu^{\perp}\perp\mu\). As \(\phi_{\mu}\) is continuous, this is equivalent to \(\phi_{\mu}=0\) everywhere in \(\Theta\). Indeed, let \(\theta\in\Theta\). If \(\theta\) belongs to \(\operatorname{Supp}(\mu)\), then by definition of the support, \(\mu(B(\theta,\varepsilon))>0\) for all \(\varepsilon>0\). Thus, one can take \(\theta_{\varepsilon}\in B(\theta,\varepsilon)\) with \(\phi_{\mu}(\theta_{\varepsilon})=0\). As \(\theta_{\varepsilon}\mathop{\longrightarrow}\limits_{\varepsilon\to 0}\theta\), using the continuity of \(\phi_{\mu}\), we obtain \(\phi_{\mu}(\theta)=0\). If \(\theta\not\in\operatorname{Supp}(\mu)\), then \(\delta_{\theta}\perp\mu\) and necessarily, \(\phi_{\mu}(\theta)=0\). The reverse implication is trivial.
Conversely suppose now \(\phi_{\mu}=0\) everywhere in \(\Theta\) and take \(\nu\in\mathcal{P}_{2}(\Theta)\), then by previous computations and the convexity of \(\mathcal{E}\) (slopes are increasing)
\[0=\frac{d}{dt}\left[\mathcal{E}(\mu+t(\nu-\mu))\right]|_{t=0}=\frac{d}{dt}\left[\mathcal{E}(P_{\tau}(\mu)+tP_{\tau}(\nu-\mu))\right]|_{t=0}\leq\mathcal{E}(P_{\tau}(\nu))-\mathcal{E}(P_{\tau}(\mu))\]
which implies that
\[\mathcal{E}_{\tau}(\mu)\leq\mathcal{E}_{\tau}(\nu)\]
and \(\mu\) is optimal.
#### 3.3.2 Escape from critical points
In this section, we use the notation :
\[\theta=(a,c,w,b)=:(a,c,\omega)\]
to make the difference between "linear" variables and "nonlinear" ones.
**Lemma 10**.: _For all \(\mu,\nu\) in \(\mathcal{P}_{2}(\Theta)\), it holds that_
\[\forall\theta\in\Theta,\ |\phi_{\mu}(\theta)-\phi_{\nu}(\theta)|\leq C \left(\int_{\Theta}|\theta_{1}|^{2}d\mu(\theta_{1})+\int_{\Theta}|\theta_{2}|^ {2}d\nu(\theta_{2})\right)W_{2}^{2}(\mu,\nu)(1+|\theta|^{2})\]
\[\forall\theta\in\Theta,\ |v_{\mu}(\theta)-v_{\nu}(\theta)|\leq C\left(\int_{ \Theta}|\theta_{1}|^{2}d\mu(\theta_{1})+\int_{\Theta}|\theta_{2}|^{2}d\nu( \theta_{2})\right)W_{2}^{2}(\mu,\nu)(1+|\theta|^{2})\]
Proof.: Here we focus on \(v_{\mu}\), the proof for \(\phi_{\mu}\) being very similar. Considering (29)-(30), one can decompose \(v_{\mu}\) as
\[v_{\mu}=:v_{\mu,1}+v_{2}+v_{\mu,3}, \tag{52}\]
with
\[v_{\mu,1} :=\nabla_{\theta}\left[\langle\nabla_{x}P_{\tau}(\mu),\nabla_{x} \Phi_{\tau}(\theta;\cdot)\rangle_{L^{2}(\Omega)}\right],\] \[v_{2} :=\nabla_{\theta}\left[-\langle f,\Phi_{\tau}(\theta;\cdot) \rangle_{L^{2}(\Omega)}\right],\] \[v_{\mu,3} :=\nabla_{\theta}\left[\int_{\Omega}P_{\tau}(\mu)(x)dx\times\int _{\Omega}\Phi_{\tau}(\theta;x)dx\right].\]
Using standard differentiation under the integral sign and Fubini's theorem, it holds that for all \(\gamma\in\Gamma_{o}(\mu,\nu)\),
\[v_{\mu,1}(\theta)-v_{\nu,1}(\theta)=\int_{\Theta^{2}}\int_{\Omega}\nabla_{ \theta}\nabla_{x}\Phi_{\tau}(\theta;x)(\nabla_{x}\Phi_{\tau}(\theta_{1};x)- \nabla_{x}\Phi_{\tau}(\theta_{2};x))dxd\gamma(\theta_{1},\theta_{2}).\]
Owing to (33)-(34), one gets
\[|v_{\mu,1}(\theta)-v_{\nu,1}(\theta)| \leq C(\tau)\int_{\Theta^{2}}\max(|\theta_{1}|,|\theta_{2}|)|\theta_{1}-\theta_{2}||\theta|^{2}\,d\gamma(\theta_{1},\theta_{2})\] \[\leq C(\tau)\left(\int_{\Theta}|\theta_{1}|^{2}d\mu+\int_{\Theta}|\theta_{2}|^{2}d\nu\right)W_{2}^{2}(\mu,\nu)|\theta|^{2},\]
where \(C(\tau)\) is a positive constant which only depends on \(\tau\), and where we used the Cauchy-Schwarz inequality. For the third term in the decomposition (52), one has :
\[v_{\mu,3}-v_{\nu,3}=\int_{\Theta^{2}}\int_{\Omega}\Phi_{\tau}(\theta_{1};\cdot)- \Phi_{\tau}(\theta_{2};\cdot)dxd\gamma(\theta_{1},\theta_{2})\times\int_{\Omega }\nabla_{\theta}\Phi_{\tau}(\theta;\cdot)dx.\]
Owing to (31), one gets :
\[|v_{\mu,3}(\theta)-v_{\nu,3}(\theta)| \leq C(\tau)\int_{\Theta^{2}}\int_{\Omega}\max(|\theta_{1}|,|\theta_{2}|)|\theta_{1}-\theta_{2}|dxd\gamma(\theta_{1},\theta_{2})|\theta|\] \[\leq C(\tau)\left(\int_{\Theta}|\theta_{1}|^{2}d\mu+\int_{\Theta}|\theta_{2}|^{2}d\nu\right)W_{2}^{2}(\mu,\nu)|\theta|\]
where we again used the Cauchy-Schwarz inequality. Hence the desired result.
**Proposition 7**.: _Let \(\mu\in\mathcal{P}_{2}(\Theta)\) such that there exists \(\theta\in\Theta\), \(\phi_{\mu}(\theta)\neq 0\). Then there exist a set \(A\subset\Theta\) and \(\varepsilon>0\) such that if there exists \(t_{0}>0\) with \(W_{2}(\mu(t_{0}),\mu)\leq\varepsilon\) and \(\mu(t_{0})(A)>0\), then there exists a time \(0<t_{0}<t_{1}<+\infty\) such that \(W_{2}(\mu(t_{1}),\mu)>\varepsilon\)._
Proof.: As \(\phi_{\mu}\) is linear in \(a\) and \(c\), it can be written under the form
\[\phi_{\mu}(\theta)=:a\psi_{\mu}(\omega)+cr_{\mu}.\]
By hypothesis, the set
\[A_{0}:=\{\theta\in\Theta\ |\ \phi_{\mu}(\theta)\neq 0\}\]
is a non-empty open set. This is equivalent to saying that either there exists \(\omega\) such that \(\psi_{\mu}(\omega)\neq 0\), or \(r_{\mu}\neq 0\). Suppose that \(\psi_{\mu}\) is nonzero somewhere, the case for \(r_{\mu}\) being similar. For all \(\alpha\in\mathbb{R}\), we denote by
\[\begin{cases}A_{\alpha}^{+}=\psi_{\mu}^{-1}(]\alpha,+\infty[),\\ A_{\alpha}^{-}=\psi_{\mu}^{-1}(]-\infty,\alpha[).\end{cases}\]
Now we focus on \(A_{0}^{-}\) and suppose that this set is non-empty. The case where \(A_{0}^{+}\) is non-empty can be handled similarly and is left to the reader.
By Lemma 11 and the regular value theorem, there exists \(\eta>0\) such that \(\partial A_{-\eta}^{-}=\psi_{\mu}^{-1}(\{-\eta\})\) is a \((d+1)\)-orientable manifold on which \(\nabla_{\omega}\psi_{\mu}\) is nonzero. With our choice of activation function \(\sigma_{H,\tau}\), it is easy to prove that \(A_{-\eta}^{-}\) is a bounded set. Indeed, if \(b\) is large enough, then \(\Omega\ni x\mapsto\sigma_{H,\tau}(w\cdot x+b)\) is zero and \(\psi_{\mu}(w,b)\) is zero.
On \(\partial A_{-\eta}^{-}\), the gradient \(\nabla_{\omega}\psi_{\mu}\) points outward from \(A_{-\eta}^{-}\) and, denoting by \(n_{\text{out}}\) the outward unit normal to \(A_{-\eta}^{-}\), there exists \(\beta>0\) such that \(|\nabla_{\omega}\psi_{\mu}\cdot n_{\text{out}}|>\beta\) on \(\partial A_{-\eta}^{-}\), since this continuous function is nonzero on a compact set. Hence, defining :
\[A:=\{(a,c,\omega)\in\Theta\ |\ \omega\in A_{-\eta}^{-},\ a\geq 0\}\]
and owing to the fact that \(v_{\mu}=(v_{\mu,a},v_{\mu,c},v_{\mu,\omega})\) with \(v_{\mu,a}=\psi_{\mu}(\omega)\), \(v_{\mu,c}=r_{\mu}\), \(v_{\mu,\omega}=a\nabla_{\omega}\psi_{\mu}(\omega)\), it holds :
\[\begin{cases}\qquad v_{\mu,a}<-\eta\ \text{on}\ A\\ v_{\mu,\omega}\cdot n_{out}>\beta a\ \text{on}\ \mathbb{R}_{+}\times\mathbb{R}\times\partial A_{-\eta}^{-}.\end{cases} \tag{53}\]
By contradiction, suppose that \(\mu(t_{0})\) has non zero mass on \(A\) and that \(W_{2}(\mu,\mu(t))\leq\varepsilon\) (with \(\varepsilon\) fixed later) for all time \(t\geq t_{0}\). Then using Lemma 10, one has :
\[|v_{\mu(t)}(\theta)-v_{\mu}(\theta)|\leq C(\tau,\mu)(1+|\theta|^{2})\varepsilon \tag{54}\]
and
\[|\phi_{\mu(t)}(\theta)-\phi_{\mu}(\theta)|\leq C(\tau,\mu)(1+|\theta|^{2})\varepsilon.\]
One takes \(\varepsilon:=\sqrt{\frac{\eta}{2C(\tau,\mu)R}}\) where \(R>0\) satisfies :
\[(R-1)\mu(t_{0})(A)>\int|\theta|^{2}d\mu+\frac{\eta}{2C(\tau,\mu)R} \tag{55}\]
which exists since \(\mu(t_{0})(A)>0\) by hypothesis. On the set \(\{\theta\in A\ |\ 1+|\theta|^{2}\leq R\}\) and by (54), we have :
\[|v_{\mu(t)}(\theta)-v_{\mu}(\theta)|\leq\frac{\eta}{2}\]
and so by (53) and the fact that \(v_{t}=-v_{\mu(t)}\):
\[\left\{\begin{aligned} v_{t,a}&>\eta/2\ \text{on}\ A\\ v_{t,\omega}\cdot n_{out}&<-\beta/2\times a\ \text{on}\ \partial A^{-}_{-\eta}.\end{aligned}\right.\]
The general picture is given by Figure 3. As a consequence, there exists a time \(t_{1}\) such that the set \(\{\theta\in A\ |\ 1+|\theta|^{2}\leq R\}\) has no mass and
\[\int|\theta|^{2}d\mu(t)(\theta)\geq(R-1)\mu(t)(A)\geq(R-1)\mu(t_{0})(A).\]
At the same time, as \(W_{2}(\mu,\mu(t))\leq\varepsilon\) :
\[\int|\theta|^{2}d\mu(t)(\theta)\leq\int|\theta|^{2}d\mu(\theta)+\varepsilon^{2 }=\int|\theta|^{2}d\mu(\theta)+\frac{\eta}{2C(\tau,\mu)}\]
and this is a contradiction with condition (55) on \(R\).
**Remark 7**.: _The set \(A\) constructed in the proof of previous lemma is of the form :_
\[A:=\{(a,c,\omega)\in\Theta\ |\ \omega\in A^{-}_{-\eta_{1}}\}\cup\{(a,c,\omega)\ |\ \omega\in A^{+}_{\eta_{2}}\} \tag{56}\]
_where \(\eta_{1},\eta_{2}\) are strictly positive._
**Lemma 11**.: _For all \(\mu\in\mathcal{P}_{2}(\Theta)\), if \(\psi_{\mu}<0\) somewhere, there exists a strictly negative regular value \(-\eta\) (\(\eta>0\)) of \(\psi_{\mu}\)._
Proof.: As \(\psi_{\mu}<0\) somewhere and by continuity, there exists a non-empty open set \(O\subset]-\infty,0[\) such that \(O\subset\mathrm{range}(\psi_{\mu})\). Next, we use the Sard-Morse theorem recalled below :
**Theorem 8** (Sard-Morse).: _Let \(\mathcal{M}\) be a differentiable manifold and \(f:\mathcal{M}\to\mathbb{R}\) of class \(\mathcal{C}^{n}\), then the image of the critical points of \(f\) (where the gradient is zero) is Lebesgue negligible in \(\mathbb{R}\)._
This result applies to \(\psi_{\mu}\) and the image of the critical points of \(\psi_{\mu}\) is Lebesgue negligible. As a consequence, there exists a point \(o\in O\) which is a regular value of \(\psi_{\mu}\). As \(o\in O\), it is strictly negative and this finishes the proof of the lemma.
#### 3.3.3 Convergence
This preliminary lemma gives an insight into why Hypothesis 1 is useful :
**Lemma 12**.: _For all \(\mu\in\mathcal{P}_{2}(\Theta)\), all \(\theta\notin\mathbb{R}^{2}\times S_{\mathbb{R}^{d}}(1)\times]-\sqrt{d}-2,\sqrt{d}+2[\) and all \(\tau>1\), the potential can be written as :_
\[\phi_{\mu}(\theta)=cr_{\mu}\]
_where \(r_{\mu}\) is a constant that depends on \(\mu\). In particular, \(\phi_{\mu}(\theta)\) does not depend on \(a,w,b\)._
Proof.: For all \(x\in\Omega,|b|>\sqrt{d}+2,\tau>1\) :
\[|w\cdot x+b|\geq|b|-|x|_{\infty}|w|_{1}>2\]
and
\[\sigma_{H,\tau}(w\cdot x+b)=0.\]
This implies that for \(|b|\geq\sqrt{d}+2\) and \(\mu\in\mathcal{P}_{2}(\Theta)\), the potential \(\phi_{\mu}\) takes the form \(\phi_{\mu}=cr_{\mu}\), where \(r_{\mu}\) is a constant.
In fact Hypothesis 1 is verified by the gradient curve \((\mu(t))_{t\geq 0}\) for all time. This is proved in the next lemma.
**Lemma 13**.: _If \(\mu_{0}\) satisfies Hypothesis 1 then for all \(t\geq 0\) and every open set \(O\subset S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2,\sqrt{d}+2]\),_
\[\mu(t)(\mathbb{R}^{2}\!\times\!O)>0\]
The arguments of the proof of the last lemma are based on tools from algebraic topology. One can find a nice introduction to the topic in the reference book [21]. In simple words, we exploit homotopy properties of the sphere to prove that the measure \(\mu(t)\) keeps a large enough support.
Proof.: For all \(t\geq 0\), as \(\mu(t)=(\chi(t))\#\mu_{0}\), we have [2, Lemma C.8] :
\[\operatorname{Supp}(\mu(t))=\overline{\chi(t)\left(\operatorname{Supp}(\mu_ {0})\right)}. \tag{57}\]
Now let \(\xi_{t}(w,b):=(P_{S_{\mathbb{R}^{d}}(1)\times\mathbb{R}}\circ\chi(t))((0,0,w,b))\) where \(P_{S_{\mathbb{R}^{d}}(1)\times\mathbb{R}}\) is the projection on \(S_{\mathbb{R}^{d}}(1)\times\mathbb{R}\) (the \(w,b\) variables). We claim that the choice of the activation function leaves the extremal spheres invariant, i.e. \(\xi_{t}(w,\pm(\sqrt{d}+2))=(w,\pm(\sqrt{d}+2))\). Indeed, by Lemma 12, for \(\theta=(c,a,w,\pm(\sqrt{d}+2))\) we have \(\phi_{\mu}(\theta)=cr_{\mu}\), giving :
\[\left\{\begin{array}{rcl}v_{\mu,w}(\theta)=&0,\\ v_{\mu,b}(\theta)=&0\end{array}\right.\]
and the claim is proven. Consequently by Lemma 14, the continuous map \(\xi_{t}\) is surjective.
Now let \(O\subset S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2,\sqrt{d}+2]\) be an open set. By what precedes, there exists a point \(\omega\in S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2,\sqrt{d}+2]\) such that \(\xi_{t}(\omega)\in O\) and \(\chi(t)((0,0,\omega))\in\mathbb{R}^{2}\times O\). As \((0,0,\omega)\) belongs to the support of \(\mu_{0}\) by hypothesis then \(\chi(t)((0,0,\omega))\) belongs to the support of \(\mu(t)\) by (57) and :
\[\mu(t)(\mathbb{R}^{2}\!\times\!O)>0\]
which finishes the proof of the lemma.
Lemma 14 gives conditions for the surjectivity of a continuous map on a cylinder.
Figure 3: The escape of mass towards large values of \(a\)
**Lemma 14**.: _Let \(f\) be a continuous map \(f:S_{\mathbb{R}^{d}}(1)\times[0,1]\to S_{\mathbb{R}^{d}}(1)\times[0,1]=:C\), homotopic to the identity such that :_
\[\forall w\in S_{\mathbb{R}^{d}}(1),\ \begin{cases}f(w,0)=&(w,0),\\ f(w,1)=&(w,1).\end{cases}\]
_Then \(f\) is surjective._
Proof.: Suppose that \(f\) misses a point \(p\), then necessarily \(p=(w,t)\) with \(0<t<1\). We can write :
\[g:C\to C\setminus\{p\}\]
the map \(f\) viewed as a map with values in \(C\setminus\{p\}\). The induced homomorphism on homology groups reads :
\[g_{\star}:H_{d-1}(C)\to H_{d-1}(C\setminus\{p\}).\]
Aside that, we have the classic information on homology groups of \(C\) and \(C\setminus\{p\}\) :
\[\begin{cases}H_{d-1}(C)=H_{d-1}(S_{\mathbb{R}^{d}}(1))&\simeq\mathbb{Z},\\ H_{d-1}(C\setminus\{p\})=H_{d-1}(S_{\mathbb{R}^{d}}(1)\lor S_{\mathbb{R}^{d}}( 1))&\simeq\mathbb{Z}^{2}\end{cases}\]
where \(\vee\) designates the wedge sum. Thus, the homomorphism \(g_{\star}\) can be written as :
\[g_{\star}:\mathbb{Z}\to\mathbb{Z}^{2}.\]
As \(g\) leaves the two spheres \(w\mapsto(w,0),w\mapsto(w,1)\) invariant, we have :
\[g_{\star}(1)=(1,1).\]
Now we note \(i:C\setminus\{p\}\to C\) the canonical inclusion map. For all \((a,b)\in\mathbb{Z}^{2}\),
\[i_{\star}(a,b)=a+b.\]
By hypothesis, \(f\) is homotopic to the identity so \(f_{\star}=I_{\star}\) and \(f_{\star}(1)=1\) but at the same time :
\[f_{\star}(1)=i_{\star}g_{\star}(1)=i_{\star}((1,1))=2\]
which gives a contradiction.
This allows us to conclude on the convergence and prove Theorem 7.
Proof of Theorem 7.: By contradiction, suppose that \(\mu^{\star}\) is not optimal. Then, by Lemma 9, \(\phi_{\mu^{\star}}\neq 0\) somewhere. Reusing the separation of variables (see the proof of Proposition 7), \(\phi_{\mu^{\star}}\) can be written as :
\[\phi_{\mu^{\star}}(\theta)=a\psi_{\mu^{\star}}(w,b)+cr_{\mu^{\star}}.\]
Hence either :
* \(r_{\mu^{\star}}\) is not zero, hence \(v_{\mu^{\star},c}\neq 0\), and one can prove that some mass escapes to \(c=\infty\) as in the proof of Proposition 7.
* \(\psi_{\mu^{\star}}\) is not identically zero and the set \(A\) defined in (56) is not empty and satisfies : \[A\subset\mathbb{R}^{2}\times S_{\mathbb{R}^{d}}(1)\times[-\sqrt{d}-2,\sqrt{d}+2]\] (58) by Lemma 12.
We focus on the last item. By Proposition 7, there exists \(\varepsilon>0\) such that if \(W_{2}(\mu(t_{0}),\mu^{\star})\leq\varepsilon\) for some \(t_{0}\) and \(\mu(t_{0})(A)>0\), then there exists a further time \(t_{1}\) with \(W_{2}(\mu(t_{1}),\mu^{\star})>\varepsilon\). As \((\mu(t))_{t\geq 0}\) converges towards \(\mu^{\star}\), there exists \(t_{0}\) such that :
\[\forall t\geq t_{0},\ W_{2}(\mu(t),\mu^{\star})\leq\varepsilon.\]
But by Lemma 13 and (58), for all time \(\mu(t)(A)>0\) and consequently there exists a time \(t_{1}>t_{0}\) with :
\[W_{2}(\mu(t_{1}),\mu^{\star})>\varepsilon\]
which gives the contradiction.
## 4 Numerical experiments
In this section, we will conduct numerical experiments to evaluate the potential of the proposed method.
### The effect of frequency
First, the influence of the frequency on the approximation is investigated. To do so, we consider \(d=1\) and the following source term, for which the solution is a cosine mode :
\[f_{k}(x):=\pi^{2}|k|^{2}\cos(\pi k\cdot x).\]
In higher dimensions, we use the corresponding source term, which is a tensor product of its one-dimensional counterpart :
\[f_{k}(x_{1},\cdots,x_{d}):=\pi^{2}|k|_{l^{2}}^{2}\cos(\pi k_{1}\cdot x_{1}) \cdots\cos(\pi k_{d}\cdot x_{d}).\]
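For reference, the following is a small illustration of our own (not taken from the implementation described below) of this family of source terms and of the corresponding exact cosine-mode solutions, which are the quantities used later to measure the approximation error; the function names are ours.

```python
import numpy as np

def u_star(x, k):
    """Exact cosine-mode solution u*(x) = prod_i cos(pi k_i x_i); x has shape (n, d), k has shape (d,)."""
    return np.prod(np.cos(np.pi * k * x), axis=1)

def source_f(x, k):
    """Source term f_k = pi^2 |k|^2 u*, so that -Laplacian(u*) = f_k on [0,1]^d."""
    return np.pi ** 2 * np.sum(k ** 2) * u_star(x, k)
```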
The code is written in Python using the Keras/TensorFlow framework. One should keep in mind the following implementation details (a minimal sketch of this setup is given after the list) :
* The neural network represents the numerical approximation taking values of \(x\in\Omega\) as input and giving a real as output.
* The loss function is approximated by Monte Carlo sampling of the integrals, where the measure is uniform on \(\Omega\). For each training phase, we use batches of size \(10^{2}\) obtained from a dataset of \(10^{5}\) samples; the number of epochs is chosen so that the optimization time equals \(2\) (learning rate \(\times\) number of steps \(=2\)). Note that the dataset is shuffled at each epoch.
* The derivative involved in the loss is computed thanks to automatic differentiation.
* The training routine is given by the backpropagation algorithm coupled with a gradient descent optimizer whose learning rate is \(\xi:=\dfrac{1}{2nm}\), where \(n\) is the batch size and \(m\) is the width of the neural network involved. This choice will be explained later in the analysis.
* In all the plots, the reader will see the mean curve and a shaded zone representing the interval whose width is twice the standard deviation. Each simulation is run \(4\) times to compute these statistics.
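To make these points concrete, here is the minimal, self-contained sketch announced above (a hedged illustration of our own, not the authors' code): the network is a generic one-hidden-layer dense model standing in for the parametrization \(\Phi_{\tau}\), the energy is taken as the Monte Carlo estimate \(\mathrm{mean}\big{(}\tfrac{1}{2}|\nabla u|^{2}-fu\big{)}+(\mathrm{mean}(u))^{2}\) in line with the discretized potential (60) discussed in Remark 8, and the learning rate follows the scaling \(1/(2nm)\); the activation, the layer sizes and the sampling loop are illustrative assumptions.

```python
# Hedged sketch of the training setup (not the authors' code); d = 1, cosine mode k = 1.
import math
import tensorflow as tf

d, m, n = 1, 1000, 100                      # dimension, network width, batch size
k = 1.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(m, activation="tanh", input_shape=(d,)),   # stand-in for Phi_tau
    tf.keras.layers.Dense(1, use_bias=False),
])

def source(x):                              # f_k(x) = pi^2 k^2 cos(pi k x) for d = 1
    return math.pi ** 2 * k ** 2 * tf.cos(math.pi * k * x)

optimizer = tf.keras.optimizers.SGD(learning_rate=1.0 / (2 * n * m))  # cf. Remark 8

@tf.function
def train_step(x):
    with tf.GradientTape() as theta_tape:
        with tf.GradientTape() as x_tape:
            x_tape.watch(x)
            u = model(x)
        grad_u = x_tape.gradient(u, x)      # automatic differentiation with respect to x
        energy = tf.reduce_mean(0.5 * tf.reduce_sum(grad_u ** 2, axis=1, keepdims=True)
                                - source(x) * u)
        energy += tf.reduce_mean(u) ** 2    # penalization of the mean (zero-mean constraint)
    grads = theta_tape.gradient(energy, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return energy

n_steps = 4 * n * m                         # so that learning rate x number of steps = 2
for _ in range(n_steps):
    train_step(tf.random.uniform((n, d)))   # fresh uniform samples on [0,1]^d
```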
For \(d=1\) and a width \(m=1000\), the simulations are reported in Figure 4, where very satisfactory results are observed for \(k=1,3\); the same conclusions hold for \(d=2\).
Figure 4: The effect of frequency on the approximation when \(d=1\) and \(m=1000\)
Figure 5: The numerical solutions when \(d=1\) and \(m=1000\)
**Remark 8**.: _In this remark, we expose some heuristic arguments for the present choice of scaling related to the learning rate :_
\[\xi:=\frac{1}{2nm}.\]
_It is possible to write the learning scheme as follows :_
\[\frac{\theta_{t+1}-\theta_{t}}{dt}=-\nabla_{\theta}\phi_{\mu_{t}^{m}}^{n}( \theta_{t}) \tag{59}\]
_where :_
\[\phi_{\mu_{t}^{m}}^{n}(\theta):=\frac{1}{nm}\sum_{i,j}\nabla\Phi( \theta_{j},x_{i})\cdot\nabla\Phi(\theta,x_{i})-f(x_{i})\Phi(\theta,x_{i})+ \left(\frac{1}{nm}\sum_{i,j}\Phi(\theta,x_{i})\right)^{2} \tag{60}\]
_where \((x_{i})_{i}\) are \(n\) samples drawn uniformly from the \(d\)-dimensional cube._
_By analogy, equations (59)-(60) can be interpreted as an explicit finite element scheme for the heat equation where the space discretization parameter is \(h:=\frac{1}{\sqrt{nm}}.\) This gives the CFL condition :_
\[2dt\leq h^{2}\]
_which is equivalent to :_
\[dt\leq\frac{1}{2nm}.\]
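_For instance, with the values \(n=m=10^{2}\) mentioned below, this amounts to \(dt\leq 5\times 10^{-5}\)._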
_In practice, one can observe that if one takes \(dt>O\left(\frac{1}{nm}\right)\) then the scheme diverges in the same way as a classical finite element scheme._
Figure 6: The effect of frequency on the approximation when \(d=2\)
_The CFL condition is bad news since it prevents the use of the large batch sizes necessary to reach good precision. In practice, the maximum one can do with a standard personal computer is \(n,m=10^{2}\)._
### The effect of dimension
To evaluate the effect of dimension on performance, we consider frequencies of the form \(k=(\bar{k},0,\cdots,0)\) where \(\bar{k}\) is an integer, and plot the \(L^{2}\) error as a function of the dimension for different \(\bar{k}\) (a short sketch of how such an error can be estimated by sampling is given after the list below). This is done in Figure 7, where several observations can be made :
* For low frequency, the precision is not affected by dimension.
* At high frequency, performance deteriorates as the dimension increases.
* A larger neural network captures high-frequency modes better, up to a certain dimension.
* Variance increases with frequency but not with dimension.
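The precise evaluation of the \(L^{2}\) error is not detailed here; the sketch announced above shows one simple possibility, namely a relative \(L^{2}\) error estimated by uniform Monte Carlo sampling (an illustration of our own, and the helper name is hypothetical, not necessarily the procedure used for the figures):

```python
import numpy as np

def relative_l2_error(u, u_star, d, n_samples=10**5, seed=0):
    """Monte Carlo estimate of ||u - u*||_{L^2} / ||u*||_{L^2} with uniform samples on [0,1]^d."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=(n_samples, d))
    exact = u_star(x)
    return np.sqrt(np.mean((u(x) - exact) ** 2) / np.mean(exact ** 2))
```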
For completeness we plot in Figure 8 a high dimensional example where \(d=10\), \(k=(1,1,0,\cdots,0)\) to show that the proposed method works well in the high dimensional/low frequency regime. The contour plot shows the function's values on the slice \((x_{1},x_{2},0.5,\cdots,0.5)\).
Figure 7: The effect of dimension for different frequencies and width
Finally we show an example where a lot of low frequencies are involved in the high dimensional regime :
\[f(x)=2\pi^{2}\sum_{k=1}^{d-1}\cos(\pi\cdot x_{k})\cos(\pi\cdot x_{k+1})\]
whose solution is :
\[u^{\star}(x)=\sum_{k=1}^{d-1}\cos(\pi\cdot x_{k})\cos(\pi\cdot x_{k+1}).\]
For \(d=6\), \(m=1000\) and all other parameters being identical to previous cases, one gets convergence of the solution on Figure 9 where the contour plot still shows the function's values on the slice \((x_{1},x_{2},0.5,\cdots,0.5)\).
## 5 Conclusion
In this article, the ability of two-layer neural networks to solve the Poisson equation is investigated. First, the PDE problem, commonly understood in the Sobolev sense, is reinterpreted from the perspective of probability measures by writing the energy functional as a function over probabilities. Then, we propose to solve the resulting minimization problem by means of gradient curves, for which an existence result is shown. To justify this choice of method, convergence towards an optimal measure is proved assuming the convergence of the gradient curve. Finally, numerical illustrations with a detailed analysis
Figure 8: The case \(d=10\), \(k=(1,1,0,\cdots,0)\) and \(m=1000\)
Figure 9: The mixed mode solution
of the effects of dimension and frequency are presented. With this work, it becomes clear that neural networks are a viable method to solve the Poisson equation even in the high dimensional regime, something out of reach for classical methods. Nonetheless, some questions and extensions deserve more detailed developments. First, the main point to note is that the convergence is not proved theoretically, even if it is observed in practice. Additionally, the domain considered is very particular, \(\Omega=[0,1]^{d}\), and it is not obvious that such a theory could be generalized to domains where a sine/cosine decomposition is not available. In the numerical illustrations, the integrals involved in the cost were not computed exactly but approximated by uniform sampling. It would be interesting to study the convergence of gradient curves with respect to the number of samples.
## Appendix A The differential structure of Wasserstein spaces over compact Alexandrov spaces
The aim of this section is to get acquainted with the differential structure of \(\mathcal{P}_{2}(\Theta)\). The results presented here are not rigorously proved; we rather give a didactic introduction to the topic, the main reference being [18].
### The differential structure of Alexandrov spaces
An Alexandrov space \((A,d)\) is a geodesic space, equipped with its distance \(d\), whose triangles enjoy a concavity property. Roughly, Alexandrov spaces are spaces where the curvature is bounded from below by a uniform constant. Before going further, we need to introduce some notation :
**Definition 6**.: _Let \(\alpha\) be a unit speed geodesic with \(\alpha(0)=a\in A\) and \(s\geq 0\), then we introduce the notation :_
\[(\alpha,s):\mathbb{R}_{+}\ni t\mapsto\alpha(st)\]
_the associated geodesic of velocity \(s\). We then make the identification_
\["(\alpha,1)=\alpha"\]
_for any unit speed geodesic \(\alpha\)._
It is not so important to focus on a rigorous definition of such spaces but one should remember the following fundamental property of existence of a tangential cone structure :
**Theorem 9**.: _Let \(\alpha,\beta\) be two unit speed geodesics with \(\alpha(0)=\beta(0)=:a\in A\) and \(s,t\geq 0\). Then the limit :_
\[\sigma_{a}((\alpha,s),(\beta,t)):=\lim_{\varepsilon\to 0}\frac{1}{ \varepsilon}d(\alpha(s\varepsilon),\beta(t\varepsilon))\]
_exists. Moreover,_
\[\frac{1}{2st}\left(s^{2}+t^{2}-\sigma_{a}((\alpha,s),(\beta,t))^{2}\right) \tag{61}\]
_depends neither on \(s\) nor on \(t\)._
The previous theorem is very important as it enables to introduce a notion of angle and scalar product :
**Corollary 3**.: _One can define the local angle \(\angle_{a}((\alpha,s),(\beta,t))\) between \((\alpha,s)\) and \((\beta,t)\) by :_
\[\cos(\angle_{a}((\alpha,s),(\beta,t))):=\frac{1}{2st}\left(s^{2}+t^{2}-\sigma_{a}((\alpha,s),(\beta,t))^{2}\right)\]
_and a local scalar product :_
\[\langle(\alpha,s),(\beta,t)\rangle_{a}:=st\cos(\angle_{a}((\alpha,s),(\beta,t) )).\]
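As a simple illustration (our own, not taken from [18]), consider \(A=\mathbb{R}^{n}\) with its Euclidean distance. Unit speed geodesics through \(a\) are straight lines \(\alpha(u)=a+uv_{\alpha}\) with \(|v_{\alpha}|=1\), so that

\[\sigma_{a}((\alpha,s),(\beta,t))=\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}|a+s\varepsilon v_{\alpha}-a-t\varepsilon v_{\beta}|=|sv_{\alpha}-tv_{\beta}|,\]

and therefore

\[\langle(\alpha,s),(\beta,t)\rangle_{a}=\frac{1}{2}\left(s^{2}+t^{2}-|sv_{\alpha}-tv_{\beta}|^{2}\right)=st\,v_{\alpha}\cdot v_{\beta},\]

which is the usual Euclidean scalar product between the velocities \(sv_{\alpha}\) and \(tv_{\beta}\).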
We then have the following definitions.
**Definition 7**.: _The space of directions \(\Sigma_{a}(A)\) is the completion of_
\[\{(\alpha,1)\ |\ \alpha\text{ unit speed geodesic departing from a }\}\]
_quotiented by the relationship \(\sigma_{a}=0\) with respect to the distance \(\sigma_{a}\)._
_The tangent cone, i.e. the set of geodesics departing from \(a\) at speed \(s\), of the form \((\alpha,s)\) for some \((\alpha,1)\in\Sigma_{a}(A)\), is denoted by \(C_{a}(A)\)._
A major result from [18] is that if the underlying space \(A\) is Alexandrov and compact, then the space of probability measures \(\mathcal{P}_{2}(A)\) is also an Alexandrov space and all the differential structure presented above is available. The proof of this result is based on McCann interpolation, which allows one to link geodesics of \(\mathcal{P}_{2}(A)\) with geodesics of the underlying space.
Moreover, it is possible to define a notion of differentiation.
**Definition 8**.: _A curve \((a_{t})_{t\in\mathbb{R}}\) in \(A\) is said to be differentiable at \(t=0\) if there exists \((\alpha,\tau)\in C_{a_{0}}(A)\) such that, for all unit speed geodesics \((\alpha_{i},1)\in\Sigma_{a_{0}}(A)\) linking \(a_{0}\) and \(a_{t_{i}}\), with \(t_{i}\geq 0\) and \(\lim\limits_{i\to\infty}t_{i}=0\), it holds that :_
\[\lim\limits_{i\to\infty}(\alpha_{i},d(a_{0},a_{t_{i}})/t_{i})=(\alpha,\tau)\]
_where the convergence has to be understood in the sense of the distance \(\sigma_{a}\). Moreover, the derivative of the curve at \(t=0\) writes :_
\[a_{0}^{\prime}:=(\alpha,\tau).\]
### The notion of gradient
Now let us consider an energy \(\mathcal{E}:A\to\mathbb{R}\) with the following property of convexity.
**Definition 9**.: _We say that \(\mathcal{E}\) is convex along geodesics if there exists \(K\in\mathbb{R}\) such that for all rescaled geodesics \(\alpha:[0,1]\to A\) and all \(\lambda\in[0,1]\) :_
\[\mathcal{E}(\alpha(\lambda))\leq(1-\lambda)\,\mathcal{E}(\alpha(0))+\lambda\,\mathcal{E}(\alpha(1))-\frac{K}{2}\lambda(1-\lambda)d(\alpha(0),\alpha(1))^{2}.\]
Assuming such convexity, it is possible to define the gradient's direction of \(\mathcal{E}\) using the differential structure of \(A\) (see [18, Lemma 4.3]). Before doing this, it is necessary to introduce the directional derivative :
**Definition 10**.: _For \(a\in A\) and \((\alpha,s)\in C_{a}(A)\), one defines :_
\[D_{a}\,\mathcal{E}((\alpha,s)):=\lim\limits_{\varepsilon\to 0}\frac{ \mathcal{E}(\alpha(s\varepsilon))-\mathcal{E}(\alpha(0))}{\varepsilon}.\]
One can prove that the limit above exists using the convexity assumption on \(\mathcal{E}\). Owing to this, there exists a direction for which the local slope (see Definition 4) is attained, in the sense defined below.
**Theorem 10**.: _For all \(a\in A\) such that \(|\nabla_{-}\,\mathcal{E}\,|(a)<\infty\), there exists a unique direction \((\alpha,1)\in\Sigma_{a}(A)\) such that :_
\[D_{a}\,\mathcal{E}((\alpha,1))=-|\nabla_{-}\,\mathcal{E}\,|(a).\]
_This direction \(\alpha\) is denoted by \(\frac{\nabla_{-}\,\mathcal{E}(a)}{|\nabla_{-}\,\mathcal{E}\,|(a)}\), which means that :_
\[D_{a}\,\mathcal{E}((\alpha,|\nabla_{-}\,\mathcal{E}\,|(a))):=-|\nabla_{-}\, \mathcal{E}\,|^{2}(a).\]
With this, it is straightforward to define the notion of gradient curve.
**Definition 11**.: _A Lipschitz curve \((a_{t})_{t\geq 0}\) is said to be a gradient curve with respect to \(\mathcal{E}\) if it is differentiable for all \(t\geq 0\) and :_
\[\forall t\geq 0,\ a_{t}^{\prime}=\left(\frac{\nabla_{-}\,\mathcal{E}(a_{t})}{| \nabla_{-}\,\mathcal{E}\,|(a_{t})},|\nabla_{-}\,\mathcal{E}\,|(a_{t})\right) \in C_{a_{t}}(A).\]
In [18], results about the existence and uniqueness of gradient curves on \(\mathcal{P}_{2}(A)\) are given.
## Acknowledgements
The authors acknowledge funding from the Tremplin-ERC Starting ANR grant HighLEAP (ANR-22-ERCS-0012). |
2305.16528 | Topological photonics: fundamental concepts, recent developments, and
future directions | Topological photonics is emerging as a new paradigm for the development of
both classical and quantum photonic architectures. What makes topological
photonics remarkably intriguing is the built-in protection as well as intrinsic
unidirectionality of light propagation, which originates from the robustness of
global topological invariants. In this Perspective, we present an intuitive and
concise pedagogical overview of fundamental concepts in topological photonics.
Then, we review the recent developments of the main activity areas of this
field, categorized into linear, nonlinear, and quantum regimes. For each
section, we discuss both current and potential future directions, as well as
remaining challenges and elusive questions regarding the implementation of
topological ideas in photonics systems. | Mahmoud Jalali Mehrabad, Sunil Mittal, Mohammad Hafezi | 2023-05-25T23:21:06Z | http://arxiv.org/abs/2305.16528v1 | # Topological photonics: fundamental concepts, recent developments, and future directions
###### Abstract
Topological photonics is emerging as a new paradigm for the development of both classical and quantum photonic architectures. What makes topological photonics remarkably intriguing is the built-in protection as well as intrinsic unidirectionality of light propagation, which originates from the robustness of global topological invariants. In this Perspective, we present an intuitive and concise pedagogical overview of fundamental concepts in topological photonics. Then, we review the recent developments of the main activity areas of this field, categorized into linear, nonlinear, and quantum regimes. For each section, we discuss both current and potential future directions, as well as remaining challenges and elusive questions regarding the implementation of topological ideas in photonics systems.
###### Contents
* I Introduction
* I.1 Key demonstrations in developments of topological photonics
* I.2 Scope and aims
* II Linear topological photonics
* II.1 Concept of Topological Invariants
* II.2 Toy model: charged particles in strong magnetic field
* II.2.1 Classical picture
* II.2.2 Semi-classical picture
* II.2.3 Quantum picture: Landau quantization
* II.3 Hofstadter Butterfly
* II.4 Photonic lattice
* II.5 Various topological photonic models and their implementations
* II.6 Topological photonic crystals
* III Nonlinear Topological Photonics
* III.1 Nonlinearly Induced Topological Phase Transitions
* III.2 Spatial Solitons
* III.3 Dissipative Kerr Temporal Solitons and Frequency Combs
* IV Quantum Topological Photonics
* IV.1 Topological sources of quantum light
* IV.2 Topological robustness for propagating quantum states of light
* IV.3 Topological photonic systems coupled to quantum emitters
* V Remaining challenges and future directions
* V.1 Linear topological photonics
* V.2 Nonlinear topological photonics
* V.3 Quantum topological photonics
* V.4 Strong photon-photon interaction and coupled electron-photon systems
## I Introduction
### Key demonstrations in developments of topological photonics
The entry of topology into physics started with the discovery of the quantum Hall effect in 1980 [1], in which Hall conductance was demonstrated to be robustly quantized in a 2D electron gas. Subsequently, it was realized that such robustness is due to the topological properties of the system energy bands [2]. The idea of band structure topology was later extended to a wider class of systems known as topological insulators [3; 4]. Meanwhile, it was realized that such phenomena are not limited to electronic systems and they can be also realized in any bosonic system. This was initially considered in the context of ultracold atoms, both in rotating Bose-Einstein condensates and optical lattices with synthetic gauge fields [5] and followed up by other bosonic systems such as photonics [6; 7], acoustics [8], phononics [9], electronic circuits [10], and mechanics [11; 12]. Specifically, in the photonic context, an analog of the quantum Hall model was proposed to realize a one-way edge state for the propagation of electromagnetic field in gyromagnetic photonic crystals [13; 14], and subsequently demonstrated [15; 16]. However, to break time-reversal symmetry (TRS) this scheme relies on the presence of
external magnetic fields, while the magneto-optical response of materials is weak.
To address this issue, several theoretical proposals were put forward to synthesize magnetic fields for photons [17; 18; 19; 20]. This was followed by two experimental demonstrations of topological edge states in optical systems without external fields [21; 22]. To bring these ideas to photonic crystals, realizations of spin [23] and valley [24] quantum Hall models were theoretically proposed. Subsequently, spin-Hall [25] and valley-Hall [26] topological photonic crystals were experimentally demonstrated.
However, topological invariants are not directly accessible in photonic systems. Specifically, photons are bosons, and quantization of conductance does not apply in this context. Nevertheless, quantum Hall physics can be manifested in the form of a spectral flow [27], which was experimentally observed in 2016 [28]. To expand the field of topological photonics into the nonlinear regime, several types of topological lasers were demonstrated [29; 30; 31; 32; 33], although the nature and degree of robustness of these lasers are still under investigation. Extending to the quantum regime, topological quantum sources of light were demonstrated [34; 35] around the same time. An intriguing direction is to explore strong light-matter coupling to induce strong interaction between photons. To achieve this, integration with quantum dots [25] and with exciton-polaritons in micro-cavities [36] and transition metal dichalcogenides [37] was demonstrated. Remarkably, the Laughlin state of two photons was realized [38] as a major step towards few-body interacting topological systems. Some other key developments include topological antennas [39; 40], the 4-dimensional quantum Hall effect [41], higher-order topological insulators [42; 43; 44; 45; 46], simulation of Landau levels for photons in a cavity [47], and topological solitons [48; 49]. Finally, three recent demonstrations showed robust topological tunneling for light in a lattice geometry [50], the photonic quantum Hall effect and generation of large orbital angular momenta [51], and topological beaming of light [52]. Some of these developments are summarized in Fig. 1.
In order to have a broader perspective of the above-mentioned developments, one can classify the observed phenomena based on the involved photon number and the strength of photon-photon interaction. The classification can be seen in Fig. 2. In the regime of weak optical nonlinearity, classical photonic topological phenomena are shown along the vertical axis with increasing photon number, starting from low photon number cases such as silicon photonic coupled ring resonators [21; 22], to topological antennas [39; 40], spatial and temporal topological solitons [48; 49] and lasers [29; 30; 31]. Moving along the horizontal axis, strong light-matter interaction enables one to induce photon-photon interaction: Starting from the weak interaction regime (topological quantum light generation [34], and quantum optics interface between single emitters and photonic crystals [25]), to the strong interaction limit, enabling the generation of two-photon Laughlin states [38]. An example of the intermediate regime of interaction and large photon number is topological polaritons in micropillar semiconductor systems [36; 53].
### Scope and aims
The scope of this Perspective is to introduce basic concepts and discuss recent developments and potential future directions in the field of topological photonics. We hope that this Perspective is useful for researchers with no background in topological physics who are interested in exploring this exciting field.
This Perspective is structured in four sections consisting of linear, nonlinear, and quantum photonic topolog
Figure 1: A selection of key developments in topological photonics (references are listed in the main text).
Figure 2: A selection of emerging topological photonic systems categorized based on photon number and photon-photon-interaction strength, focused on the optical and infra-red domain [21; 22; 25; 30; 34; 36; 38; 39; 48; 49].
ical systems. The linear section will include a concise pedagogical section to introduce the minimum intuitive and mathematical descriptions of the key concepts required to study topological photonics. Then, linear photonic implementations such as topological photonic crystals and passive waveguides and routers are reviewed in this section. The nonlinear section focuses on nonlinear effects in topological systems such as lasers, spatial and temporal solitons, and frequency combs. The quantum section will review the topological quantum sources of light, topological protection for the propagation of quantum states, chip-integrated quantum emitters, and systems of strongly interacting electron-photons. The last section includes detailed remarks on current challenges and more specific potential future directions as well.
It is not possible to discuss all the developments in the field of topological photonics in all platforms and frequency domains. Here, the focus of this Perspective is on the optical and infrared domain. In particular, we do not focus on the microwave regime, for which a comprehensive recent review is available elsewhere [54]. While we provide the basic ideas behind linear topological photonics, we do not provide a pedagogical review of nonlinear and quantum topological photonics. We refer the reader to comprehensive reviews that summarize the developments up to 2020 in nonlinear [55] and quantum [56; 57] topological photonics. Moreover, we refer the reader to a review on non-Hermitian topological photonics [58], which is another emerging direction. Higher-dimensional and higher-order topological photonics are reviewed in Ref. [59]. A review on topological lasers can be found in Ref. [60]. Synthetic dimensions and other developments can be found in a recent exhaustive roadmap [7].
In this Perspective, we highlight challenges for specific topological platforms, some of which are fundamental and some are more technical, as potential directions for future research. These challenges will be separately discussed in linear, nonlinear, and quantum topological photonics.
Before proceeding, we make a clarifying remark. While the word "topological insulator" has been extensively used in the literature of topological photonics, we refrain from its usage in this Perspective to avoid confusion. Strictly speaking, almost all photonic states studied so far are _not_ insulating states, due to their bosonic nature. In electronic systems, either because of Pauli exclusion (Fermionic nature) or interaction, such as band or Mott insulators, the system can be in an insulating state if it is probed at the corresponding Fermi levels. In the photonic context, there is no notion of the Fermi level, since bosonic states can have unlimited occupation in the absence of interaction. Instead, one can have a _photonic bandgap_ and the transmission of light can be zero if photons are injected into the system within the frequency bandwidth of this bandgap.
Moreover, it is important to distinguish between general topological states and _topologically-ordered states_. We use the former as a general term for any state with classical or quantum topological properties, such as vortex states, Chern band insulators. We reserve the latter term for strongly interacting systems, where the order is a consequence of interaction and entanglement, and therefore, can be defined and classified accordingly. This field is an active area of research, mainly theoretical due to the lack of clean and unambiguous experimental platforms (see [61] for a recent review). With this definition, topological states encompass topologically-ordered states. In this Perspective, we mainly focus on only topological states, primarily in the single-particle/classical physics regime, and only briefly discuss topologically-ordered states.
## II Linear topological photonics
In the following sections, we start with a very broad introduction to the role of topology in photonic systems, and then we introduce models and relevant photonic implementation of these concepts.
### Concept of Topological Invariants
Topology is a branch of mathematics that studies the general or global characteristics of a system. For example, when studying a system of geometrical objects, instead of the specific shapes, topology primarily deals with how objects are connected. In other words, topology is concerned with the global geometrical characteristics of a system, rather than the specifics of its building blocks. As an intuitive example, one can consider the case of swimming around an _island_ versus a _peninsula_[62]. Note that regardless of the shape of an island, we call it an island, but once its topology is changed, we use a different word (Figure 3). Starting from a point and coming back to the same location, a swimmer can swim around the island, a process that does not depend on the shape of the island or the path taken. However, the number of laps around the island, which is an integer number (from \(\mathbb{Z}\)), is _topologically_ robust, and that number can be considered as a _topological invariant_. We associate the sign of the integer with the clockwise (CW) versus counter-clockwise (CCW) orientation of the swimmer. Note that this number is always zero for a peninsula since a complete round trip around it is not possible. The robustness here means that under small perturbations of the island's shape or the swimming path (specific "local" features of the swimmer's path and the island's shape), the integer number (a global topological property of the system) will remain invariant. If one relates a physical observable (conduction, transmission, resistance, etc.) to such integer numbers, then that observable will be similarly topologically robust. This is one of the central motivations for the implementation of topology in physics.
The case of the island and peninsula is a classical example, with no notion of phase. As a
quantum-mechanical analog, such an example can be realized by considering how the electron (or photon) wavefunction winds around a certain point. An example is a vortex state in two dimensions, where the wavefunction phase can wind an integer number of times. Considering polar coordinates \((r,\theta)\), in the presence of rotational symmetry, the wavefunction can be of the form \(\psi(r,\theta)=\rho(r)e^{im\theta}\), where the phase winds an integer number of times \(m\), and the radial part of the wavefunction \(\rho(r)\) has a singularity at the center. More generally, in the absence of rotational symmetry, the wavefunction can be described as \(\psi(r,\theta)=\left|\psi(r,\theta)\right|e^{i\phi(r,\theta)}\), where \(\phi(r,\theta)\) is the local phase. Therefore, in the context of photons, these states can be thought of as solutions to the Maxwell equations, where a weak spatial variation in the dielectric constant can deform the spatial form of the wavefunction but can not change the winding number \(m\), i.e. \(\oint\nabla\phi(r,\theta)\cdot d\vec{l}=2\pi m\). This is already an example of the topological robustness of a photonic wavefunction in space.
How can we generalize this idea? Consider a typical periodic photonic system such as a photonic crystal, e.g., a bipartite lattice as shown in Fig. 4(a). Solving Maxwell's equation in such a periodic setting can provide several informative properties of the system, including the band gaps, group velocity, dispersion, etc.
Most physical properties are indeed a _local_ function of the band structure \(\varepsilon_{\alpha}(\vec{k})\), where \(\varepsilon_{\alpha}\) is the energy and \(\vec{k}\) is the wave vector in the corresponding Brillouin zone. Remarkably, there are other properties that depend on _global_ properties of the band structure. For example, let us take a two-band system, with wavefunctions denoted as \(\psi_{\uparrow}(\vec{k})\) and \(\psi_{\downarrow}(\vec{k})\), as shown in Fig. 4(b). In particular, for such a two-band model, the state of the system can be represented by a unit vector on a Bloch sphere. Then, let us consider how the wavefunction varies as we move in the Brillouin zone (Fig. 4(c)). If the system is topologically non-trivial, the wavefunction should accumulate a non-zero phase when the unit vector is swept around a closed loop. Loosely speaking, there is an associated integer similar to the island and peninsula example. The latter case takes place in real space, while the former does so in momentum space. Nevertheless, the robustness of certain global properties of the system remains warranted. More specifically, as long as the spatial variation in the susceptibility and the dielectric function is weak, this global integer remains invariant. Therefore, the photonic observable associated with this invariant, such as transmission, remains robust against a certain amount of disorder. We briefly clarify this connection in the photonic context in the following section.
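To make the notion of such a band invariant concrete, the following minimal Python sketch (an illustrative toy model of our own choosing, not a calculation for any photonic structure discussed in this Perspective) computes the Chern number of the lower band of a generic two-band Hamiltonian \(H(\vec{k})=\vec{d}(\vec{k})\cdot\vec{\sigma}\) by summing discretized Berry fluxes over the Brillouin zone; the specific \(\vec{d}(\vec{k})\) below is only a placeholder.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band_state(kx, ky, m):
    """Lower-band eigenvector of H(k) = d(k) . sigma for a placeholder d(k)."""
    d = np.array([np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)])
    H = d[0] * SX + d[1] * SY + d[2] * SZ
    _, vecs = np.linalg.eigh(H)          # eigenvalues returned in ascending order
    return vecs[:, 0]

def chern_number(m, N=60):
    """Sum of discretized Berry fluxes (Fukui-Hatsugai-Suzuki) over the BZ."""
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
    u = np.array([[lower_band_state(kx, ky, m) for ky in ks] for kx in ks])
    flux = 0.0
    for i in range(N):
        for j in range(N):
            u1, u2 = u[i, j], u[(i + 1) % N, j]
            u3, u4 = u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
            # Gauge-invariant plaquette link product; its phase is the Berry flux.
            link = (np.vdot(u1, u2) * np.vdot(u2, u3) *
                    np.vdot(u3, u4) * np.vdot(u4, u1))
            flux += np.angle(link)
    return flux / (2 * np.pi)

if __name__ == "__main__":
    for m in (-3.0, -1.0, 1.0, 3.0):
        print(f"mass term m = {m:+.1f} -> Chern number ~ {chern_number(m):+.2f}")
```

Deforming the model smoothly changes the eigenvectors at every \(\vec{k}\), yet the printed integer only jumps when the gap closes, which is the momentum-space analog of the island-counting argument above.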
For a survey of band topological models and a step-by-step derivation, the reader can consult [63]. An introduction to quantum Hall physics can be found in [64; 65].
### Toy model: charged particles in strong magnetic field
In the following, we study a simple model of a charged particle in two dimensions, under a uniform magnetic field. While this model describes the topological physics behind the electronic quantum Hall effect, we later find it useful to synthesize similar physics in two-dimensional photonic systems. The key concepts, such as the role of gauge field, topological robustness, and topological edge state, can be understood using this simple model.
#### ii.2.1 Classical picture
In the classical picture, the electrons, with charge \(e\) and mass \(m\) undergo a cyclotron motion in the presence of an external magnetic field \(\vec{B}\). Considering the Lorentz force, the dynamics of the velocity \(\vec{v}\) in 2D is given by \(m\frac{d\vec{v}}{dt}=-e\vec{v}\times\vec{B}\). Assuming a circular motion with radius \(R\), and angular velocity \(\omega\), we have \(m\omega^{2}R=e\omega RB\). One immediately observes that the angular velocity is radius independent \(\omega_{c}=\frac{eB}{m}\) and \(\omega_{c}\) is called the cyclotron frequency. Note that the center and the radius of this orbit are not constrained.
Figure 3: Number of laps when starting from a point (marked by the red cross) and going around an (a) island is an integer number, while it is zero for (b) a peninsula. Note that this integer number is invariant with respect to the island’s shape, the swimming direction, or the path taken.
Figure 4: (a) A periodic (honeycomb) photonic crystal with sub-lattices A and B. (b) Band structure of the photonic crystal, with an energy band gap separating the valence (blue) and conduction (yellow) bands. (c) Eigenvectors of the system in a unit sphere.
Semi-classical picture
For a semi-classical description, we can use the Bohr-Sommerfeld quantization condition \(\oint p\cdot dq=2\pi n\hbar\) to see that such orbits should be quantized. In fact, the integer \(n\) is the phase winding number introduced earlier in our island analogy. In particular, one can set \(n=1\) to find the orbit with the smallest possible radius.
Alternatively, we can use a simple Heisenberg-limited picture, which gives us a lower limit on how small the radius of the orbits can be. Specifically, taking \(\Delta p\) and \(\Delta x\) to be the uncertainties in momentum and position, respectively, we have \(\Delta x\Delta p\simeq\hbar/2\), where for a circular motion, \(\Delta x\simeq R\) and \(\Delta p\simeq m\omega_{c}R\). Apart from a factor of two, this means the smallest orbit radius is set by
\[l_{B}=\sqrt{\frac{\hbar}{m\omega_{c}}}=\sqrt{\frac{\hbar}{eB}}, \tag{1}\]
where from now on, we call this the magnetic length. Based on the Pauli exclusion principle, we can have only a single electron in each state. Therefore, we evaluate how many of these orbits one can fit in an area of \(A=L_{x}L_{y}\), as shown in Fig. 5(a). The total number of orbits that can fit in the system is
\[N_{\phi}=\frac{L_{x}L_{y}}{2\pi l_{B}^{2}}=\frac{AB}{\Phi_{0}} \tag{2}\]
where \(\Phi_{0}\) is the quantum of magnetic flux, \(AB\) is the total magnetic flux, and \(N_{\phi}\) is the total number of flux quanta. This suggests that the lowest-energy state of the system is \(N_{\phi}\)-fold degenerate.
Therefore, this suggests that as long as the number of electrons is less than \(N_{\phi}\), they can be easily fitted into the system, if we ignore the interaction between them. But if \(N_{e}>N_{\phi}\), we have to pay some energy price \(\hbar\omega_{c}\). We can revisit this argument more precisely in the quantum picture.
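For a sense of the scales involved, a few lines of Python evaluate Eqs. (1) and (2); the field strength and sample size below are arbitrary illustrative values, not parameters of any system discussed in this Perspective.

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J s
e = 1.602176634e-19      # elementary charge, C

B = 10.0                 # magnetic field in tesla (illustrative value)
Lx = Ly = 1e-3           # sample dimensions in meters (illustrative value)

l_B = np.sqrt(hbar / (e * B))             # magnetic length, Eq. (1)
N_phi = Lx * Ly / (2 * np.pi * l_B**2)    # Landau-level degeneracy, Eq. (2)

print(f"magnetic length l_B = {l_B * 1e9:.2f} nm")
print(f"Landau-level degeneracy N_phi = {N_phi:.2e}")
```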
Moreover, in the presence of a confining potential at the boundary of the system, the degeneracy is lifted for the state at the edge of the system. These edge states are robust against a certain amount of disorder. In other words, the states form _skipping orbits_ to avoid disorder, instead of reversing their directions [66]. See below for a quantum description of edge states.
#### ii.1.3 Quantum picture: Landau quantization
To write the Hamiltonian of a particle with a charge (\(-e\)) in the presence of a classical electromagnetic field, characterized by the vector potential \(\vec{A}=(A_{x},A_{y})\), we should simply replace all momenta by \(\vec{p}\rightarrow\vec{p}+e\vec{A}\):
\[\begin{split}\hat{H}&=\frac{1}{2m}(\vec{p}+e\vec{ A})^{2}\\ &=\frac{1}{2m}[(p_{x}+eA_{x})^{2}+(p_{y}+eA_{y})^{2}].\end{split} \tag{3}\]
It is more common to choose the Landau gauge: \(\vec{A}=-yB\vec{x}\), and obtain the \(N_{\phi}\) degeneracy by applying the boundary conditions. Instead, we can choose the symmetric gauge: \(\vec{A}=-\frac{\imath B}{2}\vec{x}+\frac{\imath B}{2}\vec{y}\), since the derivation is more elegant and easily generalizable. Defining mechanical momenta as
\[\begin{split}\hat{\Pi}_{x}&=\vec{p}_{x}+e\vec{A}_{ x},\\ \hat{\Pi}_{y}&=\vec{p}_{y}+e\vec{A}_{y},\end{split} \tag{4}\]
for any magnetic field, we have
\[[\hat{\Pi}_{x},\hat{\Pi}_{y}]=-ie\hbar B. \tag{5}\]
Therefore, if the magnetic field is uniform, by properly re-scaling these momenta operators, we can consider them as position \(\hat{x}\) and momentum \(\hat{p}\) operators. More interestingly, the Hamiltonian is that of a harmonic oscillator \(\hat{H}=\frac{1}{2m}(\hat{\Pi}_{x}^{2}+\hat{\Pi}_{y}^{2})\). The ladder operators can be defined as:
\[\hat{a}=\frac{1}{\sqrt{2e\hbar B}}(\hat{\Pi}_{x}-i\hat{\Pi}_{y}) \tag{6}\]
\[\hat{a}^{\dagger}=\frac{1}{\sqrt{2e\hbar B}}(\hat{\Pi}_{x}+i\hat{\Pi}_{y}). \tag{7}\]
Consequently, \([\hat{a},\hat{a}^{\dagger}]=1\) and \(\hat{H}=\hbar\omega_{c}(\hat{a}^{\dagger}\hat{a}+\frac{1}{2})\), as shown in Fig. 5(b). This tells us that the energy levels are evenly
Figure 5: (a) cyclotron motion of electrons with a radius of R in a 2D electron gas under a perpendicular magnetic field which generates a quantum of flux due to each electron orbit. (b) energy levels of the system, with \(N_{\phi}\)-fold degeneracy. Note that the adjacent levels are separated by \(\hbar\omega_{c}\). (c) Wavefunction concentric orbits. (d) accumulated phase for a particle looping on the sites of a lattice.
spaced by \(\hbar\omega_{c}\), known as Landau Levels. In order to get the Landau level degeneracy, we can similarly identify another pair of operators that commute with \(\hat{H}\):
\[\begin{split}\widetilde{\Pi}_{x}&=\vec{p}_{x}-e\vec{A} _{x},\\ \widetilde{\Pi}_{y}&=\vec{p}_{y}-e\vec{A}_{y}\end{split} \tag{8}\]
and similarly,
\[[\widetilde{\Pi}_{x},\widetilde{\Pi}_{y}]=ie\hbar B \tag{9}\]
and all the other commutators are zero if we choose the symmetric gauge: \([\widetilde{\Pi}_{x},\hat{\Pi}_{x}]=[\widetilde{\Pi}_{y},\hat{\Pi}_{y}]=[\widetilde{\Pi}_{x},\hat{\Pi}_{y}]=[\widetilde{\Pi}_{y},\hat{\Pi}_{x}]=0\). This gives us another harmonic oscillator, whose ladder operators are \(\hat{b}=\frac{1}{\sqrt{2e\hbar B}}(\widetilde{\Pi}_{x}+i\widetilde{\Pi}_{y}),\hat{b}^{\dagger}=\frac{1}{\sqrt{2e\hbar B}}(\widetilde{\Pi}_{x}-i\widetilde{\Pi}_{y})\), and \([\hat{b},\hat{b}^{\dagger}]=1\). As shown in Fig. 5(b), the eigenstates are simply number states corresponding to these two harmonic oscillators:
\[\ket{n,m}=\frac{(\hat{a}^{\dagger})^{n}(\hat{b}^{\dagger})^{m}}{\sqrt{n!\,m!}}\ket{0,0}. \tag{10}\]
The explicit form of the lowest Landau level wavefunction can be found in [64, 65]. Here we make a few remarks on the properties of these wavefunctions.
* The wavefunctions are concentric orbits with average radius \(r=\sqrt{2m}l_{B}\) and a width proportional to \(\frac{l_{B}}{\sqrt{m}}\), as shown in Fig. 5(c) (see also the short numerical sketch after this list). Therefore, as \(m\) increases, the orbits become more packed, until the radius hits the system size. The maximum value of \(m\) is \(m_{max}=\frac{1}{2}(\frac{d}{l_{B}})^{2}\), assuming the system to be of a disk shape with radius \(d\). This expression is simply the area in units of the magnetic length, which is basically the total number of magnetic flux quanta \(N_{\phi}\). Therefore, we recovered the Landau level degeneracy to be \(N_{\phi}\). More precisely, \(m\) runs from zero to \(N_{\phi}-1\).
* The physical meaning of \(\widetilde{\Pi}_{x}\) and \(\widetilde{\Pi}_{y}\), in the symmetric gauge, is simply the center of the orbits. Specifically, \(\hat{X}=-\frac{\widetilde{\Pi}_{y}}{eB}\), \(\hat{Y}=\frac{\widetilde{\Pi}_{x}}{eB}\). Using Eq. 9, we find that \([\hat{X},\hat{Y}]=il_{B}^{2}\), which means we can not localize the orbit center in both the x and y coordinates better than the magnetic length. This is essentially the same argument we had in the semi-classical picture.
* Unfortunately, this model up to here can not explain the integer quantum Hall effect and one needs to add both a confining potential and disorder to the model. The introduction of the two ingredients leads to the confinement of the state in the _bulk_ of the system and the emergence of the _edge states_ at the system boundary [67, 68, 69]. In fact, this concept is more general and is known as bulk-boundary correspondence, where the bulk properties dictate the properties of the edge and vice versa. An intuitive understanding of this concept is based on gauge invariance, either through Laughlin's argument [27] (see [64, 65] for a pedagogical presentation) or the Chern-Simons response theory [70] (see Ref. [71] supplementary material for a derivation of this concept in photonic systems).
* The above orbits are also eigenstates of angular momentum: \(\hat{L}_{Z}=-i\hbar(x\partial_{y}-y\partial_{x})\), with \(\hat{L}_{Z}\Psi_{LLL}{}^{(m)}=m\hbar\Psi_{LLL}{}^{(m)}\). In other words, these states have a well-defined non-zero phase winding, similar to our island analogy. In the presence of weak disorder, the orbits can deform but keep their phase winding. In the strong disorder limit, the states are completely washed out.
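A minimal numerical check of the first remark in this list (using the standard symmetric-gauge lowest-Landau-level profiles \(|\psi_{m}(r)|^{2}\propto r^{2m}e^{-r^{2}/2l_{B}^{2}}\); the chosen values of \(m\) are arbitrary) confirms that the \(m\)-th orbital peaks at \(r=\sqrt{2m}\,l_{B}\):

```python
import numpy as np

l_B = 1.0                               # work in units of the magnetic length
r = np.linspace(1e-3, 10.0, 20000)

for m in (1, 2, 5, 10):
    # |psi_m(r)|^2 of the m-th symmetric-gauge lowest-Landau-level orbital,
    # up to normalization: r^(2m) * exp(-r^2 / (2 l_B^2)).
    density = r**(2 * m) * np.exp(-r**2 / (2 * l_B**2))
    r_peak = r[np.argmax(density)]
    print(f"m = {m:2d}: |psi_m|^2 peaks at r = {r_peak:.3f} l_B, "
          f"sqrt(2m) = {np.sqrt(2 * m):.3f}")
```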
### Hofstadter Butterfly
So far we assumed the considered system is a 2D continuum. Let us now consider that the charged particles are confined to move on a square lattice, with lattice spacing \(l_{s}\), in the presence of a uniform magnetic field. We then investigate to see in what regimes these two models are equivalent.
Inspired by the Aharonov-Bohm phenomenon, the essence of the magnetic field is a non-zero phase \(2\pi\alpha\) that the particle acquires on each plaquette. Formally, one needs to modify each hopping term by the corresponding gauge field on that link. This is known as the Peierls substitution \(J\to J\exp\left[\frac{ie}{\hbar}\int_{\rm link}\vec{A}\cdot d\vec{r}\right]\), where \(J\) is the hopping rate. So naturally, different gauge conventions correspond to spreading the total phase \(2\pi\alpha\) along the links of a plaquette (Fig. 5(d)). We use the Landau gauge \((A_{x},A_{y})=(-Byl_{s},0)\), where \((x,y)\in\mathbb{Z}\) are simply locations on an \((N_{x},N_{y})\) square lattice. The Hamiltonian describing the dynamics is
\[\begin{split}\hat{H}&=-J\sum_{x,y}\hat{a}^{\dagger}_{x+1,y}\hat{a}_{x,y}e^{-2\pi i\alpha y}+\hat{a}^{\dagger}_{x,y}\hat{a}_{x+1,y}e^{2\pi i\alpha y}\\ &+\hat{a}^{\dagger}_{x,y+1}\hat{a}_{x,y}+\hat{a}^{\dagger}_{x,y}\hat{a}_{x,y+1},\end{split} \tag{11}\]
where \(\hat{a}_{x,y}\) is the annihilation operator of particle at site \((x,y)\). So far, we are considering a single particle so the statistics of particles are not important. But in the following sections, we assume the operators obey the bosonic commutation relations.
One can verify that the total accumulated phase for a counter-clockwise propagation on a single plaquette is: \(-2\pi\alpha y+0+2\pi\alpha(y+1)+0=2\pi\alpha\). The number of magnetic flux in each plaquette is:
\[\frac{\Phi}{\Phi_{0}}=\frac{eBl_{s}^{2}}{h}=\alpha. \tag{12}\]
In other words, \(\alpha\) is the fraction of a flux in a plaquette. Note that the essence is the presence of this phase, and
the charge \(e\), \(\hbar\), etc. drop out. Therefore, one can generalize this model to neutral particles (atoms/photons) by "synthesizing" the phase. This is the key insight in Ref. [19].
The spectrum of this Hamiltonian is periodic when \(\alpha\rightarrow\alpha+1\), and is known as the Hofstadter Butterfly [72], with many interesting fractal properties, as shown in Fig. 6(a). One key point is the presence of band gaps when the system is considered with periodic boundary conditions, i.e., on a torus. Let us recall that a tight-binding 2D square lattice (\(\alpha=0\) in Eq. 11) of size (\(N_{x},N_{y}\)) has a single band with \(N_{x}N_{y}\) states with energies:
\[E(n,m)=-2J[\cos(k_{x}l_{s})+\cos(k_{y}l_{s})], \tag{13}\]
where \(k_{x}l_{s}N_{x}=2\pi n,\;k_{y}l_{s}N_{y}=2\pi m\). In this model, the translational symmetry is clearly broken, but if \(\alpha=\frac{p}{q}\) and we go around \(q\) plaquettes, we pick up a \(2\pi p\) phase, which is like having no phase at all, i.e., a zero-\(\alpha\) 2D tight-binding model with an enlarged (magnetic) unit cell. This suggests that at \(\alpha=\frac{p}{q}\) we have \(q\) bands, each containing \(\frac{N_{x}N_{y}}{q}\) states, as shown in Fig. 6(a).
When an open boundary condition is considered, in-gap states appear, as shown in Fig. 6(b). Such states are localized at the boundary and propagate in a chiral fashion.
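A self-contained Python sketch of Eq. (11) with open boundaries makes this concrete (the lattice size and flux below are arbitrary illustrative choices): diagonalizing the hopping matrix reproduces the \(q\) bulk bands and, in addition, strongly boundary-localized in-gap states.

```python
import numpy as np

def hofstadter_hamiltonian(Nx, Ny, alpha, J=1.0):
    """Harper-Hofstadter Hamiltonian of Eq. (11), Landau gauge, open boundaries.
    Site (x, y) is mapped to the index x + Nx * y."""
    idx = lambda x, y: x + Nx * y
    H = np.zeros((Nx * Ny, Nx * Ny), complex)
    for y in range(Ny):
        for x in range(Nx):
            if x + 1 < Nx:   # hopping along x carries the Peierls phase
                H[idx(x + 1, y), idx(x, y)] = -J * np.exp(-2j * np.pi * alpha * y)
            if y + 1 < Ny:   # hopping along y is phase-free in this gauge
                H[idx(x, y + 1), idx(x, y)] = -J
    return H + H.conj().T

Nx = Ny = 10
H = hofstadter_hamiltonian(Nx, Ny, alpha=1.0 / 3.0)   # 1/3 flux quantum per plaquette
energies, states = np.linalg.eigh(H)

def boundary_weight(state):
    """Fraction of the state's intensity that lives on the outermost sites."""
    p = np.abs(state.reshape(Ny, Nx)) ** 2
    return p.sum() - p[1:-1, 1:-1].sum()

weights = np.array([boundary_weight(states[:, n]) for n in range(Nx * Ny)])
n_edge = int(np.argmax(weights))
print("lowest five energies (units of J):", np.round(energies[:5], 3))
print(f"most boundary-localized state: E = {energies[n_edge]:+.3f} J, "
      f"boundary weight = {weights[n_edge]:.2f}")
```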
Now, we can investigate the effect of the disorder. Imagine a disorder in the form of an onsite potential \(\hat{a}^{\dagger}_{x,y}\hat{a}_{x,y}\). When the chiral edge state encounters such an obstacle, it is energetically preferred to go around the disorder site, instead of reversing the propagation path. Recall that CW and CCW edge states have different energies. Loosely speaking, this is similar to a "quantum swimmer", where, in the presence of an obstacle, the path is modified to make sure the wavefunction remains single-valued, instead of reversing the path.
### Photonic lattice
Can we engineer a 2D array of optical resonators to simulate the previous Hamiltonian? As we observed above, the essence is the extra phase in hopping. Let's start with two coupled resonators:
\[\hat{H}=-J\hat{a}^{\dagger}_{L}\hat{a}_{R}-J\hat{a}^{\dagger}_{R}\hat{a}_{L} \tag{14}\]
where \(\hat{a}_{L},\hat{a}_{R}\) are the annihilation operators of a photon in left and right resonators, respectively, as shown in Fig. 7(a). \(J\) is the coupling strength and depends on the overlap of electromagnetic modes in the left and right resonators. The sign of \(J\) here depends on the definition of \(\hat{a}_{L,R}\) modes. Recall that phases usually do not have a meaning until they are evaluated for a closed loop. More importantly, two coupled resonators can not have a complex hopping phase, and our goal is to engineer one (see below Eq.15).
The form is simply that of the coupled mode theory. In fact, in the absence of nonlinearity, single photon and coherent state dynamics are the same (\(\hat{a}\rightarrow\langle a\rangle\)) and we remove the hats going forward.
For example, the Heisenberg picture dynamics is equivalent to two coupled-mode equations of motion: \(\dot{a}_{L}=i[H,a_{L}]=iJa_{R}\) and \(\dot{a}_{R}=iJa_{L}\). Now, we want to consider two resonators coupled with a waveguide in between. One can use the transfer matrix formalism (see the supplementary of [22]). Here we use the quantum input-output formalism [73], which is shorter and provides insight, following Ref. [22]. For a resonator mode (\(a\)) coupled to a waveguide, as shown in Fig. 7(b), the decay/input can be described by \(\dot{a}=-\kappa a-\sqrt{2\kappa}E^{in}\), where \(E^{in},E^{out}\) are the input/output fields, respectively. The boundary condition is written as \(E^{out}=E^{in}+\sqrt{2\kappa}a\). We want to engineer a situation where:
\[H=-Je^{i\phi}a^{\dagger}_{L}a_{R}-Je^{-i\phi}a^{\dagger}_{R}a_{L}. \tag{15}\]
Note that we can not put arbitrary coefficients in front of the two terms; the Hamiltonian should be Hermitian. Consider the scheme in Fig. 7(c), where two resonators are coupled through an "anti-resonant" resonator in an asymmetric way:
The total optical length of the middle ring is chosen such that photons resonant with the left/right resonators do not interfere constructively in the middle ring, and therefore only circulate once along the upper/lower arm. The total optical path is \((4m+3)\pi\), where \(m\) is a positive integer. Under this condition, photons spend most of their time in the left or right resonator, and we can find an effective Hamiltonian in terms of \(\hat{a}_{L}\) and \(\hat{a}_{R}\), without the middle ring. The equations of motion take the following form:
Figure 6: (a,b) The spectrum of Eq.(11) on a \(10\times 10\) lattice, for a closed and open boundary condition, respectively. The lower panel illustrates light intensity on the lattice corresponding to three typical states. Apart from the localized bulk states in the middle, in the open boundary case, edge states can form, while propagating in a clockwise (red) and counter-clockwise (blue) fashion.
\[\begin{split} E_{R}^{\text{in}}&=E_{L}^{\text{out}}e^{2i\pi m+i\frac{3\pi}{2}-2\pi i\alpha}=-iE_{L}^{\text{out}}e^{-2\pi i\alpha},\\ E_{L}^{\text{in}}&=-iE_{R}^{\text{out}}e^{+2\pi i\alpha},\\ E_{R,L}^{\text{out}}&=E_{R,L}^{\text{in}}+\sqrt{2\kappa}\,a_{R,L},\\ \dot{a}_{R,L}&=-\kappa a_{R,L}-\sqrt{2\kappa}\,E_{R,L}^{\text{in}}.\end{split} \tag{16}\]
By eliminating \(E_{R,L}^{in,out}\), we find the effective Hamiltonian:
\[H=-\kappa a_{R}^{\dagger}a_{L}e^{-2\pi i\alpha}-\kappa a_{L}^{\dagger}a_{R}e^{2\pi i\alpha} \tag{17}\]
By choosing the lengths of the resonators accordingly, e.g. by increasing \(\alpha\) linearly with the row number \(y\), we can implement the Hofstadter Hamiltonian.
If the system is driven with photons at frequencies within the edge band (see Fig. 6), they circulate around the system in either a CW or CCW way. In other words, photons experience an effective magnetic field \(B\), and orbiting around the system in a CW/CCW fashion leads to opposite energies, in direct analogy to \(-\vec{L}\cdot\vec{B}\), where \(\vec{L}\) is the angular momentum.
We have not applied any external magnetic field that breaks time-reversal symmetry (which is needed, for example, in an optical isolator). However, we have a magnetic field-like Hamiltonian for a passive system. How is that possible? In fact, we have two pseudo-spins \(\frac{1}{2}\) corresponding to CW/CCW circulating photons, _inside each resonator_, each experiencing an opposite magnetic field. Therefore, a more accurate analogy is the spin-orbit interaction, \(\vec{S}\cdot\vec{L}\), where each spin orientation experiences an opposite magnetic field, with \(\vec{S}\) being the pseudo-spin of the photons. In other words, the TRS is preserved for the entire system, but we can selectively drive the system in a "spin-polarized" way, e.g. by pumping the CW mode of the resonator. As long as photons do not get scattered from the CW to the CCW mode, each experiences an opposite magnetic field.
Since the word _chiral_ is reserved for edge states with broken time-reversal symmetry, here we use _helical_ edge states.
In any physical realization, such scattering is present; however, if the rate of such backscattering processes is slower than the hopping rate, then we can ignore them. In the language of optics, we need to operate in an unresolved mode-coupling regime.
We treated the left/right rings as single-mode resonators, while the middle ring was treated as a waveguide. Is that correct? Yes, but this is only valid for photons close to the resonance of left/right rings. For a rigorous derivation, one needs to use the transfer matrix theory.
We again emphasize that the Hamiltonian and the second quantization formalism are not necessary for this part. However, this formalism allows one to understand its physics without getting lost in the details of transfer matrix theory.
### Various topological photonic models and their implementations
The above model was implemented in arrays of coupled ring resonators fabricated on \(SiO_{2}\) operating at 1550 nm wavelength [22]. Generally, it is crucial to study how the topological invariants are manifested in such systems and what the physical observables are compared to electronic systems. For electrons, the system is filled up to the Fermi level, and then the electrical conductance is measured as the main physical observable. If the system has a non-zero integer topological invariant, the conduction is quantized, generically with the same integer. Filling up the Fermi sea is simply a consequence of the Pauli exclusion principle, which is absent for photons. In these photonic systems, however, one can probe the system with an incoming laser field with a given frequency. If the field is resonant with any of the system's modes, the photons enter (couple into) the lattice. Otherwise, the light is completely reflected [22]. This state spectroscopy can be used to measure topological invariants as a spectral flow when the system is subject to an extra magnetic flux [74; 28], in analogy to Laughlin's flux insertion argument [27].
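To illustrate how such state spectroscopy connects the lattice Hamiltonian to a measurable transmission, the following schematic Python sketch (a toy linear-response calculation with illustrative coupling rates, not a model of the actual device of Ref. [22]) drives one ring of a small Harper-Hofstadter ring array through an input waveguide and records the power collected at an output ring as the laser detuning is scanned.

```python
import numpy as np

def hofstadter(Nx, Ny, alpha, J=1.0):
    """Harper-Hofstadter hopping matrix (Landau gauge, open boundaries)."""
    idx = lambda x, y: x + Nx * y
    H = np.zeros((Nx * Ny, Nx * Ny), complex)
    for y in range(Ny):
        for x in range(Nx):
            if x + 1 < Nx:
                H[idx(x + 1, y), idx(x, y)] = -J * np.exp(-2j * np.pi * alpha * y)
            if y + 1 < Ny:
                H[idx(x, y + 1), idx(x, y)] = -J
    return H + H.conj().T

Nx = Ny = 8
H = hofstadter(Nx, Ny, alpha=0.25)

kappa_in = 0.05        # intrinsic ring loss rate (illustrative, units of J)
kappa_ex = 0.40        # ring-waveguide coupling rate (illustrative, units of J)
site_in, site_out = 0, Nx - 1          # input and output rings on the same edge

N = Nx * Ny
loss = np.full(N, kappa_in / 2.0)
loss[site_in] += kappa_ex / 2.0
loss[site_out] += kappa_ex / 2.0
drive = np.zeros(N, complex)
drive[site_in] = np.sqrt(kappa_ex)     # unit-amplitude input field

detunings = np.linspace(-4.0, 4.0, 801)
transmission = []
for w in detunings:
    # Steady state of  da/dt = [-i(H - w) - loss] a + drive,  solved for a.
    M = -1j * (H - w * np.eye(N)) - np.diag(loss)
    a = np.linalg.solve(M, -drive)
    transmission.append(np.abs(np.sqrt(kappa_ex) * a[site_out]) ** 2)

transmission = np.array(transmission)
# High transmission occurs only at lattice resonances; inside the bulk gaps,
# the surviving resonances are those of the edge band.
print(f"maximum transmission: {transmission.max():.2f} "
      f"at detuning {detunings[np.argmax(transmission)]:+.2f} J")
```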
From a more general point of view, during the development of topological photonics, several models have been developed and subjected to intense research, starting from the integer quantum Hall model, followed by the anomalous quantum
Figure 7: (a) A pair of coupled ring resonators described by Eq. 14 (b) a ring resonator coupled to a waveguide that can be described by the input-out formalism described in the text. (c) two resonators coupled to each other using another resonator that is anti-resonator with the side resonators. By positioning the middle resonator and creating a differential optical path one gets Eq. 17. (d) Clockwise (left) and counter-clockwise (right) propagation under opposite magnetic fields, as illustrated by the blue arrows.
Hall effect (also known as the Haldane model), and subsequently, the spin- and valley-Hall effects. Other topological models have also been implemented in ring arrays, for example, the Su-Schrieffer-Heeger (SSH) model [55] and topological laser arrays [75]. Other models have also been implemented in helical Floquet waveguides [21]. A broad overview of these models, their characteristics, and implementations can be found in Ref. [55].
It is also important to highlight here that, different from its electronic counterpart, topological photonics can offer an opportunity to harness several unique degrees of freedom which are either partly or completely unavailable in electronic systems. For example, the polarization [25] and orbital angular momentum [51] degrees of freedom of light can offer powerful design flexibility and novel functionalities. In particular, synthetic modal dimensions have recently been used to implement hybrid spatial-modal lattice configurations beyond conventional lattice geometries [76]. A review of the relevant concepts and recent implementations can be found in Ref. [77].
### Topological photonic crystals
A simple and useful topological model can be formulated for photonic crystals based on band inversion and the formation of bound states. A convenient way to think about this is to use continuum models, in particular the ones that led to the emergence of topological insulators. The essence of these models is captured in the Jackiw-Rebbi (JR) model and the concept of band inversion. Consider a 2D system with the following Hamiltonian:
\[\Big{[}-i\hbar v(-\sigma_{x}\partial_{x}+\sigma_{y}\partial_{y})+m\sigma_{z} \Big{]}\Psi=E\Psi,\]
where \(\Psi(x,y)\) is a spinor. \(v\) and \(m\) are the velocity and the effective mass, respectively. In contrast to the electron's spin states, here the spinor represents a pseudo-spin, e.g., two modes of the electric field. We assume the mass changes sign at the crossing point \(y=0\); specifically, \(m(x,y)=m(y)\), with \(m(0)=0\) and \(\frac{dm}{dy}<0\). There is a bound-state solution localized at \(y=0\) that propagates along the \(x\)-axis, described by
\[\Psi(x,y)=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ 1\end{pmatrix}e^{\frac{1}{\hbar v}\int_{0}^{y}m(y^{\prime})\,dy^{\prime}}e^{ik _{x}x},\]
which is schematically shown in Fig. 8. Note that if TRS is not broken, e.g., in the valley and spin-Hall effects, we get two copies of the above Hamiltonian that are connected by TR. Therefore, we have helical states (instead of chiral) that propagate in opposite directions with opposite spin (polarization for photons) [59]. Here, for one of the polarization states, \(m(y)\) goes from a negative to a positive sign, while it changes sign in an opposite manner for the other polarization. A more detailed investigation of these concepts can be found here [78].
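As a small numerical illustration of this bound state (the \(\tanh\) mass profile and the parameter values are arbitrary choices of ours, not taken from the cited works), one can evaluate the \(k_{x}=0\) envelope given above for \(m(y)=-m_{0}\tanh(y/w)\) and verify that it is exponentially localized at the domain wall:

```python
import numpy as np

hbar_v = 1.0            # work in units where hbar * v = 1
m0, w = 1.0, 2.0        # mass amplitude and domain-wall width (illustrative values)

y = np.linspace(-30.0, 30.0, 6001)
dy = y[1] - y[0]
m = -m0 * np.tanh(y / w)                 # mass changes sign at y = 0, dm/dy < 0

# Envelope exp[(1/hbar v) * integral of m(y')], up to an overall constant
# that is fixed by the normalization below.
running_integral = np.concatenate(([0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * dy)))
envelope = np.exp(running_integral / hbar_v)
envelope /= np.sqrt(np.sum(envelope**2) * dy)    # normalize to unit probability

rms_width = np.sqrt(np.sum(y**2 * envelope**2) * dy)
print(f"envelope peaks at y = {y[np.argmax(envelope)]:+.2f}")
print(f"rms width of the domain-wall state: {rms_width:.2f}")
```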
An exciting implementation of the JR model is engineering topologically distinct photonic crystals (TPCs). Specifically, by changing the photonic structure, one can engineer a band inversion between two topologically distinct photonic crystals to form propagating states at the interface, as first proposed in Ref. [23] and demonstrated in Ref. [25]. These states have three main characteristics: they are unidirectional (photons with opposite polarization travel in opposite directions), spatially confined at the boundary (in \(y\) for an interface along the \(x\)-axis), and robust against certain disorders. In the spin-Hall TPCs, as described in Refs. [81; 23], the opposite circular polarizations of the two edge modes can be described by the in-plane electric field profiles of the TPC's hexagonal unit cell. The in-plane electric field is highly circularly polarized, with opposite handedness for these two Jackiw-Rebbi solutions. The in-plane circular polarizations \(\sigma_{\pm}\) of the electric field can be considered as pseudo-spins for this topological photonic crystal.
Similarly, one can exploit the valley degree of freedom and engineer a band inversion. This leads to valley-Hall TPCs, as first proposed in [24] and demonstrated in [26]. Similar to spin-Hall TPCs, the in-plane electric field of the TPC's unit cell has two circular polarizations that propagate in opposite directions, forming two helical topological edge states.
Topological edge states in TPCs were imaged directly in a lattice of silicon Mie resonators [82], where the opening of photonic gaps around a doubly degenerate Dirac cone as well as the formation of topological edge states was demonstrated using high-resolution optical microscopy. Another development was recently reported in which valley and spin degrees of freedom were shown to be present simultaneously in a topological crystal [78].
Next, we will discuss some of the recent implementations of these types of TPCs in a variety of passive linear systems.
Figure 8: Illustration of formation of in-gap edge states at the interface between two topologically distinct mediums with inverted band structure. Adapted from [79; 80].
Examples of experimental demonstrations of linear topological photonic platforms are shown in Fig. 9. These systems include robust photonic waveguides and ring resonators in both all-pass and add-drop filter configurations. These devices use valley-Hall edge states. The quantum photonic section will cover the first demonstration of a spin-Hall-type photonic crystal waveguide [25]. Recently, it was proposed that adiabatic tuning of the topological bandgap in a valley-Hall-type photonic crystal can be utilized to form a topological mode taper [83]. Moreover, a similar approach has been implemented to realize topological rainbow trapping in photonic crystals [84; 85].
## III Nonlinear topological photonics
Until very recently photonic systems (as well as other systems like acoustics, electrical circuits, etc) have been largely used to emulate single-particle electronic topological Hamiltonians, that is, systems where interactions between particles are negligible. This includes topological Hamiltonians such as the SSH model, the integer and anomalous quantum Hall effect, the spin and valley-Hall physics, higher-order topological insulators, Floquet topological insulators, and so on. Nevertheless, electronic topological systems also include effects such as the fractional quantum Hall effect where interactions between particles lead to a very rich physics. It is, therefore, natural to ask if one can use photonic topological systems to emulate "interacting" topological systems. Though single-photon interactions are very weak, we can still achieve "mean-field" nonlinear interactions between photons, at high enough photon flux, by using a nonlinear medium (whose polarization is a nonlinear function of the applied electric field). Examples of such nonlinear interactions include self-phase modulation, cross-phase modulation, sum, difference and harmonic-frequency generation, optical parametric oscillation, lasing, etc. Along these lines, one of the research directions explores if such nonlinear interactions affect the topology of the system: can they induce topological phase transitions, or are the topological edge states stable in the presence of such nonlinear interactions? On a more fundamental level, such nonlinear topological photonic systems have no counterparts in fermionic systems and can lead to the emergence of topological models that are unique to photons. In parallel, another research direction explores the applications of topological phenomena, like edge states, to engineer nonlinear processes in a medium, for example, for efficient and robust lasers, generation of quantum states of light, optical frequency conversion, etc. Even more so, one can also achieve true single-photon-level nonlinearities mediated by atoms or artificial atoms like quantum dots or superconducting qubits, or excitons in semiconductors. Such systems can then realize a photonic analog of interacting topological systems such as the fractional quantum Hall effect. In the following, we will review advances in these sub-fields of nonlinear topological photonics. We will limit our discussion to parametric nonlinearities like the Kerr effect.
### Nonlinearity Induced Topological Phase Transitions
Optical nonlinearities, like the Kerr effect, change the refractive index of a medium as a function of the optical intensity [88]. This refractive index change can lead to a change in the on-site potential or the coupling strength between waveguides or resonators and subsequently be used to induce topological phase transitions. One of the first demonstrations of such a topological phase transition was carried out in topo-electric circuits that realized the 1D SSH model [89]. Here the nonlinearity modified the couplings between alternate lattice sites and a topologically trivial phase at low intensities transitioned to a topological phase at high enough intensities.
At optical frequencies, a nonlinearity-induced topological phase transition was proposed by Leykam et al. in a 2D coupled ring resonator system that implemented the Haldane-like anomalous quantum Hall model in a bipartite lattice [90]. In this model, a topological phase transition can be introduced by adding unequal on-site potentials \(M\) (mass terms) to the two sets of lattice sites such that \(M>2J\). Leykam et al. considered a system with built-in (during fabrication) on-site potentials just below the transition threshold. A broadband high-intensity pump pulse was then injected into the link rings of the lattice such that their resonance frequency would red shift compared to the site rings. This relative frequency shift would reduce the effective coupling between
Figure 9: Implementation of topological photonic edge states in the linear regime. (a) a passive suspended valley-Hall photonic crystal waveguide [26]. (b) a topological photonic mode taper [83]. (c) a valley-Hall topological ring resonator with access bus waveguide, forming an all-pass filter [86]. (d) A topological add-drop filter, comprising a valley-Hall resonator and access waveguides [87].
the site rings and thereby, induce the topological phase transition.
An experimental realization of the nonlinearity-induced topological phase transition was demonstrated recently by Maczewsky et al. in a 2D coupled waveguide array implementing the anomalous Floquet topological model [91]. The array is fabricated such that alternating waveguides have a non-zero on-site potential (introduced by alternating the waveguide width), and the system is topological only if, at each coupling region between the waveguides, the power transfer is \(t>50\%\). In the linear regime, the presence of the on-site potential ensures that this power transfer ratio is less than \(50\%\) and the system is topologically trivial. Nevertheless, on injecting high enough power into the waveguide with a thinner core (lower effective refractive index), the Kerr nonlinearity reduces the on-site potential difference between the waveguides and increases the coupling ratio such that the system transitions to a topological phase. Note that this pump is injected only into a single waveguide and this topological phase transition is local, meaning a weaker beam injected elsewhere in the lattice would still experience a topologically trivial phase.
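The coupling-ratio mechanism behind such transitions can be visualized with a few lines of Python for a linear SSH chain (a schematic of the argument only, with an invented Kerr-like coupling law; this is not a model of the devices of Refs. [89; 91]): once the effective intra-cell coupling drops below the inter-cell one, mid-gap edge modes appear.

```python
import numpy as np

def ssh_hamiltonian(n_cells, t_intra, t_inter):
    """Open SSH chain with alternating couplings (t_intra, t_inter)."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = t_intra if i % 2 == 0 else t_inter
    return H

# Mimic an intensity-dependent intra-cell coupling t_intra(I) = t0 - g * I
# (a purely illustrative Kerr-like dependence, not the actual device model).
t0, t_inter, g = 1.3, 1.0, 0.6
for intensity in (0.0, 0.3, 0.7, 1.0):
    t_intra = t0 - g * intensity
    E = np.linalg.eigvalsh(ssh_hamiltonian(40, t_intra, t_inter))
    e_min = np.min(np.abs(E))               # energy of the state closest to E = 0
    half_gap = abs(t_inter - t_intra)       # bulk gap covers |E| < half_gap
    phase = "topological (mid-gap edge modes)" if t_intra < t_inter else "trivial"
    print(f"I = {intensity:.1f}: t_intra = {t_intra:.2f}, "
          f"min|E| = {e_min:.4f}, half-gap = {half_gap:.2f} -> {phase}")
```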
### Spatial Solitons
An optical beam propagating through a medium with optical nonlinearities, like the Kerr effect, can experience self-focusing wherein the central high-intensity region of the beam sees a higher refractive index compared to its low-intensity tails. At a specific beam intensity, the self-focusing effect can exactly balance the diffraction of light and lead to the formation of spatial solitons [92; 93; 94; 95]. Optical spatial solitons have been observed in many platforms, including coupled waveguide arrays - very similar to those used for the realization of photonic topological insulators [96]. This immediately leads to the question: can such photonic topological systems host spatial solitons? Will these solitons live on the edge or in the bulk of the system? Are these solitons robust against disorder?
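Before turning to the topological case, the basic balance can be illustrated with a generic discrete nonlinear Schrödinger model of a 1D waveguide array (purely illustrative parameters, not a simulation of any of the experiments discussed below): at low power, a single-waveguide excitation spreads by discrete diffraction, while at high power it self-traps.

```python
import numpy as np

def propagate_dnlse(psi0, C=1.0, gamma=1.0, z_max=10.0, dz=1e-3):
    """Discrete NLSE  i dpsi_n/dz = -C (psi_{n+1} + psi_{n-1}) - gamma |psi_n|^2 psi_n,
    integrated with a fixed-step RK4 scheme (illustrative accuracy only)."""
    def rhs(psi):
        coupled = np.roll(psi, 1) + np.roll(psi, -1)   # periodic edges, never reached here
        return 1j * (C * coupled + gamma * np.abs(psi) ** 2 * psi)
    psi = psi0.astype(complex)
    for _ in range(int(z_max / dz)):
        k1 = rhs(psi)
        k2 = rhs(psi + 0.5 * dz * k1)
        k3 = rhs(psi + 0.5 * dz * k2)
        k4 = rhs(psi + dz * k3)
        psi = psi + dz / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return psi

n_sites = 101
psi0 = np.zeros(n_sites)
psi0[n_sites // 2] = 1.0                      # light injected into the central waveguide

for gamma in (0.1, 10.0):                     # weak vs strong Kerr nonlinearity
    psi = propagate_dnlse(psi0, gamma=gamma)
    p = np.abs(psi) ** 2
    participation = 1.0 / np.sum((p / p.sum()) ** 2)   # number of strongly occupied sites
    print(f"gamma = {gamma:5.1f}: output spread over ~{participation:.1f} waveguides")
```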
Very recently, spatial solitons were observed in topological waveguide arrays, in both bulk and edge states. Specifically, Mukherjee et al. observed bulk solitons in a 2D anomalous Floquet topological insulator [49], as shown in Fig. 10. The system consisted of a 2D array of waveguides with periodic/cyclic couplings to their nearest neighbors. On exciting the bulk waveguides at high enough input optical power, Mukherjee et al. observed solitons that undergo cyclotron motion while hopping between neighboring waveguides. Because of this cyclotron motion, the intensity distribution of the soliton would repeat only after propagating through a complete period (along the waveguide) of the lattice. Nevertheless, as expected, the soliton would not diffract into the bulk of the lattice. Furthermore, the quasi-energies of the solitons were observed to be in the bandgap, and the extent of localization of the solitons was observed to increase (decrease) with the increasing (decreasing) separation between the quasi-energies of the soliton and the linear band.
In another related article, Mukherjee et al. also observed a soliton-like solution on the edge states of the anomalous Floquet topological waveguide array [49]. As before, the input light is coupled to a single waveguide, but now on the edge of the array. Because the input is confined to a single waveguide, it can excite all the edge modes with different quasi-energies (in this case quasi-energy is propagation constant/momentum along the waveguides). The finite curvature of the edge band dispersion can then lead to the broadening of the edge excitation. Note that, even in linear topological systems, the excitations on the edge states stay localized to edge states. As such, the broadening here refers to the increase in the number of waveguides on the edge that are occupied by the beam as it propagates along the edge of the array. In the presence of nonlinearity, Mukherjee et al. observed minimal broadening - an indication of balancing the broadening of the beam against nonlinearity-induced self-focusing. Nevertheless, these soliton-like features on the edge were observed to scatter some power into the bulk of the lattice.
Following these observations of bulk and edge spatial solitons, another fascinating observation by Jurgensen et al. has been the demonstration of Thouless pumping of solitons [97; 98; 99]. For this experiment, they used a 1D array of coupled waveguides that simulates the off-diagonal Aubry-Andre-Harper (AAH) model for photons. In this array, the coupling strength (off-diagonal elements of the Hamiltonian matrix) between the waveguides varies periodically as a function of position along the waveguide length. This 1D model is related to the 2D Chern insulator model and exhibits an identical band structure. At the input of this array, a soliton, which is an eigenstate of the nonlinear Hamiltonian, was injected. Then, the Thouless pumping manifested as a quantized displacement of the soliton to neighboring waveguides by one unit cell after propagating one period of coupling-strength modulation. Even more, by choosing a soliton solution that bifurcated from a different band (with Chern number \(+2\)), the authors also observed quantized displacement by two unit cells in one period of propagation length. Evidently, the displacement of the soliton corresponded to the Chern number of the band from which the soliton bifurcated. Using this same platform, the authors have also recently demonstrated nonlinear pumping of solitons by quantized fractional amounts [100].
In another very different platform, that of cavity polaritons, Pernet et al. also observed spatial solitons [101]. Their system consisted of a 1D array of micropillars, each of which hosts a cavity polariton. The coupling between the neighboring micropillars was staggered to realize the 1D SSH model. The nonlinearity in this system originates from the coulomb repulsion between the excitonic part of the cavity polaritons. When the topological edge state at the interface between topologically trivial and
non-trivial regions was strongly pumped, a topological gap soliton was observed to be localized at the same interface state. Evidently, this topological soliton bifurcates from the mid-gap topological edge state and exhibits a spatial intensity distribution (localized on a single sublattice) that is similar to the linear case. More interestingly, when a dimer in the bulk of the array was pumped, with the pump frequency in the topological bandgap, the authors observed the formation of topological bulk solitons. Furthermore, the pump power threshold for the formation of topological bulk solitons was found to be robust against defects (introduced by another laser) only in one sublattice and not in the other sublattice. Going further, the authors demonstrated that by controlling the phase of the pump excitation over the two micropillars of the dimer, they could achieve sublattice-polarized topological solitons, such that the soliton wavefunction was predominantly localized to only one of the sublattices, and this polarization could be controlled by tuning the relative phase of the two pump beams.
### Dissipative Kerr Temporal Solitons and Frequency Combs
The presence of optical Kerr nonlinearity in optical resonators with multiple free spectral ranges (FSRs) can lead to the fascinating physics of temporal dissipative Kerr solitons and optical frequency combs [102; 103; 104; 105; 106; 107; 108; 109]. Because of the spontaneous four-wave mixing process mediated by the Kerr nonlinearity, a continuous-wave pump beam with a frequency near one of the resonances leads to the generation of new frequencies in the resonator. The energy and momentum conservation dictates that the newly generated frequencies are also close to the resonator frequencies at other FSRs. In the limit of a weak pump, this process is spontaneous and, as we discussed earlier, is used to generate energy-time entangled photon pairs. With increasing pump power, the newly generated frequencies beat with the pump and also with other frequencies, ultimately leading to a stimulated four-wave mixing process. Because the generated frequencies are almost aligned with the ring resonances, they constitute an optical frequency comb with frequency separation almost equal to the FSR of the resonator. However, the comb lines are, in general, not phase-locked and the frequency comb is chaotic. In this regime, the field profile inside the resonator is also chaotic.
Figure 10: (a-c) Schematic of the 2D array of coupled waveguides and variation of their coupling strengths, used to observe spatial topological bulk solitons. (e-h) Intensity profile in the array showing soliton behavior. (i) Schematic of the 1D array of coupled waveguides used for pumping of topological solitons and the variation of the coupling strength between the waveguides. (j) Topological pumping in the linear regime. (k) Topological pumping of solitons at higher pump powers, and (l) trapping of solitons at very high pump powers.
By appropriately designing the dispersion of the resonator, mostly to be in the anomalous regime, and by tuning the pump frequency and power, it is indeed possible to get to a regime where the linear dispersion of the ring is exactly canceled by the dispersion introduced by the Kerr nonlinearity and the resonator loss is balanced by the four-wave mixing (FWM) gain. This dual balance leads to the generation of a coherent optical frequency comb where all the comb lines are phase-locked and precisely equally spaced by the FSR of the resonator. The corresponding field profile inside the resonator corresponds to one or more soliton pulses in time, called dissipative Kerr solitons (DKSs), that propagate without dispersion. Such DKSs have been observed in a variety of resonator geometries, including whispering gallery mode, bottle, integrated ring, and Fabry-Perot resonators, and also in a number of material platforms, including silica, silicon, silicon-nitride, aluminum-nitride, silicon-carbide, and so on. From an application perspective, coherent optical frequency combs find a number of applications, for example, in precision time-keeping, spectroscopy, WDM transceivers, LiDARs, etc.
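The single-resonator physics described above is commonly modeled by the Lugiato-Lefever equation (LLE). The sketch below integrates the LLE in normalized units with a simple first-order split-step scheme; the sign convention, detuning, and pump values are one common but illustrative choice and are not tied to any specific experiment.

```python
import numpy as np

# Normalized LLE (one common convention, anomalous dispersion):
#   d(psi)/dt = -(1 + i*delta)*psi + i*|psi|^2*psi + i*d2*d^2(psi)/d(theta)^2 + F
N, d2, delta, F = 512, 1.0, 3.0, 2.0                  # grid, dispersion, detuning, pump (illustrative)
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=theta[1] - theta[0])  # integer azimuthal mode numbers
L = -(1 + 1j*delta) - 1j*d2*k**2                      # linear operator in Fourier space

phi0 = np.arccos(np.sqrt(8*delta)/(np.pi*F))          # phase of an approximate soliton ansatz
psi = np.sqrt(2*delta)/np.cosh(np.sqrt(2*delta)*theta)*np.exp(1j*phi0)

dt, steps = 1e-3, 20000
for _ in range(steps):                                # first-order split-step integration
    psi = psi*np.exp(1j*np.abs(psi)**2*dt) + F*dt     # Kerr phase rotation + pump
    psi = np.fft.ifft(np.exp(L*dt)*np.fft.fft(psi))   # loss, detuning, dispersion

print("peak intracavity intensity:", np.abs(psi).max()**2)  # a persisting narrow pulse signals a DKS
```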
Recently, there has been growing interest in using coupled-resonator systems to engineer novel DKS solutions and comb spectra that are not accessible using single resonator geometries [110; 111; 112]. On a more fundamental level, these systems also explore the self-synchronization of coupled resonators. Some of the early demonstrations in this regard used resonators made of fiber loops or a fiber-loop coupled to an integrated ring resonator [113; 114]. More recently, the field of frequency combs has seen an influx of ideas from the field of topological photonics.
Specifically, Mittal et al. theoretically studied the generation of DKSs and optical frequency combs in two-dimensional ring resonator arrays that, as we discussed earlier, create a synthetic magnetic field for photons, and thereby, simulate the integer or the anomalous quantum Hall physics for photons [115]. Given that this system realizes one copy of the anomalous Hall model near each of the single-ring resonance frequencies, it is effectively a three-dimensional system with two real and one synthetic dimension in frequency. For a linear system, the different copies at different ring resonance frequencies are uncoupled. However, the introduction of a four-wave mixing process (Kerr nonlinearity) couples these copies by mediating the hopping of photons between them. This demonstration is summarized in Fig. 11.
As we discussed earlier, the linear dispersion and the spatial confinement of topological edge states lead to efficient phase-matching of the spontaneous four-wave mixing process for the generation of entangled photon pairs. A similar phenomenon was observed for the generation of optical frequency combs in topological ring resonator arrays. The comb generation was efficient only when the pump beam was close to one of the edge mode resonances. This can be easily understood considering that the topological edge states circulate around the complete periphery of the lattice, and hence, realize a super-ring resonator composed of smaller rings. The edge state resonances are then simply the longitudinal modes of this super-ring resonator. So when the pump beam is close to an edge-state resonance, in addition to the linear dispersion, the FWM process is resonantly enhanced by the edge-state super-ring resonator.
By tuning the pump frequency and pump power, Mittal et al. observed two very distinctive regimes, namely those of phase-locked Turing rolls and nested solitons. In the regime of Turing rolls, all resonators that lie on the edge of the lattice host multiple equidistant peaks in their intracavity fields, and only a single edge-mode resonance oscillates. Remarkably, the phase of the Turing rolls in all the edge rings was locked. At higher pump powers, a regime of nested solitons was observed. In this regime, there was a single pulse in each ring on the edge of the lattice, and also a single super-pulse in the super-ring resonator formed by the edge states. Once again, the pulse positions in the rings were phase-locked. This nested-soliton pulse would circulate around the edge of the lattice, and around defects, without losing its phase-locking. The comb spectrum in this regime showed oscillation of multiple edge-mode resonances in each FSR (each copy of QAHE), and the underlying dispersion was canceled by that introduced by the Kerr nonlinearity. It is worth noting that merely exciting the edge-state resonances of the lattice did not lead to the formation of nested solitons; it also required tuning the pump frequency around the edge-state resonance and adjusting the pump power. As with single-ring resonators, the phase diagram (pump frequency vs pump power) of the topological frequency comb was largely dominated by a chaotic regime. Only in very narrow regimes of pump frequency and power were these phase-locked patterns observed. It is expected that similar physics could be explored in other coupled-resonator systems [116] or other platforms, for example, topological circuits, where it is also possible to introduce nonlinearities. Nevertheless, a topological frequency comb has yet to be realized experimentally in any platform.
## IV Quantum Topological Photonics
Due to their built-in robustness against decoherence, photonic systems are poised to play a central role in the development of quantum technologies. In addition to being the natural choice for quantum communications, photonic systems also offer a versatile platform for quantum simulations, for example, of random walks, molecular quantum dynamics, quantum-enhanced sensing, and full-scale quantum computation using measurement-based computing [117; 118; 119; 120; 121]. This is facilitated by the many photonic degrees of freedom, for example, polarization, orbital angular momentum, temporal and spectral modes, etc., onto which quantum states can be encoded, manipulated, and measured. However, the key challenge
in harnessing the full potential of quantum photonic systems and further diversifying their functionalities is to achieve a scalable route for quantum engineering of the various photonic degrees of freedom via large-scale integration of photonic elements on a single chip. This large-scale photonic integration is mainly hindered by the unavoidable fabrication disorder that leads to random variations in the photonic mode structure and manifests as device-to-device variations in behavior. Following the various demonstrations of topological robustness for classical photonic systems, it is then natural to investigate if topological protection could also be used to design robust quantum photonic devices. A number of recent theoretical and experimental works have explored such quantum topological photonic systems in various contexts [25, 28, 34, 35, 48, 122, 123, 124, 125, 126]. One broad category of these systems has explored the extent of topological robustness in the propagation of photons carrying quantum information, for example, encoded in temporal or spatial entanglement [28, 122, 123, 125]. Propagation of entangled photons through a disordered system can, in general, lead to the loss of quantum information. In contrast, using numerical simulations, Mittal et al. [28] and Rechtsman et al. [122] proposed that the topological edge states can reliably carry entangled photons. We note that the quantum information in these systems is generated outside of the topological device.
The second category of quantum topological photonic systems has explored the generation of quantum states of light [34, 35]. These systems use the second or third-order optical nonlinearities of the medium and implement spontaneous parametric processes that naturally lead to the creation of photon pairs correlated in energy-time and space-momentum. The presence of topological edge states is then exploited as a novel and robust route to engineer the spectral or spatial correlations in generated photon pairs. The third category seeks to interface solid-state quantum emitters, for example, quantum dots, with topological photonic systems [25, 124]. The inherent directionality and the robustness of topological edge states in these systems lead to chiral light-matter interactions. In the following, we review some of the experimental demonstrations of topological robustness in quantum photonic systems.
### Topological sources of quantum light
Sources of quantum light, in particular, correlated and entangled photon pairs, have relied on spontaneous processes such as spontaneous parametric down-conversion (SPDC) and spontaneous four-wave mixing (SFWM), in optical media with \(\chi^{(2)}\) or \(\chi^{(3)}\) nonlinearity, respectively [88, 127]. In these processes, one (SPDC) or two (SFWM) photons from a strong, classical pump beam annihilate and create two daughter photons, called signal and idler photons. The parametric nature of these processes indicates that no energy or momentum is transferred between the photons and the nonlinear medium, and therefore, the pump and the generated photons conserve both energy and momentum. For example, in SFWM, \(2\omega_{p}=\omega_{s}+\omega_{i}\), and \(2\vec{k}_{p}=\vec{k}_{s}+\vec{k}_{i}\), where \(\omega\) and \(\vec{k}\) are the frequencies and the momenta of the pump (p), signal (s) or idler (i) photons. The underlying dispersion relation \(\omega(\vec{k})\) of the photonic mode structure couples these two relations together, and eventually, leads to non-classical energy-time and position-momentum correlations in the generated photon pairs such that they are described by a two-photon wavefunction.
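Schematically, and in a simplified single-transverse-mode picture (not specific to any of the devices discussed here), the resulting two-photon state can be written as

\[
|\psi\rangle\;\propto\;\int d\omega_{s}\,d\omega_{i}\;\alpha(\omega_{s}+\omega_{i})\,\phi(\omega_{s},\omega_{i})\;\hat{a}^{\dagger}_{s}(\omega_{s})\,\hat{a}^{\dagger}_{i}(\omega_{i})\,|0\rangle,
\]

where the pump-envelope function \(\alpha\) enforces energy conservation around \(2\omega_{p}\), and the phase-matching function \(\phi\), for example \(\phi\propto\mathrm{sinc}(\Delta k\,L/2)\) with \(\Delta k=2k_{p}-k_{s}-k_{i}\) over an interaction length \(L\), enforces momentum conservation; together they determine the energy-time correlations of the pair.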
Implementing SFWM and SPDC on a photonic chip
Figure 11: (a) Schematic of the 2D array of ring resonators used to generate temporal nested solitons. The resonator array simulates the anomalous quantum Hall model for photons. The pump laser is coupled to the array using the input-output waveguide. The nested comb output is collected using the same waveguide. (b) Phase-locked Turing rolls along the edge of the 2D array. (c) Phase-locked nested solitons propagating along the edge of the array. Note that there is a single soliton in each ring resonator that is a part of the nested soliton. (d) Comb spectrum in the regime of phase-locked Turing rolls showing oscillation of a single edge mode in each FSR. (e) Comb spectrum in the regime of nested solitons showing the oscillation of multiple edge modes in each FSR.
offers a scalable and versatile platform to generate photon pairs with engineered spectral or spatial correlations [128; 129; 130; 131; 132]. In particular, on-chip quantum light sources, using SPDC or SFWM, have now been realized on a variety of material platforms, such as silicon, silicon-nitride, lithium-niobate, aluminum-nitride, etc. [133; 134; 135]. A common feature of these sources is the use of a ring resonator that can resonantly enhance the strength of nonlinear interactions and lead to higher generation rates [130; 131; 132].
With the aim of further enhancing the generation of photon pairs, and simultaneously, engineering their spectral and temporal correlations in a topologically robust way, Mittal et al. [34] used the system of coupled silicon ring resonators to implement SFWM. As we discussed earlier, this system realizes a synthetic magnetic field, and thereby, simulates the integer quantum Hall effect for photons [129; 22; 28]. They chose the synthetic magnetic field flux \(\phi=\pi/2\) such that the transmission spectrum of the device exhibits two edge bands, with edge states circulating around the lattice in clockwise and counterclockwise directions. Using transmission and delay measurements made over a number of devices, the edge states in this system have been shown to be quantitatively robust against common fabrication disorders, for example, a mismatch in the ring resonance frequencies [135].
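The effective lattice model behind such ring-resonator arrays is of the Harper-Hofstadter type. As a rough illustration (illustrative parameters, not the device values of the experiments discussed here), a strip-geometry calculation already shows the gapped bulk bands bridged by edge-state branches:

```python
import numpy as np

def strip_bands(phi=np.pi/2, Ny=20, J=1.0, nk=101):
    """Harper-Hofstadter strip in the Landau gauge: periodic along x, Ny sites across.
    phi is the synthetic flux per plaquette; edge-state branches cross the bulk gaps."""
    ks = np.linspace(-np.pi, np.pi, nk)
    bands = np.zeros((nk, Ny))
    y = np.arange(Ny)
    for i, kx in enumerate(ks):
        H = np.diag(-2*J*np.cos(kx + phi*y)).astype(complex)                   # x-hopping (Landau gauge)
        H += np.diag(-J*np.ones(Ny - 1), 1) + np.diag(-J*np.ones(Ny - 1), -1)  # y-hopping
        bands[i] = np.linalg.eigvalsh(H)
    return ks, bands

ks, bands = strip_bands()
print(bands.shape)   # (101, 20): four magnetic bands for phi = pi/2, bridged by edge modes
```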
While the topological robustness of transmission through photonic edge states has been extensively explored for applications in integrated photonic devices, Mittal et al. exploited the linear dispersion of the edge states to engineer the spectral correlations of generated photons. In particular, the spectral correlations between generated photon pairs, as well as their generation rate, are dictated mainly by the phase matching between the pump, the signal, and the idler photons, that is, \(2\vec{k}_{p}\left(\omega_{p}\right)=\vec{k}_{s}\left(\omega_{s}\right)+\vec{k} _{i}\left(\omega_{i}\right)\). To understand these spectral correlations, they measured the generation rate of photons as a function of the input pump frequency and the spectra of generated signal and idler photons (Fig. 12c-f). They showed that the maximum number of photons is generated when the pump frequency is in the edge band of the device (highlighted by the white box). Furthermore, this also limits the spectra of generated photon pairs to the same edge band. This spectrally confined and enhanced generation of photon pairs is because of the linear dispersion of the edge states, which naturally satisfies the phase-matching condition when all the photon fields are in the edge band of the device. Furthermore, their confinement at the edge of the lattice ensures that they also have an excellent spatial overlap, which further enhances the generation of photon pairs. In contrast to the edge modes, the bulk modes show a much weaker generation of photon pairs with no spectral confinement.
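To see explicitly why a linear edge-band dispersion automatically satisfies phase matching, assume (as a simplification) a locally linear dispersion \(k(\omega)\approx k_{0}+(\omega-\omega_{0})/v_{g}\) with a common group velocity \(v_{g}\); then

\[
\Delta k \;=\; 2k(\omega_{p})-k(\omega_{s})-k(\omega_{i}) \;=\; \frac{2\omega_{p}-\omega_{s}-\omega_{i}}{v_{g}} \;=\; 0,
\]

so energy conservation alone guarantees phase matching for every signal-idler pair within the edge band.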
To test the robustness of spectral correlations between photons generated by their topological source, Mittal et al. made measurements over a number of devices, and also compared their results against a similar source implemented using a topologically-trivial one-dimensional array of ring resonators (Fig. 12g) [136; 137]. Although these devices were fabricated at state-of-the-art commercial silicon foundries, they exhibited significant disorder in the ring resonance frequencies, hopping strengths, as well as hopping phases [135]. Nevertheless, as expected, they observed that for topological sources, the maximum number of photons was always generated when the pump, the signal, and the idler fields constituted the edge modes of the device, and therefore, the spectral correlations in the edge band were very similar across different devices. In contrast, the topologically-trivial 1D sources showed very significant variations in their correlations, with a much lower similarity across devices. Using second-order cross- and self-correlation measurements between generated photons, they confirmed that their source was operating in the quantum regime. More recently, this scheme has also been extended to generate path-entangled photon pairs [126; 48]. These results bode well for the use of topological sources to achieve quantum interference between photons generated by independent sources.
In another similar experiment, Blanco et al. [35] investigated the generation of correlated photon pairs in a 1D lattice of coupled waveguides. The coupling strength between the waveguides is modulated by alternating the gap (small or large) between the waveguides, such that the lattice simulates the SSH model [6], and the edge states appear at the physical boundary between the two topological phases. Furthermore, the edge state wavefunction vanishes at the lattice site immediately neighboring the edge site, and at every alternate waveguide thereafter.
The waveguides were fabricated using silicon, which allowed for the generation of correlated photon pairs via SFWM. In particular, the edge state of the lattice was pumped using a pulsed laser, generating signal and idler photon pairs as the pump propagated through the lattice. At the output of the lattice, Blanco et al. measured the spatial correlations in the generated signal and idler photons. They observed that similar to the classical (single-particle) edge state wavefunction in the SSH model, the spatial correlations in the two-photon wavefunction also showed zero amplitude at alternating waveguides. Furthermore, they showed that these zeros of the wavefunction were robust against disorder in the coupling strength between the waveguides.
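For intuition (this is not the authors' simulation; the couplings are illustrative), the single-particle SSH edge mode already shows the alternating zeros that the measured two-photon correlations inherit:

```python
import numpy as np

def ssh_chain(n_cells, v=0.4, w=1.0):
    """Open SSH chain with alternating intra-cell (v) and inter-cell (w) couplings; v < w is topological."""
    t = np.empty(2*n_cells - 1)
    t[0::2], t[1::2] = v, w
    return np.diag(t, 1) + np.diag(t, -1)

vals, vecs = np.linalg.eigh(ssh_chain(20))
edge = vecs[:, np.argmin(np.abs(vals))]     # a near-zero-energy (mid-gap) edge mode
print(np.round(np.abs(edge[:8])**2, 4))     # near the left edge: finite weight on even sites, ~0 on odd sites
```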
From a more fundamental perspective, the SFWM process coherently adds or removes photon pairs from the topological lattice. The number of particles in the lattice is, therefore, not conserved and it operates in the non-Hermitian regime, with no analogs in fermionic systems. Furthermore, as theoretically shown by Peano et al. [138], an increase in the strength of the SFWM interaction would naturally lead to the generation of squeezed light such that only the topological edge modes of the lattice are effectively squeezed.
### Topological robustness for propagating quantum states of light
While photons do not interact with one another, they do exhibit quantum interference which forms the basis of many algorithms used in quantum communications, quantum simulations and quantum computation using photons [118; 119; 120]. This is best exemplified by the Hong-Ou-Mandel interference where two indistinguishable photons arriving in two different input ports of a beam-splitter tend to bunch at either of the output ports [139]. This interference phenomenon has led to the observation of quantum walks of correlated photons and the realization of boson sampling in spatial networks of integrated beam-splitters [140; 121].
However, scaling these multi-photon quantum interference and boson sampling schemes to a larger number of photons requires a significant reduction in the variations of the splitting ratio of the on-chip beam-splitters. It is therefore natural to investigate if topological protection can be used to design robust beam splitters.
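As a toy illustration of this sensitivity (assuming the symmetric beam-splitter convention written in the comment), the two-photon coincidence probability grows quadratically with the imbalance of the splitting ratio:

```python
import numpy as np

def coincidence_probability(T):
    """Two-photon coincidence probability at the outputs of a lossless beam splitter with
    intensity transmissivity T, for perfectly indistinguishable photons, using the symmetric
    convention a+ -> sqrt(T) a+ + i sqrt(1-T) b+ and b+ -> i sqrt(1-T) a+ + sqrt(T) b+."""
    t, r = np.sqrt(T), np.sqrt(1 - T)
    return (t**2 - r**2)**2    # coincidence amplitude is t^2 - r^2; it vanishes only at T = 1/2

for T in (0.5, 0.55, 0.6):
    print(T, coincidence_probability(T))   # 0.0, 0.01, 0.04: quadratic growth with splitting-ratio error
```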
Along these lines, Tambasco et al. [123] realized a beam-splitter using the topological edge modes. Their system consists of 1D arrays of coupled waveguides, such that the coupling strength between them is modulated both along the lattice and along the length of the waveguides. This system simulates the off-diagonal Harper model and hosts a pair of edge states at its boundaries, similar to the SSH model. However, modulation of the coupling strength along the length of the waveguides allows them to adiabatically delocalize the edge states from the boundary to the bulk of the lattice such that the photons traveling in the edge states can now interfere. The edge modes are then again localized at the boundaries of the lattice. This setup then realizes an integrated beam-splitter for photons but uses edge modes for guiding photons.
At the input of this topological beam-splitter, Tambasco et al. injected two indistinguishable photons generated via off-chip SPDC. By tuning the relative delay between the input photons and using coincidence measurements at the output, they observed a high visibility HOM interference dip, which confirmed the intended operation of their topological beam-splitter. However, the robustness of the beam-splitting ratio of this topological beam-splitter against fabrication disorder is yet to be studied.
In another experiment using similar 1D arrays of waveguides that simulate the off-diagonal Harper model, Wang et al. [125] investigated the robustness of intensity correlations between indistinguishable photon pairs as they propagate through the lattice. Similar to the experiment of Tambasco et al., the correlated photon pairs were generated off-chip using SPDC. Wang et al. showed that when both photons are injected in the edge mode of the array, they maintain the intensity correlations. In contrast, when the photon pairs propagate through bulk modes, there is a suppression in their correlations.
In a similar context, Mittal et al. [28] numerically studied the propagation of time-bin entangled photons through their 2D topological system of coupled ring resonators. Similarly, Rechtsman et al. [122] investigated the propagation of spatially entangled photon pairs through their Floquet topological system of coupled helical waveguides. These investigations are similar in essence to quantum walks of photon pairs through networks of beam-splitters or coupled waveguides. They observed that propagation through edge states preserves the temporal and spatial correlation between photon pairs, respectively, even in the presence of disorder.
Figure 12: (a) 2D ring resonator array used to realize a topological source of correlated photon pairs generated via SFWM. Because of the synthetic magnetic field, photons acquire a non-zero, direction-dependent phase \(\phi\) when they circulate around a closed path of four site rings (cyan) and four link rings (yellow). The clockwise (CW) and the counter-clockwise (CCW) edge states are highlighted in color. (b) Measured transmission spectrum showing edge and bulk bands. (c-f) Measured spectral correlations, that is, the number of photons generated as a function of the pump and signal frequencies. The dashed lines indicate the edge band region. The spectral correlations for 2D topological devices are very similar in the edge band region. (g) SEM image of a topologically trivial 1D array of coupled ring resonators. (h-k) Measured spectral correlations for 1D devices. The correlations differ significantly across devices because of disorder.
### Topological photonic systems coupled to quantum emitters
Coupling light to matter degrees of freedom, such as quantum dots, can mediate the interaction between photons and lead to novel quantum states of light [141]. In turn, the photonic mode structure can significantly alter the properties of solid-state systems. For example, photonic cavities can be used to manipulate the emission spectra and the excitation lifetimes in quantum dots [141; 142]. Coupling quantum dots and solid-state emitters to topological photonic edge states is, therefore, an exciting avenue to investigate chiral light-matter interactions that could lead to many-body states [143].
Barik et al. [25] realized such a quantum optics interface between quantum dots and photonic edge states. Their topological photonic system was designed using a 2D photonic crystal with triangular holes in a GaAs membrane (Fig. 13b). When the holes are arranged in a honeycomb lattice, the band structure of the photonic crystal exhibits a Dirac point, very similar to that of graphene [81; 23]. Nevertheless, a deformation of the unit cell of the lattice leads to the appearance of a bandgap. More specifically, expanding the unit cell of the lattice, that is, increasing the distance between the holes in the unit cell while keeping the boundaries of the unit cell constant, resulted in a bandgap that was topological in nature. In contrast, shrinking the unit cell also opened a bandgap, but a trivial one. Therefore, an interface between the shrunken and the expanded domains hosts topological edge states. This model realizes the quantum spin Hall effect, where the in-plane circular polarization of the electric field constitutes the two pseudo-spins of the system. The edge states corresponding to the two pseudo-spins propagate along the interface in opposite directions.
The GaAs membrane used by [25] was embedded with InAs quantum dots, with their emission spectra well aligned to the bandgap of the photonic crystal structure [25]. To couple the quantum dots to the in-plane circularly polarized photonic edge states, they used an out-of-plane magnetic field that induced a Zeeman splitting in the excited state energies of the quantum dots (Fig. 13d). In this configuration, the two Zeeman-split energy levels were selectively coupled to the two circularly polarized photonic edge states propagating along opposite directions. In this work [25], Barik et al. excited a single quantum dot in the middle (M) of the interface (see Fig. 13b) and measured the spectra of photons collected from either side (L or R) of the interface, as a function of the magnetic field strength. Because of the pseudo-spin selective coupling of the excited states to counter-propagating edge states, they observed that the lower wavelength (higher energy) photons were primarily guided by the edge states to the right end of the interface, whereas the higher wavelength (lower energy) photons were guided towards the left end of the interface. To demonstrate the robustness of this topological quantum-optics interface, they showed that the chiral propagation of photons is robust against any disorder that does not flip the two pseudo-spins, for example, bends in the interface. Furthermore, they used second-order correlation measurements to observe the anti-bunching of photons, which ensured that they were indeed single photons.
In another experiment, Ota et al. [124] explored the coupling of quantum dots to nanophotonic cavities realized using corner states of light in higher-order topological systems. Similar to Barik et al., their system comprised a GaAs photonic crystal membrane, with embedded InAs quantum dots. The photonic crystal was designed by etching two sets of square holes, with different lengths of their sides, in the GaAs membrane. This difference in the hole dimensions opened up a bandgap. More importantly, a 90\({}^{\circ}\) interface between two photonic crystal regions with swapped hole dimensions led to the emergence of corner states in the bandgap, physically located at the bend in the interface. This system is analogous to a two-dimensional SSH model with alternating coupling strengths in both dimensions. To probe the existence of corner states in their photonic crystal, Ota et al. used the photoluminescence from the ensemble of quantum dots as a broadband light source (pumped by a laser). As an indication of the corner states, they observed a sharp peak in the photoluminescence spectrum, in the region expected to host corner states. They confirmed their observation of the corner states by showing that this peak in the photoluminescence spectrum originated from a narrow spatial region at the 90\({}^{\circ}\) bend in the interface between topological and trivial domains of the photonic crystal. We note that unlike Barik et al. who coupled a single quantum dot to the topological edge states, Ota et al. used an ensemble of quantum dots.
## V Remaining challenges and future directions
Here, we highlight some of the challenges and potential future directions in topological photonics in linear, nonlinear, and quantum regimes, as well as coupled electron-photon systems, ranging from fundamental to application perspectives. For the latter, it is essential to go beyond proof-of-principle experiments, and a side-by-side comparison of the efficiency and yield of a topological design compared to trivial counterparts (with the same fabrication process and material) should be established. Such side-by-side comparisons and yield estimates remain scarce in the literature [145; 135].
### Linear topological photonics
* In spin-Hall PhC waveguides [25] and ring resonators [146], the propagation lengths and Q factors are low since the edge states are above the light-cone, and therefore, radiative. Moreover, practical edge state bandwidths in spin-Hall PhC waveguides
in strong perturbation (shrinking and expanding) regimes [25] are limited. One intriguing improvement can be the realization of broadband spin-Hall TPC waveguides and high Q photonic cavities with below-light cone edge dispersion. Another remaining challenge is the efficiency of mode conversion at the topological-conventional waveguide interfaces [147; 148; 26]. Further optimization of such mode conversion is essential for the efficient integration of these optical components for scalable photonic circuitry. One recent approach to address this challenge can be found here [149]. More promising approaches may be using the recently demonstrated topological funneling of light and edge mode tapering, although it should be noted that topological funneling of light in these studies is based on non-Hermitian physics [50; 83]. Moreover, so far many valley-Hall PhC waveguides [147; 24; 26] have been proposed and studied. In addition to a recent investigation of the nature and degree of backscattering against sharp bends and fabrication imperfections [145], further studies are required to confirm if there is any protection against real-world defects and classify and quantify the strength of such protection rigorously. A recent study can be found here [150].
* While helical topological waveguides have been experimentally realized [122], their photonic applications have not been explored yet. This is in contrast to coupled rings used for solitons and lasers, and TPCs used for routing and QDs. It would be desirable to explore whether similar or other exclusive applications such as chip-integrated photonic circuits are realizable using the topological helical waveguides.
* One exciting direction can be the design and realization of reconfigurable topological devices with phase-changing materials. Such a reconfigurable platform can be inspired by recently demonstrated reconfigurable non-Hermitian topological photonic routing [151]. In particular, it is intriguing to be able to imprint a wide variety of optical components such as waveguides, ring resonators, and beam splitters all within the same device with a compact footprint. A recent proposal has explored the possibility of such systems in spin-Hall TPCs [152].
* Another potential direction in either topological ring resonator arrays or TPCs can be the realization of topological bandpass and notch filters, enabled by the robust propagation of the edge states over larger device sizes. In particular, the realization of robust topological delay lines has been proposed in such devices [19]. One can investigate what other devices are possible to realize that can benefit from topological protection. For example, the realization of a topological photonic taper [83] and the application of topological photonic beaming [52] have not been reported yet. Moreover, the scalability and application of topological antennas have not been investigated fully yet [39; 40].
* Spin-flip in topological photonic resonators and photonic crystal waveguides is an undesirable feature. Using inverse design or more generally machine learning techniques to address these issues and design practical topological photonic systems can be an intriguing avenue to improve device functionality and scalability [153]. In this approach, topology can be hard coded in the optimization process. For example, a non-zero Berry phase can be encoded as a loss function.
* Despite decades of theoretical and experimental work on the electronic integer quantum Hall effect, the plateau transition remains an active area of debate [154; 155]. It is intriguing to explore whether photonic systems can shed light on the nature of
Figure 13: (a,b) SEM image and the band structure of the shrunken and expanded honeycomb lattice used to realize a topological interface between quantum dots and helical edge states. (c) An applied magnetic field introduces a Zeeman splitting between the pseudo-spin (right and left-circular) polarized photons. (d-f) The pseudo-spin polarized edge states propagate along the interface in opposite directions, where they are collected using grating couplers. (g,h) The measured second-order correlation function \(g_{2}\left(\tau\right)\) shows the operation of quantum dots as single photon sources. [144].
extended states and transition between plateaus [156]. This might require strong interaction between photons.
* In TPCs, valuable recent work [145] has exhaustively investigated the nature and the degree of scattering in topological photonic waveguides. More experimental comparisons are sorely needed to further delineate the fundamental and technical limits of topological protection for device engineering.
* With recent remarkable advances in machine learning, it is interesting to investigate its implications for physics [157], and more specifically for topological photonic systems. This can be in diagnosing and classifying topological states and even finding new band structures with topological phases [158]. In addition to designing topological photonics devices, exploring the connection between topological data analysis (TDA) and machine learning and their potential implications in photonic systems may also be another emerging avenue in this field [159].
* Another novel direction can be the realization of reconfigurable topological photonic devices using phase-changing materials. In particular, the demonstration of multi-functional operation within the same photonic device footprint can be intriguing. A recent example of such a study can be found here [160].
### Nonlinear topological photonics
* Recent theoretical studies suggest a rich set of phenomena to occur in topological lasers [161; 162]. Experimental exploration of such ideas can significantly expand the use of topological edge states in real-world applications in novel lasers. Experimentally, an unambiguous demonstration of topological robustness in topological lasers, together with a side-by-side comparison of their efficiency against trivial one-dimensional counterparts, remains an active area of research. Another direction includes Dirac-point lasers in 2D geometries [163; 32; 33]. In particular, it is useful to optimize the stability and laser emission over broader chip areas without multi-mode operation in these devices. Moreover, Weyl points in 3D photonic crystals are novel candidates for expanding topological lasers beyond 2D configurations [164; 165]. Polaritons, which are hybrid photon-exciton quasiparticles, can also be engineered to have topological properties, with the topology residing in the photonic part, the excitonic part, or their coupling. In these topological exciton-polaritons, which were demonstrated recently [36], the band gap is very small, which makes their broadband application and spectrally-resolved demonstration challenging. It would also be intriguing to explore concepts such as spin-selective strong light-matter interaction in topological exciton-polariton systems [166].
* Recently, topological frequency combs and nested temporal solitons have been theoretically proposed [48]. Nevertheless, the experimental realization of topological optical frequency combs using coupled ring resonators is expected to be challenging. In particular, the currently estimated pump power requirement for topological frequency combs is high, more than 10 W. This is mainly set by the disorder in ring resonance frequencies which, even for state-of-the-art photonic integration, is of the order of a few tens of GHz. This sets a lower limit on the coupling strength \(J\) between the rings and limits the loaded quality factor of the rings. Furthermore, lowering pump power requirements will also reduce the deleterious thermal effects which are problematic even for single-ring combs. Another area of concern for ring-resonator-based topological combs will be the mode-mixing between transverse modes of the ring waveguides. To lower losses, single-ring resonator frequency combs often employ waveguides that support multiple transverse modes. However, for coupled ring resonators, mixing between different transverse modes can be a significant challenge. For proof-of-principle demonstrations, many of these issues could be mitigated by designing rings with lower coupling strengths (higher loaded quality factors and lower topological edge bandwidth) and coarse-tuning the ring resonator frequencies (say using heaters) such that the disorder falls within the reduced topological edge bandwidth. Nevertheless, it will be interesting to investigate other topological photonic designs that could lead to lower pump power requirements and make them more appealing for practical applications. From a theoretical perspective, the topological frequency combs could host a much more diverse range of nonlinear solutions, such as breathing solitons, dark solitons, platicons, etc.; these solutions have not yet been explored. The development of an analytical approach to describe these multi-resonator systems might be very helpful, but again, it is expected to be challenging.
* While the nature of the linear and quantum many-body topological states has been heavily studied and understood, the nature of topological invariants in the nonlinear topological photonic systems remains elusive. Recent works have shown that in the nonlinear regime, the topological character of the linear regime is inherited in some form [167; 168; 99]. It is interesting to see whether there are genuinely nonlinear topological states. Moreover, is it possible to have a bulk-edge correspondence for the nonlinear topological states?
* Inspired by the above examples, are there other
photonic phenomena without electronic counterparts? An example can be using topological confinement for more efficient lasers [169]. In particular, it is useful to answer whether there are other implications for using the same confinement, such as optical sensing. Moreover, further investigation of the topological amplifier, which was recently reported [170], is another potential future direction.
### Quantum topological photonics
* In pair generation in a topological lattice, the key advantage is the robust phase-matching compared to conventional counterparts [34]. Moreover, the device footprint of a topological quantum light source is considerably large, since it is composed of rings and extends over several hundreds of microns.
* Position dependence of the chiral coupling in QD-coupled topological waveguides is a significant limitation in these quantum optics interfaces. Moreover, the regions of high chirality and high beta factor are mostly located in the holes rather than in the material [171; 172; 147], which is detrimental for coupling to solid-state quantum emitters. Also, the low Purcell factor (values below 5 have been reported so far [144]) is another limitation in QD-coupled TPCs. In topological waveguides, the emitter's coupling efficiency, as well as its emission enhancement, may be improved with either the slow-light effect [173] or a smaller mode volume (for example, harnessing topological mode tapering [83]). Moreover, in whispering-gallery-mode ring resonators [144; 147], possibilities for achieving higher Q factors (for example, using surface passivation to suppress out-of-plane scattering) can be investigated. Similar to [174], coupling multiple quantum dots to edge states is a very interesting direction to explore; of particular interest is the collective dynamics between distant emitters [175]. However, this is currently challenging because the beta factor is position-dependent in current TPC waveguides. Chiral coupling is also position-dependent in these systems, so directional sub- and superradiance effects from embedded quantum emitters in topological waveguides remain challenging. One avenue to explore can be using inverse design to address this issue, and in turn, to investigate if such a platform can be utilized to address the spatial inhomogeneity challenge in chip-integrated solid-state quantum emitters. Finally, the realization of on-chip quantum interference of single photons in add-drop filter photonic crystal configurations, in which chiral coupling between emitters and several modes of the resonator was shown recently [148], would be another intriguing possibility to explore. In particular, combining recent demonstrations of broadband slow-light enhancement in a topological photonic ring resonator with an integrated add-drop filter configuration may be studied [176; 148].
* Another interesting development is the possibility of coupling emitters to topological photonic structures. This includes coupling emitters to 1D structures [177; 178; 179] and 2D systems [180; 181], realizing novel forms of light-matter hybrids.
* An extremely exciting avenue is the realization of Laughlin states with a higher photon number than two [47], and more broadly, of other topologically-ordered states, and potentially braiding them [182; 183]. Specifically, the excitations above the ground states of such models can have _anyonic statistics_, which is neither fermionic nor bosonic [61]. Non-Abelian anyons have been proposed as a robust scheme for topological quantum computation [184]. Note that one can simulate such exotic statistics even with non-interacting photons [185; 186; 187]. However, the many-body features, such as the topological robustness for quantum computation, are absent in such non-interacting systems.
### Strong photon-photon interaction and coupled electron-photon systems
If the interaction between cavity photons is so strong that a single photon can prevent the transmission of another one (photon blockade), one can expect even more exotic topological states, such as the photonic counterpart of fractional quantum Hall states. In particular, there is a whole class of topological states, known as topologically ordered states, which are distinct from the states covered in the introduction. In these states, entanglement and strong interaction play a central role, for which a brief review can be found here [61]. In fact, once photons strongly interact with each other, they can be considered as spin-1/2 particles and therefore many of the topologically ordered models will be directly applicable. For the case of photonic fractional quantum Hall states, the essential ingredients are gauge fields (discussed earlier) and strong photon-photon interaction. The important parameter in such systems is the magnetic filling factor: the number of particles divided by the number of magnetic flux quanta (introduced in Sec. II.2.1). The simplest case for bosons is \(\nu=1/2\), where the ground state is the Laughlin state, which is both unique and gapped. For pedagogical reviews of the electronic and bosonic fractional quantum Hall effects, refer to [188] and [64], respectively.
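For reference, in the lowest Landau level and symmetric gauge, the bosonic Laughlin state at \(\nu=1/2\) takes the standard form

\[
\Psi_{1/2}(z_{1},\dots,z_{N})\;\propto\;\prod_{j<k}\left(z_{j}-z_{k}\right)^{2}\exp\!\left(-\sum_{j}\frac{|z_{j}|^{2}}{4\ell_{B}^{2}}\right),
\]

where \(z_{j}=x_{j}+iy_{j}\) are the particle coordinates and \(\ell_{B}\) is the magnetic length; the even exponent makes the wavefunction bosonic, and its quasihole excitations carry fractional charge and anyonic statistics.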
Generically in such systems, the ground state on a torus (periodic boundary condition) has a finite degeneracy. Therefore, a Chern number cannot be associated
with a single state, and indeed it is _shared_ among the degenerate states, and therefore, it can be fractional.
The strong interaction could be achieved in various ways, for example, using Rydberg atoms or superconducting qubits [189]. Remarkably, in the Rydberg systems, fractional quantum Hall states (Laughlin states) of a few photons have been observed [38]. Scaling such a system to a larger number of particles remains a challenge. In fact, one may ask how small a system can be and still be called topologically ordered matter. In other words, given a wavefunction on a finite system size, is it possible to identify whether the system is topologically ordered or not? Can one extract the Chern number, without prior knowledge of the Hamiltonian and application of any field? The answer to these questions seems to be positive based on recent analytical and numerical works [190; 191; 192]; however, there is no experimental demonstration to date.
So far we considered purely photonic models, whereby, by engineering a Hamiltonian with topological properties directly for photons, one can observe various topological phenomena. An interesting direction is to consider light-matter coupling where either the photonic or the matter part has some topological properties, and therefore, the coupled system inherits those topological features. In other words, the matter part is not _integrated out_ and the light-matter interplay is the essence.
We note that the above categorization might sometimes seem artificial since the underlying microscopic theory for all the cases in this Perspective is quantum electrodynamic. Specifically, what is purely photonic or matter is a matter of length scale over which we integrate out microscopic degrees of freedom to write an effective Hamiltonian for the system. For example, one can call quantum Hall states coupled to optical cavities also a topopolariton since the optical excitations in the cavity-quantum Hall system can be considered excitons that are coupled to the cavities. Below, we highlight several directions.
* Light-matter interaction in electronic quantum Hall systems: As we mentioned at the beginning of this Perspective, electronic quantum Hall systems are the first physical systems to manifest topological properties in transport measurements. However, from early on, optical measurements were also performed on such systems [193], for example, to probe electronic incompressibility. More recently, there have been interesting experiments coupling such states to cavities, either in the THz [194; 195] or the optical domain [196], to probe and manipulate intra- and interband states, respectively. Regardless of being in the cavity or free space, it is intriguing to ask whether light-matter coupling could be exploited to create and manipulate electronic topological states, and eventually perform braiding.
Moreover, it has been theoretically argued that the light-matter interaction is dramatically modified in quantum Hall states, since the chirality and topological robustness of the electronic states may lead to spatially large wave functions, with extents comparable to the wavelength of the corresponding optical transitions [197]. In particular, the dipole approximation can be violated and the system could be sensitive to the gradient of the electric field and the phase of an optical vortex beam. The latter is only possible if the electron is phase coherent around the optical vortex and _experiences_ the phase winding of the optical beam. It has been proposed that such light-matter coupling could lead to radial currents in quantum Hall systems in the absence of any electric field bias [198; 199]. Such optical vortex beams could be used to optically create topological excitations in fractional quantum Hall systems [200]. A recent experiment demonstrated that the photocurrent can be sensitive to the phase winding of the beam [201]. In the context of this section, these systems are particularly interesting because both the electronic and photonic states have topological properties, and such topological interplay is an interesting direction for future research.
* Topological photonic crystals: In the linear section, we discussed photonic crystals that can have topological properties, manifested in the presence of helical waveguides and their coupling to point-like emitters, like QDs [172]. One can also couple extended exciton states, such as those in 2D materials with optical transitions, to such helical states. Since layered 2D materials are essentially one or a few atomic layers thick, they can strongly couple to the confined electromagnetic modes of the topological photonic crystals. Particularly interesting are recent studies on the hybridization of topological photonic states with condensed matter systems, where 2D transition metal dichalcogenides were shown to be strongly coupled to topological photonic crystal metasurfaces, forming a polaritonic metasurface [37]. Similar to the case of QDs [172], the chiral light-matter coupling is sensitive to the location of the emitter with respect to the transverse position of the waveguide. In fact, one would naively expect that chiral light-matter coupling for 2D material excitons would be absent in such systems, because excitons with the same polarization are present all along the transverse direction of the waveguide, half coupled to the left-propagating modes and half to the right-propagating modes. However, experimental observation does not agree with this argument, and this subject remains an active area of research.
###### Acknowledgements.
We wish to gratefully acknowledge Erin Knutson, Daniel Leykam, Mikael Rechtsman, and Alberto Amo for their insightful comments and discussions during the preparation of this manuscript. This work was supported by AFOSR FA9550-20-1-0223, FA9550-19-1-0399, and ONR N00014-20-1-2325, NSF IMOD DMR-2019444, ARL W911NF1920181, Mintra Martin and Simons Foundations.
|
2307.14546 | Corrections, Improvements, and Comments on Some Gradshteyn and Ryzhik
Integrals | In this paper, we prove that two integrals from Gradshteyn and Ryzhik (2014)
[1] (namely, Eqs. 3.937 1 and 3.937 2) provide incorrect results in certain
conditions. We derive those conditions herein and provide the corrections
required for those two formulas. We furthermore derive improved formulas for
the solutions to those integrals that are less complicated, avoid the errors of
the original formulas, and work under a larger range of parameter values. The
improved formulas are used to verify the results of a few other related
integrals from [1]; the previous results need correction in some instances, or
are correct but can be extended in other instances. Lastly, we also consider
the extended case of complex-valued parameters and derive the resulting
formulas. | Robert C. Elliott, Witold A. Krzymień | 2023-07-27T00:06:16Z | http://arxiv.org/abs/2307.14546v3 | # Corrections, Improvements, and Comments on Some
###### Abstract
In this paper, we prove that two integrals from Gradshteyn and Ryzhik (2014) [1] (namely, Eqs. 3.937 1 and 3.937 2) provide incorrect results in certain conditions. We derive those conditions herein and provide the corrections required for those two formulas. We furthermore derive improved formulas for the solutions to those integrals that are less complicated, avoid the errors of the original formulas, and work under a larger range of parameter values. The improved formulas are used to verify the results of a few other related integrals from [1]; the previous results need correction in some instances, or are correct but can be extended in other instances. Lastly, we also consider the extended case of complex-valued parameters and derive the resulting formulas.
keywords: Table of functions, Definite integrals, Exponential and trigonometric functions, Hypergeometric function \({}_{0}F_{1}\)
2000 _Mathematics Subject Classification_: Primary 33; Secondary 33-00, 33B10, 33C10
## 1 Introduction
In Gradshteyn and Ryzhik's _Table of Integrals, Series, and Products_[1], the following results for two definite integrals ([1, Eq. 3.937 1] and [1, Eq. 3.937 2], respectively) are listed:
\[\int_{0}^{2\pi} \exp(p\cos x+q\sin x)\sin(a\cos x+b\sin x-mx)\,dx\] \[=i\pi\big{[}(b-p)^{2}+(a+q)^{2}\big{]}^{-m/2}\] \[\quad\times\Big{[}(A+iB)^{m/2}\,I_{m}\!\Big{(}\!\sqrt{C-iD}\Big{)} -(A-iB)^{m/2}\,I_{m}\!\Big{(}\!\sqrt{C+iD}\!\Big{)}\Big{]} \tag{1}\]
\[\int_{0}^{2\pi} \exp(p\cos x+q\sin x)\cos(a\cos x+b\sin x-mx)\,dx\] \[=\pi\big{[}(b-p)^{2}+(a+q)^{2}\big{]}^{-m/2}\] \[\quad\times\Big{[}(A+iB)^{m/2}\,I_{m}\!\Big{(}\!\sqrt{C-iD}\! \Big{)}+(A-iB)^{m/2}\,I_{m}\!\Big{(}\!\sqrt{C+iD}\!\Big{)}\Big{]} \tag{2}\]
\(I_{m}(z)\) is the modified Bessel function of the first kind. For these two equations, it is specified that \(m\in\mathbb{N}\) (i.e., \(m=0,1,2,\dots\)), \(A=p^{2}-q^{2}+a^{2}-b^{2}\), \(B=2(pq+ab)\), \(C=p^{2}+q^{2}-a^{2}-b^{2}\), \(D=2(ap+bq)\), and it is required that \((b-p)^{2}+(a+q)^{2}>0\). The original source for these two formulas is given as Grobner and Hofreiter ([2, Sec. 339, Eq. 9b] and [2, Sec. 339, Eq. 9a], respectively). Note that \(D\) as given here corrects a typo in the 8th edition of Gradshteyn and Ryzhik [1], which accidentally inserted a minus sign into the equation for \(D\). The correct version without the minus sign is in Grobner and Hofreiter [2] and earlier editions of [1].
In working with these two equations, we have determined that both of them can yield an incorrect result. Specifically, when \(m\) has an odd value, the sign of the result will be the opposite of what it should be. The conditions for this occurring depend on what the values of \(a\), \(b\), \(p\), and \(q\) are. In what follows, we show the source of the
error and derive the conditions in which the sign error occurs. We shall also derive improved alternative formulas that are shorter, avoid this error, and do not require \((b-p)^{2}+(a+q)^{2}>0\).
## 2 The Original Derivation
Grobner and Hofreiter [2] give a hint about how to obtain the original formulas, namely, by contour integration. Let us combine (2) and (1) together into one equation by using \(e^{i\theta}=\cos\theta+i\sin\theta\). This gives:
\[\begin{split} f&=\int_{0}^{2\pi}\exp(p\cos x+q\sin x )\exp[\,i(a\cos x+b\sin x-mx)]\,dx\\ &=\int_{0}^{2\pi}\exp\bigl{[}(p+ia)\cos x+(q+ib)\sin x\bigr{]}e^{ -imx}\,dx\end{split} \tag{3}\]
The results of the integral in (2) will then ultimately be given by \(\mathfrak{Re}(f)=(f+\overline{f})/2\), while the results of the integral in (1) will be given by \(\mathfrak{Im}(f)=(f-\overline{f})/(2i)\), where \(\overline{f}\) denotes the complex conjugate of \(f\).
We next make the substitution \(z=e^{ix}\), which transforms the integral into a contour integral with the contour \(\mathcal{C}\) being the complex unit circle.
\[\begin{split} f&=\oint_{\mathcal{C}}\exp\biggl{[}(p +ia)\left(\frac{z+z^{-1}}{2}\right)+(q+ib)\left(\frac{z-z^{-1}}{2i}\right) \biggr{]}\,z^{-m}\,\frac{dz}{iz}\\ &=\oint_{\mathcal{C}}\exp\biggl{[}\left(\frac{p+ia}{2}\right)(z+ z^{-1})+\left(\frac{b-iq}{2}\right)(z-z^{-1})\biggr{]}\,z^{-m-1}\,\frac{dz}{i}\\ &=\frac{1}{i}\oint_{\mathcal{C}}\exp\biggl{[}\frac{(p+b)+i(a-q)} {2}\,z+\frac{(p-b)+i(a+q)}{2}\,z^{-1}\biggr{]}\,z^{-m-1}\,dz\end{split} \tag{4}\]
For brevity, we shall denote \(X=\bigl{[}(p{+}b)+i(a{-}q)\bigr{]}/2\) and \(Y=\bigl{[}(p{-}b)+i(a{+}q)\bigr{]}/2\).
Next, we substitute in the power series representation \(e^{x}=\sum_{n=0}^{\infty}(x^{n}/n!)\):
\[\begin{split} f&=\frac{1}{i}\oint_{\mathcal{C}}\, \sum_{n=0}^{\infty}\frac{z^{-m-1}(Xz+Yz^{-1})^{n}}{n!}\,dz\\ &=\frac{1}{i}\oint_{\mathcal{C}}\,\sum_{n=0}^{\infty}\frac{z^{-m- 1}}{n!}\left[\sum_{\ell=0}^{n}\binom{n}{\ell}(Xz)^{\ell}(Yz^{-1})^{n-\ell} \right]dz\\ &=\frac{1}{i}\oint_{\mathcal{C}}\,\sum_{n=0}^{\infty}\sum_{\ell= 0}^{n}\frac{z^{2\ell-m-n-1}}{\ell!\,(n-\ell)!}X^{\ell}\,Y^{n-\ell}\,dz\\ &\stackrel{{\text{(a)}}}{{=}}\frac{1}{i}\oint_{ \mathcal{C}}\,\sum_{\ell=0}^{\infty}\sum_{n=\ell}^{\infty}\frac{z^{2\ell-m-n- 1}}{\ell!\,(n-\ell)!}X^{\ell}\,Y^{n-\ell}\,dz\\ &\stackrel{{\text{(b)}}}{{=}}\frac{1}{i}\oint_{ \mathcal{C}}\,\sum_{\ell=0}^{\infty}\sum_{k=-\infty}^{\ell}\frac{z^{k-m-1}}{ \ell!\,(\ell-k)!}X^{\ell}\,Y^{\ell-k}\,dz\\ &\stackrel{{\text{(c)}}}{{=}}\frac{1}{i}\oint_{ \mathcal{C}}\,\sum_{k=-\infty}^{\infty}\left[\sum_{\ell=\max(0,k)}^{\infty} \frac{X^{\ell}\,Y^{\ell-k}}{\ell!\,(\ell-k)!}\right]z^{k-m-1}\,dz\end{split} \tag{5}\]
Equalities (a) and (c) come from reversing the order of the summations, while equality (b) substitutes \(k=2\ell-n\). We are left with a contour integral over a Laurent series of \(z\). From the residue theorem for contour integration, the value of the integral will be \(2\pi i\) times the coefficient of the \(z^{-1}\) term of the series; see e.g. Krantz [3]. This coefficient occurs when \(k=m\). We therefore obtain:
\[f=\frac{2\pi i}{i}\sum_{\ell=m}^{\infty}\frac{X^{\ell}\,Y^{\ell-m}}{\ell!\,( \ell-m)!}=2\pi\sum_{j=0}^{\infty}\frac{X^{j+m}\,Y^{j}}{(j+m)!\,j!}=2\pi X^{m} \sum_{j=0}^{\infty}\frac{X^{j}\,Y^{j}}{j!\,(j+m)!} \tag{6}\]
with the substitution \(j=\ell-m\).
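Before proceeding, a quick numerical sanity check of (6) can be made in a few lines (this is not part of the original derivation; the parameter values below are arbitrary), comparing direct quadrature of (3) with the truncated double series:

```python
import numpy as np
from math import factorial

p, q, a, b, m = 0.7, -1.2, 0.5, 2.0, 3                     # illustrative values only
X = ((p + b) + 1j*(a - q))/2
Y = ((p - b) + 1j*(a + q))/2

# truncated series from Eq. (6)
series = 2*np.pi*X**m*sum((X*Y)**j/(factorial(j)*factorial(j + m)) for j in range(60))

# direct quadrature of Eq. (3); the periodic rectangle rule is spectrally accurate here
x = np.linspace(0.0, 2*np.pi, 4096, endpoint=False)
integrand = np.exp((p + 1j*a)*np.cos(x) + (q + 1j*b)*np.sin(x))*np.exp(-1j*m*x)
numeric = integrand.sum()*(x[1] - x[0])

print(series, numeric)   # the two values agree to high precision
```

Agreement here confirms (6) itself; the subtlety discussed below arises only in the subsequent recombination of fractional powers.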
We now note the following power series representation of \(I_{\nu}(z)\) (Gradshteyn and Ryzhik [1, Eq. 8.445]):
\[I_{\nu}(z)=\sum_{k=0}^{\infty}\frac{(z/2)^{\nu+2k}}{k!\,\Gamma(\nu+k+1)}=\left( \frac{z}{2}\right)^{\nu}\sum_{k=0}^{\infty}\frac{(z/2)^{2k}}{k!\,\Gamma(\nu+k+1)} \tag{7}\]
Note also that for \(\nu\in\mathbb{N}\), \(\Gamma(\nu+k+1)=(\nu+k)!\). Then, a comparison of (6) and (7) shows they are very similar in form. We may manipulate (6) to obtain
\[f=2\pi(X^{1/2})^{m}\,(\overline{Y}^{1/2})^{m}\,(\overline{Y}^{1/2})^{-m}\,(Y^{ 1/2})^{-m}\,(Y^{1/2})^{m}\,(X^{1/2})^{m}\,\sum_{j=0}^{\infty}\frac{(X^{1/2})^{2 j}\,(Y^{1/2})^{2j}}{j!\,(j+m)!} \tag{8}\]
Then, combining alike exponents:
\[f=2\pi(X\overline{Y})^{m/2}(Y\overline{Y})^{-m/2}\left[(XY)^{1/2}\right]^{m} \sum_{j=0}^{\infty}\frac{\left[(XY)^{1/2}\right]^{2j}}{j!\,(j+m)!} \tag{9}\]
and by assigning \(z/2=(XY)^{1/2}\), we get
\[f=2\pi(X\overline{Y})^{m/2}(Y\overline{Y})^{-m/2}\,I_{m}\big{(}2\sqrt{XY} \big{)} \tag{10}\]
From basic algebra, one can find that \(Y\overline{Y}=|Y|^{2}=[(b-p)^{2}+(a+q)^{2}]/4\), \(X\overline{Y}=(A-iB)/4\), and \(2\sqrt{XY}=\sqrt{C+iD}\). The factor of \(1/4\) cancels in the first two terms of (10). We then finally obtain
\[f=2\pi\big{[}(b-p)^{2}+(a+q)^{2}\big{]}^{-m/2}\,(A-iB)^{m/2}I_{m}\big{(}\sqrt{ C+iD}\big{)} \tag{11}\]
It then just remains to use the properties \(\overline{(zw)}=\overline{z}\,\overline{w}\), \(\overline{z^{\alpha}}=\overline{z}^{\,\alpha}\) for \(\alpha\in\mathbb{R}\), and \(I_{\nu}(\overline{z})=\overline{I_{\nu}(z)}\) for \(\nu\in\mathbb{R}\) (NIST [4, Eq. 10.34.7]). The result in (2) is obtained by taking \((f+\overline{f})/2\), and the result in (1) is obtained by taking \((f-\overline{f})/(2i)\). Due to the factor of \(Y^{-m/2}\) in (8), which propagates to later equations, the formulas cannot be used if \(Y=0\) (or equivalently \(|Y|^{2}=0\)) to avoid dividing by zero. The solutions therefore require \((b-p)^{2}+(a+q)^{2}>0\).
## 3 The Error
The reader may have intuited the source of the error in the derivation by this point. Perhaps surprisingly, the error is not itself in breaking apart terms into
products of square roots in (8), though the act of doing so is eventually what leads to the error. Rather, the error is in the recombination of terms in (9). In general, raising a complex number to a power is not well-defined, as the result may be multi-valued. (This is experienced even with purely real values. For instance, it is well known that \(x^{1/2}\) in fact has two valid solutions: the positive and negative square roots.) A unique and/or principal value for \(z^{\alpha}\) can be defined in two cases: 1) if \(z\) is complex, \(\alpha\) is an integer, and \(z\neq 0\) if \(\alpha\leq 0\), then \(z^{\alpha}\) is single-valued; 2) if \(z\) is positive real and \(\alpha\) is complex, then \(z^{\alpha}=e^{\alpha\log z}\). Unfortunately, neither of these cases may apply in general for this problem.
The result of this is that \(z^{\alpha}w^{\alpha}\) does not always equal \((zw)^{\alpha}\). A simple counterexample is \(z\!=\!w\!=\!-1\) and \(\alpha=1/2\). In this case, \(z^{\alpha}=w^{\alpha}=i\), so \(z^{\alpha}w^{\alpha}=i\!\times\!i=-1\). However, \((zw)^{\alpha}=(-1\times-1)^{1/2}=1^{1/2}=1\). One may denote \(z\) as \(|z|e^{i\theta_{z}}\), where \(\theta_{z}=\arg z\). One could therefore define \(z^{\alpha}\) as \(|z|^{\alpha}e^{i\alpha\theta_{z}}\), and thus \(z^{\alpha}w^{\alpha}=(|z|\!\cdot\!|w|)^{\alpha}e^{i\alpha(\theta_{z}+\theta_{w})}\). However, \(z\) may equally be denoted as \(|z|e^{i(\theta_{z}+2n\pi)}\) for any integer \(n\). \(\theta_{z}\), being the _principal_ argument of \(z\), corresponds to the branch where \(n=0\), such that \(\theta_{z}\in(-\pi,\pi]\). One may likewise write \(\theta_{zw}=\arg(zw)\in(-\pi,\pi]\), so that \((zw)^{\alpha}=(|z|\!\cdot\!|w|)^{\alpha}e^{i\alpha\theta_{zw}}\). It then happens that \(z^{\alpha}w^{\alpha}\) may not equal \((zw)^{\alpha}\) if \(\alpha(\theta_{z}+\theta_{w})\) crosses over the branch cut at \(\pi\) radians and ends up in a different branch than \(\alpha\theta_{zw}\). Specifically, if \(\alpha\) is an odd integer multiple of \(1/2\), then \(z^{\alpha}w^{\alpha}\neq(zw)^{\alpha}\) if \(\theta_{z}+\theta_{w}\not\in(-\pi,\pi]\), or in other words, if either \(\theta_{z}+\theta_{w}>\pi\) or \(\theta_{z}+\theta_{w}\leq-\pi\). In either of those two cases, a sign flip occurs, and instead \(z^{\alpha}w^{\alpha}=-(zw)^{\alpha}\).
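The branch-cut behaviour is easy to reproduce numerically. The following snippet (the example values are ours, not from the paper) shows the \(z=w=-1\) counterexample and the predictor \(\theta_{z}+\theta_{w}\not\in(-\pi,\pi]\):

```python
# Numerical illustration of the branch-cut issue (the example values are ours):
# for principal powers, z**0.5 * w**0.5 need not equal (z*w)**0.5 once
# arg(z) + arg(w) leaves the principal interval (-pi, pi].
import cmath

z, w = -1 + 0j, -1 + 0j
print(z**0.5 * w**0.5)      # (-1+0j), up to rounding
print((z * w)**0.5)         # (1+0j)

s = cmath.phase(z) + cmath.phase(w)
print(s, not (-cmath.pi < s <= cmath.pi))   # 2*pi, True -> a sign flip is expected
```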
## 4 Conditions that Cause an Error in the Integrals
In (9), a sign flip error can potentially occur in any of the three cases where variables are combined: \(X^{m/2}\overline{Y}^{m/2}\rightarrow(X\overline{Y})^{m/2}\), \(X^{1/2}\,Y^{1/2}\rightarrow\sqrt{XY}\), or \(Y^{-m/2}\overline{Y}^{-m/2}\rightarrow(Y\overline{Y})^{-m/2}\). In the first and third cases, the sign flip will occur only for odd values of
\(m\). In the second case, \(2\sqrt{XY}\) is the input to the Bessel function \(I_{m}(z)\). However, for \(\nu\in\mathbb{Z}\), \(I_{\nu}(-z)=(-1)^{\nu}I_{\nu}(z)\) (which is a special case of NIST [4, Eq. 10.34.1]). So, \(I_{m}(z)\) is an even function of \(z\) for even \(m\) and an odd function of \(z\) for odd \(m\). Consequently, if a sign error occurs for \(z=2\sqrt{XY}\), it will only have an effect on the output of \(I_{m}(z)\) if \(m\) is odd. We will consider these three cases separately.
### Case 1: When \(X^{m/2}\,\overline{Y}^{m/2}\neq(X\overline{Y})^{m/2}\)
For ease of reference, \(X=\big{[}(p{+}b)+i(a{-}q)\big{]}/2\) and \(Y=\big{[}(p{-}b)+i(a{+}q)\big{]}/2\), so \(\overline{Y}=\big{[}(p{-}b)-i(a{+}q)\big{]}/2\). We therefore have \(\theta_{X}=\arg X=\mbox{atan2}(a{-}q,p{+}b)\) and \(\theta_{Y}=\arg Y=\mbox{atan2}(a{+}q,p{-}b)\), where \(\mbox{atan2}(y,x)\) is the "four-quadrant" version of the inverse tangent function \(\mbox{atan}(y/x)\). For \(X^{m/2}\,\overline{Y}^{m/2}\), we must generally consider \(\theta_{X}-\theta_{Y}\). The exception is when \(Y\) is a negative real number, where \(\arg\overline{Y}\;=\;\pi\;\neq\;-\arg Y\), which must be handled separately. We consider four cases for \(\theta_{X}\).
#### 4.1.1 \(\theta_{X}=0\)
In other words, \(X\) is a positive real number. For this to occur, \(a{-}q=0\) and \(p{+}b>0\). However, in this case, it is not possible for \(\theta_{X}+\theta_{\overline{Y}}\) to fall outside of \((-\pi,\pi]\). Thus, no sign error will occur in this case.
#### 4.1.2 \(\theta_{X}\) is undefined, i.e., \(X=0\)
This occurs when \(a{-}q=0\) and \(p{+}b=0\). Trivially, though, \(X^{m/2}\,\overline{Y}^{m/2}=0{\times}\overline{Y}^{m/2}=0\) and \((X\overline{Y})^{m/2}=0^{m/2}=0\), so there will be no sign error. (We are currently only considering odd values for \(m\), so \(0^{0}\) will not occur.) For similar reasons, a sign error will not occur if \(Y=0\), so we need not consider that case.
#### 4.1.3 \(0{<}\,\theta_{X}{\,\leq}\,\pi\)
This occurs either when \(a{-}q>0\) or when \(a{-}q=0\) and \(p{+}b<0\). The only possibility for a sign error to occur in this case is when \(\arg\overline{Y}>0\). This will happen
either if \(a{+}q<0\), so \(-\pi<\theta_{Y}<0\), or if \(a{+}q=0\) and \(p{-}b<0\), so \(Y\) is a negative real number and \(\theta_{Y}=\theta_{\overline{Y}}=\pi\). Considering the latter case first, if \(\theta_{\overline{Y}}=\pi\) and \(\theta_{X}\) is strictly greater than zero, then \(\theta_{X}+\theta_{\overline{Y}}\) must be strictly greater than \(\pi\). Hence, a condition for a sign error to occur is if either a) \((a{-}q{\,>\,}0)\cap(a{+}q{\,=\,}0)\cap(p{-}b{\,<\,}0)\), which reduces to \((q{\,=\,}{-}a)\cap(q{\,<\,}0)\cap(p{\,<\,}b)\); or b) \((a{-}q{\,=\,}0)\cap(p{+}b{\,<\,}0)\cap(a{+}q{\,=\,}0)\cap(p{-}b{\,<\,}0)\), which reduces to \((a{\,=\,}q{\,=\,}0)\cap(p{\,<\,}{-}|b|)\). Similarly, if \(\theta_{X}=\pi\) (i.e., \(a{-}q=0\) and \(p{+}b<0\)), then a sign error will occur for any positive value of \(\theta_{\overline{Y}}\). The case of \(\theta_{\overline{Y}}=\pi\) has already been covered above. Otherwise, a sign error will also occur when \((a{-}q{\,=\,}0)\cap(a{+}q{\,<\,}0)\cap(p{+}b{\,<\,}0)\), which reduces to \((q{\,=\,}a)\cap(q{\,<\,}0)\cap(p{\,<\,}{-}b)\).
For the remaining possibility, \(\theta_{X}\) and \(\theta_{\overline{Y}}\) are both positive but neither equals \(\pi\). This occurs when both \(a{-}q>0\) and \(a{+}q<0\), or in other words when \(q<-|a|\). Under this condition, we must determine when \(\theta_{X}+\theta_{\overline{Y}}>\pi\). For this, we may use the following property of atan2:
\[\text{atan2}(y,x)=\begin{cases}\frac{\pi}{2}-\text{atan}\biggl{(}\frac{x}{y} \biggr{)},&\text{if }y>0;\\ -\frac{\pi}{2}-\text{atan}\biggl{(}\frac{x}{y}\biggr{)},&\text{if }y<0.\end{cases} \tag{12}\]
Then, the condition \(\theta_{X}{+}\theta_{\overline{Y}}>\pi\), or equivalently \(\text{atan2}(a{-}q,p{+}b){+}\text{atan2}({-}a{-}q,p{-}b)\)\(>\pi\), may then be rewritten as \(\pi/2-\text{atan}\Bigl{(}\frac{p{+}b}{a{-}q}\Bigr{)}+\pi/2-\text{atan}\Bigl{(} \frac{p{-}b}{{-}a{-}q}\Bigr{)}>\pi\). After some basic algebraic manipulations, this reduces to \(p<-ba/q\). It is also interesting to note that in two of the earlier cases, i.e. \((q{\,=\,}{-}a)\cap(q{\,<\,}0)\cap(p{\,<\,}b)\) and \((q{\,=\,}a)\cap(q{\,<\,}0)\cap(p{\,<\,}{-}b)\), the inequality for \(p\) can be alternatively written in the same way, i.e., \(p<-ba/q\) in both of those cases as well.
#### 4.1.4 \(-\pi{\,<\,}\theta_{X}{\,<\,}0\)
This occurs when \(a{-}q<0\). A sign error will only occur in this case if \(\arg\overline{Y}\) is also negative, so that \(a{+}q>0\). Combining the two conditions, this means \(q>|a|\).
The sign error will occur if \(\theta_{X}+\theta_{\overline{Y}}\leq-\pi\), or equivalently \(\mbox{atan2}(a{-}q,p{+}b)+\mbox{atan2}(-a{-}q,p{-}b)\leq-\pi\). Using the lower half of (12) and similar algebraic manipulations as before, we obtain the very similar condition \(p\leq-ba/q\) (this time including equality).
To summarize, the result \(X^{m/2}\overline{Y}^{m/2}\neq(X\overline{Y})^{m/2}\) will occur under any of the following conditions:
\[\left[(|q|>|a|)\mbox{ OR }(q=-|a|\neq 0)\right]\mbox{ AND }p<-\frac{ba}{q}; \tag{13a}\]
\[q>|a|\mbox{ AND }p=-\frac{ba}{q}; \tag{13b}\]
\[q=a=0\mbox{ AND }p<-|b|. \tag{13c}\]
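As a hedged numerical cross-check (the random sampling scheme is ours, not the paper's), the conditions in (13) can be compared against a direct evaluation of \(X^{m/2}\,\overline{Y}^{m/2}\) and \((X\overline{Y})^{m/2}\) over random real parameter values:

```python
# Hedged cross-check of conditions (13) for odd m: random parameters (ranges are
# our own choice), comparing X**(m/2)*conj(Y)**(m/2) with (X*conj(Y))**(m/2).
import random

def condition_13(p, q, a, b):
    c13a = ((abs(q) > abs(a)) or (q == -abs(a) != 0)) and p < -b * a / q
    c13b = (q > abs(a)) and p == -b * a / q
    c13c = (q == 0 and a == 0) and p < -abs(b)
    return c13a or c13b or c13c

random.seed(0)
m, mismatches = 3, 0          # any odd m; the conditions do not depend on m
for _ in range(10000):
    p, q, a, b = (random.uniform(-2, 2) for _ in range(4))
    X = complex(p + b, a - q) / 2
    Y = complex(p - b, a + q) / 2
    lhs = X**(m / 2) * Y.conjugate()**(m / 2)
    rhs = (X * Y.conjugate())**(m / 2)
    flipped = abs(lhs + rhs) < abs(lhs - rhs)      # the two can only differ by a sign
    mismatches += (flipped != condition_13(p, q, a, b))
print("samples disagreeing with conditions (13):", mismatches)   # expected to be 0
```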
### Case 2: When \(X^{1/2}\,Y^{1/2}\neq\sqrt{XY}\)
This case is much the same as Case 1, except we are now considering \(\theta_{X}+\theta_{Y}\) instead of \(\theta_{X}+\theta_{\overline{Y}}\). As such, we shall omit most of the details of the derivation. The key difference is that the inequality conditions on \(a{+}q\) become the opposite of those from Case 1 (i.e., instances of "greater than" now become "less than" and vice versa). Also, with \(a{+}q\) being the imaginary component of \(Y\) (instead of \(-a{-}q\) for \(\overline{Y}\)), the inequality for \(p\) reduces to \(p<-bq/a\) rather than \(p<-ba/q\). Consequently, it ends up that the result \(X^{m/2}\,Y^{m/2}\neq(XY)^{m/2}\) will occur under any of the following conditions:
\[\left[(|a|>|q|)\mbox{ OR }(a=|q|\neq 0)\right]\mbox{ AND }p<-\frac{bq}{a}; \tag{14a}\]
\[a<-|q|\mbox{ AND }p=-\frac{bq}{a}; \tag{14b}\]
\[a=q=0\mbox{ AND }p<-|b|. \tag{14c}\]
### Case 3: When \(Y^{-m/2}\,\overline{Y}^{-m/2}\neq(Y\overline{Y})^{-m/2}\)
Overall, it will most commonly be the case that \(\theta_{\overline{Y}}=-\theta_{Y}\), so \(\theta_{Y}+\theta_{\overline{Y}}=0\) (or is undefined if \(Y=0\)). Since \(0\in(-\pi,\pi]\), there will not be a sign error. The one exception is when \(Y\) is a negative real number, so \(\theta_{\overline{Y}}=\theta_{Y}=\pi\) and \(\theta_{Y}+\theta_{\overline{Y}}=2\pi\). \(Y\) will be a negative real number when \(a{+}q=0\) and \(p{-}b<0\). Hence, \(Y^{-m/2}\,\overline{Y}^{-m/2}\neq(Y\overline{Y})^{-m/2}\) will occur when:
\[a=-q\ \mbox{AND}\ p<b. \tag{15}\]
Neither \(Y^{-m/2}\), \(\overline{Y}^{-m/2}\), nor \((Y\overline{Y})^{-m/2}\) can be calculated when \(a=-q\) and \(p=b\), as \(Y=0\) in that event.
### An Overall Error
As seen, there are three cases that can cause a sign error when calculating the different parts of \(f\). However, it could potentially occur that two of the parts produce a sign error, while the third does not. Thus, the two errors could cancel each other out, coincidentally leading to a correct overall result. Examining the conditions in (13), (14), and (15), the only time there can be an overlap is when \(q=-a\), \(q\leq 0\), and \(a\geq 0\). If \(q\neq-a\), then Case 3 will not yield a sign error, and only one of Case 1 or Case 2 can cause an error but not both. In this event, either \(|q|>|a|\) (Case 1), \(|a|>|q|\) (Case 2), \(q=a\) and both are positive (Case 2), or \(q=a\) and both are negative (Case 1). On the other hand, if \(q=-a\) but \(q>0\), then \(a\) must be negative; likewise, if \(q=-a\) but \(a<0\), then \(q\) must be positive. In either event, only Case 3 will yield a sign error.
When \(q=-a\) with \(q<0\) and \(a>0\) (the overlap case identified above, with neither equal to zero), then \(-bq/a=-ba/q=b\). As such, the error conditions (13a), (14a), and (15) are all equivalent to each other. Hence, either all three cases will simultaneously yield a sign error or none of them will yield an error. If all three are in error, then the overall result will also be in error.
When \(a=q=0\), then conditions (13c), (14c), and/or (15) could be satisfied. If \(b\leq 0\), then the condition \(p<-|b|\) is equivalent to \(p<b\). Hence, again either all three cases will yield a sign error simultaneously, or none of them will yield an error. On the other hand, if \(b>0\), then if \(p<b\), Case 3 will yield a sign error. However, \(p\) may still fall in the range of \(-b\leq p<b\), which will not cause a sign error in Cases 1 and 2. Nonetheless, a single sign error will cause an overall error in the final result. If \(p<-b\) with \(b>0\), then Cases 1 and 2 will yield a sign error along with Case 3, and the three sign errors will again create an error in the overall result. Thus, for \(a=q=0\), it is sufficient to have \(p<b\) to cause an overall error in the result.
In summary, it is not possible for two out of the three cases to yield a sign error while the third does not. Either none of the cases yields a sign error (in which event the final result is correct), a single case yields a sign error, or all three cases yield sign errors at the same time. In either of the last two events, an overall sign error will be the result.
The conditions that cause an overall sign error when calculating \(f\) using (11) can be summed up fairly succinctly. First, we define \(K\) as follows:
\[K=\begin{cases}q/a,&\text{if }|a|\geq|q|\text{ AND }a\neq 0;\\ a/q,&\text{if }|q|\geq|a|\text{ AND }q\neq 0;\\ -1,&\text{if }a=q=0.\end{cases} \tag{16}\]
Then, a sign error occurs under either of the following two conditions:
\[p<-bK; \tag{17a}\] \[\left[(a<-|q|)\text{ OR }(q>|a|)\right]\text{ AND }p=-bK. \tag{17b}\]
We lastly note that the above conditions yield a sign error specifically when calculating \(f\) using (11). However, there is a limited subset of conditions that
result in either the real or imaginary part of \(f\) being equal to zero while the other part is not. If (17) holds and \(\mathfrak{Re}(f)=0\), then (2) will still correctly yield zero while (1) will have a sign error. A straightforward way for this to happen is if \(p=b=0\); then, \(B=D=0\) and \(A=-C\). With an odd value for \(m\), it will then occur that \((A-iB)^{m/2}\) is purely real and \(I_{m}\big{(}\sqrt{C+iD}\big{)}\) is purely imaginary, or vice versa; hence, (11) will yield a purely imaginary value. (Trivially, \(Y\overline{Y}\) must be positive and real, and thus so is \((Y\overline{Y})^{-m/2}\)). Likewise, if (17) holds and \(\mathfrak{Im}(f)=0\), then (1) will still correctly yield zero while (2) will have a sign error. A straightforward way for this latter case to occur is when \(a=q=0\); then, \(B=D=0\) and \(A=C\). Consequently, \((A-iB)^{m/2}\) and \(I_{m}\big{(}\sqrt{C+iD}\big{)}\) will both be either purely real or purely imaginary, and so (11) will yield a purely real value.
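The summary conditions can be checked end to end. The sketch below is ours (the random sampling, ranges, and tolerances are our own choices); it implements \(K\) from (16) and the predicate (17), and compares the prediction against a direct test of formula (11) versus quadrature, using the identities \(A-iB=4X\overline{Y}\) and \(C+iD=4XY\) noted earlier. scipy.special.iv evaluates \(I_{m}\) for complex arguments.

```python
# Hedged sketch: the predicate (16)-(17) versus a direct comparison of formula (11)
# with numerical quadrature of f.  The random sampling and ranges are our own choices.
import math
import cmath
import random
from scipy.integrate import quad
from scipy.special import iv

def f_quad(p, q, a, b, m):
    def g(x, part):
        val = cmath.exp(complex(p * math.cos(x) + q * math.sin(x),
                                a * math.cos(x) + b * math.sin(x) - m * x))
        return val.real if part == "re" else val.imag
    return complex(quad(g, 0, 2 * math.pi, args=("re",), limit=200)[0],
                   quad(g, 0, 2 * math.pi, args=("im",), limit=200)[0])

def f_eq11(p, q, a, b, m):
    X = complex(p + b, a - q) / 2
    Y = complex(p - b, a + q) / 2
    # Formula (11) rewritten through X and Y via A - iB = 4*X*conj(Y), C + iD = 4*X*Y
    return 2 * math.pi * (Y * Y.conjugate())**(-m / 2) \
           * (X * Y.conjugate())**(m / 2) * iv(m, cmath.sqrt(4 * X * Y))

def sign_error_predicted(p, q, a, b):
    # Eqs. (16) and (17); the equality branches are irrelevant for random floats
    if a == 0 and q == 0:
        K = -1.0
    elif abs(a) >= abs(q) and a != 0:
        K = q / a
    else:
        K = a / q
    return p < -b * K or ((a < -abs(q) or q > abs(a)) and p == -b * K)

random.seed(1)
m, mismatches = 3, 0          # odd m; for even m no sign error is expected
for _ in range(500):
    p, q, a, b = (random.uniform(-2, 2) for _ in range(4))
    fa, fb = f_eq11(p, q, a, b, m), f_quad(p, q, a, b, m)
    wrong = abs(fa - fb) > abs(fa + fb)      # (11) can only be off by an overall sign
    mismatches += (wrong != sign_error_predicted(p, q, a, b))
print("prediction/direct-check mismatches:", mismatches)   # expected to be 0
```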
## 5 Improved Expressions for Integrals
It is of course possible to correct the expressions in (1) and (2) so that they consistently give the proper results. For example, one could multiply both expressions by \((-1)^{m}\) if the conditions in (17) hold. Alternatively, one could just not combine the alike terms in (9). After deleting the \((\overline{Y}^{1/2})^{m}\,(\overline{Y}^{1/2})^{-m}\) terms (which cancel each other) from (9), this would give \(f=2\pi X^{m/2}\,Y^{-m/2}I_{m}\big{(}2\sqrt{X}\sqrt{Y}\big{)}\). The result of the integral in (1) can then be expressed as \(i\pi\Big{[}\overline{X}^{m/2}\,\overline{Y}^{-m/2}I_{m}\Big{(}2\sqrt{\overline {X}}\sqrt{\overline{Y}}\Big{)}-X^{m/2}\,Y^{-m/2}I_{m}\big{(}2\sqrt{X}\sqrt{Y} \big{)}\Big{]}\), and the result of the integral in (2) can be expressed as \(\pi\Big{[}\overline{X}^{m/2}\,\overline{Y}^{-m/2}I_{m}\Big{(}2\sqrt{\overline {X}}\sqrt{\overline{Y}}\Big{)}+X^{m/2}\,Y^{-m/2}I_{m}\big{(}2\sqrt{X}\sqrt{Y} \big{)}\Big{]}\).
However, we can instead derive an alternative expression that avoids the complications surrounding combining terms with non-integer exponents in the first place.
Resuming the derivation by continuing from (6):
\[f=2\pi X^{m}\sum_{j=0}^{\infty}\frac{X^{j}\,Y^{j}}{j!\,\Gamma(j{+}m{+}1)}=\frac{2 \pi X^{m}}{\Gamma(m{+}1)}\sum_{j=0}^{\infty}\frac{X^{j}\,Y^{j}\,\Gamma(m{+}1)}{ j!\,\Gamma(j{+}m{+}1)}=\frac{2\pi X^{m}}{m!}\sum_{j=0}^{\infty}\frac{X^{j}\,Y^{j}}{j!\,(m{+}1) _{j}} \tag{18}\]
where \((z)_{j}=\Gamma(z{+}j)/\Gamma(z)\) denotes the Pochhammer symbol. Since \(j\) is a non-negative integer, \(X^{j}\) and \(Y^{j}\) each have a single value, and their product will equal \((XY)^{j}\). (As a simple proof, \(X^{j}Y^{j}=\underbrace{X{\cdot}X{\cdot}X{\cdot}\ldots{\cdot}X}_{j\ \rm copies}\cdot \underbrace{Y{\cdot}Y{\cdot}Y{\cdot}\ldots{\cdot}Y}_{j\ \rm copies}=\underbrace{XY{\cdot}XY{\cdot}XY{\cdot}\ldots{ \cdot}XY}_{j\ \rm copies}\)\(=(XY)^{j}\).) Replacing \(X^{j}\,Y^{j}\) with \((XY)^{j}\) in the last sum of (18), it can be seen that the sum is, by definition, the (confluent) hypergeometric function \({}_{0}F_{1}(;m{+}1;XY)\). This is a special case of the generalized hypergeometric function \({}_{p}F_{q}(a_{1},a_{2},\ldots,a_{p};b_{1},b_{2},\ldots,b_{q};z)=\sum\limits_{ k=0}^{\infty}\frac{(a_{1})_{k}(a_{2})_{k}\cdots(a_{p})_{k}}{(b_{1})_{k}(b_{2})_{k} \cdots(b_{q})_{k}}\frac{z^{k}}{k!}\) (NIST [4, Eq. 16.2.1]), with \(p=0\) parameters in the numerator and \(q=1\) parameter in the denominator. (These are not the same "\(p\)" and "\(q\)" as otherwise used in this paper.) Therefore, we end up with:
\[\begin{split} f&=\frac{2\pi X^{m}}{m!}{}_{0}F_{1}(; m+1;XY)\\ &=\frac{2\pi}{m!}\left[\frac{(p{+}b)+i(a{-}q)}{2}\right]^{m}{}_{0 }F_{1}\!\left(;m+1;\frac{(p^{2}{+}q^{2}{-}a^{2}{-}b^{2})+i[2(ap+bq)]}{4}\right) \end{split} \tag{19}\]
Let us define four new constants as follows:
\[A^{\prime}=\frac{p+b}{2} \tag{20a}\] \[B^{\prime}=\frac{a-q}{2}\] (20b) \[C^{\prime}=\frac{p^{2}+q^{2}-a^{2}-b^{2}}{4}\] (20c) \[D^{\prime}=\frac{ap+bq}{2} \tag{20d}\]
We can then express \(f\) somewhat more compactly as
\[f=\frac{2\pi}{m!}(A^{\prime}+iB^{\prime})^{m}{}_{0}F_{1}(;m+1;C^{\prime}+iD^{ \prime}) \tag{21}\]
We also note that, from the power series definition of \({}_{0}F_{1}\) and the property \(\overline{z^{\alpha}}=\overline{z}^{\alpha}\) for \(\alpha\in\mathbb{R}\), it follows that \({}_{0}F_{1}(;b_{1};\overline{z})=\overline{{}_{0}F_{1}(;b_{1};z)}\) for \(b_{1}\in\mathbb{R}\). Using \((f{-}\overline{f})/(2i)\) and \((f{+}\overline{f})/2\), we ultimately arrive at the new expressions for the integrals:
\[\int_{0}^{2\pi}\exp(p\cos x+q\sin x)\sin(a\cos x+b\sin x-mx)\,dx\] \[\quad=\frac{i\pi}{m!}\big{[}(A^{\prime}-iB^{\prime})^{m}\,{}_{0}F_ {1}(;m+1;C^{\prime}-iD^{\prime})-(A^{\prime}+iB^{\prime})^{m}\,{}_{0}F_{1}(;m+ 1;C^{\prime}+iD^{\prime})\big{]} \tag{22}\]
\[\int_{0}^{2\pi}\exp(p\cos x+q\sin x)\cos(a\cos x+b\sin x-mx)\,dx\] \[\quad=\frac{\pi}{m!}\big{[}(A^{\prime}-iB^{\prime})^{m}\,{}_{0}F_ {1}(;m+1;C^{\prime}-iD^{\prime})+(A^{\prime}+iB^{\prime})^{m}\,{}_{0}F_{1}(;m+ 1;C^{\prime}+iD^{\prime})\big{]} \tag{23}\]
There is one corner case with the new expressions that also existed in the originals. In the event \(A^{\prime}=B^{\prime}=m=0\), then the expressions contain \(0^{0}\). In this event, the terms \((A^{\prime}+iB^{\prime})^{m}\) and \((A^{\prime}-iB^{\prime})^{m}\) should be treated as \(\lim_{z\to 0}z^{0}=1\).
The expressions in (22) and (23) offer several improvements over the original expressions in (1) and (2). Most importantly, they avoid the sign errors that can occur with the original expressions. The new expressions are also somewhat more compact, having fewer terms to calculate than the original ones. This is mostly since the term \(\big{[}(b{-}p)^{2}+(a{+}q)^{2}\big{]}^{-m/2}\) is no longer present. Because of the absence of that term, (22) and (23) can therefore also be used in the event \((b{-}p)^{2}+(a{+}q)^{2}=0\), which is a limitation of the original expressions.
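As a hedged numerical illustration (the parameter values below are our own test choices), the corrected expressions (22) and (23) can be checked against direct quadrature; the test point is chosen so that the conditions in (17) flag the original closed forms. The \({}_{0}F_{1}\) function is evaluated here by its power series.

```python
# Hedged check of the corrected expressions (22)-(23) against direct quadrature.
# The test point (our choice) satisfies (17), i.e. the original closed forms would
# mis-sign the result here; 0F1 is evaluated by its power series.
import math
from scipy.integrate import quad

def hyp0f1(c, z, terms=80):
    total, term = 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= z / ((c + k) * (k + 1))
    return total

def corrected(p, q, a, b, m):
    Ap, Bp = (p + b) / 2, (a - q) / 2                       # A', B' of (20)
    Cp, Dp = (p*p + q*q - a*a - b*b) / 4, (a*p + b*q) / 2   # C', D' of (20)
    t_minus = complex(Ap, -Bp)**m * hyp0f1(m + 1, complex(Cp, -Dp))
    t_plus  = complex(Ap,  Bp)**m * hyp0f1(m + 1, complex(Cp,  Dp))
    sin_val = (1j * math.pi / math.factorial(m)) * (t_minus - t_plus)   # Eq. (22)
    cos_val = (     math.pi / math.factorial(m)) * (t_minus + t_plus)   # Eq. (23)
    return sin_val.real, cos_val.real     # imaginary parts vanish up to rounding

def quadrature(p, q, a, b, m):
    s = quad(lambda x: math.exp(p*math.cos(x) + q*math.sin(x))
                       * math.sin(a*math.cos(x) + b*math.sin(x) - m*x),
             0, 2*math.pi, limit=200)[0]
    c = quad(lambda x: math.exp(p*math.cos(x) + q*math.sin(x))
                       * math.cos(a*math.cos(x) + b*math.sin(x) - m*x),
             0, 2*math.pi, limit=200)[0]
    return s, c

p, q, a, b, m = -1.0, 1.5, 0.5, 0.8, 3    # q > |a| and p < -b*a/q, odd m
print(corrected(p, q, a, b, m))
print(quadrature(p, q, a, b, m))           # should agree with the line above
```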
There is one further advantage to the newer expressions. So far in this paper, it has been implicitly assumed that \(a\), \(b\), \(p\), and \(q\) are all real-valued constant parameters. However, there is nothing in particular in the derivations of (3)-(8) and (18)-(21) that prevents those constants from being complex numbers instead. Thus,
(21) also will give the correct result for the integral in (3) when using complex-valued constants. However, there is now the complication that the solution to the integral in (22) would no longer be obtained from \((f{-}\overline{f})/(2i)\), and the solution to the integral in (23) would no longer be obtained from \((f{+}\overline{f})/2\). Instead, those two integrals must be reworked a bit first. For brevity of notation, we shall denote the real part of a constant "\(u\)" (i.e., \(\mathfrak{Re}\,u\)) by \(u_{R}\) and the imaginary part (i.e., \(\mathfrak{Im}\,u\)) by \(u_{I}\). For (22), we have:
\[\int_{0}^{2\pi}\exp\bigl{[}(p_{R}{+}ip_{I})\cos x+(q_{R}{+}iq_{I}) \sin x\bigr{]}\] \[\qquad\times\sin\bigl{[}(a_{R}{+}i\,a_{I})\cos x+(b_{R}{+}i\,b_{I })\sin x-mx\bigr{]}\,dx\] \[=\frac{1}{2i}\int_{0}^{2\pi}\exp\bigl{[}(p_{R}{+}ip_{I})\cos x+( q_{R}{+}i\,q_{I})\sin x\bigr{]}\] \[\qquad\qquad\times\biggl{\{}\exp\Bigl{(}i\bigl{[}(a_{R}{+}i\,a_{I })\cos x+(b_{R}{+}i\,b_{I})\sin x-mx\bigr{]}\Bigr{)}\] \[\qquad\qquad\qquad-\exp\Bigl{(}-i\bigl{[}(a_{R}{+}i\,a_{I})\cos x +(b_{R}{+}i\,b_{I})\sin x-mx\bigr{]}\Bigr{)}\biggr{\}}\,dx\] \[=\frac{i}{2}\underbrace{\int_{0}^{2\pi}\exp\Bigl{(}\bigl{[}(p_{R }{+}a_{I})+i(p_{I}{-}a_{R})\bigr{]}\cos x+\bigl{[}(q_{R}{+}b_{I})+i(q_{I}{-}b_ {R})\bigr{]}\sin x\Bigr{)}e^{+imx}\,dx}_{f_{1}}\] \[\quad-\frac{i}{2}\underbrace{\int_{0}^{2\pi}\exp\Bigl{(}\bigl{[} (p_{R}{-}a_{I})+i(p_{I}{+}a_{R})\bigr{]}\cos x+\bigl{[}(q_{R}{-}b_{I})+i(q_{I} {+}b_{R})\bigr{]}\sin x\Bigr{)}e^{-imx}\,dx}_{f_{2}}\] \[=\frac{i}{2}(f_{1}-f_{2}) \tag{24}\]
For (23), we end up with \((f_{1}+f_{2})/2\). One can observe that if \(p\), \(q\), \(a\), and \(b\) are all real values, so \(p_{I}=q_{I}=a_{I}=b_{I}=0\), \(f_{2}\) simplifies to \(f\) in (3), and \(f_{1}\) reduces to \(\overline{f}\).
Hence, with \(f_{1}\) and \(f_{2}\), we have two expressions that are extremely similar to that of (3). Thus, the results of the integrals are still given in the form of (21). The key difference is simply that original constants \(p\), \(q\), \(a\), and \(b\) are replaced by the real part of that constant plus or minus the imaginary part of a different constant; for
example, \(p\) becomes \(p_{R}+a_{I}\) or \(p_{R}-a_{I}\). (The two "\(\cos\)" constants \(p\) and \(a\) and the two "\(\sin\)" constants \(q\) and \(b\) end up paired together.) \(f_{1}\) also has the slight complication that it contains a "\(+m\)" instead of a "\(-m\)". However, we can instead consider \(\overline{f_{1}}\) to convert that plus to a minus, then undo the conjugate for the final result. It is then simply a matter of making the appropriate substitution of constants in (21) to obtain the result. The final expressions for the integrals are as follows:
\[\int_{0}^{2\pi}\exp(p\cos x+q\sin x)\sin(a\cos x+b\sin x-mx)\,dx \quad[p,q,a,\text{and $b$ complex-valued}]\] \[\quad=\frac{i\pi}{m!}\big{[}(A_{1}+iB_{1})^{m}\,_{0}F_{1}(;m+1;C_ {1}+iD_{1})-(A_{2}+iB_{2})^{m}\,_{0}F_{1}(;m+1;C_{2}+iD_{2})\big{]} \tag{25}\]
\[\int_{0}^{2\pi}\exp(p\cos x+q\sin x)\cos(a\cos x+b\sin x-mx)\,dx \quad[p,q,a,\text{and $b$ complex-valued}]\] \[\quad=\frac{\pi}{m!}\big{[}(A_{1}+iB_{1})^{m}\,_{0}F_{1}(;m+1;C_ {1}+iD_{1})+(A_{2}+iB_{2})^{m}\,_{0}F_{1}(;m+1;C_{2}+iD_{2})\big{]} \tag{26}\]
where
\[A_{1} =\frac{p_{R}+a_{I}-q_{I}+b_{R}}{2} \tag{27a}\] \[A_{2} =\frac{p_{R}-a_{I}+q_{I}+b_{R}}{2}\] (27b) \[B_{1} =\frac{p_{I}-a_{R}+q_{R}+b_{I}}{2}\] (27c) \[B_{2} =\frac{p_{I}+a_{R}-q_{R}+b_{I}}{2}\] (27d) \[C_{1} =\frac{(p_{R}+a_{I})^{2}+(q_{R}+b_{I})^{2}-(p_{I}-a_{R})^{2}-(q_ {I}-b_{R})^{2}}{4}\] (27e) \[C_{2} =\frac{(p_{R}-a_{I})^{2}+(q_{R}-b_{I})^{2}-(p_{I}+a_{R})^{2}-(q_ {I}+b_{R})^{2}}{4}\] (27f) \[D_{1} =\frac{(p_{I}-a_{R})(p_{R}+a_{I})+(q_{I}-b_{R})(q_{R}+b_{I})}{2} \tag{27g}\]
\[D_{2}=\frac{(p_{I}+a_{R})(p_{R}-a_{I})+(q_{I}+b_{R})(q_{R}-b_{I})}{2} \tag{27h}\]
Again, if \(p_{I}=q_{I}=a_{I}=b_{I}=0\) so that the constants are real-valued, then the expressions in (27) will reduce so that \(A_{1}=A_{2}=A^{\prime}\), \(B_{1}=-B^{\prime}\), \(B_{2}=B^{\prime}\), \(C_{1}=C_{2}=C^{\prime}\), \(D_{1}=-D^{\prime}\), and \(D_{2}=D^{\prime}\). Hence, (25) and (26) will reduce to (22) and (23), respectively.
We lastly note that complex-valued constants can also be technically used with the form of \(f\) given by (11), with its restriction changing from \((b-p)^{2}+(a+q)^{2}>0\) to \((b-p)^{2}+(a+q)^{2}\neq 0\). However, the same type of sign errors will still result from its use, and the analysis of the conditions where those sign errors occur would be considerably more complicated. Use of (21) therefore still remains the better option.
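A similar hedged check (again with test values of our own choosing) can be made for the complex-parameter expressions (25)-(27), integrating the real and imaginary parts of the now complex-valued integrand separately:

```python
# Hedged check of (25)-(27) for complex-valued parameters (test values are our own).
# The integrand is complex, so its real and imaginary parts are integrated separately.
import math
import cmath
from scipy.integrate import quad

def hyp0f1(c, z, terms=80):
    total, term = 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= z / ((c + k) * (k + 1))
    return total

def constants_27(p, q, a, b):
    pR, pI, qR, qI = p.real, p.imag, q.real, q.imag
    aR, aI, bR, bI = a.real, a.imag, b.real, b.imag
    A1, A2 = (pR + aI - qI + bR) / 2, (pR - aI + qI + bR) / 2
    B1, B2 = (pI - aR + qR + bI) / 2, (pI + aR - qR + bI) / 2
    C1 = ((pR + aI)**2 + (qR + bI)**2 - (pI - aR)**2 - (qI - bR)**2) / 4
    C2 = ((pR - aI)**2 + (qR - bI)**2 - (pI + aR)**2 - (qI + bR)**2) / 4
    D1 = ((pI - aR) * (pR + aI) + (qI - bR) * (qR + bI)) / 2
    D2 = ((pI + aR) * (pR - aI) + (qI + bR) * (qR - bI)) / 2
    return A1, A2, B1, B2, C1, C2, D1, D2

def closed_form(p, q, a, b, m):
    A1, A2, B1, B2, C1, C2, D1, D2 = constants_27(p, q, a, b)
    t1 = complex(A1, B1)**m * hyp0f1(m + 1, complex(C1, D1))
    t2 = complex(A2, B2)**m * hyp0f1(m + 1, complex(C2, D2))
    return (1j * math.pi / math.factorial(m)) * (t1 - t2), \
           (     math.pi / math.factorial(m)) * (t1 + t2)      # Eqs. (25) and (26)

def quadrature(p, q, a, b, m, trig):
    def g(x, part):
        val = cmath.exp(p * math.cos(x) + q * math.sin(x)) \
              * trig(a * math.cos(x) + b * math.sin(x) - m * x)
        return val.real if part == "re" else val.imag
    return complex(quad(g, 0, 2 * math.pi, args=("re",), limit=200)[0],
                   quad(g, 0, 2 * math.pi, args=("im",), limit=200)[0])

p, q, a, b, m = 0.4 - 0.3j, -0.8 + 0.5j, 0.6 + 0.2j, -0.1 + 0.7j, 2
sin_cf, cos_cf = closed_form(p, q, a, b, m)
print(sin_cf, quadrature(p, q, a, b, m, cmath.sin))   # each pair should agree
print(cos_cf, quadrature(p, q, a, b, m, cmath.cos))
```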
## 6 Comments on Other Integrals
The results of (22) and (23) can be used to verify the results of a few other related integrals in Gradshteyn and Ryzhik [1]. Specifically, we consider the following integrals, which are special cases of (22) and (23). (In some instances, we add accents below to aid in distinguishing between constants.)
### Integral 1 (Gradshteyn and Ryzhik [1, Eq. 3.931 4]):
\[\int_{0}^{\pi}e^{-p^{\prime}\cos x}\cos(p^{\prime}\sin x)\,dx=\frac{1}{2}\int _{0}^{2\pi}e^{-p^{\prime}\cos x}\cos(p^{\prime}\sin x)\,dx=\pi \tag{28}\]
The equality between the two integrals follows by noting that the value of \(\sin x\) as \(x\) runs from \(\pi\) to \(2\pi\) equals the negative of its value as \(x\) runs from \(\pi\) back to \(0\), whereas the value of \(\cos x\) over the same traversal is unchanged. Thus, the integrand takes on the same values between \(\pi\) and \(2\pi\) as it does between \(0\) and \(\pi\), albeit in the reverse order. Hence, the value of the integral over those two ranges of angles is the same.
The integral on the right corresponds to (23) with \(p=-p^{\prime}\), \(b=p^{\prime}\), and \(q=a=m=0\). We therefore have \(A^{\prime}\), \(B^{\prime}\), \(C^{\prime}\), and \(D^{\prime}\) all equal to zero. Consequently, as stated before, \((A^{\prime}\pm iB^{\prime})^{m}\) should be treated as being equal to 1. (23) then gives \((\pi/0!)\big{[}1\!\cdot\!_{0}F_{1}(;1;0)+1\!\cdot\!_{0}F_{1}(;1;0)\big{]}=\pi[ 1\!+\!1]=2\pi\). Therefore, after multiplying by the leading factor of \(1/2\), this matches with and confirms the original result. This result also applies in the degenerate case of \(p^{\prime}=0\): \(\int_{0}^{\pi}e^{0}\cos(0)\,dx=\int_{0}^{\pi}1\cdot 1\,dx=\pi\). We furthermore note that the same result is achieved in the event \(p^{\prime}\) is complex. In this case, all the constants in (27) work out to be 0, so (26) yields the value as (23). It is also worth noting that the same result holds if the \(p^{\prime}\) inside the \(\cos\) term is replaced by \(-p^{\prime}\), since \(\cos(-z)=\cos(z)\). By extension, the minus sign in the \(\exp\) term may also be removed (e.g., by setting \(p^{\prime}=-u\)). Hence, [1, Eq. 3.931 4] can be generalized to be \(\int_{0}^{\pi}e^{\pm p^{\prime}\cos x}\cos(p^{\prime}\sin x)\,dx=(1/2)\int_{0} ^{2\pi}e^{\pm p^{\prime}\cos x}\cos(p^{\prime}\sin x)\,dx=\pi\).
### Integrals 2 and 3 (Gradshteyn and Ryzhik [1, Eqs. 3.932 1 and 3.931 2]):
\[\int_{0}^{\pi}e^{p^{\prime}\cos x}\sin(p^{\prime}\sin x)\sin mx\,dx=\frac{1}{ 2}\int_{0}^{2\pi}e^{p^{\prime}\cos x}\sin(p^{\prime}\sin x)\sin mx\,dx=\frac{ \pi p^{\prime\,m}}{2\,m!} \tag{29}\]
\[\int_{0}^{\pi}e^{p^{\prime}\cos x}\cos(p^{\prime}\sin x)\cos mx\,dx=\frac{1}{ 2}\int_{0}^{2\pi}e^{p^{\prime}\cos x}\cos(p^{\prime}\sin x)\cos mx\,dx=\frac{ \pi p^{\prime\,m}}{2\,m!} \tag{30}\]
The equality between the two integrals is again a result of the mirror symmetry of the integrand between \(0\) and \(\pi\) and between \(\pi\) and \(2\pi\); hence, the integral over those two ranges of angles gives the same value. The expressions in both equations are not quite in the required form. However, we can make use of the trigonometric product identities \(\sin\theta\cdot\sin\phi=\big{[}\cos(\theta\!-\!\phi)-\cos(\theta\!+\!\phi)\big{]}/2\) and \(\cos\theta\cdot\cos\phi=\big{[}\cos(\theta\!-\!\phi)+\cos(\theta\!+\!\phi)\big{]}/2\). The expression in (29) can then be rewritten as
\[\frac{1}{4}\int_{0}^{2\pi}\big{[}e^{p^{\prime}\cos x}\cos(p^{\prime}\sin x-mx) -e^{p^{\prime}\cos x}\cos(p^{\prime}\sin x+mx)\big{]}\,dx \tag{31}\]
and the expression in (30) can be rewritten as
\[\frac{1}{4}\int_{0}^{2\pi}\big{[}e^{p^{\prime}\cos x}\cos(p^{\prime}\sin x-mx)+e^ {p^{\prime}\cos x}\cos(p^{\prime}\sin x+mx)\big{]}\,dx \tag{32}\]
The latter \(\cos\) term in both expressions can be written equivalently as \(\cos(-p^{\prime}\sin x-mx)\).
We therefore end up with two applications of (23); in the first, \(p=p^{\prime}\), \(b=p^{\prime}\), and \(a=q=0\), while in the second, \(b=-p^{\prime}\), and the other constants are the same as in the first. This yields \(A^{\prime}=p^{\prime}\) in the first case and \(A^{\prime}=0\) in the second case. In both cases, \(B^{\prime}=C^{\prime}=D^{\prime}=0\). Substituting these values into (23) yields \(2\pi{p^{\prime}}^{m}/m!\) for the first half of the integrals in (31) and (32). The second half, however, depends on whether \(m>0\) or \(m=0\), on account of \(0^{m}\) terms. If \(m>0\), then \(0^{m}=0\), and the second half of the integral reduces to zero. On the other hand, if \(m=0\), then \(0^{0}\) should be treated as equal to \(1\), and the result for the second half is \(2\pi\) (the same as seen in the previous section). Consequently, the result for (29) and (31) is \(\big{[}2\pi{p^{\prime}}^{m}/m!-0\big{]}/4=(\pi{p^{\prime}}^{m})/(2\,m!)\) when \(m>0\), but \(\big{[}2\pi{p^{\prime}}^{m}/m!-2\pi\big{]}/4=0\) when \(m=0\). Likewise, the result for (30) and (32) is \(\big{[}2\pi{p^{\prime}}^{m}/m!+0\big{]}/4=(\pi{p^{\prime}}^{m})/(2\,m!)\) when \(m>0\), but \(\big{[}2\pi{p^{\prime}}^{m}/m!+2\pi\big{]}/4=\pi\) when \(m=0\). Hence, it must be specified that the original results for (29) and (30) are only applicable when \(m>0\). The results for \(m=0\) can be cross-checked by substituting \(m=0\) into the original integrals. For (29), \(\sin mx=\sin 0=0\), and thus \(\int_{0}^{\pi}0\,dx=0\). For (30), \(\cos mx=\cos 0=1\), and thus the integral reduces to the equation in Section 6.1. As already seen, the result of that integral has been confirmed to be \(\pi\).
In considering the case of complex-valued \(p^{\prime}\), for the first half of the integrals in (31) and (32), from (27) we obtain \(A_{1}=A_{2}=p^{\prime}_{R}\), \(B_{1}=B_{2}=p^{\prime}_{I}\), and the remaining constants equal \(0\). Thus, \(A_{1}+iB_{1}=A_{2}+iB_{2}=p^{\prime}_{R}+ip^{\prime}_{I}\), which is simply \(p^{\prime}\). The first half of the integral thus evaluates to \(2\pi{p^{\prime}}^{m}/m!\), the same as if \(p^{\prime}\) is real-valued.
Likewise, for the second half of (31) and (32), all the constants in (27) are equal to zero. We therefore get the same results as the real case: \(0\) if \(m>0\), and \(2\pi\) if \(m=0\). Hence, the overall solutions to (29)/(31) and to (30)/(32) remain the same whether \(p^{\prime}\) is real-valued or complex-valued.
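The \(m=0\) statements are easy to confirm numerically; the short sketch below (with an arbitrary test value of \(p^{\prime}\) of our own) evaluates both integrals directly:

```python
# Numerical confirmation of the m = 0 statements: the sin-form integral of
# [1, Eq. 3.932 1] gives 0 and the cos-form integral of [1, Eq. 3.931 2] gives pi.
# The value of p' is an arbitrary test choice of our own.
import math
from scipy.integrate import quad

pp, m = 1.3, 0
sin_form = quad(lambda x: math.exp(pp * math.cos(x)) * math.sin(pp * math.sin(x))
                          * math.sin(m * x), 0, math.pi)[0]
cos_form = quad(lambda x: math.exp(pp * math.cos(x)) * math.cos(pp * math.sin(x))
                          * math.cos(m * x), 0, math.pi)[0]
print(sin_form)                         # ~ 0
print(cos_form, math.pi)                # ~ pi, not pi/2
```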
### Integral 4 (Gradshteyn and Ryzhik [1, Eq. 3.936 1]):
\[\int_{0}^{2\pi}e^{p^{\prime}\cos x}\cos(p^{\prime}\sin x-mx)\,dx=2\int_{0}^{ \pi}e^{p^{\prime}\cos x}\cos(p^{\prime}\sin x-mx)\,dx=\frac{2\pi p^{\prime\,m} }{m!} \tag{33}\]
This integral is the same as the one we considered in the previous section for the first half of (31) and (32). We have confirmed that \(2\pi p^{\prime\,m}/m!\) is correct for all non-negative integers \(m\), whether \(p^{\prime}\) is real or complex.
### Integrals 5 and 6 (Gradshteyn and Ryzhik [1, Eqs. 3.936 2 and 3.936 3]):
\[\int_{0}^{2\pi}e^{p^{\prime}\sin x}\sin(p^{\prime}\cos x+mx)\,dx=\frac{2\pi p ^{\prime\,m}}{m!}\sin\Bigl{(}\frac{m\pi}{2}\Bigr{)}\qquad[p^{\prime}>0] \tag{34}\]
\[\int_{0}^{2\pi}e^{p^{\prime}\sin x}\cos(p^{\prime}\cos x+mx)\,dx=\frac{2\pi p ^{\prime\,m}}{m!}\cos\Bigl{(}\frac{m\pi}{2}\Bigr{)}\qquad[p^{\prime}>0] \tag{35}\]
To begin, we rewrite the formulas as \((-1)e^{p^{\prime}\sin x}\sin(-p^{\prime}\cos x-mx)\) and \(e^{p^{\prime}\sin x}\times\cos(-p^{\prime}\cos x-mx)\) to obtain the required \(-mx\) instead of \(+mx\). For both formulas, we then have \(q=p^{\prime}\), \(a=-p^{\prime}\), and \(p=b=0\). These parameters give \(B^{\prime}=-p^{\prime}\) and \(A^{\prime}=C^{\prime}=D^{\prime}=0\). Substituting these values into (22) and multiplying by the \(-1\) factor in front of the rewritten formula yields
\[(-1)\frac{i\pi}{m!}\bigl{[}(0+ip^{\prime})^{m}{}_{0}F_{1}(;m+1;0)-(0-ip^{ \prime})^{m}{}_{0}F_{1}(;m+1;0)\bigr{]}=\frac{\pi}{i\,m!}\bigl{[}(ip^{\prime}) ^{m}-(-ip^{\prime})^{m}\bigr{]} \tag{36}\]
Since \(m\) is a non-negative integer, the factor of \(p^{\prime\,m}\) may safely be pulled out of both terms. We also can express \(\pm i\) as \(e^{\pm i\pi/2}\). This gives
\[\frac{\pi p^{\prime\,m}}{i\,m!}\bigl{[}(i)^{m}-(-i)^{m}\bigr{]}=\frac{2\pi p^ {\prime\,m}}{m!}\frac{e^{im\pi/2}-e^{-im\pi/2}}{2i}=\frac{2\pi p^{\prime\,m}}{ m!}\sin\Bigl{(}\frac{m\pi}{2}\Bigr{)} \tag{37}\]
Similarly, substituting the constants into (23) and following the same steps ultimately gives
\[\frac{2\pi{p^{\prime}}^{m}}{m!}\frac{e^{im\pi/2}+e^{-im\pi/2}}{2}=\frac{2\pi{p^{ \prime}}^{m}}{m!}\cos\Bigl{(}\frac{m\pi}{2}\Bigr{)} \tag{38}\]
This therefore leads to a significant observation: the results of (37) and (38) are valid even for \(p^{\prime}<0\) and for \(p^{\prime}=0\) if \(m>0\). They also hold for \(p^{\prime}=m=0\) if \({p^{\prime}}^{m}\) is treated as equal to one. Consequently, in the original results given in [1, Eqs. 3.936 2 and 3.936 3], the restriction \(p^{\prime}>0\) is unnecessary.
Extending the consideration to complex-valued \(p^{\prime}\), substituting the parameters into (27) gives \(A_{1}=-p^{\prime}_{I}\), \(A_{2}=p^{\prime}_{I}\), \(B_{1}=p^{\prime}_{R}\), \(B_{2}=-p^{\prime}_{R}\), and \(C_{1}=C_{2}=D_{1}=D_{2}=0\). Inserting these values into (25) and again multiplying by the \(-1\) factor yields
\[\begin{split}(-1)\frac{i\pi}{m!}\bigl{[}(-p^{\prime}_{I}+ip^{ \prime}_{R})^{m}-(p^{\prime}_{I}-ip^{\prime}_{R})^{m}\bigr{]}&= \frac{\pi}{i\,m!}\Bigl{(}\bigl{[}i(p^{\prime}_{R}+ip^{\prime}_{I})\bigr{]}^{m} -\bigl{[}(-i)(p^{\prime}_{R}+ip^{\prime}_{I})\bigr{]}^{m}\Bigr{)}\\ &=\frac{\pi}{i\,m!}\bigl{[}(ip^{\prime})^{m}-(-ip^{\prime})^{m} \bigr{]}\end{split} \tag{39}\]
This is the same as the end of (36). Although \(p^{\prime}\) is now complex-valued, we can still safely pull out the factor of \({p^{\prime}}^{m}\) as before. Ultimately, we find the same result as in (37) holds for complex-valued \(p^{\prime}\), as does the result in (38).
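A quick numerical spot-check (with a test value of our own) confirms that the formulas continue to hold for negative \(p^{\prime}\):

```python
# Spot-check (our own test values) that [1, Eqs. 3.936 2 and 3.936 3] also hold
# for a negative p', i.e. the restriction p' > 0 is not needed.
import math
from scipy.integrate import quad

pp, m = -1.7, 3
lhs_sin = quad(lambda x: math.exp(pp * math.sin(x)) * math.sin(pp * math.cos(x) + m * x),
               0, 2 * math.pi, limit=200)[0]
lhs_cos = quad(lambda x: math.exp(pp * math.sin(x)) * math.cos(pp * math.cos(x) + m * x),
               0, 2 * math.pi, limit=200)[0]
rhs = 2 * math.pi * pp**m / math.factorial(m)
print(lhs_sin, rhs * math.sin(m * math.pi / 2))   # each pair should agree
print(lhs_cos, rhs * math.cos(m * math.pi / 2))
```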
### Integral 7 (Gradshteyn and Ryzhik [1, Eq. 3.936 4]):
\[\int_{0}^{2\pi}e^{\cos x}\sin(mx-\sin x)\,dx=0 \tag{40}\]
After rewriting the formula as \((-1)e^{\cos x}\sin(\sin x-mx)\), we find this is a special case of (22) with \(p=b=1\) and \(q=a=0\). This therefore gives \(A^{\prime}=1\) and \(B^{\prime}=C^{\prime}=D^{\prime}=0\). Substituting these into (22) yields the first half of the formula equal to the second half. Subtracting them hence gives zero, confirming the original result.
In a sense, this may be considered to be a special case of the "\(\sin\)" counterpart to the "\(\cos\)" integral in the first half of (31) and (32), in which \(p^{\prime}=1\). In fact, the same result would be obtained in any case where \(b=p\) and \(q=a=0\), including complex values. Thus, [1, Eq. 3.936 4] could be generalized to \(\int_{0}^{2\pi}e^{p\cos x}\sin(mx-p\sin x)\,dx=0\) or \(\int_{0}^{2\pi}e^{p\cos x}\sin(p\sin x-mx)\,dx=0\). Furthermore, we may consider \(e^{p\cos x}\sin(p\sin x+mx)\) instead of \(e^{p\cos x}\sin(p\sin x-mx)\). In this case, we end up with \(b=-p\) and \(q=a=0\). As seen already in earlier subsections, this makes all the constants in (20) and (27) equal to zero. Consequently, the first and second parts of (22) equal each other, as do the first and second parts of (25). Subtracting the two parts thus again equals zero. Hence, [1, Eq. 3.936 4] could be generalized even further to \(\int_{0}^{2\pi}e^{p\cos x}\sin(p\sin x\pm mx)\,dx=0\), which applies for both real-valued and complex-valued \(p\).
### Integrals 8 and 9 (Gradshteyn and Ryzhik [1, Eqs. 3.937 3 and 3.937 4]):
\[\int_{0}^{2\pi}\exp(p\cos x+q\sin x)\sin(q\cos x-p\sin x+mx)\,dx=\frac{2\pi}{m!}\big{(}p^{2}+q^{2}\big{)}^{m/2}\sin\!\left(m\operatorname{atan}\frac{q}{p}\right) \tag{41}\] \[\int_{0}^{2\pi}\exp(p\cos x+q\sin x)\cos(q\cos x-p\sin x+mx)\,dx=\frac{2\pi}{m!}\big{(}p^{2}+q^{2}\big{)}^{m/2}\cos\!\left(m\operatorname{atan}\frac{q}{p}\right) \tag{42}\]
Once again we begin by rewriting the formulas as \((-1)\exp(p\cos x+q\sin x)\times\sin(-q\cos x+p\sin x-mx)\) and \(\exp(p\cos x+q\sin x)\cos(-q\cos x+p\sin x-mx)\). We thus have a special case of (22) and (23) where \(a=-q\) and \(b=p\). These parameters give \(A^{\prime}=p\), \(B^{\prime}=-q\), and \(C^{\prime}=D^{\prime}=0\). Substituting these constants into (22), multiplying by the factor of \(-1\) in front, then doing a bit of algebraic manipulation gives \(\big{[}\pi/(i\,m!)\big{]}\!\cdot\!\big{[}(p+iq)^{m}-(p-iq)^{m}\big{]}\). Let \(z=p+iq\), so that the expression may be written as
\[\frac{\pi}{i\,m!}\big{[}z^{m}-\overline{z}^{m}\big{]}=\frac{2\pi}{m!}\,|z|^{m }\,\frac{e^{i\,m\arg z}-e^{-i\,m\arg z}}{2i}=\frac{2\pi}{m!}\big{(}p^{2}+q^{2} \big{)}^{m/2}\sin(m\arg z) \tag{43}\]
The result in (43) is similar to the original result in (41). However, it must be noted that \(\arg(p+iq)\) does not equal \(\operatorname{atan}(q/p)\) in all circumstances. The former yields angles in the range of \((-\pi,\pi]\) radians, whereas the latter only yields angles in \((-\pi/2,\pi/2)\). If \(p<0\), then there will be a difference of \(\pm\pi\) between the two. In general, one may say that \(\arg(p+iq)=\operatorname{atan}(q/p)+k\pi\), where \(k\) may equal \(0\), \(1\), or \(-1\) depending on the values of \(p\) and \(q\). We may substitute this into (43), then make use of the trigonometric identity \(\sin(\theta+\phi)=\sin\theta\cos\phi+\cos\theta\sin\phi\).
\[\frac{2\pi}{m!}\bigl{(}p^{2}+q^{2}\bigr{)}^{m/2}\sin\bigl{[}m \arg(p+iq)\bigr{]}=\frac{2\pi}{m!}\bigl{(}p^{2}+q^{2}\bigr{)}^{m/2}\sin\biggl{[} m\biggl{(}\operatorname{atan}\frac{q}{p}+k\pi\biggr{)}\biggr{]}\\ =\frac{2\pi}{m!}\bigl{(}p^{2}+q^{2}\bigr{)}^{m/2}\left[\sin\! \biggl{(}m\operatorname{atan}\frac{q}{p}\biggr{)}\cos(km\pi)+\cos\!\biggl{(}m \operatorname{atan}\frac{q}{p}\biggr{)}\sin(km\pi)\right] \tag{44}\]
\(\sin(km\pi)\) will equal \(0\) for integer values of \(k\) and \(m\); thus, the second part of the above equation will vanish. On the other hand, for \(\cos(km\pi)\), if either \(m\) is even or \(k=0\), then we are taking the cosine of an integer multiple of \(2\pi\), which yields a value of \(1\). However, if \(m\) is odd and \(k=\pm 1\), then we are taking the cosine of an odd integer multiple of \(\pi\), which yields a value of \(-1\). Hence, the original formula in [1, Eq. 3.937 3] will yield a sign error if \(m\) is odd and \(p<0\). (The formula also might not be able to be used depending on whether the method of calculating \(\operatorname{atan}(\cdot)\) accepts a number divided by zero as an input.)
Similarly, substituting the constants into (23) will yield \((2\pi/m!)\bigl{(}p^{2}+q^{2}\bigr{)}^{m/2}\times\cos(m\arg z)\). We may then use the trigonometric identity \(\cos(\theta+\phi)=\cos\theta\cos\phi-\sin\theta\sin\phi\) to convert \(\cos(m\arg z)\) to \(\cos\bigl{[}m\operatorname{atan}(q/p)\bigr{]}\cdot\cos(km\pi)-\sin\bigl{[}m \operatorname{atan}(q/p)\bigr{]}\times\sin(km\pi)\). Consequently, it can be seen that the original formula in [1, Eq. 3.937 4] will also yield a sign error if \(m\) is odd and \(p<0\).
Both formulas may be corrected by multiplying by \((-1)^{m}\) if \(p<0\). However, one can alternatively express [1, Eq. 3.937 3] in either of the following ways, which both yield the correct result:
\[\int_{0}^{2\pi}\exp(p\cos x+q\sin x)\sin(q\cos x-p\sin x+mx)\,dx\] \[\qquad\qquad=\frac{2\pi}{m!}\bigl{(}p^{2}+q^{2}\bigr{)}^{m/2}\sin \bigl{[}m\,\mbox{atan2}(q,p)\bigr{]} \tag{45a}\] \[\qquad\qquad=\frac{2\pi}{m!}\bigl{(}p^{2}+q^{2}\bigr{)}^{m/2}\sin \bigl{[}m\,\mbox{arg}(p+iq)\bigr{]} \tag{45b}\]
Gradshteyn and Ryzhik [1, Eq. 3.937 4] may be expressed correctly in the same way, using \(\cos\) instead of \(\sin\) in (45a) and (45b).
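The sign error and its atan2 correction are also easy to exhibit numerically; the following sketch (with test values of our own) uses an odd \(m\) and a negative \(p\):

```python
# Illustration (our own test values) of the sign error in [1, Eqs. 3.937 3 and
# 3.937 4] for odd m with p < 0, and of the atan2 form in (45a) that fixes it.
import math
from scipy.integrate import quad

p, q, m = -1.0, 0.6, 3
lhs = quad(lambda x: math.exp(p * math.cos(x) + q * math.sin(x))
                     * math.sin(q * math.cos(x) - p * math.sin(x) + m * x),
           0, 2 * math.pi, limit=200)[0]
amp = 2 * math.pi / math.factorial(m) * (p * p + q * q)**(m / 2)
print(lhs)
print(amp * math.sin(m * math.atan(q / p)))    # original formula: wrong sign here
print(amp * math.sin(m * math.atan2(q, p)))    # corrected form (45a): matches lhs
```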
When considering complex-valued \(p\) and \(q\), substitution of the parameters into (27) gives \(A_{1}=p_{R}-q_{I}\), \(A_{2}=p_{R}+q_{I}\), \(B_{1}=p_{I}+q_{R}\), \(B_{2}=p_{I}-q_{R}\), and \(C_{1}=C_{2}=D_{1}=D_{2}=0\). Inserting these constants into (25) and multiplying by the factor of \(-1\) in front yields the following equivalent forms to express the result for [1, Eq. 3.937 3]:
\[\int_{0}^{2\pi}\exp(p\cos x+q\sin x)\sin(q\cos x-p\sin x+mx)\,dx\qquad\mbox{[$p$ and $q$ complex-valued]}\] \[\qquad\qquad=\frac{i\pi}{m!}\Bigl{(}\bigl{[}(p_{R}{+}q_{I})+i(p_{I}{-}q_{R})\bigr{]}^{m}-\bigl{[}(p_{R}{-}q_{I})+i(p_{I}{+}q_{R})\bigr{]}^{m}\Bigr{)} \tag{46a}\] \[\qquad\qquad=\frac{i\pi}{m!}\Bigl{(}\bigl{[}(p_{R}{+}q_{I})^{2}+(p_{I}{-}q_{R})^{2}\bigr{]}^{m/2}e^{i\,m\arg[(p_{R}+q_{I})+i(p_{I}-q_{R})]}\] \[\qquad\qquad\qquad-\bigl{[}(p_{R}{-}q_{I})^{2}+(p_{I}{+}q_{R})^{2}\bigr{]}^{m/2}e^{i\,m\arg[(p_{R}-q_{I})+i(p_{I}+q_{R})]}\Bigr{)} \tag{46b}\] \[\qquad\qquad=\frac{i\pi}{m!}\bigl{[}(p-iq)^{m}-(p+iq)^{m}\bigr{]} \tag{46c}\]
The form in (46c) is essentially the same as at the start of (43). However, in this case, since the imaginary parts of \(p\) and \(q\) may not be zero, the terms cannot be combined to form a \(\sin\) term in the same way.
In a similar way, we can insert the constants into (26) to yield the following equivalent forms to express the result for [1, Eq. 3.937 4]:
\[\int_{0}^{2\pi} \exp(p\cos x+q\sin x)\cos(q\cos x-p\sin x+mx)\,dx\qquad[p\ \mbox{and}\ q\ \mbox{complex-valued}]\] \[=\frac{\pi}{m!}\Big{(}\big{[}(p_{R}{+}q_{I})+i(p_{I}{-}q_{R}) \big{]}^{m}+\big{[}(p_{R}{-}q_{I})+i(p_{I}{+}q_{R})\big{]}^{m}\Big{)} \tag{47a}\] \[=\frac{\pi}{m!}\Big{(}\big{[}(p_{R}{+}q_{I})^{2}+(p_{I}{-}q_{R})^ {2}\big{]}^{m/2}e^{i\,m\arg[(p_{R}+q_{I})+i(p_{I}-q_{R})]}\] \[\qquad\qquad+\big{[}(p_{R}{-}q_{I})^{2}+(p_{I}{+}q_{R})^{2}\big{]} ^{m/2}e^{i\,m\arg[(p_{R}-q_{I})+i(p_{I}+q_{R})]}\Big{)}\] (47b) \[=\frac{\pi}{m!}\big{[}(p-iq)^{m}+(p+iq)^{m}\big{]} \tag{47c}\]
## 7 Conclusion
In this paper, we have examined several integrals that appear in Gradshteyn and Ryzhik [1]. Our main focus has been on [1, Eqs. 3.937 1 and 3.937 2]; the others are special cases of these two integrals. We have determined that the formulas for these two integrals produce a sign error about half the time if the integer \(m\) is odd, and have derived the conditions for the formulas' parameters that lead to a sign error. We furthermore have derived updated expressions ((22) and (23)) that correct the errors, are simpler, and can be used for a wider range of parameter values. For the special cases, we have determined that some are correct but can be generalized further, while others contain errors as well. To summarize:
* [1, Eq. 3.931 4]: The minus sign may be replaced by \(\pm\).
* [1, Eqs. 3.932 1 and 3.931 2]: The formulas are only correct if \(m>0\); they are incorrect if \(m=0\). [1, Eq. 3.932 1] instead yields \(0\) if \(m=0\), while [1, Eq. 3.931 2] instead yields \(\pi\) if \(m=0\).
* [1, Eq. 3.936 1]: The formula is correct as given.
* [1, Eqs. 3.936 2 and 3.936 3]: The restriction \(p>0\) is unnecessary and may be removed.
* [1, Eq. 3.936 4]: The integrand may be generalized to \(e^{p\cos x}\sin(p\sin x\pm mx)\); the same result of 0 will be obtained.
* [1, Eqs. 3.937 3 and 3.937 4]: If \(m\) is odd, an error in sign will result from the formulas when \(p<0\). To correct this and also allow \(p=0\) to be used, \(\mbox{atan}(q/p)\) should be replaced by \(\mbox{atan}2(q,p)\) or \(\mbox{arg}(p+iq)\).
Lastly, we have also considered the extended case where the parameters are complex-valued rather than real-valued, and have derived the results for the integrals in this event. In the case of [1, Eqs. 3.937 1 to 3.937 4], the resulting expressions (respectively (25), (26), (46), and (47)) are somewhat more complicated. However, for the other integrals, it turns out that the same formulas (extended and/or corrected) still apply if \(p\) is complex-valued.
## Acknowledgments
The authors would like to thank Dr. Ivo Maljevic of TELUS Communications for his collaboration and his comments on this work.
This work was supported by funding from TELUS Communications and from the Natural Sciences and Engineering Research Council (NSERC) of Canada.
|
2309.02850 | Atomic electron shell excitations in double-$β$ decay | The problem of the transition of electron shells of atoms to excited states
in the process of neutrinoless double-$\beta$ decay is investigated. This
subject is crucial for modeling the energy spectrum of $\beta$-electrons, which
is sensitive to the mass and Majorana nature of neutrinos. The dependence of
the obtained results on the atomic number indicates the determining role of the
Feinberg--Migdal effect in the electron shell excitations. We report the
overlap amplitudes of the electron shells of the parent atom and the daughter
ion for eleven atoms, the two-neutrino double-$\beta$ decay of which was
observed experimentally. In around one-fourth of the cases where the structure
of the electron shells is inherited from the parent atom, there is a transition
to the ground state or the excited state with the lowest energy. The
de-excitation of the daughter ion in the latter scenario is accompanied by the
emission of photons in the ultraviolet range, which can serve as an auxiliary
signature of double-$\beta$ decay. The average excitation energy of the
electron shells ranges between 300 and 800 eV, with the variance ranging from
$(1.7~\mathrm{keV})^2$ in calcium to $(14~\mathrm{keV})^2$ in uranium. | M. I. Krivoruchenko, K. S. Tyrin, F. F. Karpeshin | 2023-09-06T09:17:49Z | http://arxiv.org/abs/2309.02850v1 | # Atomic electron shell excitations
###### Abstract
The problem of the transition of electron shells of atoms to excited states in the process of neutrinoless double-\(\beta\) decay is investigated. This subject is crucial for modeling the energy spectrum of \(\beta\)-electrons, which is sensitive to the mass and Majorana nature of neutrinos. The dependence of the obtained results on the atomic number indicates an important role of the Feinberg-Migdal effect in the electron shell excitations. We report the overlap amplitudes of the electron shells of the parent atom and the daughter ion for eleven atoms, the two-neutrino double-\(\beta\) decay of which was observed experimentally. In around one-fourth of the cases where the structure of the electron shells is inherited from the parent atom, there is a transition to the ground state or the excited state with the lowest energy. The de-excitation of the daughter ion in the latter scenario is accompanied by the emission of photons in the ultraviolet range, which can serve as an auxiliary signature of double-\(\beta\) decay. The average excitation energy of the electron shells ranges between 300 and 800 eV, with the variance ranging from (1.7 keV)\({}^{2}\) in calcium to (14 keV)\({}^{2}\) in uranium.
Neutrinoless double-\(\beta\) decay (\(0\nu 2\beta\)) does not preserve the total number of leptons and is particularly interesting in the search for departures from the Standard Model (SM). Similar significance could be found in the quark sector of SM for processes which violate baryon number conservation, such as proton decay and neutron-antineutron oscillations [1]. Beyond the SM, any mechanism of \(0\nu 2\beta\) decay implies the existence of a Majorana neutrino mass [2, 3]. In the effective theory, the Majorana neutrino mass, \(m_{\nu}\), is generated by the Weinberg operator of dimension \(d=5\)[4]. In the absence of operators of dimension \(d>5\) and symmetry between left and right elementary fermions, the amplitude of \(0\nu 2\beta\) decay with light Majorana neutrino is proportional to \(m_{\nu}\).
Experimental searches for \(0\nu 2\beta\) decay have been actively performed for a number of decades. The GERDA collaboration recently obtained a constraint \(m_{\nu}<0.079-0.18\) eV at the confidence level CL = 90% using the isotope \({}^{76}\)Ge [5]. Similar results were obtained by the EXO collaboration [6] using xenon-136. A restriction on the Majorana neutrino mass, \(m_{\nu}<0.3-0.9\) eV, was also obtained by the NEMO-3 collaboration using molybdenum-100 [7]. The SuperNEMO experiment is under preparation [8]. The CUORE experiments with the isotope \({}^{130}\)Te [9, 10] and KamLAND-Zen with liquid xenon-136 [11] are in their active phase.
The uncertainty of the upper limit on the neutrino mass is due to the accuracy of calculations of the nuclear part of the process [12, 13, 14].
Experiments to search for \(0\nu 2\beta\) decay analyse the energy spectrum at the boundary of the phase space of \(\beta\)-electrons in order to find a deviation from the energy spectrum of the more probable two-neutrino double-\(\beta\) decay (\(2\nu 2\beta\)). Experimentalists inevitably encounter a problem that has become widely known in connection with attempts to measure the neutrino mass in tritium beta decay: the daughter atom passes with high probability into an excited state. This may be the excited state of a molecule composed of active target atoms. The atoms themselves experience excitation due to shake-up and shake-off effects or internal scattering of \(\beta\)-electrons. The theory of these processes was developed by Feinberg [39] and Migdal [40]. The influence of these processes on the spectrum of \(\beta\)-electrons is especially noticeable near the spectrum boundary. The effect increases significantly due to the fact that the spread of residual excitation energies is almost an order of magnitude larger than the average value [15]. A similar effect can be expected from the chemical shift [16].
The implications of atom ionization and excitation, first studied in the context of nuclear physics, are observed in molecular and solid-state systems and are crucial to the experiments LUX [17], XENON1T [18], and DarkSide-50 [19], which are designed to detect dark matter particles.
In double-\(\beta\) decays, the daughter ion with a high probability occurs in an excited state [20, 21, 22, 23], which reduces the energy carried away by \(\beta\)-electrons. The energy spectrum of \(\beta\)-electrons in \(0\nu 2\beta\) decay is a delta function, distorted by atomic effects.
This peak is considered the signature of \(0\nu 2\beta\) decay. The decay realizes a scenario in which channels with valence-electron excitations dominate the probability, although the average excitation energy, \(\cal M\), and its variance, \(\cal D\), are essentially saturated by rare electron excitations from inner atomic orbitals.
In this paper, we estimate the deviations of the \(\beta\)-electron energy from the decay energy, \(Q^{*}\), of the \(0\nu 2\beta\) decay for 11 atoms for which \(2\nu 2\beta\) decay was experimentally observed.
The binding energy of electrons on the K shell differs from the binding energy of valence electrons by around three orders of magnitude (\(\sim Z^{2}\)) in medium-heavy and heavy atoms, making it difficult to estimate the magnitudes of \(\cal M\) and \(\cal D\) qualitatively. Excitation of valence electrons with low binding energy obviously dominates the decay probability. However, the calculations result in unusually large values of the average excitation energy and its variance. Given that the accuracy of calculations in multi-particle problems is also limited, the paper considers several approaches, including the Thomas-Fermi (TF) model [24], the Thomas-Fermi-Dirac-Weizsacker (TFDW) model [25, 26, 27, 28, 29], the non-relativistic Roothaan-Hartree-Fock (RHF) formalism [30], and the relativistic Dirac-Hartree-Fock (DHF) formalism [31, 32, 33, 34, 35]. When the outcomes are compared, the magnitude of the uncertainty in the parameters of interest can be evaluated.
Each of these approaches has its advantages and limitations. Unlike the TF model, in the TFDW model the electron density is finite at the nucleus, which makes it possible to determine the variance within the model. In the RHF method, the wave functions of orbitals are parametrized analytically, which makes it possible to find exchange contributions to the variance and other observables, but the applicability of the method is restricted to light and medium-heavy atoms. Within the framework of DHF, the basic properties of atomic electron shells are tabulated in [31, 32, 33] and implemented in the form of software packages such as Grasp-2018 [34, 35] and RAINE [36, 37].
In what follows, the system of atomic units \(\hbar=m=e=1\), \(c=137\) is used, where \(m\) is the electron mass, \(e\) is the proton charge, and \(c\) is the speed of light. Let \(\hat{H}_{Z,N}\) be the Hamiltonian of \(N\) electrons of an ion with nuclear charge \(Z\). We denote by \(|Z,N\rangle\) the ground state and by \(E_{Z,N}\) the binding energy of the electrons, so that \(\hat{H}_{Z,N}|Z,N\rangle=E_{Z,N}|Z,N\rangle\).
The Hamiltonian of the daughter ion's electrons is related to the Hamiltonian of the parent neutral atom's electrons via the relation
\[\hat{H}_{Z+2,Z}=\hat{H}_{Z,Z}-2\sum_{i}\frac{1}{r_{i}}, \tag{1}\]
where \(r_{i}=|{\bf r}_{i}|\), \({\bf r}_{i}\) is the coordinate of the \(i\)th electron, and the summation runs over \(i=1,\ldots,Z\). Immediately after the decay, the electrons of the daughter ion are still in the state \(|Z,Z\rangle\), while the nucleus has acquired a charge of \(Z+2\). The relationship
\[{\cal M}=\langle Z,Z|\hat{H}_{Z+2,Z}|Z,Z\rangle-\langle Z+2,Z|\hat{H}_{Z+2,Z}|Z+2,Z\rangle \tag{2}\]
determines the average excitation energy of the daughter ion's electrons, or, with account of Eq. (1),
\[{\cal M}=E_{Z,Z}+2Z^{-1}E_{Z,Z}^{\rm C}-E_{Z+2,Z}, \tag{3}\]
where \(E_{Z,N}^{\rm C}\) is the Coulomb interaction energy of the electrons with the nucleus.
Table 1 shows the results of the calculation of the excitation energy in the TF, TFDW and DHF models. First, the values \({\cal M}^{\prime}\) are found, which differ from \({\cal M}\) by replacing in Eq. (3) the binding energy of the electrons of the ion \(E_{Z+2,Z}\) with the binding energy of the electrons of the neutral atom \(E_{Z+2,Z+2}\). The difference between \(E_{Z+2,Z}\) and \(E_{Z+2,Z+2}\) is equal to the double ionization energy, \(I_{2}\); there is a relation \({\cal M}={\cal M}^{\prime}-I_{2}\). The experimental values of \(I_{2}\) are collected in [38].
In the TF model, the calculations are carried out according to the scheme of [15]. The TFDW model, being a generalization of the TF model, additionally takes into account the exchange contribution to the energy of the electron gas [25] and the spatial inhomogeneity
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Process & \(Q\) & Ref. & \(K_{Z}\) & \({\cal M}_{\rm TF}\) & \({\cal M}_{\rm TFDW}\) & \({\cal M}_{\rm DHF}\) & \(I_{2}\) & \(\bar{\cal D}_{\rm TF}^{1/2}\) & \(\bar{\cal D}_{\rm TFDW}^{1/2}\) & \({\cal D}_{\rm DHF/a}^{1/2}\) & \({\cal D}_{\rm DHF/b}^{1/2}\) & \({\cal D}_{\rm RHF/a}^{1/2}\) & \({\cal D}_{\rm RHF/b}^{1/2}\) \\ & [keV] & & & [eV] & [eV] & [eV] & [eV] & [keV] & [keV] & [keV] & [keV] & [keV] & [keV] \\ \hline \({}^{48}_{20}\)Ca \(\rightarrow\)\({}^{48}_{22}\)Ti & 4267.98(32) & [42] & 0.466 & 335 & 247 & 299 & 20.4 & 1.25 & 2.43 & 1.70 & 1.65 & 1.66 & 1.61 \\ \({}^{76}_{32}\)Ge \(\rightarrow\)\({}^{76}_{34}\)Se & 2039.006(50) & [43] & 0.575 & 383 & 246 & 369 & 30.9 & 2.16 & 3.92 & 2.88 & 2.77 & 2.72 & 2.62 \\ & 2039.061(7) & [5] & & & & & & & & & & & \\ \({}^{82}_{34}\)Se \(\rightarrow\)\({}^{82}_{36}\)Kr & 2997.9(3) & [44] & 0.597 & 384 & 238 & 377 & 38.4 & 2.31 & 4.17 & 3.09 & 2.97 & 2.90 & 2.79 \\ \({}^{96}_{40}\)Zr \(\rightarrow\)\({}^{96}_{42}\)Mo & 3356.097(86) & [45] & 0.518 & 422 & 246 & 409 & 23.3 & 2.78 & 4.92 & 3.76 & 3.60 & 3.44 & 3.29 \\ \({}^{100}_{42}\)Mo \(\rightarrow\)\({}^{100}_{44}\)Ru & 3034.40(17) & [46] & 0.564 & 428 & 241 & 419 & 24.1 & 2.94 & 5.17 & 4.00 & 3.82 & 3.62 & 3.46 \\ \({}^{116}_{48}\)Cd \(\rightarrow\)\({}^{116}_{50}\)Sn & 2813.50(13) & [47] & 0.601 & 451 & 229 & 442 & 22.0 & 3.42 & 5.92 & 4.74 & 4.51 & 4.17 & 3.97 \\ \({}^{128}_{52}\)Te \(\rightarrow\)\({}^{128}_{54}\)Xe & 865.87(131) & [48] & 0.589 & 452 & 206 & 457 & 33.1 & 3.74 & 6.42 & 5.29 & 5.04 & 4.53 & 4.32 \\ \({}^{130}_{52}\)Te \(\rightarrow\)\({}^{130}_{54}\)Xe & 2526.97(23) & [47] & 0.589 & 452 & 206 & 457 & 33.1 & 3.74 & 6.42 & 5.29 & 5.04 & 4.53 & 4.32 \\ \({}^{136}_{54}\)Xe \(\rightarrow\)\({}^{136}_{56}\)Ba & 2457.83(37) & [49] & 0.606 & 476 & 217 & 465 & 15.2 & 3.91 & 6.67 & 5.57 & 5.31 & 4.71 & 4.49 \\ \({}^{150}_{60}\)Nd \(\rightarrow\)\({}^{150}_{62}\)Sm & 3371.38(20) & [50] & 0.519 & & & 514 & 16.7 & & & 6.50 & 6.20 & & \\ \({}^{238}_{92}\)U \(\rightarrow\)\({}^{238}_{94}\)Pu & 1437.3 & [51] & 0.546 & & & 774 & 17.5 & & & 14.58 & 13.90 & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: The average excitation energy of the electron shells of the daughter ion and the variance for eleven atoms, the \(2\nu 2\beta\) decay of which was observed experimentally. The second column contains the values of the mass difference, \(Q\), of the neutral atoms involved in the decay. The fourth column shows the overlap integrals of the electron shells of the parent atom and the twice ionized daughter atom, calculated with the use of the software package Grasp-2018 [34, 35]. The average energy of the electron shell excitations of the daughter ion is shown in the TF, TFDW and DHF models, the upper bound of the variance \(\bar{\cal D}\) is shown in the TF, TFDW models, and the variance \(\cal D\) – in the DHF and RHF models. The values of \({\cal M}_{\rm DHF}\) and \({\cal D}_{\rm DHF/a}\) excluding exchange contributions are obtained using the results of [33] and [32], respectively. \({\cal D}_{\rm DHF/b}\) includes exchange contributions. To calculate \({\cal D}_{\rm RHF/a}\) without and \({\cal D}_{\rm RHF/b}\) with exchange contributions, the wave functions of orbitals in the RHF method are used [30]. The double ionization energy \(I_{2}\)[38] is rounded to three significant digits. The predictions of the non-relativistic models TF, TFDW and RHF are limited by the values of the nuclear charge \(Z\leq 54\).
in the electron density [26]. A consistent semiclassical decomposition of the density functional with account of the inhomogeneity can be found in the monograph by Kirzhnits [27]. In its simplest form
\[E_{Z,N}=\int d{\bf r}\left(-\frac{Z}{r}n({\bf r})+c_{1}n^{5/3}({\bf r})+c_{2}n^{4/3}({\bf r})+c_{3}\frac{(\nabla n({\bf r}))^{2}}{n({\bf r})}\right)+\frac{1}{2}\int d{\bf r}\,d{\bf r}^{\prime}\,n({\bf r})\frac{1}{|{\bf r}-{\bf r}^{\prime}|}n({\bf r}^{\prime}). \tag{4}\]
Here, \(n({\bf r})\) is the electron density, the first term under the integral sign represents the interaction energy of the electrons with the nucleus, \(E^{\rm C}_{Z,N}\), the second term is the kinetic energy, the third one is the exchange energy, the fourth one is the Weizsacker gradient correction [26]. The last term is the interaction energy of electrons. The coefficients \(c_{i}\) equal
\[c_{1}=\frac{3}{10}(3\pi^{2})^{2/3},\ \ c_{2}=-\frac{3}{4}\left(\frac{3}{\pi} \right)^{1/3},\ \ c_{3}=\frac{\lambda}{8}. \tag{5}\]
The value \(\lambda=1/5\) of the phenomenological models [28, 29] is used.
In [28], the binding energy of neutral atoms N, Ne, Ar, Kr, Xe with filled valence shells is calculated using the TFDW model. Parameterization of the results gives \(E_{Z,Z}=-0.536Z^{2.38}\), which is not much different from the TF model, where \(E_{Z,Z}=-0.764Z^{7/3}\). The energy of the Coulomb interaction of electrons with the nucleus is calculated using the screening function. Integrating the expression for \(E^{\rm C}_{Z,Z}\) by parts, the action of the Laplacian is transferred to the Coulomb potential, which gives a delta function at the origin. The difference between the total potential and the nuclear potential occurs as a multiplier. The interaction energy turns out to be \(Z^{2}(a-b)\), the screening function parameters \(a\) and \(b\) are given in Table II of [28]. The fitting gives \(E^{\rm C}_{Z,Z}=-1.270Z^{2.38}\) in agreement with the virial theorem. The parameterization accuracy is not worse than 0.5%. The corresponding results for \({\cal M}\) are shown in Table 1.
The average values of \(\langle Z,Z|r_{i}^{-1}|Z,Z\rangle\) required to estimate \(E^{\rm C}_{Z,Z}\) in the DHF method are tabulated in [30, 31, 32, 33]. In [33], the values of \(E^{\rm C}_{Z,Z}\) are also provided. The results of calculations of \({\cal M}\) within the framework of DHF model [33] are shown in Table 1.
The TF and DHF models agree well with each other and agree qualitatively with the predictions of the TFDW model.
The variance of the electron excitation energy is determined by the formula
\[{\cal D}=\langle Z,Z|\hat{H}_{Z+2}^{2}|Z,Z\rangle-\langle Z,Z|\hat{H}_{Z+2}|Z,Z\rangle^{2}. \tag{6}\]
Taking into account Eq. (1) we have
\[\frac{1}{4}{\cal D}=\sum_{ij}\langle Z,Z|\frac{1}{r_{i}}\frac{1}{r_{j}}|Z,Z \rangle-\langle Z,Z|\sum_{i}\frac{1}{r_{i}}|Z,Z\rangle^{2}. \tag{7}\]
The summation is performed in the range \(1\leq i,j\leq Z\). In the TF and TFDW models, the two-particle electron density is not defined, however, it is possible to fix the upper limit of the variance [15]:
\[\frac{1}{4}\bar{\mathcal{D}}=\int d\mathbf{r}\frac{1}{r^{2}}n(\mathbf{r})-Z^{-1 }\left(\int d\mathbf{r}\frac{1}{r}n(\mathbf{r})\right)^{2}. \tag{8}\]
Calculation of the integral of \(1/r^{2}\) over the electron density distribution in the TFDW model leads to values that can be parameterized as
\[\int d\mathbf{r}\frac{1}{r^{2}}n(\mathbf{r})=5.81Z^{2.00}. \tag{9}\]
The parameterization accuracy is not worse than \(5\%\). The values of \(\bar{\mathcal{D}}\) in the TF and TFDW models are shown in Table 1.
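As an illustration, the upper bound of Eq. (8) can be evaluated directly from the parameterizations quoted above. The following Python sketch works in atomic units and takes \(\int d{\bf r}\,n({\bf r})/r=-E^{\rm C}_{Z,Z}/Z\) from the fit of the Coulomb interaction energy; it is meant only as a numerical check of the formulas, not as part of the original calculation.

```
# Upper bound of the variance in the TFDW model, Eq. (8), evaluated from the
# parameterizations quoted in the text (atomic units; 1 Ha = 27.211 eV).
HARTREE_EV = 27.211

def sqrt_variance_bound_tfdw(Z):
    int_r2 = 5.81 * Z**2.00               # \int dr n(r)/r^2, Eq. (9)
    int_r1 = 1.270 * Z**2.38 / Z          # \int dr n(r)/r = -E^C_{Z,Z}/Z
    d_bar = 4.0 * (int_r2 - int_r1**2 / Z)
    return d_bar**0.5 * HARTREE_EV / 1000.0   # Hartree -> keV

for Z in (20, 32, 34):                    # Ca, Ge, Se
    print(Z, round(sqrt_variance_bound_tfdw(Z), 2))
```

For \(Z=20\), 32 and 34 this reproduces the \(\bar{\cal D}^{1/2}_{\rm TFDW}\) column of Table 1 to within about one per cent.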
In the DHF method, it is possible to estimate not only the upper bound of the variance, but also the variance itself. In disregard of exchange effects \(\mathcal{D}\) is calculated from Eq. (7) after factorization of the average value under the double summation sign. The corresponding results, using the tabulated values of averages \(1/r_{i}\) and \(1/r_{i}^{2}\) for the electron orbitals [32], are shown in Table 1.
The exchange effects are taken into account by averaging the two-particle operator over the total wave function of electrons of the atom. In the one-determinant approximation, the wave function has the form
\[\Psi_{\alpha_{1}\alpha_{2}\ldots\alpha_{N}}=\frac{1}{\sqrt{N!}}\epsilon^{s_{1 }s_{2}\ldots s_{N}}\phi_{\alpha_{s_{1}}}^{1}\phi_{\alpha_{s_{2}}}^{2}\ldots \phi_{\alpha_{s_{N}}}^{N}, \tag{10}\]
where \(\phi_{\alpha}^{i}\) are the wave functions of electrons, the index \(i=1,...,N\) counts the spatial coordinates and spin indices, the index \(\alpha\) counts the quantum numbers of orbitals. In the case under consideration, \(\alpha=(njlm)\), where \(n\) is the principal quantum number, \(j\) is the total angular momentum, \(m\) is its projection, \(l=j\pm 1/2\) is the orbital angular momentum. A fixed set of quantum numbers \((\alpha_{1},\alpha_{2},\ldots,\alpha_{N})\) determines the state of the electron shells of the atom. The tensor \(\epsilon^{s_{1}s_{2}\ldots s_{N}}=\pm 1\) performs antisymmetrization.
The functions \(\phi_{\alpha}^{i}\) are orthonormal. We write them as the product of the radial and angular parts:
\[\phi_{njlm}^{i}=R_{njl}(r_{i})\Omega_{jm}^{l}(\mathbf{n}_{i}). \tag{11}\]
Here \(R_{njl}(r)\) is a real function, \(\Omega_{jm}^{l}(\mathbf{n})\) is a spherical spinor depending on the unit vector \(\mathbf{n}=\mathbf{r}/|\mathbf{r}|\). We denote by \(\kappa_{njl}\) the number of occupied energy levels with quantum numbers \((njl)\). In the case of fully occupied energy levels, as well as cases allowing for each pair of \((jl)\) the existence of no more than one partially occupied energy level with the maximum total angular momentum, \(j^{\rm max}=\kappa_{njl}(2j+1-\kappa_{njl})/2\), one can simplify Eq. (7) by replacing the summation over electrons by the summation over
energy levels:
\[\frac{1}{4}\mathcal{D}=\sum_{njl}\kappa_{njl}\langle njl|r^{-2}|njl\rangle-\sum_{ nn^{\prime}jl}\min(\kappa_{njl},\kappa_{n^{\prime}jl})\langle njl|r^{-1}|n^{ \prime}jl\rangle^{2}. \tag{12}\]
The matrix elements are defined according to
\[\langle njl|h(r)|n^{\prime}jl\rangle=\int r^{2}drh(r)R_{njl}(r)R_{n^{\prime} jl}(r). \tag{13}\]
The sum of the diagonal components \(n=n^{\prime}\) of Eq. (12) coincides with the right side of Eq. (7) through factorization of the mean value under the sign of the double sum, as it is assumed in the TF and TFDW estimates. The components with \(n\neq n^{\prime}\) in the second term of Eq. (12) are related to exchange effects. The exchange effects reduce the variance.
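Schematically, Eq. (12) amounts to a double loop over the occupied levels once the occupation numbers \(\kappa_{njl}\) and matrix elements of the type listed in Table 2 are available. The following Python fragment only illustrates the bookkeeping; the occupations and matrix elements below are placeholders rather than tabulated values.

```
# Bookkeeping for Eq. (12): D/4 = sum kappa <r^-2> - sum min(kappa, kappa') <r^-1>^2.
# Levels are keyed by (n, l, j); all numerical entries here are placeholders.
kappa = {(1, 0, 0.5): 2, (2, 0, 0.5): 2, (2, 1, 0.5): 2, (2, 1, 1.5): 4}

r_inv = {((1, 0, 0.5), (1, 0, 0.5)): 9.6, ((2, 0, 0.5), (2, 0, 0.5)): 1.9,
         ((1, 0, 0.5), (2, 0, 0.5)): 1.2,      # off-diagonal -> exchange term
         ((2, 1, 0.5), (2, 1, 0.5)): 1.7, ((2, 1, 1.5), (2, 1, 1.5)): 1.7}
r_inv2 = {(1, 0, 0.5): 150.0, (2, 0, 0.5): 7.0, (2, 1, 0.5): 6.0, (2, 1, 1.5): 6.0}

def elem(a, b):
    return r_inv.get((a, b), r_inv.get((b, a), 0.0))

def variance(kappa, r_inv2):
    first = sum(k * r_inv2[lvl] for lvl, k in kappa.items())
    second = sum(min(kappa[a], kappa[b]) * elem(a, b) ** 2
                 for a in kappa for b in kappa if a[1:] == b[1:])
    return 4.0 * (first - second)

print(variance(kappa, r_inv2))
```

Dropping the off-diagonal entries of `r_inv` recovers the factorized estimate used in the TF and TFDW bounds.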
In the RHF method, the functions \(R_{njl}(r)\) are tabulated [30]. To calculate the variance taking into account exchange effects, knowledge of the off-diagonal matrix elements \(\langle njl|r^{-1}|n^{\prime}jl\rangle\) and \(\langle njl|r^{-2}|n^{\prime}jl\rangle\) is required. Table 2 shows the results for molybdenum atom in the RHF method. The diagonal matrix elements are compared with those in the DHF method [32]. There is some systematic underestimation of the average values in comparison with the DHF method, which is due to the shift in relativistic models of the electron density to smaller distances [35]. A similar pattern holds for other 10 atoms. Accordingly, the variance in the RHF method without taking into account exchange contributions is also systematically lower than the predictions of the DHF method.
The average values of \(1/r\) and \(1/r^{2}\) in inner and outer orbitals are approximately in the ratios \(Z:1\) and \(Z^{2}:1\), which is consistent with the values of the diagonal matrix elements in Table 2. In cases where for a partially occupied level the total angular momentum is not the maximum and/or where there exist more than one partially occupied level for a pair of \((jl)\), the formula (12) is used as an approximation. Since in medium-heavy and heavy atoms, the main contribution to the variance comes from electrons in inner shells, where \(\kappa_{njl}=2j+1\), one can expect that accuracy of such an estimate is quite high.
The results of calculations of \(\mathcal{D}\) in the RHF method, taking into account exchange contributions, are presented in Table 1. For comparison, the results of calculations without the exchange effect are also provided. The agreement among the TF, TFDW, RHF and DHF models is quite satisfactory.
For applications, we recommend the values of excitation energy \(\mathcal{M}_{\rm DHF}\), for variance - \(\mathcal{D}_{\rm DHF/b}\), as theoretically the most justified. The estimate of \(\mathcal{D}_{\rm DHF/b}\) differs from \(\mathcal{D}_{\rm DHF/a}\) in that it includes exchange corrections found by the RHF method. Taking into account various approximations, the uncertainty in \(\mathcal{M}_{\rm DHF}\) and \(\mathcal{D}_{\rm DHF/b}\) can be estimated at \(<10\%\).
In the non-relativistic TF, TFDW and RHF models \({\cal M}\) only weakly depends on \(Z\), while \({\cal D}\) grows approximately as \(Z^{2}\). This behavior is perfectly consistent with the highlighted role of K electrons, whose nonrelativistic excitation theory in \(\beta\) decay is developed in [39, 40].
The parameter \(K_{Z}\) given in Table 1 represents the overlap amplitude of the wave functions of all the electrons in the ground state of the parent atom with the wave functions of the electrons in the ground state of the twice ionized daughter atom, whose electrons have retained their initial configuration. The corresponding wave functions of the electrons are not orthogonal because the charges of the nuclei before and after shaking differ by two units, and as a result, the overlap of electron wave functions with the identical quantum numbers is not equal to one. The daughter ion gets excited as a result. The value \(K_{Z}^{2}\) determines the probability of inheriting quantum numbers by the electrons and, accordingly, the absence of shaking effects.
To estimate \(K_{Z}\), a multiparticle calculation using the DHF method is required, which was performed using the software package Grasp-2018 [34, 35]. A set of large \(f_{njl}^{+}(r)\) and small \(f_{njl}^{-}(r)\) radial components of electron wave functions for all quantum numbers (\(njl\)) is obtained for each parent atom of Table 1 with an appropriate electron configuration and a total angular momentum corresponding to the ground state of the parent atom's electrons. Similarly, for a daughter ion with a nuclear charge \(Z+2\), a set of radial components \(\tilde{f}_{njl}^{\pm}(r)\) is obtained. The overlap amplitude \({\cal O}_{njl}\) of the wave
\begin{table}
\begin{tabular}{c|c c c c c|c c c c|c c c} \hline \hline \multicolumn{13}{c}{\({}_{42}\)Mo} \\ \hline \(\langle njl|r^{-1}|n^{\prime}jl\rangle\) & 1S & 2S & 3S & 4S & 5S & & 2P & 3P & 4P & & 3D & 4D \\ \hline
1S & 41.49 & 7.962 & 3.231 & -1.255 & 0.321 & & & & & & \\
2S & & 9.378 & 2.160 & -0.803 & 0.204 & 2P & 9.339 & -1.858 & -0.626 & & & \\
3S & & & 3.264 & -0.665 & 0.163 & 3P & & 3.164 & 0.582 & 3D & 2.970 & -0.361 \\
4S & & & & 1.171 & -0.149 & 4P & & & 1.052 & 4D & & 0.714 \\
5S & & & & & 0.327 & & & & & & \\ \hline
[32] & 43.55 & 9.939 & 3.409 & 1.209 & 0.322 & & 9.412 & 3.190 & 1.059 & & 2.958 & 0.695 \\ & & & & & & & 9.879 & 3.300 & 1.089 & & 2.987 & 0.705 \\ \hline \(\langle njl|r^{-2}|n^{\prime}jl\rangle\) & 1S & 2S & 3S & 4S & 5S & & 2P & 3P & 4P & & 3D & 4D \\ \hline
1S & 3455. & 984.9 & 410.3 & -160.1 & 40.94 & & & & & & \\
2S & & 357.8 & 141.7 & -55.02 & 14.06 & 2P & 118.4 & -37.69 & -13.17 & & \\
3S & & & 65.20 & -24.42 & 6.223 & 3P & & 21.34 & 6.697 & 3D & 11.17 & -2.120 \\
4S & & & & 10.41 & -2.564 & 4P & & & 3.157 & 4D & & 0.965 \\
5S & & & & & 0.748 & & & & & & \\ \hline
[32] & 4005. & 439.4 & 80.03 & 12.73 & 0.830 & & 120.7 & 21.93 & 3.243 & & 11.11 & 0.930 \\ & & & & & & & 141.5 & 25.50 & 3.744 & & 11.37 & 0.960 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Matrix elements \(\langle njl|r^{-1}|n^{\prime}jl\rangle\) and \(\langle njl|r^{-2}|n^{\prime}jl\rangle\) for \(n\leq n^{\prime}\) electron orbitals in a molybdenum atom. Calculations use electron wave functions of the RHF method [30] with degeneracy in \(j\). In the lower part of the table, diagonal matrix elements \(n=n^{\prime}\) of the relativistic DHF method [32] are given; the upper and lower rows of P and D waves correspond to \(j=l+1/2\) and \(j=l-1/2\), respectively.
functions of electrons with the same quantum numbers is equal to
\[{\cal O}_{njl}=\int\left(\tilde{f}^{+}_{njl}(r)f^{+}_{njl}(r)+\tilde{f}^{-}_{njl }(r)f^{-}_{njl}(r)\right)r^{2}dr.\]
In the one-determinant approximation and without taking into account exchange terms, the amplitude \(K_{Z}\) is equal to the product of the amplitudes \({\cal O}_{njl}\), each raised to a power equal to the occupation number of the corresponding level:
\[K_{Z}=\prod_{njl}\left({\cal O}_{njl}\right)^{\kappa_{njl}}. \tag{14}\]
For a wide range of atomic numbers \(Z\), the values of \(K_{Z}\) turn out to be close to 1/2. The probability of \(K_{Z}^{2}\sim 1/4\) is quite small, which indicates the dominance of channels with the excited electron shells of atoms in agreement with the phenomenological analysis [20].
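For illustration, the assembly of \(K_{Z}\) from the per-orbital overlaps can be sketched in a few lines of Python. In the actual calculation the radial components \(f^{\pm}_{njl}(r)\) come from Grasp-2018; the toy non-relativistic 1s pair below (effective charges \(Z\) and \(Z+2\), vanishing small components) merely stands in for that output.

```
# Overlap amplitude K_Z of Eq. (14) assembled from per-orbital overlaps O_njl.
import numpy as np

r = np.linspace(1e-6, 30.0, 300_000)            # radial grid, atomic units

def overlap(fp, fm, gp, gm):
    """O_njl = int (g+ f+ + g- f-) r^2 dr for a single orbital."""
    return np.trapz((gp * fp + gm * fm) * r**2, r)

def k_z(orbitals):
    """orbitals: iterable of (O_njl, kappa_njl); returns prod O^kappa."""
    out = 1.0
    for o, occ in orbitals:
        out *= o ** occ
    return out

# toy 1s orbitals, hydrogen-like, with the small components set to zero
R1s = lambda zeff: 2.0 * zeff**1.5 * np.exp(-zeff * r)
o_1s = overlap(R1s(20.0), 0.0 * r, R1s(22.0), 0.0 * r)
print(o_1s, k_z([(o_1s, 2)]))                   # ~0.997 and ~0.993
```

The near-unity value of the toy 1s overlap illustrates why the suppression of \(K_{Z}\) to values close to 1/2 is accumulated over many shells rather than produced by any single orbital.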
The above approach assumes that the resulting configuration of the daughter ion is the ground state with quantum numbers of electrons inherited from the parent atom. However spectroscopic analysis shows that this condition is not always met. For example, the Ti III ion formed after \(0\nu 2\beta\) decay of Ca with the electron configuration [Ar]4s\({}^{2}\), in the ground state has the configuration [Ar]3d\({}^{2}\). Strictly speaking, this fact means that the overlap is exactly zero: \(K_{Z}\equiv 0\), therefore, the decay with the dominant probability is accompanied by the excitation of the electron shells of the atom. The energy of the [Ar]4s\({}^{2}\) lowest configuration exceeds that of the [Ar]3d\({}^{2}\) configuration of Ti III by 12.7 eV. A similar situation occurs in double-\(\beta\) decay atoms of Zr, Mo, Nd and U. In these cases, the amplitude \(K_{Z}\), given in Table 1, is the amplitude of the transition to the most likely excited state of the electron shells of the daughter ion. In approximately every fourth case, double-\(\beta\) decay of Ca, Zr, Mo, Nd and U is accompanied by the de-excitation of the electron shells of atoms from the unique excited state to the ground state with the emission of a series of photons of the ultraviolet range. The observation of these photons, whose wavelengths are well known, can serve as an auxiliary signature for the identification of decay.
Knowledge of the parameters \(K_{Z}\), \({\cal M}\) and \({\cal D}\) is sufficient to construct simple models of the energy distribution of \(\beta\)-electrons in \(0\nu 2\beta\) decay. With a probability of \(K_{Z}^{2}\), the electrons of the decaying atom remain in the lowest energy state, preserving their quantum numbers, with a probability of \(1-K_{Z}^{2}\) they pass into an excited state. The conditional probability of transition to an excited state with energy \(\epsilon\) in the interval \(d\epsilon\) is denoted by \(w(\epsilon/Q^{*})d\epsilon/Q^{*}\). The total probability density takes the form
\[p(\epsilon)=K_{Z}^{2}\delta(\epsilon)+(1-K_{Z}^{2})w(\epsilon/Q^{*})/Q^{*}. \tag{15}\]
The binomial distribution is used for \(w(x)\); it has a certain universality and is widely used in modeling random processes [41]. The distribution has two free parameters, which are fixed by normalization to the average value \({\cal M}=\int_{0}^{Q^{*}}d\epsilon\,\epsilon\,p(\epsilon)\) and the mean square of the energy \({\cal D}+{\cal M}^{2}=\int_{0}^{Q^{*}}d\epsilon\,\epsilon^{2}\,p(\epsilon)\).
Based on the DHF model predictions, we calculate the maximum deviation, \(\Delta T_{\rm max}\), of the \(\beta\)-electrons energy from the decay energy \(Q^{*}\). \(\Delta T_{\rm max}\) can be determined through the equation
\[p_{T}=\int_{0}^{\Delta T_{\rm max}}d\epsilon p(\epsilon)\]
for a given probability, \(p_{T}\). The value of \(p_{T}=0.9\) corresponds to the deviations of the \(\beta\)-electrons energy from \(Q^{*}\) less than \(\Delta T_{\rm max}=180\) eV (Ca), 18 eV (Ge), 19 eV (Se), and \(\Delta T_{\rm max}<5\) eV for Zr, Mo, Cd, Te, Xe, Nd, U. At the probability of \(p_{T}=0.95\), the deviations do not exceed \(\Delta T_{\rm max}=1.16\) keV (Ca), 0.55 keV (Ge), 0.44 keV (Se), 0.25 keV (Zr), 0.18 keV (Mo), 69 eV (Cd), 22 eV (\({}^{128}\)Te), 30 eV (\({}^{130}\)Te), 19 eV (Xe), 11 eV (Nd), and \(\Delta T_{\rm max}<5\) eV for U.
The decay energy without the energy taken away by neutrinos is measured in calorimetric detectors, where the energy resolution reaches a few keV (GERDA). Using a track calorimeter, the SuperNEMO experiment measures the energy of \(\beta\)-electrons in \(0\nu 2\beta\)-selenium decay with an uncertainty of 4% at an energy of \(Q\). Innovative technologies with excellent energy resolution are in high demand for reducing background noise and for tracking the impact of atomic shell excitations on the neutrino mass constraints.
To summarize, the overlap amplitudes of the electron shells of the parent atom and the daughter ion were found for each atom whose \(2\nu 2\beta\) decay has been observed experimentally. In the double-\(\beta\) decay of the atoms \({}^{48}\)Ca, \({}^{96}\)Zr, \({}^{100}\)Mo, \({}^{150}\)Nd, and \({}^{238}\)U, the electron shells of the daughter ion end up, with probability \(\sim 1/4\), in the lowest excited state with quantum numbers inherited from the parent atom. Such decays are accompanied by a subsequent de-excitation with characteristic emission of photons of the ultraviolet range. In the atoms \({}^{76}\)Ge, \({}^{82}\)Se, \({}^{116}\)Cd, \({}^{128}\)Te, \({}^{130}\)Te, and \({}^{136}\)Xe, the daughter ion's electrons move to the ground state with a probability of \(\sim 1/4\) and to an excited state with a probability of \(\sim 3/4\). The average value and variance of the excitation energy were computed for each of the scenarios under consideration. The dependence on the atomic number indicates the dominant contribution of the Feinberg-Migdal effect to the variance. Deviations of the \(\beta\)-electrons energy from the decay energy \(Q^{*}\) were estimated for the neutrinoless mode of double-\(\beta\) decay.
The work was supported by the grant #23-22-00307 of Russian Science Foundation.
|
2305.15021 | EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought | Embodied AI is a crucial frontier in robotics, capable of planning and
executing action sequences for robots to accomplish long-horizon tasks in
physical environments. In this work, we introduce EmbodiedGPT, an end-to-end
multi-modal foundation model for embodied AI, empowering embodied agents with
multi-modal understanding and execution capabilities. To achieve this, we have
made the following efforts: (i) We craft a large-scale embodied planning
dataset, termed EgoCOT. The dataset consists of carefully selected videos from
the Ego4D dataset, along with corresponding high-quality language instructions.
Specifically, we generate a sequence of sub-goals with the "Chain of Thoughts"
mode for effective embodied planning. (ii) We introduce an efficient training
approach to EmbodiedGPT for high-quality plan generation, by adapting a 7B
large language model (LLM) to the EgoCOT dataset via prefix tuning. (iii) We
introduce a paradigm for extracting task-related features from LLM-generated
planning queries to form a closed loop between high-level planning and
low-level control. Extensive experiments show the effectiveness of EmbodiedGPT
on embodied tasks, including embodied planning, embodied control, visual
captioning, and visual question answering. Notably, EmbodiedGPT significantly
enhances the success rate of the embodied control task by extracting more
effective features. It has achieved a remarkable 1.6 times increase in success
rate on the Franka Kitchen benchmark and a 1.3 times increase on the Meta-World
benchmark, compared to the BLIP-2 baseline fine-tuned with the Ego4D dataset. | Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, Ping Luo | 2023-05-24T11:04:30Z | http://arxiv.org/abs/2305.15021v2 | # EmbodiedGPT: Vision-Language Pre-Training via Embodied Chain of Thought
###### Abstract
Embodied AI is a crucial frontier in robotics, capable of planning and executing action sequences for robots to accomplish long-horizon tasks in physical environments. In this work, we introduce EmbodiedGPT, an end-to-end multi-modal foundation model for embodied AI, empowering embodied agents with multi-modal understanding and execution capabilities. To achieve this, we have made the following efforts: (i) We craft a large-scale embodied planning dataset, termed EgoCOT. The dataset consists of carefully selected videos from the Ego4D dataset, along with corresponding high-quality language instructions. Specifically, we generate a sequence of sub-goals with the "Chain of Thoughts" mode for effective embodied planning. (ii) We introduce an efficient training approach to EmbodiedGPT for high-quality plan generation, by adapting a 7B large language model (LLM) to the EgoCOT dataset via prefix tuning. (iii) We introduce a paradigm for extracting task-related features from LLM-generated planning queries to form a closed loop between high-level planning and low-level control. Extensive experiments show the effectiveness of EmbodiedGPT on embodied tasks, including embodied planning, embodied control, visual captioning, and visual question answering. Notably, EmbodiedGPT significantly enhances the success rate of the embodied control task by extracting more effective features. It has achieved a remarkable 1.6 times increase in success rate on the Franka Kitchen benchmark and a 1.3 times increase on the Meta-World benchmark, compared to the BLIP-2 baseline fine-tuned with the Ego4D dataset.
## 1 Introduction
Embodied AI tasks, e.g., embodied planning, embodied VQA, and embodied control, aim to imbue robots with the ability to perceive, reason, and act within their environment, enabling them to perform long-horizon plans and execute actions autonomously based on real-time observations. Recently, large language models (LLMs) such as GPT4 [1] and PaLM-E [2], have shown promising language understanding, reasoning, and "chain-of-thought" capabilities. Such advances may open new possibilities for developing robots capable of processing natural language instructions, performing multi-modal "chain-of-thought", and planning actions in physical environments.
Large-scale datasets play important roles in training large language models. For example, OpenCLIP trains its ViT-G/14 model on the LAION-2B dataset [3], which contains 2B image-language pairs. Unlike general-purpose visual language tasks that can get a huge amount of weakly labeled image-caption pairs from the Internet, embodied AI tasks require egocentric data in robotics domains. Also,
structured language instructions are needed for precise planning, which usually requires huge manual efforts and costs. This poses a challenging problem in collecting high-quality embodied multi-modal data. Some researchers [4; 5; 6; 7] explore creating large-scale embodied datasets with simulators, but a significant gap remains between simulation and the real world. Recent works [8; 9; 10] also explore adapting the pre-trained LLMs to a new domain by efficient tuning strategies like LoRA [11]. However, several open questions still remain: how to apply LLMs to the field of robotics which may face large domain gaps; how to leverage the "chain-of-thought" capability for structured planning; and how to use the output language plan for downstream manipulation tasks in an end-to-end manner.
To solve the above challenges, in this work, we first build a large-scale embodied planning dataset, termed EgoCOT, which features chain-of-thought planning instructions. It contains carefully selected egocentric videos from the Ego4D dataset [16] and corresponding high-quality step-by-step language instructions, which are machine-generated, then semantics-based filtered, and finally human-verified. Additionally, we also create the EgoVQA dataset as an extension of the Ego4D dataset, focusing on egocentric human-object interaction video question answering tasks, which aims to offer a wider range of egocentric multi-modal data.
Based on our EgoCOT and EgoVQA, we present an end-to-end multi-modal embodied foundation model called EmbodiedGPT, which can interact with the physical world in a more natural and intuitive manner, and perform many embodied tasks, as shown in Figure 1, such as embodied planning, embodied VQA, and embodied control. EmbodiedGPT comprises four integrated modules that work together, including i) a frozen vision model for encoding visual features of current observations, ii) a frozen language model used to execute natural language for question answering, captioning, and embodied planning tasks, iii) an embodied-former with a language mapping layer for aligning the visual and embodied instructions and extracting task-relevant instance-level features with the generated planning for low-level control, and iv) a policy network, which is responsible for producing low-level actions based on the task-relevant features, enabling the agent to effectively interact with the environment. To further enhance EmbodiedGPT's performance in generating reliable planning containing sub-goal sequences, we implement prefix tuning on the frozen language model to encourage the generation of more executable planning.
Our method possesses the following core advantages: i) the generated planning exhibits strong executability and granularity at the object part level, such as the gripper of a robotic arm or the handle of a door, manifested in sub-goal sequences. ii) the proposed EgoCOT dataset is built based on an open-source large-scale dataset, which offers greater scalability compared to the PaLM-E [2] model trained on proprietary robot data. And both the EgoCOT dataset, and the EmbodiedGPT model will be open-sourced. iii) EmbodiedGPT forms a closed-loop from high-level planning to
Figure 1: EmbodiedGPT’s capabilities for video captioning, multi-turn question answering, embodied planning, and low-level control. The plans given by EmbodiedGPT are highly executable and incorporate task-specific features, leading to a significant improvement in the success rate of embodied control tasks, outperforming both R3M [12] (a video-language contrastive learned model) and BLIP-2 [13] (a multi-modal foundation model) on Franka Kitchen [14] and Meta-World [15] environments.
low-level control, which enables seamless integration of high-level planning and low-level control, providing efficient task performance and adaptability to a wide range of tasks. To achieve this, we utilize the embodied-former to query task-relevant instance-level features through cross-attention between visual observations and generated embodied planning. This enables the policy network to complete low-level control tasks with fewer than 25 demonstrations.
The contributions can be summarized as follows: (i) We build an end-to-end multi-modal foundation model EmbodiedGPT for embodied AI, which is featured with "chain-of-thought" capability, empowering embodied agents to interact with the physical world in a more natural and intuitive manner. (ii) We develop two datasets, EgoCOT and EgoVQA, consisting of 200M annotated videos from the Ego4D dataset with corresponding detailed planning instructions and VQA data. The datasets are first machine-generated, then semantics-based filtered, and finally human-verified for quality control. (iii) For EmbodiedGPT, we introduce a cost-effective training approach and a paradigm for extracting task-relevant features from LLM-generated planning queries, thereby forming a closed loop between high-level planning and low-level control. We demonstrate our approach's effectiveness by achieving state-of-the-art or comparable performance on multiple embodied tasks, including embodied control, embodied planning, video captioning, and video QA. Notably, in comparison to BLIP-2 [17] fine-tuned on the Ego4D dataset and R3M [12] specifically designed for manipulation tasks, EmbodiedGPT outperforms both models on the Franka Kitchen [14] benchmark with margins of 22.1% and 5.5%, respectively. Similarly, on the Meta-World [15] benchmark, EmbodiedGPT surpasses both models with margins of 22.5% and 4.2%, respectively.
## 2 Related Work
### Vision Language Pre-training with large scale foundation model
Vision-Language Pre-training focuses on strengthening the link between visual observation and natural language. The goal is to develop models that can better understand and process visual content, such as recognizing objects and actions, and generating descriptive text. As models become larger, the computational expense for end-to-end pre-training rises, leading to the need for modular vision-language pre-training methods. These methods smartly use pre-trained models, keeping them 'frozen' during vision language pre-training to save on computational costs. For example, models like Uniter [18], Oscar [19], VinVL [20], and LiT [21] freeze the image encoder, while Frozen [22] and VGPT [23] freeze the language model. Furthermore, Flamingo [24] and BLIP-2 [17] use both frozen image encoders and language models, providing a balance between performance and computational efficiency. Due to the lack of open-source data for multi-modal embodied planning, previous works struggled to perform detailed task decomposition and lacked the ability to generate precise and executable plans. To tackle this issue, we create the EgoCOT dataset and develop an embodied chain-of-thought vision language pre-training framework to enhance the capacity of multi-modal models for embodied reasoning and planning.
### Egocentric Video Datasets.
Egocentric videos, which are captured using wearable cameras, provide a natural perspective of daily activities and pose several challenging research questions [25; 26; 27]. Several egocentric video datasets have been created over the years, including [28; 29; 30]. However, the collection of egocentric videos is expensive, and previous datasets tend to be small-scale and domain-specific. Recently, a massive egocentric video dataset, Ego4D [16], has been released and has been used for embodied representation learning. The dataset comprises 3,670 hours of videos collected by 931 people from 74 locations across 9 countries, with videos accompanied by narrations. For embodied AI tasks, learning from large and diverse egocentric human videos has emerged as a promising approach to acquiring a generally useful visual representation for controlling such tasks. For example, R3M [12] developed a sparse and compact visual representation using the Ego4D human video dataset through a combination of time-contrastive learning and video-language alignment. VIP [31] learns general-purpose reward functions for goal-conditioned robotic manipulation using the Ego4D dataset.
### Large Foundation Model Assistant System
Recent advancements in large-scale multi-modal language models (LLMs), such as GPT-3 [32] and GPT-4 [1], have resulted in the creation of various models that can understand multiple modes of information. Two main approaches are used in this field: systematic collaboration and end-to-end trained models. Systematic collaboration approaches involve coordinating multiple vision models or tools with language models to combine visual information with textual descriptions. Examples include models like Visual ChatGPT [33], MM-REACT [34], and HuggingGPT [35]. However, this approach is limited by the accuracy and capacity of fixed modular models, which can lead to an accumulation of errors. On the other hand, end-to-end models aim to provide unified models for multi-modal tasks. For example, Flamingo [24] combines vision and language by freezing pre-trained vision encoders and language models. BLIP-2 [13] introduces Q-Former to align visual features from frozen visual encoders with large language models. Recently, models such as MiniGPT-4 [36] and LLAVA [37] align instruction-tuned language models with visual features from frozen visual backbones. VideoChat[38], mPLUG-Owl [39] and X-LLM [40], further expand support for video input. PaLM-E [41] is the first large embodied multi-modal model, which directly incorporates features from sensor modalities to improve real-world performance and is trained with their large-scale everyday robot data [42]. Compared to PaLM-E, EmbodiedGPT is more compact, with a size of only 10B and offers additional support for video captioning, video QA and making planning according to a demonstration video. Furthermore, we form a closed-loop system that spans from high-level planning to low-level control.
## 3 Method
The goal of the embodied foundation model is to imitate human-like perception and interaction with the environment by accurately perceiving the environment, identifying relevant objects, analyzing their spatial relationships, and formulating a detailed task plan. To achieve this, the EmbodiedGPT employs a pre-trained vision transformer as the visual encoder and a pre-trained LLAMA [43] model as the language model. As shown in Figure 2, the embodied-former acts as a bridge between the visual and language domains, it first extracts compact visual features from the output of the vision model through attention-based interaction involving visual tokens, text queries, and learnable embodied queries and then maps it to the language modality through a language mapping layer. These embeddings are sent to the frozen LLAMA [43] language model for visual caption, visual QA, and embodied planning. The generated planning is then used to query highly relevant features from the general visual tokens encoded by the visual model via the embodied-former. These features are utilized to generate low-level control commands for task execution through the downstream policy network. To enhance performance across a range of embodied tasks, we introduce a novel
Figure 2: Overall framework of EmbodiedGPT. The black arrow shows the vision-language planning process, while the red arrow represents that we leverage the queried language plans for better policy learning in low-level control tasks.
video-language pre-training paradigm that leverages a cognitive chain of thought to produce embodied planning from egocentric video inputs. We formulate this task as a standard VQA (Visual Question Answering) task, using "how to do the task that + original caption" as the question and embodied planning as the answer. This framework enriches the data of embodied planning and standard visual QA tasks, encouraging the embodied-former to capture task-specific features that are more suitable for embodied control tasks.
### Framework
The training process consists of three stages, each designed to incrementally develop reasoning and planning capabilities. The first two stages focus on pre-training in basic cognitive and responsive skills, while the third stage involves training the embodied AI task with egocentric video-text data on EgoCOT. In the first stage, we focus on image-text conversation alignment pre-training, which involves using three datasets: COCO Caption [44], 595 thousand finely filtered image-text pairs from CC3M [45], and 491 thousand filtered image-text pairs obtained by re-captioning LAION-400M using BLIP-2 [17]. The primary goal of this stage is to pre-train the Embodied-former and language projection while keeping the vision and language model parameters frozen to save computational resources. In the second stage, our goal is to enhance the model's ability to comprehend and generate more complex sentences and improve its reasoning skills. We achieve this by updating the language projection and prefix language adapter and utilizing the "Complex_Reasoning_77k" and multi-turn conversation datasets provided by "LLaVA_Instruct_150K" [46].
**Embodied "chain-of-thought" training with EgoCOT**: During the third stage, we first use Conv3D [47] to transfer the pre-trained vision model from stage 2 to the video encoder, with a time offset of 2 and a total frame count of 8 for the videos. Then, we introduce the 'chain-of-thought' vision language pre-training paradigm where the model takes 8 keyframes of the video as input, along with the task description, embodied planning, and structured verb-noun pairs summary to reason with a prompt, such as Listing 1. To avoid overfitting, we provide a prompt set that has different instructions with the same meaning. In this stage, we fine-tune the patch embedding, the language projection layer, and the prefix language adapter to better capture temporal information.
```
Watch this video, identify the actions and devise a planning chain-of-thought. Extract detailed actions using this schema:
Task: {"task description"}
Plan: {"plan with chain-of-thought"}
Actions: {{"number"}: {'verb'}({'noun'})}.
```
Listing 1: Prompt we used for chain-of-thought pre-training.
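As a concrete illustration of the video-encoder adaptation described above, one standard way to inflate a 2D patch embedding into a Conv3D over 8 input frames is sketched below. The embedding width, kernel and stride values are placeholders; the paper only specifies a temporal offset of 2 and 8 input frames, so the exact configuration here is an assumption.

```
import torch
import torch.nn as nn

def inflate_patch_embed(conv2d, time_kernel=2):
    """Inflate a 2D patch-embedding conv into a Conv3d by replicating its
    weights along the temporal axis (divided by time_kernel so the
    activation scale is preserved)."""
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(time_kernel, *conv2d.kernel_size),
                       stride=(time_kernel, *conv2d.stride),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        w = conv2d.weight.unsqueeze(2).repeat(1, 1, time_kernel, 1, 1) / time_kernel
        conv3d.weight.copy_(w)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

embed_dim = 1408                                    # illustrative ViT width
patch2d = nn.Conv2d(3, embed_dim, kernel_size=14, stride=14)
patch3d = inflate_patch_embed(patch2d, time_kernel=2)
video = torch.randn(1, 3, 8, 224, 224)              # (B, C, T, H, W)
tokens = patch3d(video)                              # (1, embed_dim, 4, 16, 16)
```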
### Model Architecture
The Embodied-former, denoted as \(\mathcal{E}(\cdot)\), serves as a bridge between visual input \(x_{\text{vis}}\) and the frozen language model, acting as an information bottleneck that delivers the most relevant visual data to the language model. The Embodied-former consists of two sub-modules: one for extracting features from the image input, denoted as \(\mathcal{E}_{\text{vis}}:x_{\text{vis}}\to y_{\text{vis}}\), and another for extracting features from the text input, denoted as \(\mathcal{E}_{\text{txt}}:x_{\text{txt}}\to y_{\text{txt}}\). We employ \(N\) learnable embodied query embeddings \(y_{\text{query}}\) as the input of \(\mathcal{E}\) to interact with \(x_{\text{vis}}\) through cross-attention layers and with \(x_{\text{txt}}\) through self-attention layers. We denote the output query representation as \(z\in\mathbb{R}^{N\times D}\), where \(D\) is the dimensionality of the embeddings. The dimension of \(z\) is significantly smaller than that of the visual features. The output query embeddings are then transformed to \(z^{{}^{\prime}}\in\mathbb{R}^{N\times D^{{}^{\prime}}}\), which have the same dimensionality \(D^{{}^{\prime}}\) as the LLM's text embedding in the language modality. This transformation is performed by a mapping function denoted as \(M:z\to z^{{}^{\prime}}\), which is accomplished by a linear projection via a fully-connected (FC) layer. The projected embeddings, \(z^{{}^{\prime}}\), serve as "soft visual prompts for the language model," decoupling the whole interaction into visual-query interaction and query-text interaction. The final embodied planning is inferred by the language model with \(z^{{}^{\prime}}\) and text prompt(shown as Listing 1) as input. For low-level control which aims to generate actions to interact with the environment, the embodied plan \(x_{\text{plan}}\) is used as input text for embodied-former to query the task-relevant instance level features \(z_{\text{instance}}=\mathcal{E}(x_{\text{vis}},x_{\text{plan}},y_{\text{query}})\). Subsequently, the agent is capable of generating control commands, such as the turning angle of the servo, represented as \(a=g(z_{\text{instance}},z_{\text{global}})\). This function combines both the instance-specific information \(z_{\text{instance}}\) and the global context \(z_{\text{global}}\). The global context is inferred using a ResNet50 model [48] that has been pre-trained on ImageNet [49], employing global average pooling. Here, \(g(\cdot)\) represents the
policy network, which is a Multi-Layer Perceptron (MLP) [50] mapping function. The output of the policy network consists of specific executable actions, such as positions and velocities in the Cartesian coordinate system. More implementation details can be found in Appendix A.
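To make the data flow of this section concrete, a minimal single-layer sketch of the Embodied-former and the policy head is given below. The layer count, hidden widths, pooling and action dimension are illustrative choices rather than the exact EmbodiedGPT configuration, which presumably stacks multiple transformer blocks in the spirit of the Q-Former design referenced earlier.

```
import torch
import torch.nn as nn

class EmbodiedFormerSketch(nn.Module):
    """N learnable queries: self-attention with text tokens, cross-attention
    to visual tokens, then a linear map to the LLM embedding width."""
    def __init__(self, n_query=32, d=768, d_llm=4096, n_heads=8):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, n_query, d) * 0.02)
        self.self_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.to_llm = nn.Linear(d, d_llm)         # language mapping layer

    def forward(self, visual_tokens, text_tokens):
        q = self.query.expand(visual_tokens.size(0), -1, -1)
        x = torch.cat([q, text_tokens], dim=1)    # query-text interaction
        x, _ = self.self_attn(x, x, x)
        q = x[:, : self.query.size(1)]            # keep the query slots
        z, _ = self.cross_attn(q, visual_tokens, visual_tokens)
        return z, self.to_llm(z)                   # z for control, z' for the LLM

class PolicySketch(nn.Module):
    """MLP head g(.) mapping instance-level and global features to an action."""
    def __init__(self, d=768, d_global=2048, action_dim=7):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d + d_global, 512), nn.ReLU(),
                                 nn.Linear(512, action_dim))

    def forward(self, z_instance, z_global):
        pooled = z_instance.mean(dim=1)            # pool the N query outputs
        return self.net(torch.cat([pooled, z_global], dim=-1))

vis = torch.randn(2, 257, 768)                     # frozen ViT tokens (toy sizes)
txt = torch.randn(2, 16, 768)                      # embedded plan / prompt tokens
ef, pi = EmbodiedFormerSketch(), PolicySketch()
z, z_llm = ef(vis, txt)
action = pi(z, torch.randn(2, 2048))               # 2048-d ResNet50 global feature
```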
### Training Settings
We employ the same pre-trained image encoder as BLIP-2[17]. Specifically, we utilize the ViT-G/14 model from EVA-CLIP [51] and remove its last layer, using the output features of the second last layer instead. For the frozen language model, we adopt a pre-trained LLaMA-7B [43] model and fine-tune it using the ShareGPT dataset and a GPT-4 generated 52K English instruction-following dataset [52]. We then utilize the well-fine-tuned language model as the frozen language model for vision-language pre-training. Additionally, we convert the data type of parameters of the frozen ViT [53] and language model to FP16 during pre-training to increase efficiency.
### Creating EgoCOT and EgoVQA Dataset
For our EgoCOT dataset, we obtain basic data from the Ego4D dataset [16], which includes \(9,645\) untrimmed videos of various durations ranging from 5 seconds to 7 hours. To prepare the data for our purposes, we conducted two stages of data cleaning to prepare our data. In the first stage, we filtered out videos with missing or very short narrations (which made up 7.4% and 0.9% of the text, respectively), as well as those with unsure tags (which accounted for 4.0% of the text). We also excluded videos without human-object interaction, such as watching TV or walking. After this stage, we were left with 2.9 thousand hours of video, containing 3.85 million narrations, from 129 different scenarios covering 2927 hours of video.
To generate pairs of captions, embodied plannings, and corresponding video segments with time intervals, we utilized the EgoVLP framework [54] to segment the video. The narrations are organized as a sequence of sentences \(\mathcal{T}_{0},\cdots,\mathcal{T}_{n}\) with precise timestamps \(t_{0},\cdots,t_{n}\) that indicate when a described event occurred. For each narration \(\mathcal{T}_{i}\) with timestamp \(t_{i}\), we paired it with a clip \(\mathcal{V}_{i}\) by determining its start and end time points:
\[\left[t_{i}^{start},t_{i}^{end}\right]=\left[t_{i}-\beta_{i}/2\alpha,\ t_{i}+ \beta_{i}/2\alpha\right], \tag{1}\]
where \(\beta_{i}=\sum_{j=0}^{n-1}\left(t_{j+1}-t_{j}\right)/n\) is an adjustable parameter equal to the average temporal distance between consecutive narrations in a given video, and \(\alpha\) is a scale factor computed as the average of all \(\beta_{i}\) across all videos in the EgoCOT dataset (\(\alpha=4.9\) seconds). For each video segment, we provide prompts and corresponding captions for ChatGPT [55] to generate a reasonable and detailed embodied planning. The caption is typically a brief description such as "C opens a drawer." We use ChatGPT to generate a chain of thought according to the caption and organize it into a list of verb-noun pairs, such as _"plans: grasp the handle with the gripper and pull the handle; actions: 1. grasp(handle, gripper) 2. pull(handle)."_ The prompt we used to generate the EgoCOT dataset is shown in Listing 2. To enhance the diversity of generated chain of thoughts, we employ a temperature parameter of 0.9 and a top-p parameter of 0.95. For each prompt, we perform five sampling iterations.
```
You need to generate plans with chain of thought for each task, and then extract detailed actions (collocations of verbs and nouns) from the plan.
The action can be of the following form:
[action_name], e.g., turn left;
[action_name] argument, e.g., pick up(apple);
[action_name] argument1 argument2, e.g., put(apple, table)
Task: pick up a cup on the table
Plans: grasp the handle of the cup with the gripper and lift it up
Actions:
1. grasp(handle of the cup, gripper)
2. lift up(cup)
```
Listing 2: Prompt we used for creating EgoCOT dataset.
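For clarity, a minimal sketch of the clip-pairing rule of Eq. (1) is given below; the timestamps are illustrative.

```
# Pair each narration timestamp t_i with a clip [t_i - beta/(2*alpha), t_i + beta/(2*alpha)].
ALPHA = 4.9   # seconds, dataset-level average of the beta_i (as quoted above)

def pair_clips(timestamps, alpha=ALPHA):
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    beta = sum(gaps) / len(gaps)          # average gap between consecutive narrations
    half = beta / (2 * alpha)
    return [(t - half, t + half) for t in timestamps]

print(pair_clips([3.0, 11.0, 18.5, 30.0]))
```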
**Post-procedure.** To ensure the quality of the generated planning instructions, we perform the second stage of data cleaning. We used the CLIP model [56] to assess the similarities between the video and text pairs. For each video, we compared it against five potential embodied plans and selected the one with the highest similarity as the corresponding label for the embodied plan. We then took our data-cleaning process a step further by filtering out any video-caption-planning pairs with similarities lower than the threshold. We eliminated both data with the low similarity between the video and
caption and between the video and planning, to ensure the highest quality data for our EgoCOT dataset. For each keyframe of the video segment, we use the CLIP model to encode both the text data \(T\) and the image data \(I\) into a shared embedding space. The similarity is calculated using the cosine similarity function as \(S(y_{T},y_{I})=\frac{y_{T}\cdot y_{I}}{\|y_{T}\|\,\|y_{I}\|}\), where \(S(y_{T},y_{I})\) denotes the similarity between the text and image, and \(y_{T}\) and \(y_{I}\) are the respective embeddings. Given that each video contains multiple keyframes, an ensemble of similarity scores is obtained for each video. This ensemble strategy helps to alleviate the problem of variability among individual frames and ensures a more robust and representative measure of overall similarity. The ensemble similarity score between a video \(V\) with \(n\) keyframes and text data \(T\) is given by:
\[E(V,T)=\frac{1}{n}\sum_{i=1}^{n}S(y_{T_{i}},y_{I\,i}) \tag{2}\]
where \(E(V,T)\) is the ensemble similarity score, \(S(y_{T\,i},y_{I\,i})\) is the similarity score for the \(i\)-th keyframe, and \(n\) is the total number of keyframes. We also created the EgoVQA dataset specifically for egocentric human-object interaction video question answering tasks to enrich the training data. For each caption in the Ego4D dataset, we used ChatGPT to generate five QA pairs. To ensure relevance, we guided ChatGPT to focus on core key verbs and nouns by designing prompts as shown in Listing 3. The sampling schema when crafting EgoVQA is the same to that as EgoCOT.
```
Please ask some questions according to the verbs and nouns in the sentence.
For example, in this sentence "a man is picking up a cup", the verb is picking up and the noun is cup, therefore the question can be "what is the object the man is picking up?" or "what operation is performed on the cup?".
Then you need to give the answer.
input: a man is picking up a cup
question: What is the object the man is picking up
answer: The cup
```
Listing 3: Prompt used for creating EgoVQA dataset.
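The post-filtering step can be sketched as follows. The embedding vectors below stand in for CLIP text and image features, and the threshold value is illustrative rather than the one actually used for EgoCOT.

```
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def ensemble_similarity(frame_embs, text_emb):
    # Eq. (2): average frame-text similarity over the video's keyframes
    return sum(cosine(f, text_emb) for f in frame_embs) / len(frame_embs)

def select_plan(frame_embs, plan_embs, threshold=0.25):
    """Keep the best of the five sampled plans, or drop the pair entirely if
    even the best one falls below the threshold (placeholder value)."""
    scores = [ensemble_similarity(frame_embs, p) for p in plan_embs]
    best = int(np.argmax(scores))
    return (best, scores[best]) if scores[best] >= threshold else None

frames = [np.random.randn(512) for _ in range(8)]   # toy embeddings
plans = [np.random.randn(512) for _ in range(5)]
print(select_plan(frames, plans, threshold=-1.0))
```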
## 4 Experiments
In this section, we present a comprehensive evaluation of multi-modal foundation models and EmbodiedGPT, across various tasks including visual captioning, embodied planning, and control.
**Evaluation on image input tasks.** In order to evaluate the quality of generated captions and planning with the given image, we conducted a user study with 30 participants. The study included 10 cases of image caption tasks from the MS-COCO dataset [44], 5 embodied planning scenarios in different embodied AI simulators, and 5 real-world scenes with accompanying embodied planning tasks. Participants were asked to rate the generated responses from different end-to-end models on five dimensions using a scoring system ranging from 1 to 10: object recognition accuracy, spatial relationship understanding, level of redundancy in the answer, reasonability of the planning, and executability of the planning. The average scores among all the participants for different models are shown in Table 1. The results demonstrate that EmbodiedGPT achieves a level of object recognition and spatial relationship understanding comparable to the LLaVA-13B model, despite having only 7B parameters in the language model. Furthermore, EmbodiedGPT generates less redundant content in relation to the given embodied AI task, and produces the most reasonable and executable planning outputs. We also compared the performance of EmbodiedGPT with Visual ChatGPT [33], which adopts a hierarchical approach by combining several pre-trained vision models and language models to answer questions. In the Virtual-Home [57] benchmark, Visual ChatGPT uses a visual caption model to generate dense captions that are subsequently passed into ChatGPT for deriving a solution. As shown in Figure 3, Visual ChatGPT failed to find a coat hanger due to its limitation of relying solely on the caption model for extracting visual information, resulting in poor performance
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & Object(\(\uparrow\)) & Spatial(\(\uparrow\)) & Redundancy(\(\downarrow\)) & Plan Reasonable(\(\uparrow\)) & Plan Executable(\(\uparrow\)) \\ \hline Minigpt4 & 5.6 & 4.8 & 4.4 & 4.5 & 4.8 \\ LLaVA-7B & 7.3 & 7.4 & 3.9 & 7.5 & 6.6 \\ LLaVA-13B & **8.5** & 8.6 & 3.4 & 8.4 & 7.6 \\ EmbodiedGPT & 8.4 & **8.8** & **2.6** & **8.8** & **8.4** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Generation quality evaluation on image input tasks.
when compared to the end-to-end model like EmbodiedGPT. These findings highlight the advantages of adopting a unified, end-to-end model over hierarchical approaches that rely on multiple stages.
**Evaluation on video input embodied AI tasks.** We evaluate the recognition ability of videos and planning abilities of our model for embodied control tasks on standard embodied AI benchmarks, Franka Kitchen [14] and Meta-World [15]. Meta-World provides a challenging set of tasks that require complex object manipulation skills, including assembling a ring on a peg, picking and placing a block between bins, pushing a button, opening a drawer, and hammering a nail. Franka Kitchen benchmark focuses on tasks like sliding open the right door, opening the cabinet, turning on the light, turning the stovetop knob, and opening the microwave. As shown in Figure 4, given a demonstration video, EmbodiedGPT can accurately interpret the embodied control task and provide step-by-step planning. The output planning is fed into the Embodied-former module of EmbodiedGPT to query highly relevant features for use as inputs in the policy network and the low-level actions are generated by the policy network to interact with the environment (see more visualizations in Appendix B).
Figure 4: Example of video input embodied AI tasks on Meta-World benchmark. EmbodiedGPT accurately analyzes embodied control tasks in demonstration videos and provides precise planning.
Figure 5: Performance of EmbodiedGPT in low-level control tasks with 10 demonstration demos.
Figure 3: Comparison between EmbodiedGPT and Visual ChatGPT in the question-answering task.
**Evaluation on embodied control tasks.** For embodied control tasks, we compare our model with R3M[12], which is the state-of-the-art method in these two benchmarks, and an ablation version called 'BLIP-2[Ego4D]', which has the same structure and same amount of parameters as EmbodiedGPT, and is only fine-tuned on the video caption task using the Ego4D dataset without incorporating EgoCOT. In all experiments, the policy network is learned using few-shot learning on a small amount of demonstration data. There are two settings, one of which utilizes 10 demonstrations, and the other utilizes 25 demonstrations. We report the success rate in 100 random evaluations with only visual observations in 5 tasks per benchmark over 5 seeds and 2 different camera views for each setting, respectively. As shown in Figure 5 and Figure 6, EmbodiedGPT outperforms the baseline methods, demonstrating the effectiveness of learning with EgoCOT.
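The imitation objective used to fit the policy network on these demonstrations is not spelled out here; a plain behavior-cloning loss, shown below purely as an illustration of the few-shot setting, is one common choice.

```
import torch
import torch.nn as nn

def behavior_clone(policy, demos, epochs=200, lr=1e-3):
    """demos: list of (features, action) pairs from the 10-25 demonstrations."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for feats, action in demos:
            loss = nn.functional.mse_loss(policy(feats), action)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy

toy_policy = nn.Linear(32, 7)                      # stand-in for the MLP head
toy_demos = [(torch.randn(1, 32), torch.randn(1, 7)) for _ in range(10)]
behavior_clone(toy_policy, toy_demos, epochs=5)
```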
**Ablation study.** We perform ablation studies to analyze the effectiveness of the "Chain-of-Thought" training mode and the importance of a closed-loop design for embodied control. The results, as shown in Table 2, demonstrate a significant improvement in success rate when using the EgoCOT approach compared to training solely with the EGO4D caption task. Moreover, the closed-loop design is necessary as the generated plans contained specific and relevant sub-goal information, which proved crucial for control tasks.
In summary, EmbodiedGPT exhibits a strong ability to generate reasonable planning, accurately extract task-relevant features from visual inputs, as well as execute low-level actions to interact with the environment. The ablation experiments demonstrate that both the training paradigm based on EgoCOT and the closed-loop design from embodied planning to low-level control significantly contribute to the performance improvement of EmbodiedGPT.
## 5 Conclusion
In this paper, we present EmbodiedGPT, an end-to-end multi-modal foundational model for embodied AI that enables agents to perform step-by-step planning and execute low-level commands. To achieve this, we create a large-scale embodied planning dataset called EgoCOT and develop an efficient training approach that utilizes prefix tuning to generate high-quality plans with a "chain-of-thought". Furthermore, our embodied control paradigm seamlessly coordinates high-level planning and low-level control. Extensive experiments demonstrate the effectiveness of EmbodiedGPT on various embodied tasks, achieving state-of-the-art or comparable performance. We believe that EmbodiedGPT represents a significant step towards developing more intelligent embodied AI agents.
**Future works and limitations:** EmbodiedGPT freezes the parameters of the vision and language
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & Franka(10 demos) & Franka(25 demos) & Meta-World(10 demos) & Meta-World(25 demos) \\ \hline EmbodiedGPT & **50.8\%**\(\pm 2.8\) & **58.5\%**\(\pm 2.7\) & **76.4\%**\(\pm 2.2\) & **81.2\%**\(\pm 2.0\) \\ - Closed-loop & 38.6\% \(\pm 2.9\) & 47.3\% \(\pm 2.5\) & 62.7\% \(\pm 2.2\) & 64.9\% \(\pm 2.0\) \\ - COT & 26.2\% \(\pm 3.2\) & 36.4\% \(\pm 2.7\) & 55.2\% \(\pm 2.4\) & 58.7\% \(\pm 2.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation on the closed-loop design spanning from planning to low-level control, and on "chain-of-thought" (COT) training, with 10 and 25 demonstrations (the "-" symbol indicates "removing"). We report the average success rate over 5 tasks and 2 camera views per benchmark.
Figure 6: Performance of EmbodiedGPT in low-level control tasks with 25 demonstration demos.
model due to limited computational resources. Joint training with all modules and exploring other modalities, such as speech, could be future works. We do not foresee obvious undesirable ethical or social impacts at this moment.
|
2301.11968 | Strong Cosmic Censorship in light of Weak Gravity Conjecture for Charged
Black Holes | In this paper, we investigate the strong cosmic censorship conjecture (SCC)
for charged black holes in the de Sitter space by considering the weak gravity
conjecture (WGC). Using analytical methods, we find that the SCC is preserved
for dS-charged black holes with respect to some restriction $qQ\gg1$ and
$r_+\geq Q$ with the help of the WGC condition viz $\frac{q}{m}\geq 1$ for
scalar fields. Where q, m are the charge and mass of the scalar field, and
$r_+$, Q determine the radius of the outer event horizon and the charge of the
black hole, respectively. In that case, when the (WGC) is valid, SCC will
definitely be satisfied for the dS-charged black holes. On the other hand, the
SCC is violated when the WGC is not satisfied. Also, we examined the RN-dS
charged black hole in the extremality state and found that SCC can be violated
with the condition $\Lambda r_+^2=1$. | Jafar Sadeghi, Mohammad Reza Alipour, Saeed Noori Gashti | 2023-01-27T19:46:46Z | http://arxiv.org/abs/2301.11968v2 | ###### Abstract
In this paper, we investigate the strong cosmic censorship conjecture (SCC) for charged black holes in the de Sitter space by considering the weak gravity conjecture (WGC). Using analytical methods, we find that the SCC is preserved for dS-charged black holes with respect to some restriction \(qQ\gg 1\) and \(r_{+}\geq Q\) with the help of the WGC condition viz \(\frac{q}{m}\geq 1\) for scalar fields. Where q, m are the charge and mass of the scalar field, and \(r_{+}\), Q determine the radius of the outer event horizon and the charge of the black hole, respectively. In that case, when the (WGC) is valid, SCC will definitely be satisfied for the dS-charged black holes. On the other hand, the SCC is violated when the WGC is not satisfied. Also, we examined the RN-dS charged black hole in the extremality state and found that SCC can be violated with the condition \(\Lambda r_{+}^{2}=1\).
Keywords: Strong cosmic censorship conjecture; Weak gravity conjecture; RN-dS charged black hole
**Strong Cosmic Censorship in light of Weak Gravity Conjecture for Charged Black Holes**
**Jafar Sadeghi \({}^{\star}\)1**, **Mohammad Reza Alipour \({}^{\star}\)2**, **Saeed Noori Gashti\({}^{\star,\ddagger}\)3**
Footnote 1: Email: [email protected]
Footnote 2: Email: [email protected]
Footnote 3: Email: [email protected]
\({}^{\star}\)Department of Physics, Faculty of Basic Sciences,
University of Mazandaran P. O. Box 47416-95447, Babolsar, Iran
\({}^{\ddagger}\)School of Physics, Damghan University, P. O. Box 3671641167, Damghan, Iran
###### Contents
* 1 Introduction
* 2 Weak Gravity Conjecture
* 3 Charged Black Holes in dS Space
* 4 The Quasinormal Resonant Frequency Spectrum
* 5 Discussion and Result
Acknowledgments
## 1 Introduction
One of the studies with a long history in general relativity is the collapse of small perturbations. We need a better understanding of how these oscillations decay in order to sharpen our grasp of gravity, to exploit gravitational-wave data, and to probe the characteristic features of general relativity. One of the signs of the failure of determinism in general relativity is the emergence of an interesting phenomenon known as Cauchy horizons, which appear in the astrophysical solutions of Einstein's equations. These horizons are such that it is impossible to specify the future history of an observer that passes through them using Einstein's equations and initial data. With these descriptions, in the space-time background of black holes one expects that perturbations of the outer region are infinitely amplified by a mechanism known as the blue shift. They lead to a singular boundary beyond the Cauchy horizon in the interior of black holes, where the field equations cease to make sense. The Penrose strong cosmic censorship (SCC) conjecture confirms such an expectation. Of course, another point is that astrophysical black holes are stable due to a special mechanism, the perturbation-damping mechanism, which operates in the outer region. Therefore, whether SCC holds hinges on the very subtle competition between the decay of perturbations in the outer region and their amplification (blue shift) in the interior space-time of black holes. In general, the fate of Cauchy horizons is related to the decay of small perturbations outside the event horizon. Hence, the validity of SCC is tied to how strongly the exterior damps fluctuations. The satisfaction and violation of SCC have been investigated for various structures and conditions and in various theories. The violation of this conjecture near the extremal region was studied in the context of higher-curvature gravity [1]. The conjecture has also been challenged in the investigation of many charged black holes. In [2, 3], it was checked for a charged AdS black hole, and it was shown that the conjecture is satisfied for a specific interval of the parameter (\(\beta\)) and violated in other regions. The strong cosmic censorship conjecture has also been investigated in two dimensions, with interesting outcomes regarding its violation near the extremal region at specific points [4]. The study of this conjecture in the structure of three-dimensional black strings has also yielded interesting results; see [5] for a deeper study, and [6, 7, 8, 9] for further reading. The efficiency of the mass-inflation mechanism, which converts the inner Cauchy horizons of approximately flat black-hole space-times (whose traversability would be pathological from the standpoint of SCC) into singular, non-extendable hypersurfaces, is governed by two different types of physical mechanisms [10, 11, 12, 13, 14, 15, 16]. The first is the decay of the remnant perturbation fields in the exterior space-time regions of dynamically formed black holes, and the second is the exponential blue-shift amplification of the fields falling into the black-hole interior. These two mechanisms can be characterized by the parameters \((g)\) and \((k_{-})\), respectively, and the dimensionless ratio built from them determines the fate of the inner Cauchy horizons inside such non-asymptotically flat black-hole space-times [8, 17, 18],
\[\beta\equiv\frac{g}{k_{-}}.\]
Of course, for a certain range of black-hole parameters, such as mass and charge, one finds, as indicated in [8, 17, 18],
\[\beta>\frac{1}{2}.\]
In this case, the space-time of the corresponding black holes can be physically extended beyond their Cauchy horizon, which is pathological and signals a breakdown, i.e. a violation, of the Penrose SCC in classical general relativity. For the dynamics of Einstein's equations, as well as for the fate of observers, the blow-up of the curvature associated with \((\beta<1)\) does not per se have much physical significance: by two theorems, it implies neither the failure of the field equations, as discussed in [19], nor the destruction of macroscopic observers, which is discussed in [13]. Therefore, a physical and mathematical formulation of the SCC conjecture under such conditions would require ignoring physical phenomena such as impulsive gravitational waves or the formation of shocks in relativistic fluids. For the aforementioned reasons, the modern form of the SCC conjecture was introduced, which requires the stronger constraint \((\beta<\frac{1}{2}\) ), and many works have been devoted to this constraint. In [8], it was found that the presence of _neutral_ scalar fields in the background of the RN-dS black hole leads to a violation of SCC. In continuation, however, Hod has shown in [34] that the presence of _charged_ massive scalar fields near _charged_ black holes is inevitable. By considering _charged_ scalar fields near a _charged_ black hole and using WKB techniques, he has shown that the SCC will be met. In this article, using an analytical method and the WKB approximation, we show that _charged_ scalar fields play an essential role in satisfying the SCC in light of the WGC for charged black holes. Therefore, we find that the numerical results presented in [8] have no physical relevance to the question of the (in)validity of the SCC in _charged_ black-hole spacetimes. In particular, the SCC conjecture in the context of _charged_ black-hole spacetimes must be tested in the presence of _charged_ matter fields, whereas the numerical results presented in [8] are based only on the presence of _neutral_ scalar fields in the _charged_ black-hole spacetime. Therefore, in this article we study this different structure of the conjecture. According to the above description, we consider the general configuration of _charged_ black holes in the presence of massive _charged_ scalar fields. Then, using the weak gravity conjecture, we will prove that SCC is valid for specific values for all _charged_ black holes. We will use the weak
gravity conjecture to prove a general relation with respect to SCC for all _charged_ black holes. According to the above explanations, we organize the article in the following form.
In section 2, we will give basic explanations about the weak gravity conjecture and also the motivation to use it. In section 3, we introduce charged black holes in dS space, and then we show the quasinormal resonant frequency spectrum in section 4. We check the conditions of compatibility and violation of (SCC) with respect to (WGC) for RN-dS charged black holes. Finally, we describe the results in section 5.
## 2 Weak Gravity Conjecture
As is known in the literature, a new idea has been put forward as the swampland program to check theories coupled to gravity, the consistency of quantum gravity and, finally, to serve as a test of string theory. Recently, much work has been done in this field [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Due to the special conditions of string theory and the fact that its direct experimental investigation seems rather difficult, this idea has been proposed to test and investigate various concepts of cosmology. The swampland program is approached from two sides: from a top-down view, introducing principles and limitations to formulate conjectures, as well as mathematical formulations to examine cosmological concepts; and from a bottom-up view, testing each of these conjectures against various concepts of cosmology, including inflation, and matching with observational data, which serves both as support for this new idea and as support for string theory. So far, many conjectures have been proposed within this program and, according to its structure and further investigations, new conjectures will be added. Some of these conjectures face challenges and, as a result, corrections are made to them. We face some limitations in quantum gravity (QG): when gravity is considered at the quantum level, the theory may become inconsistent. Generally, having a reliable quantum theory of gravity is not straightforward; it can still hold many surprises and can be interesting for physics at low energies. The objective of the swampland program is to determine the constraints that an effective field theory (EFT) should fulfill to be compatible with an ultraviolet (UV) completion in QG. These are called swampland constraints, and different proposals are formulated as swampland conjectures (SC). The objective is to recognize these constraints, accumulate evidence to prove or refute them within the structure of QG, provide reasoning to explain them in a model-independent manner, and understand their phenomenological implications for low-energy EFTs. Although the swampland idea is not restricted to string theory in principle, SC are frequently examined in string theory backgrounds. Without a doubt, string theory provides an ideal structure for thorough quantitative testing of conjectures and improves our understanding of possible compactifications of string theory. Interestingly, it has recently been revealed that a large number of these conjectures are indeed related, suggesting that they may simply be different faces of some yet-to-be-discovered fundamental principle of QG.
These limits have significant ramifications for cosmology and particle physics. They can give new guiding principles for building conjectures beyond the standard models of high-energy physics. They may likewise imply \(UV/IR\) mixing, which breaks the assumption of scale separation and possibly gives new insights into the naturalness issues seen in our universe. Consequently, the existence of the swampland is excellent news for phenomenology. For a collection of references connected with the swampland that might be valuable, we refer to [20], where the swampland program (SP) has likewise been reviewed and presented. The absence of global symmetry (GS) and the completeness of charge spectra are at the center of the SP. Nonetheless, they lack phenomenological implications unless we can constrain the global symmetries [21, 22] and determine whether there is any limit on the mass of charged states. In any case, they only bound the complete theory but not the low-energy EFTs. Specifically, it is phenomenologically important whether all charged particles can be really super heavy and even correspond to black holes (BHs), or whether there is some notion of completeness of the spectrum that survives at low energies. A large portion of the SCs examined address exactly these questions. The aim is to explore these assertions in depth and quantify them, bringing us closer to the issue of recovering a few global symmetries. For instance, one could try to recover a global symmetry (GS) \(U(1)\) by sending the gauge coupling (GC) to zero, which ought not to be permitted in \(QG\). Attempting to understand string theory through the study of this issue, it may turn out that such an exercise gives information about the constraints that an EFT must fulfill to be compatible with QG. Likewise, the WGC forbids this process by signaling the presence of new charged states that invalidates the description of the EFT. Thus, it gives an upper bound on the mass of these charged states. The WGC comprises two parts, the electric and the magnetic WGC. For the electric WGC, as required for consistency with quantum gravity, we have the following condition [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30],
\[\frac{Q}{m}\geq\frac{\mathcal{Q}}{M}|_{ext}=\mathcal{O}(1), \tag{1}\]
and
\[Q=gq, \tag{2}\]
where \(g\) and \(q\) are the gauge coupling and the quantized charge. The electric WGC requires the presence of an electrically charged state with a higher charge-to-mass ratio than an extremal BH in that theory, which is typically a factor of order one. Another understanding of this conjecture is that the gauge force acts more strongly than gravity on this state -- hence the name WGC. This is an equivalent formulation, since it requires that the electromagnetic force be stronger than the gravitational force [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30],
\[F_{Grav}\leq F_{EM} \tag{3}\]
It implies that the charge is greater than the mass, so we get a similar condition as above. This is no longer true in the presence of massless scalar fields. The motivations of the \(WGC\) are twofold. First, it provides a \(QG\) obstruction to restoring the GS of \(U(1)\) by sending \(g\to 0\). If a gauge coupling goes to zero, then according to the \(WGC\) new light particles appear, the cutoff of the theory goes to zero, and the EFT is invalidated. How small the gauge coupling may be depends on the energy of the processes one needs to describe with the effective EFT: the smaller the process energy, the smaller the gauge coupling can be. On the other hand, if one needs to keep the EFT valid up to an extremely high cutoff, the GC cannot be excessively small. This is an illustration of swampland constraints that become stronger at higher energies. Obviously, a theory with vanishing gauge coupling, i.e. a GS, is inconsistent because the cutoff of the effective EFT is likewise zero. Another fundamental motivation for the \(WGC\) is that it is the kinematic prerequisite that permits extremal \(BH\)s to decay. Charged BHs should fulfill an extremality bound to avoid the presence of naked singularities, as required by the weak cosmic censorship (\(WCC\)). For a given charge \(\mathcal{Q}\), this extremality bound shows that the mass \(M\) of the BH should be greater than the charge [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30],
\[M\geq\mathcal{Q} \tag{4}\]
for the BH to have a regular horizon. Here, we set the extremality factor \(\mathcal{O}(1)\) to one for simplicity. The precondition for the decay into a smaller black hole and a particle is the existence of the extremal BH (\(M=\mathcal{Q}\)). So, one can consider the decay of an extremal BH in which one of the decay products has a charge smaller than its mass, as allowed by the extremality limit, so \(M_{1}\geq\mathcal{Q}_{1}\). The other decay product can then no longer have a charge smaller than its mass, that is \(m_{2}\leq Q_{2}\). This is just a kinematic necessity. Since the second decay product violates the WCC bound, it cannot be a BH, so it should be a particle. The above kinematic requirement can be obtained by applying conservation of mass/energy and conservation of charge as follows: the initial mass of the BH should be greater than the sum of the masses of the decay products \(M_{i}\), while the initial charge equals the sum of the charges of the decay products.
## 3 Charged Black Holes in dS Space
The metric of charged black hole in spherical symmetric space is defined as follows,
\[dS^{2}=f(r)dt^{2}-f^{-1}(r)dr^{2}-r^{2}d\Omega^{2},\qquad d\Omega^{2}=(d \theta^{2}+\sin^{2}(\theta)d\varphi^{2}). \tag{5}\]
Here, we consider \(f(r)=H(M,Q)-\frac{\Lambda r^{2}}{3}\) in general; where Q, M, \(\Lambda>0\) are electric charge, the mass of the black hole and the cosmological constant respectively. In this case, we can obtain its event horizons as follows,
\[f(r_{\star})=0\qquad\rightarrow\qquad\star\in(-,+,...,c). \tag{6}\]
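As an illustration (not part of the original analysis), the horizon structure implied by Eq. (6) can be located numerically. The short sketch below assumes the familiar RN-dS form \(f(r)=1-2M/r+Q^{2}/r^{2}-\Lambda r^{2}/3\) used later in Eq. (21); the parameter values \(M\), \(Q\), \(\Lambda\) are purely hypothetical and chosen only so that three positive roots exist.

```python
import numpy as np

M, Q, Lam = 1.0, 0.9, 0.05          # hypothetical values, geometric units G = c = 1

# f(r) = 0  <=>  -(Lam/3) r^4 + r^2 - 2 M r + Q^2 = 0
coeffs = [-Lam / 3.0, 0.0, 1.0, -2.0 * M, Q**2]
roots = np.roots(coeffs)

# keep the real, positive roots and sort them: r_- < r_+ < r_c
horizons = np.sort([r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0])
r_minus, r_plus, r_c = horizons
print(f"Cauchy horizon       r_- = {r_minus:.4f}")
print(f"outer event horizon  r_+ = {r_plus:.4f}")
print(f"cosmological horizon r_c = {r_c:.4f}")
```

The quartic always has one negative root in addition to the three horizons, which is discarded by the positivity filter.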
Considering the metric in general terms, we have different event horizons, where \((r_{-})\) is the Cauchy horizon, \((r_{+})\) is the outer event horizon, and \((r_{c})\) is the cosmological horizon. Using the Klein-Gordon differential equation, we can determine the dynamics of a massive charged particle near a charged black hole [31, 32, 33, 34],
\[\frac{1}{\sqrt{-g}}\partial_{\mu}(g^{\mu\nu}\sqrt{-g}\partial_{\nu}\Phi)-2iqg^ {\mu\nu}A_{\mu}\partial_{\nu}\Phi-q^{2}g^{\mu\nu}A_{\mu}A_{\nu}\Phi-m^{2}\Phi=0, \tag{7}\]
where \(m\) and \(q\) are the mass and charge of the particle, respectively; also, \(A_{\mu}=\left(\frac{Q}{r},0,0,0\right)\). We can define the scalar field \(\Phi\) in relation (7) as follows [36],
\[\Phi(t,r,\theta,\phi)=\sum_{m}\sum_{\ell}e^{-i\omega t}Y_{\ell m}(\theta,\varphi )\Phi(r). \tag{8}\]
The integer parameters \(\ell\) and \(m\) are the spherical and the azimuthal harmonic indices of the resonant eigenmodes which characterize the charged massive scalar fields in the charged black-hole spacetime. By putting Eq.(8) in Eq.(7) and using \(dx=\frac{dr}{f(r)}\), we get the Schrodinger-like differential equation,
\[\frac{d^{2}\phi(r)}{dx^{2}}+V(r)\phi(r)=0. \tag{9}\]
The effective radial potential due to a massive charged particle near a charged black hole is defined as [8],
\[V(r)=\frac{qm}{r^{2}}\left[\frac{q}{m}\alpha(r)-\frac{m}{q}\beta(r)\right], \tag{10}\]
where
\[\alpha(r)=Q^{2}\left(1-\frac{\omega r}{qQ}\right)^{2},\qquad\beta(r)=r^{2}f(r )H(r),\qquad H(r)=\left(\frac{\ell(\ell+1)}{m^{2}r^{2}}+\frac{f^{\prime}(r)}{ m^{2}r}+1\right). \tag{11}\]
Also, we can impose boundary conditions on the radial function: an incoming wave near the outer event horizon and an outgoing wave at the cosmological horizon (the largest horizon) [34, 35]:
\[\phi(x)\sim\left\{\begin{array}{ll}e^{-i(\omega-\frac{qQ}{r_{+}})x},&\mbox{ for}\quad r\to r_{+}\;(x\rightarrow-\infty);\\ e^{-i(\omega-\frac{qQ}{r_{c}})x},&\mbox{for}\quad r\to r_{c}\;(x \rightarrow\infty).\end{array}\right. \tag{12}\]
According to the above boundary conditions, we can obtain the discrete spectrum of \(\omega\), i.e. the complex quasinormal resonance frequencies.
## 4 The Quasinormal Resonant Frequency Spectrum
In this section, we need to obtain the imaginary part of the resonance frequency to investigate the linear dynamics of a massive charged particle near a general charged black hole, and we need a dimensionless regime in order to do this analytically. Since the relationship \(\frac{q^{2}}{\hbar}\simeq\frac{1}{137}\) holds in our universe, we can consider it for black holes, even slightly charged ones, and obtain \(qQ\gg 1\). In addition, the mechanism of Schwinger-type pair production in the space-time of a charged black hole limits the black-hole electric field through the relation \(\frac{Q}{r_{+}^{2}}\ll\frac{m^{2}}{q}\) [37, 38, 39, 40]. Therefore, according to the above statements, we can study the SCC in the constraint regime defined by the following ansatz,
\[m^{2}r_{+}^{2}\gg\ell(\ell+1)\qquad and\qquad m^{2}r_{+}^{2}\gg 2k_{+}r_{+}, \tag{13}\]
where \(k_{+}=f^{\prime}(r_{+})/2\) is the gravitational acceleration of the black hole at the outer event horizon. In this regime, we aim to obtain the imaginary part of the resonance frequency in the background of the general charged black hole near the event horizon. We use the radial potential (10) to determine the linear dynamics of the massive charged particle near the event horizon of the black hole. We can treat this potential in the region (13) as an effective potential and obtain the quasinormal resonance modes analytically using standard WKB techniques [41, 42]. In this region, the effective potential attains its maximum near the event horizon of the black hole, at the point \(r=r_{0}\). In the following, we use relations (10), (11), and \(V^{\prime}(r_{0})=0\) to obtain the point where the effective potential is maximal as follows,
\[r_{0}=\frac{q^{2}Q^{2}}{qQ\omega-m^{2}r_{+}^{2}k_{+}} \tag{14}\]
According to the Schrodinger-like differential equation (9) and [41, 42, 43], we use the \(WKB\) method to obtain the quasinormal mode frequencies through the following,
\[iK-(n+\frac{1}{2})-\Lambda(n)=\Omega(n) \tag{15}\]
where
\[\begin{split} K&=\frac{V_{0}}{\sqrt{2V_{0}^{(2)}}}\\ \Lambda(n)&=\frac{1}{\sqrt{2V_{0}^{(2)}}}\left[\frac{ \left(\alpha^{2}+\frac{1}{4}\right)}{8}\frac{V_{0}^{(4)}}{V_{0}^{(2)}}-\frac{ \left(60\alpha^{2}+7\right)}{288}\left(\frac{V_{0}^{(3)}}{V_{0}^{(2)}}\right) ^{2}\right]\\ \Omega(n)&=\frac{n+\frac{1}{2}}{2V_{0}^{(2)}}\left[ \frac{5\left(188\alpha^{2}+77\right)}{6912}\left(\frac{V_{0}^{(3)}}{V_{0}^{(2) }}\right)^{4}-\frac{\left(100\alpha^{2}+51\right)}{384}\frac{\left(V_{0}^{(3) }\right)^{2}V_{0}^{(4)}}{\left(V_{0}^{(2)}\right)^{3}}\right]\\ &+\frac{n+\frac{1}{2}}{2V_{0}^{(2)}}\left[\frac{\left(68\alpha^ {2}+67\right)}{2304}\left(\frac{V_{0}^{(4)}}{V_{0}^{(2)}}\right)^{2}+\frac{ \left(28\alpha^{2}+19\right)}{288}\frac{\left(V_{0}^{(3)}V_{0}^{(5)}\right)}{ \left(V_{0}^{(2)}\right)^{2}}-\frac{\left(4\alpha^{2}+5\right)}{288}\frac{V_{ 0}^{(6)}}{V_{0}^{(2)}}\right]\end{split} \tag{16}\]
Here, \(V_{0}^{(k)}\equiv\frac{d^{k}V}{dx^{k}}|_{r=r_{0}}\) is the spatial derivative of the effective potential of equation (10), and its scattering peak is evaluated at the point \(r=r_{0}\). Using relations (10), (11), (14) and (16), we will have the following relation in the region of (13),
\[\begin{split}& K\simeq\frac{k_{+}^{2}m^{4}r_{+}^{4}qQ}{2f_{0} \left(k_{+}m^{2}r_{+}^{2}-qQ\omega\right)^{2}}\\ &\Lambda(n)\simeq\frac{k_{+}^{2}m^{4}\left[17-60\left(n+\frac{1} {2}\right)^{2}\right]r_{+}^{4}+2k_{+}m^{2}\left[36\left(n+\frac{1}{2}\right)^ {2}-7\right]qQr_{+}^{2}\omega}{16qQ\left(qQ\omega-3k_{+}m^{2}r_{+}^{2}\right) ^{2}}\times f_{0}\\ &\mathcal{A}=15k_{+}^{4}m^{8}\left[148(n+\frac{1}{2})^{2}-41 \right]r_{+}^{8}+12k_{+}^{3}m^{6}\left[121-420(n+\frac{1}{2})^{2}\right]qQr_{+ }^{6}\omega\\ &\mathcal{B}=64q^{5}Q^{5}\left(k_{+}m^{2}r_{+}^{2}-qQ\omega \right)^{4}\\ &\Omega(n)\simeq-(n+\frac{1}{2})q^{3}Q^{3}f_{0}^{2}\times\frac{ \mathcal{A}}{\mathcal{B}}\end{split} \tag{17}\]
where \(f_{0}=f(r_{0})\). In the next step, to study the SCC, we need to obtain the minimum value of the fundamental imaginary resonance mode of the system. For this purpose, using equations (15) and (17), we can calculate \(Im(\omega_{0})\),
\[\begin{split}&\omega\simeq\frac{qQ}{r_{+}}-\frac{2k_{+}m^{2}r_{+}^ {2}}{qQ}\left[1-\frac{14400}{11644}\left(\frac{(n+1/2)f_{0}}{qQ}\right)^{4} \right]\\ &-i\left\{4f_{0}k_{+}(n+\frac{1}{2})\frac{m^{2}r_{+}^{2}}{q^{2}Q ^{2}}\left[1-\frac{34qQf_{0}^{4}}{11664}\right]+\mathcal{O}(f_{0}^{2}) \right\}\end{split} \tag{18}\]
Since we consider \(r_{0}\) near the event horizon (\(r_{+}\)), we have \(f_{0}\ll 1\). To investigate the SCC, it is necessary to find the minimum value of the resonance mode and evaluate its ratio to the surface gravity of the event horizon,
\[\beta=\frac{-Im(\omega_{0})}{k_{+}}\simeq 2f_{0}\frac{m^{2}r_{+}^{2}}{q^{2}Q^{2} }\left[1-\frac{34qQf_{0}^{4}}{11664}\right]. \tag{19}\]
Since \(f_{0}\ll 1\), it is sufficient to have the condition \(q^{2}Q^{2}>m^{2}r_{+}^{2}\) in the relation above so that \(\frac{-Im(\omega_{0})}{k_{+}}<\frac{1}{2}\) is established. Therefore, we have the following condition for the validity of the SCC,
\[\frac{q}{m}\geq\frac{r_{+}}{Q}. \tag{20}\]
From equation (20) we see that when \(r_{+}\geq Q\), the weak gravity conjecture condition is recovered. We know that \(k_{-}>k_{+}\), so relations (19) and (20) also guarantee \(\beta=\frac{-Im(\omega_{0})}{k_{-}}<\frac{1}{2}\). Also, according to relation (19), when \(qQ<2\sqrt{f_{0}}mr_{+}\), the SCC can be violated. Since \(qQ\gg 1\) and \(f_{0}\ll 1\), this requires the mass of the scalar field to be very large and the radius of the event horizon to be very large, respectively. In the following, we obtain the extremality state of the RN-dS black hole. We will have the following relation for the RN-dS black hole with respect to equation (5),
\[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda r^{2}}{3}. \tag{21}\]
When \(k_{+}=k_{-}=0\), we can obtain the black hole extremality state,
\[Q_{exe}=r_{+}\sqrt{1-\Lambda r_{+}^{2}},\qquad M_{exe}=r_{+}(1-\frac{2}{3} \Lambda r_{+}^{2}). \tag{22}\]
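The extremality relations above can be checked symbolically. The following sketch (an illustration, not part of the paper's derivation) solves \(f(r_{+})=0\) together with \(k_{+}=f^{\prime}(r_{+})/2=0\) for the RN-dS lapse function of Eq. (21); the symbol names are of course arbitrary.

```python
import sympy as sp

rp, M, Q2, Lam = sp.symbols('r_p M Q2 Lambda', positive=True)   # Q2 stands for Q^2
f = 1 - 2*M/rp + Q2/rp**2 - Lam*rp**2/3                          # Eq. (21)

# k_+ = f'(r_+)/2 = 0 fixes M in terms of Q^2 and r_+; f(r_+) = 0 then fixes Q^2
M_of_Q2 = sp.solve(sp.diff(f, rp), M)[0]
Q2_ext = sp.expand(sp.solve(f.subs(M, M_of_Q2), Q2)[0])
M_ext = sp.expand(M_of_Q2.subs(Q2, Q2_ext))

print(Q2_ext)   # r_p**2 - Lambda*r_p**4   =  r_+^2 (1 - Lambda r_+^2)
print(M_ext)    # r_p - 2*Lambda*r_p**3/3  =  r_+ (1 - 2 Lambda r_+^2 / 3)
```

The printed expressions agree with Eq. (22).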
We substitute Eq.(22) in Eq.(19) to obtain \(\beta\) in the extremality state of the RN black hole,
\[\beta\simeq 2f_{0}\frac{m^{2}}{q^{2}(1-\Lambda r_{+}^{2})}\left[1-\frac{34qf_{ 0}^{4}\sqrt{1-\Lambda r_{+}^{2}}r_{+}}{11664}\right] \tag{23}\]
According to the above relationship, when the condition \(\frac{q}{m}\geq\frac{1}{1-\Lambda r_{+}^{2}}\) is satisfied, the SCC will definitely be preserved, and since \(\Lambda r_{+}^{2}<1\), the weak gravity conjecture will also be satisfied. On the other hand, when \(\Lambda r_{+}^{2}\ll 1\), we will have the \(SCC\) condition in light of the \(WGC\),
\[\frac{q}{m}\geq 1+\Lambda r_{+}^{2}, \tag{24}\]
From the above relation, the WGC is clearly obtained. In relation (23), when \(\Lambda r_{+}^{2}=1\), we have \(\beta\rightarrow\infty\) and the SCC is violated. These results and conditions are also completely compatible with [44, 45].
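To illustrate the interplay of Eqs. (19), (20) and (23) numerically, the sketch below evaluates \(\beta\) for a few purely hypothetical parameter sets (the values of \(f_{0}\), \(m\), \(q\), \(Q\), \(r_{+}\) and \(\Lambda r_{+}^{2}\) are assumptions chosen only for illustration, not taken from any physical system).

```python
import numpy as np

def beta_generic(f0, m, r_plus, q, Q):
    """Eq. (19): beta for a generic dS charged black hole, valid for f0 << 1."""
    return 2*f0*(m*r_plus)**2/(q*Q)**2*(1 - 34*q*Q*f0**4/11664)

def beta_extremal(f0, m, q, r_plus, Lam):
    """Eq. (23): beta in the extremality state of the RN-dS black hole."""
    x = Lam*r_plus**2
    return 2*f0*m**2/(q**2*(1 - x))*(1 - 34*q*f0**4*np.sqrt(1 - x)*r_plus/11664)

f0 = 0.01                                                   # near-horizon value, f0 << 1
print(beta_generic(f0, m=1.0, r_plus=50.0, q=2.0, Q=40.0))  # q/m >= r_+/Q     -> beta < 1/2
print(beta_generic(f0, m=1.0, r_plus=50.0, q=0.2, Q=40.0))  # qQ < 2 sqrt(f0) m r_+ -> beta > 1/2

for x in (0.5, 0.9, 0.99, 0.999):                           # Lambda r_+^2 -> 1: beta diverges
    print(x, beta_extremal(f0, m=1.0, q=2.0, r_plus=1.0, Lam=x))
```

The first example respects the WGC-type condition and yields \(\beta\ll 1/2\), the second violates it and yields \(\beta>1/2\), and the extremal expression blows up as \(\Lambda r_{+}^{2}\to 1\), in line with the discussion above.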
## 5 Discussion and Result
One of the indications of the failure of determinism in GR can be the rise of a fascinating peculiarity known as the Cauchy horizon, which shows up in the astrophysical solutions of Einstein's equations. These horizons are such that it is impossible to specify the future history of an observer that passes through such horizons utilizing Einstein's equations and initial data. With these descriptions, in the black holes' background space-time it is a predicted possibility that the perturbations of the external region are infinitely enhanced by a mechanism known as the blue shift. They lead to a singularity beyond the Cauchy horizon in the interior of BHs, where the field equations cease to make sense. The Penrose strong cosmic censorship conjecture (SCC) affirms such an expectation. Obviously, another point is that astrophysical BHs are stable because of an exceptional mechanism called the perturbation-damping mechanism, which operates in the outer region. Also, the SCC addresses the nature of the singularities found in many solutions of Einstein's gravitational field equations: Are such singularities generically characterized by unbounded curvature? Is the presence of a Cauchy horizon an unstable characteristic feature of solutions of Einstein's equations? Recently, researchers, remarking on the history of the SCC conjecture, have surveyed a portion of the progress made in research directed either toward establishing SCC or toward revealing some of its shortcomings. They focus in particular on model versions of SCC which have been proven for restricted families of spacetimes, viz. the Gowdy spacetimes, and on the role played by the generic presence of asymptotically velocity-term-dominated behavior in these solutions. They additionally note some work on spacetimes containing weak null singularities, and their importance for the SCC [44, 45, 47]. The SCC conjecture has been one of the main articles of faith with regard to GR, affirming the deterministic nature of the associated field equations. However, while it holds well for asymptotically flat spacetimes, a potential failure of the SCC conjecture could emerge for spacetimes possessing a Cauchy horizon together with a positive cosmological constant. Researchers have unequivocally demonstrated that violation of the SCC bound indeed occurs in the presence of a Maxwell field, even in higher spacetime dimensions. In particular, for higher-dimensional RN black holes, the violation of SCC occurs on a larger scale compared with the 4D case, for specific values of the cosmological constant. On the other hand, for a brane-world BH, the effect of an extra dimension is to make the violation of cosmic censorship weaker. For rotating BHs, intriguingly, the SCC always holds, even in the presence of higher dimensions. A comparable situation is likewise observed for rotating BHs on the brane [47]. In this paper, we investigated dynamically formed charged black holes. To satisfy the SCC, the inner Cauchy horizons of the black hole must be unstable. Here, to check the SCC, it is necessary to obtain the two parameters \(-Im(\omega_{0})\) and \(k_{-}\), which quantify the decay rate of the remaining perturbation fields in the outer regions of the black hole and the blue-shift growth rate of the in-falling fields of the black hole, respectively. Therefore, if \(\beta=\frac{-Im(\omega_{0})}{k_{-}}<1/2\), SCC
will be maintained. We found that for the dS charged black hole with \(r_{+}\geq Q\), in light of the WGC, viz \(q/m\geq 1\), SCC will definitely be satisfied. We also found that there is a possibility of violation of SCC for a very massive scalar field, as well as when the radius of the event horizon of the charged black hole is very large. We also found that SCC will be violated in the extremality state for the charged RN-dS black hole when \(\Lambda r_{+}^{2}=1\), which is also mentioned in [44, 45]. These results and conditions are completely compatible with [44, 45]. On the other hand, in [8, 46], when the scalar field is uncharged, the SCC is violated, which is consistent with (19) in this paper, because \(\beta>1/2\) can be obtained if one assumes the charge of the scalar field is zero, viz \(q=0\). The above study also raises some questions, as follows.
Is the relationship investigated in this article also valid for black holes in higher dimensions? Do other black holes in different frames satisfy the SCC and WGC simultaneously? Is it possible to establish the SCC relation in light of the WGC for all black holes? Might such a structure also be established for black holes on the brane? We leave these questions for future work.
## 6 Acknowledgments
The authors would like to thank the referee for the fruitful comments to improve the introduction section.
|
2303.07973 | Hadronic vacuum polarization correction to the bound-electron $g$ factor | The hadronic vacuum polarization correction to the $g$ factor of a bound
electron is investigated theoretically. An effective hadronic Uehling potential
obtained from measured cross sections of $e^- e^+$ annihilation into hadrons is
employed to calculate $g$ factor corrections for low-lying hydrogenic levels.
Analytical Dirac-Coulomb wave functions, as well as bound wave functions
accounting for the finite nuclear radius are used. Closed formulas for the $g$
factor shift in case of a point-like nucleus are derived. In heavy ions, such
effects are found to be much larger than for the free-electron $g$ factor. | Eugen Dizer, Zoltán Harman | 2023-03-14T15:28:44Z | http://arxiv.org/abs/2303.07973v1 | # Hadronic vacuum polarization correction to the bound-electron \(g\) factor
###### Abstract
The hadronic vacuum polarization correction to the \(g\) factor of a bound electron is investigated theoretically. An effective hadronic Uehling potential obtained from measured cross sections of \(e^{-}e^{+}\) annihilation into hadrons is employed to calculate \(g\) factor corrections for low-lying hydrogenic levels. Analytical Dirac-Coulomb wave functions, as well as bound wave functions accounting for the finite nuclear radius are used. Closed formulas for the \(g\) factor shift in case of a point-like nucleus are derived. In heavy ions, such effects are found to be much larger than for the free-electron \(g\) factor.
## I Introduction
Precision Penning-trap experiments on the \(g\) factor of hydrogenlike and few-electron highly charged ions allow a thorough testing of quantum electrodynamics (QED), a cornerstone of the standard model describing electromagnetic interactions. The \(g\) factor of hydrogen-like silicon (\(Z=14\)) has been measured with a \(5\times 10^{-10}\) relative uncertainty [1; 2], allowing to scrutinize bound-state QED theory (see e.g. [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]). Two-loop radiative effects and shifts due to nuclear structure and recoil are observable in such measurements. The high accuracy which can be achieved on the experimental as well as theoretical side also enables the determination of fundamental physical constants such as the electron mass \(m_{\mathrm{e}}\)[14; 15; 16; 17; 18]. Recently, it was shown that \(g\) factor studies can also help in the search for new physics, i.e. the coupling strength of a hypothetical new interaction can be constrained through the comparison of theoretical and experimental results [19; 20; 12].
Further improved tests and possible determinations of fundamental constants [21; 22; 23] call for an increasing accuracy on the theoretical side. The evaluation of two-loop terms up to order \((Z\alpha)^{5}\) (with \(Z\) being the atomic number and \(\alpha\) the fine-structure constant) has been finalized recently [24; 25], increasing the theoretical accuracy especially in the low-\(Z\) regime. First milestones have been also reached in the calculation of two-loop corrections in stronger Coulomb fields, i.e. for larger values of \(Z\alpha\)[26; 27]. As the experiments are advancing towards heavy ions [28; 29], featuring smaller and smaller characteristic distance scales for the interaction between the bound electron and the nucleons, the effects of other forces may need to be considered as well.
Motivated by these prospects, in this article we investigate vacuum polarization (VP) corrections due to the virtual creation and annihilation of hadrons. The dominant VP contribution arises from virtual \(e^{-}e^{+}\) pair creation, which has been widely investigated in the literature [5; 6; 30; 31] and is well understood. The other leptonic VP effect is due to virtual muons, the contribution of which is suppressed by the square of the electron-to-muon mass ratio [32]. The hadronic VP effect, which arises due to a superposition of different virtual hadronic states, is comparable in magnitude to muonic VP, however, it requires a completely different description since the virtual hadrons interact via the strong force. An effective approach to take into account such effects for the free-electron \(g\) factor is described in e.g. Ref. [33], in which hadronic VP is characterized by the cross section of hadron production via \(e^{-}e^{+}\) annihilation. Following this treatment, we apply the known empirical parametric hadronic polarization function for the photon propagator from Ref. [34] to account for the complete hadronic contribution in case of the bound-electron \(g\) factor.
While in the case of the free electron, the hadronic correction only appears on the two-loop level, as a correction to the electron's electromagnetic self-interaction (see Fig. 1a), in the case of a bound electron it appears already as a one-loop effect (see Fig. 1b). Furthermore, the hadronic VP is boosted approximately as \(Z^{4}\), i.e. by the fourth power of the nuclear charge number, and thus, as we will see later, for heavier ions above \(Z=14\) its contribution is larger than in the case of a free electron [35].
Figure 1: Feynman diagrams representing the leading hadronic VP corrections to the free-electron \(g\) factor (1a) and the bound-electron \(g\) factor (1b). Double lines represent electrons in the electric field of the nucleus and wavy lines with a triangle depict the interaction with the external magnetic field. For the free electron, it is a two-loop process where the self-interaction of the electron is perturbed by the effective hadronic polarization function (shaded bubble). For the bound electron, it is a one-loop correction where the Coulomb interaction with the nucleus (cross) is perturbed by the effective hadronic polarization function.
An effective potential constructed from the parametrized VP function, the hadronic Uehling potential, has been derived in Ref. [36]. We calculate the perturbative correction to the \(g\) factor due to this radial potential employing analytical Dirac-Coulomb wave functions, as well as numerically calculated wave functions accounting for a finite-size nucleus. Analytical formulas are presented, and numerical results are given for hydrogenic systems from H to U\({}^{91+}\). We note that such an approach assumes an infinitely heavy nucleus, i.e. nuclear recoil effects are excluded in our treatment.
We use natural units with \(\hbar=c=1\) for the reduced Planck constant \(\hbar\) and the speed of light \(c\), and \(\alpha=e^{2}\), where \(\alpha\) is the fine-structure constant and \(e\) is the elementary charge. Three-vectors are denoted by bold letters.
## II \(g\) factor corrections
Generally speaking, the \(g\) factor describes the coupling of the electron's magnetic moment \(\mathbf{\mu}\) to its total angular momentum \(\mathbf{J}\). The corresponding first-order Zeeman splitting \(\Delta E\) due to the electron's interaction with an external homogeneous magnetic field \(\mathbf{B}\) is
\[\Delta E=-\left\langle\mathbf{\mu}\cdot\mathbf{B}\right\rangle=g\,\mu_{\rm B}\left\langle \mathbf{J}\cdot\mathbf{B}\right\rangle\,, \tag{1}\]
where \(\mu_{\rm B}=e/(2m_{\rm e})\) is the Bohr magneton of the electron and \(g\) is its \(g\) factor, which depends on the electron configuration.
On the other hand, the relativistic interaction of an electron with the external magnetic field can be derived from the minimal coupling principle in the Dirac equation. In first-order perturbation theory, this leads to the energy shift
\[\Delta E=e\left\langle\mathbf{\alpha}\cdot\mathbf{A}\right\rangle\,, \tag{2}\]
where \(\mathbf{\alpha}\) are the usual Dirac matrices given in terms of the gamma matrices by \(\alpha_{i}=\gamma^{0}\gamma^{i}\)[37] and \(\mathbf{A}\) is the vector potential for the magnetic field, such that \(\mathbf{B}=\nabla\times\mathbf{A}\). Choosing the magnetic field to be directed along the \(z\) axis, one can see that a possible choice for the vector potential is \(\mathbf{A}=[\mathbf{B}\times\mathbf{r}]/2\), where \(\mathbf{r}\) is the position vector. Together with Eq. (1) and (2), one can derive the following general expression for the \(g\) factor [30]:
\[g=\frac{2\kappa m_{\rm e}}{j(j+1)}\int_{0}^{\infty}dr\,rG_{n\kappa}(r)F_{n \kappa}(r)\,, \tag{3}\]
where \(n\) is the principal quantum number of the bound state, \(j=|\kappa|-1/2\) is the total angular momentum quantum number and \(\kappa\) is the relativistic angular momentum quantum number. The functions \(G_{n\kappa}(r),F_{n\kappa}(r)\) are the radial components in the electronic Dirac wave function,
\[\psi_{n\kappa m}(\mathbf{r})=\frac{1}{r}\left(\begin{array}{c}G_{n\kappa}(r)\; \Omega_{\kappa m}(\theta,\varphi)\\ iF_{n\kappa}(r)\;\Omega_{-\kappa m}(\theta,\varphi)\end{array}\right)\,, \tag{4}\]
where \(m\) is the magnetic quantum number and \(r=|\mathbf{r}|\). The spherical spinors \(\Omega_{\pm nm}(\theta,\varphi)\) make up the angular components and are the same for any central potential \(V(r)\)[38].
A straightforward approach for calculating the \(g\) factor shift \(\Delta g^{\rm VP}\) due to vacuum polarization (VP), is to solve the radial Dirac equation numerically with the inclusion of the VP effect, and then substituting the perturbed functions \(G_{n\kappa}^{\rm VP}(r),F_{n\kappa}^{\rm VP}(r)\) into Eq. (3). The difference between the pertubed and the unperturbed \(g\) factor gives the corresponding shift
\[\Delta g^{\rm VP}=g^{\rm VP}-g\,. \tag{5}\]
However, we will apply a different method to investigate the hadronic \(g\) factor shift. As shown in Ref. [31], owing to the properties of Dirac wave functions, the \(g\) factor in Eq. (3) can be expressed through the energy eigenvalues \(E_{n\kappa}\),
\[g=-\frac{\kappa}{2j(j+1)}\left(1-2\kappa\frac{\partial E_{n\kappa}}{\partial m _{\rm e}}\right)\,, \tag{6}\]
if the potential \(V(r)\) does not depend on the electron mass \(m_{\rm e}\). This formula was used successfully, e.g., to investigate the finite nuclear size effect in Ref. [31]. We apply this new approach to investigate the vacuum polarization effect, described by an effective potential. Having a small perturbation \(\delta V(r)\) to the nucleus potential (like the hadronic Uehling potential [36]), the \(g\) factor shift can be shown to be [31]
\[\Delta g^{\rm VP}=-\frac{\kappa^{2}}{j(j+1)m_{\rm e}}\left\langle r\frac{ \partial\delta V(r)}{\partial r}\right\rangle\,. \tag{7}\]
For the relativistic ground state and a point-like nucleus, this expectation value can be evaluated further to obtain
\[\Delta g^{\rm VP}_{1s}=\frac{4(1+2\gamma)}{3m_{\rm e}}\Delta E^{\rm VP}_{1s} \,-\frac{8Z\alpha}{3}\left\langle r\delta V\right\rangle_{1s}\,, \tag{8}\]
where \(\gamma=\sqrt{1-(Z\alpha)^{2}}\) and \(\Delta E_{1s}=\left\langle\delta V\right\rangle_{1s}\) is the corresponding energy shift in first-order perturbation theory. Since the second term on the right-hand side of Eq. (8) is \(Z\alpha\) times smaller than the first term, the \(g\) factor shift can be approximated for light ions (\(Z\alpha\ll 1\)) with the formula:
\[\Delta g^{\rm VP}_{1s}\approx\frac{4(1+2\gamma)}{3m_{\rm e}}\Delta E^{\rm VP}_ {1s}\,. \tag{9}\]
A similar expression also appeared in Ref. [23; 31] in a different context, studying the finite size effect. However, we will investigate the applicability of this formula as an approximation for calculating the \(g\) factor shift due to VP effects for light ions.
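As a quick consistency check of Eq. (6) (an illustrative sketch, not part of the original derivation), one can insert the standard analytic Dirac-Coulomb point-nucleus energies, which are proportional to \(m_{\rm e}\); for the \(1s\) state this reproduces the well-known value \(g_{1s}=\tfrac{2}{3}(1+2\gamma)\).

```python
# Sketch: evaluate Eq. (6) with the analytic Dirac-Coulomb energies
# E_{n kappa} = m_e [1 + (Z alpha)^2 / (n - |kappa| + sqrt(kappa^2 - (Z alpha)^2))^2]^(-1/2).
# Since E is proportional to m_e, dE/dm_e = E/m_e.
import numpy as np

ALPHA = 1/137.035999

def dirac_energy_over_me(n, kappa, Z):
    g_k = np.sqrt(kappa**2 - (Z*ALPHA)**2)
    return 1.0/np.sqrt(1.0 + (Z*ALPHA)**2/(n - abs(kappa) + g_k)**2)

def g_factor(n, kappa, Z):
    j = abs(kappa) - 0.5
    dE_dme = dirac_energy_over_me(n, kappa, Z)       # E ~ m_e  =>  dE/dm_e = E/m_e
    return -kappa/(2*j*(j + 1))*(1 - 2*kappa*dE_dme)  # Eq. (6)

for Z in (1, 14, 92):
    gamma = np.sqrt(1 - (Z*ALPHA)**2)
    print(Z, g_factor(1, -1, Z), 2*(1 + 2*gamma)/3)   # the two columns agree
```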
### Leptonic vacuum polarization correction to the \(g\) factor
The leptonic VP correction to the bound-electron \(g\) factor is well known. The corresponding diagrams are shown in Fig. 2 and can be divided into two groups: the electric loop (EL) and the magnetic loop (ML) contribution. The vacuum polarization effect in the EL contribution (Fig. 2a and Fig. 2b) is equivalent to a perturbation in the interaction between the bound electron and the nucleus, and thus can be described by an effective perturbing potential \(\delta V_{\rm EL}(\mathbf{r})\). This allows the usage of perturbation theory and the simple inclusion of hadronic VP effects to the bound-electron \(g\) factor shift, using Eq. (7). As can be seen in Ref. [6; 39], the ML contribution (Fig. 2c) is \(Z\alpha\) times smaller than EL in the leading order, and is not the subject of the current work.
The vacuum loop in the EL contribution can be expanded in powers of the nuclear coupling strength \(Z\alpha\), which corresponds to a free loop interacting with the nucleus. Due to Furry's theorem, only odd powers of \(Z\alpha\) contribute [39; 40]. The leading term in this expansion is described by the Uehling potential \(\delta V_{\rm Ue}(\mathbf{r})\) and the contributions of higher order in \(Z\alpha\) are summarized to the Wichmann-Kroll potential \(\delta V_{\rm WK}(\mathbf{r})\), such that the effective perturbing potential is given by \(\delta V_{\rm EL}(\mathbf{r})=\delta V_{\rm Ue}(\mathbf{r})+\delta V_{\rm WK}(\mathbf{r})\)[39]. The diagrams in Fig. 2a and Fig. 2b contribute equally to the EL correction. In this paper, we will investigate the leading contribution to the vacuum polarization due to the Uehling potential: \(\delta V_{\rm EL}(\mathbf{r})\approx\delta V_{\rm Ue}(\mathbf{r})\).
In case of leptonic vacuum loops, the well-known leptonic Uehling potential is given by [41]
\[\delta V_{\rm Ue}(\mathbf{r})=-\frac{2\alpha(Z\alpha)}{3\pi}\int d^{3}x\,\rho(\bm {x})\,\frac{K_{1}(2m_{\rm l}|\mathbf{r}-\mathbf{x}|)}{|\mathbf{r}-\mathbf{x}|}\,, \tag{10}\]
where \(\rho(\mathbf{x})\) denotes the nuclear charge distribution normalized to unity, \(m_{\rm l}\) is the mass of the virtual particle in the fermionic loop and \(K_{1}(x)\) is given by
\[K_{1}(x)=\int_{1}^{\infty}dt\ e^{-xt}\left(1+\frac{1}{2t^{2}}\right)\frac{ \sqrt{t^{2}-1}}{t^{2}}\,. \tag{11}\]
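For a numerical illustration (a sketch, not from the original work), \(K_{1}(x)\) can be evaluated by direct quadrature; its rapid decay with increasing argument is what renders the Uehling potential short ranged.

```python
import numpy as np
from scipy.integrate import quad

def K1(x):
    """K_1(x) of Eq. (11), integrated numerically over t in [1, infinity)."""
    integrand = lambda t: np.exp(-x*t)*(1 + 1/(2*t**2))*np.sqrt(t**2 - 1)/t**2
    val, _ = quad(integrand, 1, np.inf)
    return val

for x in (0.01, 0.1, 1.0, 10.0):
    print(f"K1({x}) = {K1(x):.6f}")   # decreases roughly exponentially with x
```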
The \(g\) factor shift of a bound electron in the ground state can be calculated analytically for a point-like nucleus and was already derived in [30]. We will show that one arrives to the same result using the approach in Eq. (7). Using the leptonic Uehling potential for a point-like nucleus (\(\rho(\mathbf{x})=\delta^{(3)}(\mathbf{x})\)) [30],
\[\delta V_{\rm point}^{\rm lept.}(r)=-\frac{2\alpha(Z\alpha)}{3\pi r}\ K_{1}( 2m_{\rm l}r)\,, \tag{12}\]
and the radial components of the electronic wave function in the ground state [32], one obtains from Eq. (7)
\[\Delta g_{\rm point}^{\rm lept.}(1s)=-\frac{8\alpha(Z\alpha)}{3\pi s}\left[I _{133}-\frac{1}{3}I_{233}+\frac{Z\alpha s}{2\gamma}\left(I_{122}-\frac{1}{3}I _{222}\right)\right]\,. \tag{13}\]
\(I_{abc}\) is a modification of the base integral given in Ref. [30], see Appendix A, and \(s=m_{\rm e}/m_{\rm l}\) is the ratio of the electron and the loop particle masses.
The leading order \(Z\alpha\) expansion is given by
\[\Delta g_{\rm point}^{\rm lept.}(1s) =\frac{\alpha}{\pi}\left[-\frac{16s^{2}(Z\alpha)^{4}}{15}+\frac {5\pi s^{3}(Z\alpha)^{5}}{9}\right.\] \[\quad+\left(\frac{16s^{2}}{15}\ln(2sZ\alpha)-\frac{116s^{2}}{75}- \frac{16s^{4}}{7}\right)(Z\alpha)^{6}\] \[\quad+\left(-\frac{5\pi s^{3}}{9}\ln\left(\frac{sZ\alpha}{2} \right)-\frac{8\pi s^{3}}{27}+\frac{7\pi s^{5}}{8}\right)(Z\alpha)^{7}\] \[\quad+\mathcal{O}\left((Z\alpha)^{8}\right)\Big{]}\,. \tag{14}\]
For \(s=1\), this is exactly the same result as in Ref. [30], however, obtained with a different method. In the case of muonic VP, thus \(s=m_{\rm e}/m_{\mu}\), the results for a finite size nucleus were obtained numerically in Ref. [39].
In the next Subsection, we will use this approach to derive an analytic expression for the hadronic VP correction to the bound-electron \(g\) factor.
### Hadronic vacuum polarization correction to the \(g\) factor
As discussed in [33; 34; 36], the hadronic vacuum polarization function can be constructed semi-empirically from experimental data of \(e^{-}e^{+}\) annihilation cross sections. The whole hadronic polarization function is parametrized for seven regions of momentum transfer and is given e.g. in Ref. [34]. In Ref. [36], it was found that only the first region of parametrization is significant for the hadronic energy shift calculations. This is also clear from the physical point of view, since atomic physics is dominated by low energies around eV - keV. Thus, we will use the analytic hadronic Uehling potential introduced in Ref. [36] for our calculations. For a point-like nucleus it is given by
\[\delta V_{\rm point}^{\rm had.}(r)=-\frac{2Z\alpha}{r}B_{1}E_{1}\left(\frac{r} {\sqrt{C_{1}}}\right)\,, \tag{15}\]
Figure 2: Feynman diagrams representing the VP correction to the bound-electron \(g\) factor. Double lines represent electrons in the electric field of the nucleus and wavy lines with a triangle depict the interaction with the external magnetic field.
with the coefficients \(B_{1}=0.0023092\) and \(C_{1}=3.9925370\) GeV\({}^{-2}\)[34; 36] and the exponential integral \(E_{1}(x)\) which can be generalized for \(n=0,1,2,...\) by [42]
\[E_{n}(x)=\int_{1}^{\infty}dt\ \frac{e^{-xt}}{t^{n}}\,. \tag{16}\]
The values for \(B_{1}\) and \(C_{1}\) are taken from the most recent parametrization in Ref. [34] and will be used for the calculations. The error of the numerical results is estimated by comparison with an older parametrization in Ref. [43], as was done in Ref. [36].
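As an illustrative sketch (not part of the original calculation), the point-nucleus hadronic Uehling potential of Eq. (15) can be evaluated directly; the conversion of radii from fm to GeV\({}^{-1}\) via \(\hbar c\approx 0.197327\) GeV fm and the sample radii are assumptions of the sketch.

```python
import numpy as np
from scipy.special import exp1     # exponential integral E_1

ALPHA = 1/137.035999
B1, C1 = 0.0023092, 3.9925370      # C1 in GeV^-2, parametrization of Ref. [34]
HBARC = 0.197327                   # GeV fm

def V_had_point(r_fm, Z):
    """Hadronic Uehling potential, Eq. (15), in GeV for r given in fm."""
    r = r_fm/HBARC                 # fm -> GeV^-1 in natural units
    return -2*Z*ALPHA/r*B1*exp1(r/np.sqrt(C1))

# the potential is extremely short ranged: sqrt(C1) ~ 2 GeV^-1 ~ 0.4 fm
for r_fm in (0.1, 1.0, 5.0, 10.0):
    print(f"r = {r_fm:5.1f} fm   V_had = {V_had_point(r_fm, Z=1):.3e} GeV")
```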
The corresponding hadronic Uehling potential for an extended nucleus with spherical charge distribution \(\rho(\mathbf{x})\) is obtained by the convolution [36]
\[\delta V_{\rm{fns}}^{\rm{had.}}(\mathbf{r}) =\int d^{3}x\ \rho(\mathbf{x})\ \delta V_{\rm{point}}^{\rm{had.}}(\mathbf{r}-\mathbf{x})\] \[=-\frac{4\pi Z\alpha B_{1}\sqrt{C_{1}}}{r}\int_{0}^{\infty}dx\ x \rho(x)D_{2}^{-}(r,x), \tag{17}\]
where \(x=|\mathbf{x}|\) and
\[D_{n}^{\pm}(r,x)=E_{n}\left(\frac{|r-x|}{\sqrt{C_{1}}}\right)\pm E_{n}\left( \frac{|r+x|}{\sqrt{C_{1}}}\right)\,. \tag{18}\]
As in our previous work [36], we will consider the homogeneously charged sphere as the model for the extended nucleus with root-mean-square (RMS) radii taken from Ref. [44]. The charge distribution \(\rho(r)\) is given by
\[\rho(r)=\frac{3}{4\pi R^{3}}\,\theta(R-r)\,, \tag{19}\]
where \(\theta(x)\) is the Heaviside step function and the effective radius \(R\) is related to the RMS nuclear charge radius \(R_{\rm{rms}}\) via \(R=\sqrt{5/3}\,R_{\rm{rms}}\). The corresponding hadronic Uehling potential is given analytically in [36], see Appendix B.
Let us turn to the evaluation of the leading hadronic VP contribution to the bound-electron \(g\) factor, depicted in Fig. 1b. In the low-energy limit, the hadronic Uehling potential is given by [45]
\[\delta V_{\rm{non-rel.}}^{\rm{had.}}(\mathbf{x})=-4\pi Z\alpha B_{1}C_{1}\delta^ {(3)}(\mathbf{x})\,. \tag{20}\]
Using Eq. (7) and the non-relativistic expectation value of the delta function, the leading order in \(Z\alpha\) of the hadronic \(g\) factor shift for general \(ns\) states is found to be [46]
\[\Delta g_{\rm{non-rel.}}^{\rm{had.}}(ns) =-\frac{4}{3m_{\rm{e}}}\left\langle 12\pi Z\alpha B_{1}C_{1} \delta^{(3)}(\mathbf{x})\right\rangle_{ns}\] \[=-\frac{16(Z\alpha)^{4}m_{\rm{e}}^{2}}{n^{3}}B_{1}C_{1}\,. \tag{21}\]
For the \(1s\) state, a fully relativistic expression for the point-like nucleus can be given. Using the hadronic Uehling potential in Eq. (15) and the relativistic wave function of the ground state, one obtains with Eq. (7):
\[\Delta g_{\rm{point}}^{\rm{had.}}(1s)=\frac{4}{3m_{\rm{e}}}\Delta E_{\rm{ point}}^{\rm{had.}}(1s)-\frac{8B_{1}(Z\alpha)^{2}(2\lambda\sqrt{C_{1}})^{2 \gamma}}{3\gamma(1+2\lambda\sqrt{C_{1}})^{2\gamma}}\,, \tag{22}\]
where \(\lambda=Z\alpha m_{\rm{e}}\) and \(\Delta E_{\rm{point}}^{\rm{had.}}(1s)\) is the analytical energy shift for a point-like nucleus given in Ref. [36],
\[\Delta E_{\rm{point}}^{\rm{had.}}(1s)= -\frac{Z\alpha\lambda(2\lambda\sqrt{C_{1}})^{2\gamma}B_{1}}{ \gamma^{2}}\] \[\times{}_{2}F_{1}\left(2\gamma,2\gamma;1+2\gamma;-2\lambda\sqrt{C _{1}}\right)\,, \tag{23}\]
with \({}_{2}F_{1}(a,b;c;z)\) being the hypergeometric function [42]. The expansion of this expression up to \(6^{\rm{th}}\) order in \(Z\alpha\) is given by
\[\Delta g_{\rm{point}}^{\rm{had.}}(1s)= -16B_{1}C_{1}m_{\rm{e}}^{2}(Z\alpha)^{4}+\frac{512B_{1}C_{1}^{3/ 2}m_{\rm{e}}^{3}(Z\alpha)^{5}}{9}\] \[-\frac{16B_{1}C_{1}m_{\rm{e}}^{2}(Z\alpha)^{6}}{3}\Big{[}2+30C_{1 }m_{\rm{e}}^{2}\] \[\quad-3\ln\!\Big{(}2m_{\rm{e}}Z\alpha\sqrt{C_{1}}\Big{)}\Big{]}+ \mathcal{O}\left((Z\alpha)^{7}\right)\,, \tag{24}\]
and it coincides with the non-relativistic approximation in Eq. (21) to order \((Z\alpha)^{4}\).
A similar relativistic calculation for the \(2s\) state yields
\[\Delta g_{\rm{point}}^{\rm{had.}}(2s)= -2B_{1}C_{1}m_{\rm{e}}^{2}(Z\alpha)^{4}+\frac{64B_{1}C_{1}^{3/2}m_ {\rm{e}}^{3}(Z\alpha)^{5}}{9}\] \[-\frac{B_{1}C_{1}m_{\rm{e}}^{2}(Z\alpha)^{6}}{24}\Big{[}41+420C_{1 }m_{\rm{e}}^{2}\] \[\quad-48\ln\!\Big{(}m_{\rm{e}}Z\alpha\sqrt{C_{1}}\Big{)}\Big{]}+ \mathcal{O}\left((Z\alpha)^{7}\right)\,. \tag{25}\]
The leading orders of Eq. (24) and Eq. (25) satisfy the non-relativistic relationship in Eq. (21),
\[\Delta g_{\rm{non-rel.}}^{\rm{had.}}(ns)=\frac{1}{n^{3}}\Delta g_{\rm{non-rel. }}^{\rm{had.}}(1s)\,. \tag{26}\]
### Hadronic vacuum polarization correction to the reduced \(g\) factor
Additionally, we investigate hadronic effects on the weighted difference of the \(g\) factor and the bound-electron energy \(E\) of H-like ions, called reduced \(g\) factor,
\[\tilde{g}=g-\frac{4(1+2\gamma)}{3m_{\rm{e}}}E\,, \tag{27}\]
put forward in Ref. [23] for a possible novel determination of the fine-structure constant, and for testing physics beyond the standard model [20]. It was shown there that
the detrimental nuclear structure contributions featuring large uncertainties can be effectively suppressed in the above combination of the \(g\) factor and level energy of the hydrogenic ground state. The question arises whether the same can be said about the hadronic VP corrections investigated in the present article.
The hadronic VP correction to the reduced \(g\) factor for a point-like nucleus can be found analytically using Eq. (22) and Eq. (23). The leading order \(Z\alpha\) expansion is given by
\[\tilde{g}_{\rm point}^{\rm had.}(1s) = \Delta g_{\rm point}^{\rm had.}(1s)-\frac{4(1+2\gamma)}{3m_{\rm e }}\Delta E_{\rm point}^{\rm had.}(1s) \tag{28}\] \[= \frac{128}{9}B_{1}C_{1}^{3/2}m_{\rm e}^{3}(Z\alpha)^{5}-64B_{1}C _{1}^{2}m_{\rm e}^{4}(Z\alpha)^{6}\] \[+{\cal O}\left((Z\alpha)^{7}\right)\,.\]
Thus, the leading term of order \((Z\alpha)^{4}\) in \(\Delta g_{\rm point}^{\rm had.}(1s)\) cancels such that the hadronic VP contribution to the reduced \(g\) factor is indeed small for practical purposes. This also supports the approximation in Eq. (9). Therefore, we may conclude that hadronic effects do not hinder the extraction of \(\alpha\) or detailed tests of QED and standard model extensions via the measurement of \(\tilde{g}\).
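The cancellation can be illustrated numerically with the closed forms of Eqs. (22) and (23); the following sketch (an illustration for a point-like nucleus, not part of the paper) compares the reduced combination with the leading \((Z\alpha)^{5}\) term of Eq. (28) for two sample values of \(Z\).

```python
import numpy as np
from scipy.special import hyp2f1

ALPHA, ME = 1/137.035999, 0.51099895e-3        # electron mass in GeV
B1, C1 = 0.0023092, 3.9925370                  # C1 in GeV^-2

def hadronic_shift_1s(Z):
    za = Z*ALPHA
    gam = np.sqrt(1 - za**2)
    lam = za*ME
    z = 2*lam*np.sqrt(C1)
    dE = -za*lam*z**(2*gam)*B1/gam**2*hyp2f1(2*gam, 2*gam, 1 + 2*gam, -z)   # Eq. (23)
    dg = 4*dE/(3*ME) - 8*B1*za**2/(3*gam)*(z/(1 + z))**(2*gam)              # Eq. (22)
    return dg, dE

for Z in (5, 20):
    dg, dE = hadronic_shift_1s(Z)
    gam = np.sqrt(1 - (Z*ALPHA)**2)
    g_reduced = dg - 4*(1 + 2*gam)/(3*ME)*dE          # hadronic shift of Eq. (27)
    lead = 128/9*B1*C1**1.5*ME**3*(Z*ALPHA)**5        # leading term of Eq. (28)
    # g_reduced is strongly suppressed relative to dg and is of the order of 'lead'
    print(Z, dg, g_reduced, lead)
```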
### Hadronic vacuum polarization correction to the weighted \(g\) factor difference of H- and Li-like ions
Another quantity of interest is the weighted difference of the \(g\) factors of the Li-like and H-like charge states of the same element,
\[\delta_{\Xi}g=g(2s)-\Xi\,g(1s)\,, \tag{29}\]
where \(g(2s)\) is the \(g\) factor of the Li-like ion and \(g(1s)\) is the \(g\) factor of the H-like ion. For light elements, the parameter \(\Xi\) can be calculated to great accuracy by [47; 21]
\[\Xi=2^{-2\gamma-1}\left[1+\frac{3}{16}(Z\alpha)^{2}\right]\left(1-\frac{2851} {1000}\frac{1}{Z}+\frac{107}{100}\frac{1}{Z^{2}}\right)\,. \tag{30}\]
This weighted (or specific) difference was introduced to suppress uncertainties arising from the nuclear charge radius and further nuclear structural effects [48]. Therefore, bound-state QED theory can be investigated more accurately in \(g\) factor experiments combining H- and Li-like ions than with the individual ions alone.
As we have seen, the hadronic VP correction to \(\delta_{\Xi}g\) for a point-like nucleus can be found analytically. We approximate \(\Delta g_{\rm point}^{\rm had.}(2s)\) of the Li-like ion with the expression in Eq. (25) for the H-like ion. Since there are no electron-electron interactions in this approximation, we have to neglect the terms of relative orders \(1/Z\) and \(1/Z^{2}\) in Eq. (30). We note that the residual weight
\[\Xi_{0}=2^{-2\gamma-1}\left[1+\frac{3}{16}(Z\alpha)^{2}\right]\,, \tag{31}\]
exactly cancels the first two leading orders \((Z\alpha)^{4}\) and \((Z\alpha)^{5}\):
\[\delta_{\Xi_{0}}g_{\rm point}^{\rm had.} = \Delta g_{\rm point}^{\rm had.}(2s)-\Xi_{0}\,\Delta g_{\rm point }^{\rm had.}(1s) \tag{32}\] \[= \frac{5}{2}B_{1}C_{1}^{3/2}m_{\rm e}^{4}(Z\alpha)^{6}+{\cal O} \left((Z\alpha)^{7}\right)\,.\]
Therefore, we can conclude that hadronic VP effects are also largely cancelled in the above specific difference. A similar conclusion can be drawn for the case of the specific difference introduced for a combination of H- and B-like ions [22]. This result is well understood, since nuclear and hadronic VP contributions are both short-range effects with a similar behavior.
## III Numerical results
As mentioned in Ref. [36], the hadronic VP contribution to the energy shift is about \(1/0.665\approx 1.5\) times smaller than the muonic VP contribution in the case of the Uehling term. This can also be confirmed for the \(g\) factor shift. Comparing the non-relativistic approximation for the hadronic \(g\) factor shift \(\Delta g_{\rm non-rel.,point}^{\rm had.\ VP}\) in Eq. (21) with the first term of the expression for the muonic \(g\) factor shift \(\Delta g_{\rm non-rel.,point}^{\rm muonic\ VP}\) in Eq. (14) yields, for hydrogen in the ground state,
\[\Delta g_{\rm non-rel.,point}^{\rm had.\ VP}(1s) = -1.092(14)\times 10^{-16}\] \[= 0.664(9)\ \Delta g_{\rm non-rel.,point}^{\rm muonic\ VP}(1s)\,.\]
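These numbers can be reproduced with a few lines of code; the sketch below (an illustration, not part of the original work) evaluates Eq. (21) for hydrogen and the leading muonic term of Eq. (14), assuming standard values of the electron and muon masses.

```python
import numpy as np

ALPHA = 1/137.035999
ME, MMU = 0.51099895e-3, 0.1056583755        # electron and muon masses in GeV
B1, C1 = 0.0023092, 3.9925370                # C1 in GeV^-2

Z, n = 1, 1
dg_had = -16*(Z*ALPHA)**4*ME**2*B1*C1/n**3                  # Eq. (21)
dg_muo = -(ALPHA/np.pi)*16*(ME/MMU)**2*(Z*ALPHA)**4/15      # leading term of Eq. (14), s = m_e/m_mu

print(f"hadronic : {dg_had:.3e}")            # approximately -1.09e-16
print(f"muonic   : {dg_muo:.3e}")            # approximately -1.64e-16
print(f"ratio    : {dg_had/dg_muo:.3f}")     # approximately 0.66
```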
The values for the hadronic \(g\) factor shift with an extended nucleus were calculated numerically using two different methods, both yielding the same results within the given uncertainties. The first method consists of calculating the expectation value in Eq. (7) with the FNS hadronic Uehling potential and the semi-analytic wave functions of a homogeneously charged spherical nucleus given in Ref. [49]. As a consistency check, these results were reproduced by using the approach of solving the radial Dirac equation numerically with the inclusion of the FNS potential, and substituting the resulting large and small radial wave function components into Eq. (3) and Eq. (5). The results for the hydrogenlike systems H, Si, Ca, Xe, Kr, W, Pb, Cm and U are given in Table 1. A diagrammatic representation is shown in Fig. 3. We note that for \(Z=14\) and above, the magnitude of the hadronic vacuum polarization terms considered in this work exceeds the hadronic contribution to the free-electron \(g\) factor [50]. However, it is important to mention that the uncertainty of the leading finite nuclear size correction to the \(g\) factor is approximately an order of magnitude larger than the hadronic VP effect for all elements considered (see e.g. [23]), hindering the identification of the effect.
The errors given in Tables 1 and 2 are based on the uncertainty of the nuclear root-mean-square radii
given in Ref. [44] and an assumed uncertainty for the parameters \(B_{1}\) and \(C_{1}\) as described in Section II.2. The total error is dominated by the assumed uncertainty of \(B_{1}\) and \(C_{1}\). Owing to the closed analytical expression for the hadronic Uehling potential, numerical uncertainties are negligible. For the results \(\Delta g^{\rm had.}_{\rm approx,fns}\) using the approximate formula in Eq. (9), the hadronic energy shifts \(\Delta E^{\rm approx}_{\rm rel,fns}\) from Ref. [36] and their respective uncertainties are utilized. For \(Z=92\), the hadronic energy shift, which is not given in Ref. [36], was calculated using the same method.
One can see that the non-relativistic approximation in Eq. (21) represents a lower bound for the hadronic \(g\) factor shift and is not sufficient for large atomic numbers \(Z\). On the other hand, the analytic expression for the relativistic \(g\) factor shift in the case of a point-like nucleus in Eq. (22) represents an upper bound and also differs significantly from the numerical results for extended nuclei. We conclude that the effects due to a finite size nucleus need to be included in a precision calculation of the hadronic VP effect. At the present time, the uncertainty stemming from the assumed nuclear charge distribution model limits the accuracy to about 1% [36]. At the same time, the absence of more precise parametrizations of the hadronic polarization function in the low-energy regime also limits the accuracy to about 1%, see Table 1. Thus, the quoted errors capture, to a large extent, all known limitations on the accuracy of the hadronic \(g\) factor shift.
The simple approximate formula in Eq. (9) is found to be a good approximation for atomic numbers below \(Z=14\). The error is less than 1% for atomic numbers up to \(Z=36\).
As shown in Sections II.3 and II.4, the hadronic VP contribution to the reduced and the weighted \(g\) factor in the case of a point-like nucleus is at least \(Z\alpha\) times smaller than the regular hadronic \(g\) factor shift, see Eq. (24). In fact, numerical results for extended nuclei confirm that the hadronic contribution to both quantities does not differ significantly from zero for small atomic numbers below \(Z=36\) at the current level of accuracy. To see this, note that the numerical results for the finite-size reduced and weighted \(g\) factor can be obtained from Tables 1 and 2 via
\[\tilde{g}^{\rm had.}_{\rm fns}(1s) = \Delta g^{\rm had.}_{\rm rel,fns}(1s)-\Delta g^{\rm had.}_{\rm approx,fns}(1s)\,, \tag{34}\] \[\delta_{\Xi_{0}}g^{\rm had.}_{\rm fns} = \Delta g^{\rm had.}_{\rm rel,fns}(2s)-\Xi_{0}\,\Delta g^{\rm had.}_{\rm rel,fns}(1s)\,, \tag{35}\]
respectively. For \(Z=36\), one obtains
\[\tilde{g}^{\rm had.}_{\rm fns}(1s) = -32(47)\times 10^{-13}\,, \tag{36}\] \[\delta_{\Xi_{0}}g^{\rm had.}_{\rm fns} = -1(64)\times 10^{-14}\,. \tag{37}\]
Even for larger atomic numbers, hadronic effects do not constrain high-precision tests of QED via the measurement of the reduced and weighted \(g\) factor.
Recently, a high-precision measurement of the \(g\) factor difference of two Ne isotopes was performed [12]. It was shown that QED effects mostly cancel, whereas nuclear effects like the nuclear recoil are well observable. In the following, we investigate hadronic VP contributions to the bound-electron \(g\) factor of the isotopes \({}^{20}\)Ne\({}^{9+}\) and \({}^{22}\)Ne\({}^{9+}\) in the ground state.
First, we calculate the hadronic VP correction to the \(g\) factor difference stemming from the different nuclear size of the isotopes. Nuclear recoil effects are excluded for now, and nuclear charge radii are taken from Ref. [44]. Using \(R_{\rm rms}=3.0055(21)\) fm for \({}^{20}\)Ne\({}^{9+}\) and \(R_{\rm rms}=2.9525(40)\) fm for \({}^{22}\)Ne\({}^{9+}\), the fully relativistic result for both isotopes is
\[\Delta g^{\rm had.}_{\rm rel,fns}\left(1s,{}^{20}{\rm Ne}^{9+} \right) = -1.133(14)\times 10^{-12}\,, \tag{38}\] \[\Delta g^{\rm had.}_{\rm rel,fns}\left(1s,{}^{22}{\rm Ne}^{9+} \right) = -1.133(15)\times 10^{-12}\,. \tag{39}\]
This is approximately a third of the hadronic contribution to the free-electron \(g\) factor given in Extended Table 1 of Ref. [12]. Thus, we conclude that at the given level of accuracy, hadronic effects of the bound electron also do not hinder the precise calculation of the isotopic shift of \({}^{20}\)Ne\({}^{9+}\) and \({}^{22}\)Ne\({}^{9+}\).
To also estimate the hadronic VP correction stemming from the different nuclear masses of the isotopes, including nuclear recoil effects, we use the non-relativistic formula [35]
\[\Delta g^{\rm had.}_{\rm recoil}(1s)=\left(\frac{m_{\rm r}}{m_{\rm e}}\right)^ {2}\Delta g^{\rm had.}_{\rm non-rel.}(1s)\,, \tag{40}\]
with \(m_{\rm r}=m_{\rm N}m_{\rm e}/(m_{\rm N}+m_{\rm e})\) being the reduced mass for an isotope with nuclear mass \(m_{\rm N}\). This is a reasonable approximation since the non-relativistic result for Ne (\(Z=10\)), using Eq. (21), is
\[\Delta g^{\rm had.}_{\rm non-rel.}(1s,Z=10)=-1.092(14)\times 10^{-12}\,. \tag{41}\]
Using atomic masses from Ref. [51], we obtain \(m_{\rm r}(^{20}{\rm Ne}^{9+})=0.99997m_{\rm e}\) and \(m_{\rm r}(^{22}{\rm Ne}^{9+})=0.99998m_{\rm e}\), such that to first order:
\[\Delta g^{\rm had.}_{\rm recoil}(1s,{}^{20}{\rm Ne}^{9+}) = -1.092(14)\times 10^{-12}\,, \tag{42}\] \[\Delta g^{\rm had.}_{\rm recoil}(1s,{}^{22}{\rm Ne}^{9+}) = -1.092(14)\times 10^{-12}\,. \tag{43}\]
Thus, the nuclear recoil contribution to the hadronic VP correction likewise cannot be resolved at the given level of accuracy.
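The reduced-mass scaling of Eq. (40) can be reproduced with a few lines of Python; the atomic masses below are standard tabulated values quoted to limited precision and stand in for those of Ref. [51]:

```python
# Reduced-mass scaling of the non-relativistic hadronic VP shift, Eq. (40).
M_E = 5.4857990907e-4                                        # electron mass in atomic mass units
atomic_mass = {"20Ne": 19.9924401762, "22Ne": 21.9913851}    # approximate tabulated values

dg_nonrel_Z10 = -1.092e-12                                   # Eq. (41), point-like, Z = 10

for iso, m_atom in atomic_mass.items():
    m_nuc = m_atom - 10 * M_E             # bare nucleus: strip the 10 electrons (binding neglected)
    m_r = m_nuc * M_E / (m_nuc + M_E)     # reduced mass of the bound electron
    ratio = m_r / M_E
    print(iso, f"m_r/m_e = {ratio:.5f}", f"shift = {dg_nonrel_Z10 * ratio**2:.4e}")
```

Both isotopes give \(-1.092\times 10^{-12}\) within the quoted uncertainty, in line with Eqs. (42) and (43).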
## IV Summary
Hadronic vacuum polarization corrections to the bound-electron \(g\) factor have been calculated, employing a hadronic polarization function constructed from empirical data on electron-positron annihilation into hadrons. We have found that for a broad range of H-like ions, this one-loop effect is considerably larger than hadronic VP for the free electron (see Fig. 1a). Hadronic effects will be observable in future bound-electron \(g\) factor experiments once nuclear charge radii and charge distributions
are substantially better known. We have also found that the hadronic effect does not pose a limitation on testing QED or physics beyond the standard model, or on determining fundamental constants through specific differences of \(g\) factors for different ions, or through the reduced \(g\) factor. Finally, the analytic hadronic Uehling potential proves to be very useful and can be applied to further atomic systems, e.g. positronium, or the hyperfine structure.
## Acknowledgements
E. D. would like to thank the colleagues at the Max Planck Institute for Nuclear Physics, especially the theory division led by Christoph H. Keitel, for the hospitality during this work. We thank S. Breidenbach and H. Cakir for insightful conversations, and H. Cakir for assistance with numerical computations. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 273811115 - SFB 1225.
\begin{table}
\begin{tabular}{l l} \(Z\) & \(\Delta g^{\rm had.}_{\rm rel.,fns}(2s)\) \\ \hline
1 & \(-1.366(17)\times 10^{-17}\) \\
14 & \(-5.673(71)\times 10^{-13}\) \\
20 & \(-2.542(32)\times 10^{-12}\) \\
36 & \(-3.583(45)\times 10^{-11}\) \\
54 & \(-2.966(37)\times 10^{-10}\) \\
74 & \(-2.189(27)\times 10^{-9}\) \\
82 & \(-4.728(59)\times 10^{-9}\) \\
92 & \(-1.213(15)\times 10^{-8}\) \\ \end{tabular}
\end{table}
Table 2: Results for the hadronic VP contribution to the \(g\) factor shift of the bound electron in the \(2s\) state.
\begin{table}
\begin{tabular}{l l l l l l} \(Z\) & \(R_{\rm rms}\) [fm] & \(\Delta g^{\rm had.}_{\rm non-rel.,point}(1s)\) & \(\Delta g^{\rm had.}_{\rm rel.,point}(1s)\) & \(\Delta g^{\rm had.}_{\rm approx.,fns}(1s)\) & \(\Delta g^{\rm had.}_{\rm rel.,fns}(1s)\) \\ \hline
1 & \(0.8783(86)\) & \(-1.092(14)\times 10^{-16}\) & \(-1.093(14)\times 10^{-16}\) & \(-1.093(13)\times 10^{-16}\) & \(-1.093(13)\times 10^{-16}\) \\
14 & \(3.1224(24)\) & \(-4.196(53)\times 10^{-12}\) & \(-4.616(57)\times 10^{-12}\) & \(-4.490(56)\times 10^{-12}\) & \(-4.497(56)\times 10^{-12}\) \\
20 & \(3.4776(19)\) & \(-1.748(22)\times 10^{-11}\) & \(-2.109(25)\times 10^{-11}\) & \(-1.989(25)\times 10^{-11}\) & \(-1.996(25)\times 10^{-11}\) \\
36 & \(4.1884(22)\) & \(-1.835(23)\times 10^{-10}\) & \(-3.263(39)\times 10^{-10}\) & \(-2.664(33)\times 10^{-10}\) & \(-2.696(34)\times 10^{-10}\) \\
54 & \(4.7859(48)\) & \(-9.29(12)\times 10^{-10}\) & \(-3.291(35)\times 10^{-9}\) & \(-2.004(25)\times 10^{-9}\) & \(-2.065(26)\times 10^{-9}\) \\
74 & \(5.3658(23)\) & \(-3.275(41)\times 10^{-9}\) & \(-3.568(32)\times 10^{-8}\) & \(-1.261(15)\times 10^{-8}\) & \(-1.344(17)\times 10^{-8}\) \\
82 & \(5.5012(13)\) & \(-4.938(62)\times 10^{-9}\) & \(-9.589(77)\times 10^{-8}\) & \(-2.508(31)\times 10^{-8}\) & \(-2.728(34)\times 10^{-8}\) \\
92 & \(5.8571(33)\) & \(-7.825(98)\times 10^{-9}\) & \(-3.572(24)\times 10^{-7}\) & \(-5.705(71)\times 10^{-8}\) & \(-6.410(80)\times 10^{-8}\) \\ \end{tabular}
\end{table}
Table 1: Results for the hadronic VP contribution to the \(g\) factor shift of the bound electron in the ground state arising from the Uehling potential in the EL diagram (Fig. 1b) using different approaches: the non-relativistic approximation \(\Delta g^{\rm had.}_{\rm non-rel.,point}\) in Eq. (21), the relativistic formula for a point-like nucleus \(\Delta g^{\rm had.}_{\rm rel.,point}\) in Eq. (22), the approximate formula \(\Delta g^{\rm had.}_{\rm approx.,fns}\) using the hadronic energy shift with an extended nucleus from [36] in Eq. (9), and the full relativistic result for an extended nucleus \(\Delta g^{\rm had.}_{\rm rel.,fns}\) using the analytical finite-size Uehling potential with numerical finite-size wave functions in Eq. (7). Root-mean-square nuclear charge radii \(R_{\rm rms}\) are taken from [44].
Figure 3: Comparison of analytical and numerical results for the hadronic \(g\) factor shift of the bound electron in the ground state of H-like ions with atomic numbers \(Z\) obtained in this work, see Table 1. The green solid line represents the analytical expression for a point-like nucleus \(\Delta g^{\rm had.}_{\rm rel.,point}\) in Eq. (22), while the red dashed line represents the non-relativistic expression \(\Delta g^{\rm had.}_{\rm non-rel.,point}\) in Eq. (21). The full numerical results for extended nuclei \(\Delta g^{\rm had.}_{\rm rel.,fns}\) (crosses) are compared to the approximation \(\Delta g^{\rm had.}_{\rm approx.,fns}\) in Eq. (9) (circles) with hadronic energy shifts for extended nuclei taken from Ref. [36].
## Appendix A Base integral \(I_{abc}\)
The base integral \(I_{abc}\) used in Eq. (13) is given by [30]
\[I_{abc}= \int_{0}^{1}dy\ \frac{\left(1-y^{2}\right)^{a-1/2}}{y^{b-1}}\left( \frac{sZ\alpha y}{1+sZ\alpha y}\right)^{c-2\epsilon}\] \[= \frac{1}{2}(sZ\alpha)^{c-2\epsilon}B\left(a+\frac{1}{2},1-\frac{ b-c}{2}-\epsilon\right)\] \[\times\,_{3}F_{2}\left(\frac{c}{2}-\epsilon,\frac{c+1}{2}- \epsilon,1-\frac{b-c}{2}-\epsilon;\frac{1}{2},a+\frac{3-b+c}{2}-\epsilon;(sZ \alpha)^{2}\right)\] \[-\frac{c-2\epsilon}{2}(sZ\alpha)^{c+1-2\epsilon}B\left(a+\frac{1 }{2},\frac{3-b+c}{2}-\epsilon\right)\] \[\times\,_{3}F_{2}\left(\frac{c}{2}+1-\epsilon,\frac{c+1}{2}- \epsilon,\frac{3-b+c}{2}-\epsilon;\frac{3}{2},a+2-\frac{b-c}{2}-\epsilon;(sZ \alpha)^{2}\right)\,, \tag{A1}\]
where \(s=m_{\rm e}/m_{\rm l}\) is the ratio of the electron and the loop particle masses, \(\epsilon=1-\gamma\) with \(\gamma=\sqrt{1-(Z\alpha)^{2}}\), \(B(x,y)\) is the beta function and \({}_{3}F_{2}(a_{1},a_{2},a_{3};b_{1},b_{2};z)\) is a generalized hypergeometric function [42].
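The closed form can be cross-checked numerically against the defining integral. The sketch below uses the mpmath library; the library choice and the example parameters \(a,b,c,s,Z\) are illustrative assumptions rather than part of the original work:

```python
# Numerical check of the base integral I_abc: the closed form in terms of the
# beta function and 3F2 is compared against direct quadrature of the integral.
import mpmath as mp

ALPHA = mp.mpf(1) / mp.mpf("137.035999")   # approximate fine-structure constant

def I_abc_quadrature(a, b, c, s, Z):
    eps = 1 - mp.sqrt(1 - (Z * ALPHA) ** 2)
    integrand = lambda y: (1 - y ** 2) ** (a - mp.mpf(1) / 2) / y ** (b - 1) * \
        (s * Z * ALPHA * y / (1 + s * Z * ALPHA * y)) ** (c - 2 * eps)
    return mp.quad(integrand, [0, 1])

def I_abc_closed(a, b, c, s, Z):
    eps = 1 - mp.sqrt(1 - (Z * ALPHA) ** 2)
    x = s * Z * ALPHA
    term1 = mp.mpf(1) / 2 * x ** (c - 2 * eps) \
        * mp.beta(a + mp.mpf(1) / 2, 1 - (b - c) / 2 - eps) \
        * mp.hyper([c / 2 - eps, (c + 1) / 2 - eps, 1 - (b - c) / 2 - eps],
                   [mp.mpf(1) / 2, a + (3 - b + c) / 2 - eps], x ** 2)
    term2 = (c - 2 * eps) / 2 * x ** (c + 1 - 2 * eps) \
        * mp.beta(a + mp.mpf(1) / 2, (3 - b + c) / 2 - eps) \
        * mp.hyper([c / 2 + 1 - eps, (c + 1) / 2 - eps, (3 - b + c) / 2 - eps],
                   [mp.mpf(3) / 2, a + 2 - (b - c) / 2 - eps], x ** 2)
    return term1 - term2

a, b, c, s, Z = 2, 1, 3, mp.mpf("0.1"), 1   # illustrative parameters only
print(I_abc_quadrature(a, b, c, s, Z))
print(I_abc_closed(a, b, c, s, Z))
```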
## Appendix B Hadronic Uehling potential for extended nuclei
The analytic hadronic Uehling potential for an extended nucleus with a spherical homogeneous charge distribution with effective radius \(R\) is given by [36]
\(r>R\):
\[\delta V_{\rm{fns,out}}^{\rm{had.}}(r)=-\frac{3Z\alpha B_{1}\sqrt{C_{1}}}{rR^{ 3}}\left[\sqrt{C_{1}}R\,D_{3}^{+}(r,R)-C_{1}D_{4}^{-}(r,R)\right]. \tag{B1}\]
\(r\leq R\):
\[\delta V_{\rm{fns,in}}^{\rm{had.}}(r)=-\frac{3Z\alpha B_{1}\sqrt {C_{1}}}{rR^{3}} \left[\sqrt{C_{1}}r+\sqrt{C_{1}}RE_{3}\left(\frac{r+R}{\sqrt{C_{1}}} \right)+C_{1}E_{4}\left(\frac{r+R}{\sqrt{C_{1}}}\right)\right.\] \[-\left.\frac{(r-R)^{2}(r+2R)}{6\sqrt{C_{1}}}E_{1}\left(\frac{R-r }{\sqrt{C_{1}}}\right)\right]. \tag{B2}\]
The parameters \(B_{1}\) and \(C_{1}\) characterize the hadronic polarization function and are given in Section II.2. The functions \(D_{n}^{\pm}(r,R)\) and \(E_{n}(x)\) are defined in Eq. (18) and Eq. (16), respectively.
|
2307.04755 | Information decomposition in complex systems via machine learning | One of the fundamental steps toward understanding a complex system is
identifying variation at the scale of the system's components that is most
relevant to behavior on a macroscopic scale. Mutual information provides a
natural means of linking variation across scales of a system due to its
independence of functional relationship between observables. However,
characterizing the manner in which information is distributed across a set of
observables is computationally challenging and generally infeasible beyond a
handful of measurements. Here we propose a practical and general methodology
that uses machine learning to decompose the information contained in a set of
measurements by jointly optimizing a lossy compression of each measurement.
Guided by the distributed information bottleneck as a learning objective, the
information decomposition identifies the variation in the measurements of the
system state most relevant to specified macroscale behavior. We focus our
analysis on two paradigmatic complex systems: a Boolean circuit and an
amorphous material undergoing plastic deformation. In both examples, the large
amount of entropy of the system state is decomposed, bit by bit, in terms of
what is most related to macroscale behavior. The identification of meaningful
variation in data, with the full generality brought by information theory, is
made practical for studying the connection between micro- and macroscale
structure in complex systems. | Kieran A. Murphy, Dani S. Bassett | 2023-07-10T17:57:32Z | http://arxiv.org/abs/2307.04755v2 | # Information decomposition to identify relevant variation in complex systems
###### Abstract
One of the fundamental steps toward understanding a complex system is identifying variation at the scale of the system's components that is most relevant to behavior on a macroscopic scale. Mutual information is a natural means of linking variation across scales of a system due to its independence of the particular functional relationship between variables. However, estimating mutual information given high-dimensional, continuous-valued data is notoriously difficult, and the desideratum--to reveal important variation in a comprehensible manner--is only readily achieved through exhaustive search. Here we propose a practical, efficient, and broadly applicable methodology to decompose the information contained in a set of measurements by lossily compressing each measurement with machine learning. Guided by the distributed information bottleneck as a learning objective, the information decomposition sorts variation in the measurements of the system state by relevance to specified macroscale behavior, revealing the most important subsets of measurements for different amounts of predictive information. Additional granularity is achieved by inspection of the learned compression schemes: the variation transmitted during compression is composed of distinctions among measurement values that are most relevant to the macroscale behavior. We focus our analysis on two paradigmatic complex systems: a Boolean circuit and an amorphous material undergoing plastic deformation. In both examples, specific bits of entropy are identified out of the high entropy of the system state as most related to macroscale behavior for insight about the connection between micro- and macro- in the complex system. The identification of meaningful variation in data, with the full generality brought by information theory, is made practical for the study of complex systems.
A complex system is a system of interacting components where some sense of order present at the scale of the system is not apparent, or even conceivable, from the observations of single components [1; 2]. A broad categorization, it includes many systems of relevance to our daily lives, from the economy to the internet and from the human brain to artificial neural networks [3; 4]. Before attempting a reductionist description of a complex system, one must first identify variation in the system that is most relevant to emergent order at larger scales. The notion of relevance can be formalized with information theory, wherein mutual information serves as a general measure of statistical dependence to connect variation across different scales of system behavior [5; 6]. Information theory and complexity science have a rich history; information theory commonly forms the foundation of definitions of what it means to be complex [7; 8; 9; 10].
Machine learning is well-suited for the analysis of complex systems, grounded in its natural capacity to identify patterns in high dimensional data [11]. However, distilling insight from a successfully trained model is often infeasible due to a characteristic lack of interpretability of machine learning models [12; 13]. Restricting to simpler classes of models, for example linear combinations of observables, recovers a degree of interpretability at the expense of functional expressivity [14]. For the study of complex systems, such a trade-off is unacceptable if the complexity of the system is no longer faithfully represented. In this work, we do not attempt to explain the relationship between microscale and macroscale, and are instead interested in identifying the information contained in microscale observables that is most predictive of macroscale behavior--independent of functional relationship.
We employ a recent method from interpretable machine learning that identifies the most relevant information in a set of measurements [15]. Based on the distributed information bottleneck [16; 17], a variant of the information bottleneck (IB) [18], the method lossily compresses a set of measurements while preserving information about a relevance quantity. Optimization serves to decompose the information present in the measurements, providing a general-purpose method to identify the important variation in composite measurements of complex systems.
Identifying important variation is a powerful means of analysis of complex systems, as we demonstrate on two paradigmatic examples. First we study a Boolean circuit, whose fully-specified joint distribution and intuitive interactions between variables facilitate understanding of
the information decomposition found by the distributed IB. Boolean circuits are networks of binary variables that interact through logic functions, serving as the building blocks of computation [19] and as elementary models of gene control networks [20; 21]. Second, we decompose the information contained in the local structure of an amorphous material subjected to global deformation. Amorphous materials are condensed matter systems composed of simple elements (e.g., atoms or grains) that interact via volume exclusion and whose disorder gives rise to a host of complex macroscale phenomena, such as collective rearrangement events spanning a wide range of magnitudes [22; 23] and nontrivial phase transitions [24; 25; 26; 27]. Although the state space that describes all of the degrees of freedom is large, as is generally true of complex systems, the proposed method is able to identify important bits of variation by partitioning entropy and leveraging machine learning to process the high dimensional data.
## Methods
Mutual information is a measure of statistical dependence between two random variables \(X\) and \(Y\) that is independent of the functional transformation that relates \(X\) and \(Y\) (in contrast to linear correlation, for example, which measures the degree to which two variables are linearly related). Mutual information is defined as the entropy reduction in one variable after learning the value of the other [28],
\[I(X;Y)=H(Y)-H(Y|X), \tag{1}\]
with \(H(X)=\mathbb{E}_{x\sim p(x)}[-\log\,p(x)]\) Shannon's entropy [29].
The distributed information bottleneck is an optimization objective to extract the information most relevant to a variable \(Y\) from a composite measurement: a random vector \(\mathbf{X}=(X_{1},...,X_{N})\)[15; 16; 30]. Each component \(X_{i}\) undergoes lossy compression to an auxiliary variable \(U_{i}=f(X_{i})\), and then the compressed variables \(\mathbf{U}=(U_{1},...,U_{N})\) are used to predict the output \(Y\). Minimization of the distributed IB Lagrangian,
\[\mathcal{L}_{\text{DIB}}=\beta\sum_{i=1}^{N}I(U_{i};X_{i})-I(\mathbf{U};Y), \tag{2}\]
extracts the entropy (or information) in \(\mathbf{X}\) that is most descriptive of \(Y\). By sweeping over the magnitude of the bottleneck strength \(\beta\), a continuous spectrum of approximations to the relationship between \(\mathbf{X}\) and \(Y\) is found. The optimized compression schemes for each component of \(\mathbf{X}\) reveal the amount of relevant information and the specific entropy selected for every level of approximation.
In place of Eqn. 2, variational bounds on mutual information have been developed that are amenable to data and machine learning [17; 31]. The lossy compression schemes are parameterized by neural networks that encode data points to probability distributions in a continuous latent space. Transmitted information is upper bounded by the expectation of the Kullback-Leibler divergence [28] between the encoded distributions and an arbitrary prior distribution, identical to the process of information restriction in a variational autoencoder [31; 32]. Over the course of training, the amount of information conveyed by each compression scheme \(I(U_{i};X_{i})\) is estimated using bounds derived in Ref. [33]. Although mutual information is generally difficult to
Figure 1: **Decomposing the information contained in the inputs of a Boolean circuit.****(a)** Ten binary inputs \(\mathbf{X}=(X_{1},...,X_{10})\) are connected via AND, OR, and XOR gates to a binary output \(Y\). **(b)** A lossy compression \(U_{i}\) is learned for each \(X_{i}\) and then all \(U_{i}\) are combined as input to a machine learning model trained to predict \(Y\). **(c)** The distributed information plane displays the predictive information about the output (left vertical axis, black) as a function of the total information utilized about the input. For each value of total information into the model there is an allocation of information to the input gates indicating their relevance to the output \(Y\) (right vertical axis, colors corresponding to input gates in panel **(a)**). The subset of inputs identified as containing the most relevant information (\(I(U_{i};X_{i})\geq 0.1\) bits) are indicated at the top of the plot. **(d)** The mutual information between all subsets of input channels and \(Y\) are displayed on the distributed information plane as black circles. The optimization of the distributed IB (gray curve) identified subsets of inputs that contain the most predictive information (open circles).
estimate from data [34], compressing the partial measurements \(X_{i}\) separately isolates the information such that the amount of mutual information is small enough to allow precise estimates, with the interval between bounds on the order of 0.01 bits. Details about mutual information estimation are in Appendix A.
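To make the optimization concrete, the following TensorFlow sketch (an illustrative reimplementation consistent with the description here and in Appendix B, not the authors' released code; class and function names are ours) builds one Gaussian encoder per measurement and trains with the variational distributed IB objective, i.e., a cross-entropy term standing in for \(-I(\mathbf{U};Y)\) plus \(\beta\) times the summed KL terms:

```python
import tensorflow as tf

def make_encoder(dim_u=32):
    # One encoder per measurement X_i, outputting the mean and log-variance
    # of the Gaussian compression p(u_i | x_i) in the latent space.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="tanh"),
        tf.keras.layers.Dense(128, activation="tanh"),
        tf.keras.layers.Dense(2 * dim_u),
    ])

class DistributedIB(tf.keras.Model):
    def __init__(self, num_features, dim_u=32):
        super().__init__()
        self.encoders = [make_encoder(dim_u) for _ in range(num_features)]
        self.decoder = tf.keras.Sequential([
            tf.keras.layers.Dense(256, activation="tanh"),
            tf.keras.layers.Dense(256, activation="tanh"),
            tf.keras.layers.Dense(1),   # logit for the binary relevance variable Y
        ])

    def call(self, x):
        # x: (batch, num_features); every scalar measurement is compressed separately.
        us, kl_terms = [], []
        for i, encoder in enumerate(self.encoders):
            mu, log_var = tf.split(encoder(x[:, i:i + 1]), 2, axis=-1)
            u = mu + tf.exp(0.5 * log_var) * tf.random.normal(tf.shape(mu))
            # Analytic KL( N(mu, sigma^2) || N(0, 1) ): an upper bound on I(U_i; X_i).
            kl = 0.5 * tf.reduce_sum(tf.square(mu) + tf.exp(log_var) - log_var - 1.0, axis=-1)
            us.append(u)
            kl_terms.append(tf.reduce_mean(kl))
        return self.decoder(tf.concat(us, axis=-1)), tf.add_n(kl_terms)

def distributed_ib_loss(model, x, y, beta):
    # y: (batch, 1) float labels; beta is swept logarithmically during training.
    logits, kl_sum = model(x)
    cross_entropy = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
    return cross_entropy + beta * kl_sum
```

Sweeping \(\beta\) from small to large values over a single training run traces out the trajectory in the distributed information plane, with the per-measurement KL terms giving the allocation of information to each \(X_{i}\).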
## Results
**Boolean circuit.** A Boolean circuit (Fig. 1a) was constructed with ten binary inputs \(\mathbf{X}=(X_{1},...,X_{10})\) and a binary output \(Y\). Assuming a uniform distribution over inputs, the truth table specifies the joint distribution \(p(x_{1},...,x_{10},y)\), and the interactions between inputs are prescribed by a wiring of logical AND, OR, and XOR gates. An information bottleneck was distributed to every input \(X_{i}\) to monitor from where the predictive information originated via compressed variables \(U_{i}\) (Fig. 1b). We trained a multilayer perceptron (MLP) to learn the relationship between the lossy compressions \(\mathbf{U}\) and \(Y\).
Over the course of a single training run, the coefficient of the information bottleneck strength \(\beta\) was swept to obtain a spectrum of predictive models. The distributed information plane (Fig. 1c) [15] displays the predictive power as a function of the total information about the inputs \(\sum I(U_{i};X_{i})\). The predictive performance ranged from zero predictive information without any information about the inputs (Fig. 1c, lower left) to all entropy \(H(Y)\) accounted for by utilizing all ten bits of input information (Fig. 1c, upper right). For every point on the spectrum there was an allocation of information over the inputs; the distributed IB objective identified the information across all inputs that was most predictive. The most predictive information about \(Y\) was found to reside in \(X_{3}\)--the input that routes through the fewest gates to \(Y\)--and then in the pair \(X_{3},X_{10}\), and so on.
Powered by machine learning, we traversed the space of lossy compression schemes of \(X_{i}\), decomposing the information contained in the circuit inputs about the output. Included in the space of compression schemes is information transmitted about each of the \(2^{10}\) discrete subsets of the inputs. To be concrete, there are ten subsets of a single input, 45 pairs of inputs, and so on, with each subset sharing mutual information with \(Y\) based on the role of the specific inputs inside the circuit. Fig. 1d displays the information contained in every discrete subset of inputs (black points) along with the continuous trajectory found by optimization of the distributed IB (gray curve). The distributed IB, maximizing predictive information while minimizing information taken from the inputs, closely traced the upper boundary of discrete information allocations and identified a majority of the most informative subsets of inputs. To decompose the information in the circuit's inputs required only a single sweep with the distributed IB, not an exhaustive search through all subsets of inputs. We note that the product of the distributed IB is not an ordering of single variable mutual information terms \(I(X_{i};Y)\), which would be straightforward to calculate, but instead the ordering of information selected from all of \(\mathbf{X}\) that is maximally informative about \(Y\).
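The discrete reference points of Fig. 1d can also be generated by brute force from a truth table. The following sketch computes \(I(X_{S};Y)\) for every subset \(S\) of inputs; the three-input circuit at the bottom is only a stand-in for illustration, not the circuit of Fig. 1a:

```python
import itertools
import numpy as np

def entropy_of_labels(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def subset_information(inputs, output):
    """I(X_S; Y) in bits for every subset S of the binary inputs, assuming the
    rows of the truth table are equally likely."""
    n_rows, n_inputs = inputs.shape
    h_y = entropy_of_labels(output)
    results = {}
    for r in range(n_inputs + 1):
        for subset in itertools.combinations(range(n_inputs), r):
            keys = [tuple(row) for row in inputs[:, list(subset)]]
            h_y_given = 0.0
            for key in set(keys):
                mask = np.array([k == key for k in keys])
                h_y_given += mask.mean() * entropy_of_labels(output[mask])
            results[subset] = h_y - h_y_given
    return results

# Stand-in circuit with three inputs: y = (x0 AND x1) XOR x2.
X = np.array(list(itertools.product([0, 1], repeat=3)))
Y = (X[:, 0] & X[:, 1]) ^ X[:, 2]
for subset, mi in sorted(subset_information(X, Y).items(), key=lambda kv: -kv[1]):
    print(subset, round(mi, 3))
```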
**Decomposing structural information in a physical system.** Linking structure and dynamics in amorphous materials--complex systems consisting of particles that interact primarily through volume exclusion--has been a longstanding challenge in physics [35; 36; 37; 38]. Searching for signatures of collective behavior in the multitude of microscopic degrees of freedom is an endeavor emblematic of complex systems more generally and one well-suited for machine learning and information theory. We accept that the functional relationship between the micro- and macroscale variation is potentially incomprehensible, and are instead interested in the information at the microscale that is maximally predictive of behavior at the macroscale. While prior work has analyzed the information content of hand-crafted structural descriptors individually [39; 40; 41], the distributed IB searches through the space of information from many structural measurements in combination.
Two-dimensional simulated glasses, prepared by either rapid or gradual quenching and composed of small (type A) and large (type B) particles that interact with a Lennard-Jones potential, were subjected to global shear deformation [38]. Local regions were identified as the origins of imminent rearrangement and paired with negative samples from elsewhere in the system to create a binary classification dataset.
We first considered a scheme of measurements of the microscale structure that has been associated with plastic rearrangement in a variety of amorphous systems: the densities of radial bands around the center of a region [42; 43]. By training a support vector machine (SVM) to predict rearrangement based on the radial density measurements, a linear combination of the values is learned. In the literature, that combination is commonly referred to as _softness_, and has proven to be a useful local order parameter [44; 45; 46; 47].
We approached the same prediction task from an information theoretic perspective, seeking the specific bits of variation in the density measurements that are most predictive of collective rearrangement. Each radial density measurement underwent lossy compression by its own neural network before all compressions were concatenated and used as input to an MLP to predict rearrangement. By sweeping \(\beta\), a single optimization recovered a sequence of approximations, each allocating a limited amount of information across the 100 density measurements to be most predictive of imminent rearrangement (Fig. 2).
The trajectories in the distributed information plane, for both gradually and rapidly quenched glasses, reflect the growth of predictive information and prediction
accuracy given maximally predictive information about the radial densities (Fig. 2a,c). With only one bit of information from the density measurements, 71.8% predictive accuracy was achieved for the gradually quenched glass and 69.5% was achieved for the rapidly quenched glass; with twenty bits, the accuracy jumped to 91.3% and 85.4%, respectively. Beyond twenty bits of density information, the predictive accuracy became comparable to that of the support vector machine, which can utilize all of the continuous-valued density measurements for prediction with a linear relationship.
For every point along the trajectory, information was identified from the density measurements that, together, formed the combination of bits that were most predictive of rearrangement (Fig. 2b,d). The majority of the information was selected from smaller radii (close to the center of the region), which can be expected given the localized nature of rearrangement events [35, 36]. Less intuitive is the information decomposition as it relates to the radial distribution functions \(g_{\rm AA}(r)\) and \(g_{\rm AB}(r)\), the system-averaged radial densities of type A and B particles in regions with a type A particle at the center. For both glasses, the most predictive bits originated in the low density radial bands nearest the center. As more information was incorporated into the prediction, the additional bits came from radial bands that corresponded to particular features of \(g_{\rm AA}(r)\) and \(g_{\rm AB}(r)\). Outside of the first low density trough, the selected information often came from the high density radii of type \(A\) particles and the low density radii of type \(B\) particles; this trend held true for both glasses. While the information decomposition highlighted similar features in both glasses,
Figure 2: **Decomposing structural information about imminent rearrangement in a sheared glass.****(a)**_Inset:_ Given a local neighborhood in a sheared glass, fifty densities each of radial shells for the small (type A) and large (type B) particles were used to predict whether the neighborhood is the locus of an imminent rearrangement event. _Main:_ For a gradually quenched glass, the information that is predictive of rearrangement (black) increased as the most predictive density information was identified and incorporated into the machine learning model. The accuracy (blue) was comparable to a support vector machine (SVM) (dashed line) after around twenty bits. **(b)** Sharing the horizontal axis with panel **(a)**, the amount of information extracted about each of the radial density measurements of small (top) and large (bottom) particles reveals the radii with the most predictive information at each level of approximation. The system’s average density values for each particle type with type A at the center, also known as the radial distribution functions \(g_{\rm AA}(r)\) and \(g_{\rm AB}(r)\), are shown on the right. **(c,d)** The same as panels **(a, b)** but for glass that was prepared via a rapid quench rather than a gradual quench.
the more pronounced structure of selected information out to larger radii for the gradually quenched glass is indicative of its higher structural regularity, which is also seen in the pronounced features of its radial distribution functions \(g_{\mathrm{AA}}(r)\) and \(g_{\mathrm{AB}}(r)\).
The amount of information utilized from each density measurement was predominantly a single bit or less. Of the ways to compress the infinite entropy of a continuous-valued density to a single bit, what was the specific variation extracted from each density measurement? Through inspection of the learned compression schemes, the extracted information can be further decomposed by the degree of distinctions between density values that were preserved for the predictive model (Fig. 3a) [15]. The single most important bit of information for the gradually quenched glass was a composition of partial bits from multiple density measurements, mostly arising from the first low-density shell of each type of particle (Fig. 3b). For both measurements, the compression scheme acted as a threshold on the range of possible density values: values less than a cutoff \(\rho^{\prime}\) were indistinguishable from each other for the purposes of prediction and were partially distinguishable from density values above the cutoff. By examining the distribution of density values in these radial shells, we see that the cutoff values leverage the separability of the density distributions when conditioned on rearrangement.
With more information utilized for prediction, some of the compression schemes differed from simple thresholds (shown for the rapidly quenched glass in Fig. 3c). For the predictive model operating with a total of twenty bits of density information, two density measurements contributed more than a bit each. The learned compression of the first high-density shell of type A particles essentially counted the number of particles in the shell, with distinguishability between densities as if there were several thresholds over the range of the values that act to roughly discretize the density measurement.
Information decomposition with the distributed IB depends upon the particular scheme used to measure the system [48]. In the study of complex systems, there can be multiple 'natural' schemes of measuring a system state. Density measurements of radial bands lead to an essentially linear relationship between structure and rearrangement [44]; what if we had not inherited such a fortuitous measurement scheme? Another natural basis of measurements is the position of all of the particles (Fig. 4a). In contrast to radial density measurements, per-particle measurements lack a canonical ordering; accordingly, we used a permutation-invariant transformer architecture for the predictive model [49]. Every particle position was transmitted in parallel through a single compression channel, rather than through a uniquely learned compression scheme per measurement as before. An analogue of the distributed IB task is to write a note for each particle in the region with the goal to predict whether the region will rearrange. Under a constraint on time or effort, more careful notes would be taken for the informative particles, while less careful notes would be taken for the rest.
The per-particle measurement scheme imposed no structure on the selection of configurational information.
Figure 3: **Selected bits of information as distinctions among raw measurement values.****(a)** Lossy compression is achieved by mapping the raw values of \(X\) to probability distributions in latent space. The statistical similarity of the conditional distributions, visualized as a matrix for all pairs of feature values, determines how distinguishable the raw feature values are to the predictive model. **(b)** The single most predictive bit of information about rearrangement in the gradually quenched glass came predominantly from two density measurements. The distinguishability matrices indicate that the compression scheme applied a simple threshold to these measurements: density values less than a cutoff value \(\rho^{\prime}\) were indistinguishable from each other, as were values above the cutoff. The histograms of density values conditioned on rearrangement (right) show that the learned cutoff value separates the probability masses. **(c)** The twenty most predictive bits of radial density information in the rapidly quenched glass were selected from many radial bands. The two that contribute more than a bit of information each correspond to the density of type \(A\) particles near the center; one compression scheme effectively counted the number of particles in the high density shell. The distinguishability matrices of the next five most informative radial bands are shown below.
Nevertheless, we found that the information cost per particle as a function of the position in the neighborhood had a radial structure (Fig. 4b). The information per particle was highest in the low density radial bands near the center of the region (Fig. 4c), and inspection of the compression scheme indicated that negligible azimuthal information was transmitted (Fig. 4d). The information decomposition allowed for similar insights to be derived as in the radial density measurement scheme, even though the nature of the predictive model in the two cases was substantially different. Additionally, because the distributed IB operates entirely on the input side of an arbitrary predictive model, the information analysis was agnostic to whether the model was a simple fully connected network or a more complicated transformer architecture.
## Discussion
A universal challenge faced when studying complex systems, fundamental to what makes a system _complex_, is the abundance of entropy from the perspective of the microscale that obscures relevant information about macroscale behavior. The generality of mutual information as a measure of statistical relatedness, and the expressivity of deep learning when handling high-dimensional data, allow the distributed IB to be as readily utilized to identify structural defects relevant to a given material property as it is to reveal gene variation relevant to a given affliction. Tens, hundreds, and potentially thousands of measurements of a complex system are handled simultaneously, rendering practical analyses that would have previously been infeasible through exhaustive search or severely limited by constraints on functional relationships between variables.
Information theory has long held appeal for the analysis of complex systems owing to the generality of mutual information [3; 28]. However, the estimation of mutual information from data is fraught with difficulties [33; 50; 34], which have hindered information theoretic analyses of data from complex systems. By distributing information bottlenecks across multiple partial measurements of a complex system, entropy is partitioned to a degree that makes precise estimation of mutual information possible while simultaneously revealing the most important combinations of bits for insight about the system. Machine learning navigates the space of lossy compression schemes for each variable and allows the identification of meaningful variation without consideration of the black box functional relationship found by the predictive model.
Instead of compressing partial measurements in parallel, the information bottleneck [18] extracts the relevant information from one random variable in its entirety about another, and is foundational to many works in representation learning [51; 52]. In the physical sciences, the IB has been used to extract relevant degrees of freedom with a theoretical equivalence to coarse-graining in the renormalization group [53; 54], and to identify useful reaction coordinates in biomolecular reactions [55]. However, the IB has limited capacity to find useful approximations, particularly when the relationship between \(X\) and \(Y\) is deterministic (or nearly so) [56; 57]. Much of the spectrum of learned approximations is the trivial noisy rendition of a high-fidelity reconstruction [56; 48]. Additionally, compression schemes found by IB are rarely interpretable because the singular bottleneck occurs after processing the complete input, allowing the
Figure 4: **Measuring the positions of all particles.****(a)** Instead of the density of radial shells, each particle’s position and type in a local neighborhood were used as input measurements to relate to rearrangement. **(b)** The per-particle information transmitted as a function of particle position, for the small type A (left) and large type B (right) particles, for the predictive model utilizing 66 bits of information about the rapidly quenched glass. The scale bar is a distance of one in simulation units. **(c)** Averaged radially, the information (black) resides in particles that are situated in the first troughs of the radial distribution function, \(g(r)\) (colored curves). **(d)** For a particle at position \(\vec{r}_{0}\), the distinguishability of particles of the same type at all other locations indicates that negligible azimuthal information was transmitted.
compression scheme to involve arbitrarily complex relationships between components of the input without penalty. The distribution of information bottlenecks is critical to an interpretable information decomposition, and to accurately estimating the necessary mutual information terms.
A growing body of literature focuses on a fundamentally different route to decompose the information contained in multiple random variables \(\{X_{i}\}\) about a relevant random variable \(Y\); that alternative route is partial information decomposition (PID) [58; 59]. Although there is no consensus on how to achieve PID in practice, its goal is to account for the mutual information between \(\{X_{i}\}\) and \(Y\) in terms of subsets of \(\{X_{i}\}\), by analogy to set theory [60]. PID allocates information to the input variables in their entirety, whereas the distributed IB selects partial entropy from the input variables in the form of lossy compression schemes, with one scheme per variable. While PID has been proposed as an information theoretic route to study complex systems [61] and quantify complexity [62], the super-exponential growth of PID terms renders the methodology rather impractical. There are \(5\times 10^{22}\) PID terms for a Boolean circuit with 8 inputs [58] and the number of terms for the simple 10 input circuit from Fig. 1 is not known [59]. By contrast, the distributed IB offers a pragmatic route to the decomposition of information in a complex system: it is amenable to machine learning and data, and can readily process one hundred (continuous) input variables as in the amorphous plasticity experiments.
## Acknowledgements
We gratefully acknowledge Sam Dillavou and Zhuowen Yin for helpful discussions and comments on the manuscript, and Sylvain Patinet for the amorphous plasticity data.
## Code availability
The full code base has been released on Github and may be found through the following link: distributed-information-bottleneck.github.io. Every analysis included in this work can be repeated from scratch with the corresponding Google Colab iPython notebook in this directory.
## Data availability
The train and validation splits of the amorphous plasticity data, consisting of local neighborhoods that were subsequently "measured" as radial densities (Figs. 2,3) or as per-particle descriptors (Fig. 4), can be found through the project page and can be downloaded here. The full dataset with all particle locations before and after all events is available with the permission of the authors of Ref. [38].
## Appendix A: Mutual information bounds
The full method presented in this work requires us to bound the mutual information for high dimensional data; identifying this bound is notoriously difficult [34; 50]. Fortunately, there are factors in our favor to facilitate optimization with machine learning and the recovery of tight bounds on the information transmitted by the compression channels \(U_{i}\).
To optimize the distributed information bottleneck objective (Eqn. 2) requires an upper bound on \(I(U_{i};X_{i})\) and a lower bound on \(I(\mathbf{U};Y)\). The (distributed) variational information bottleneck objective [17; 31] upper bounds \(I(U_{i};X_{i})\) with the expectation of the Kullback-Leibler (KL) divergence between the encoded distributions \(p(u_{i}|x_{i})\) and an arbitrary prior distribution \(r(u_{i})\) in latent space,
\[I(U_{i};X_{i})\leq\mathbb{E}_{x_{i}\sim p(x_{i})}[D_{\mathrm{KL}}(p(u_{i}|x_{i })||r(u_{i}))]. \tag{3}\]
Normal distributions are used for both the encoded distribution, \(p(u_{i}|x_{i})=\mathcal{N}(\mathbf{\mu}=f_{\mu}(x_{i}),\mathbf{\sigma}=f_{\sigma}(x_{ i}))\), and the prior, \(r(u_{i})=\mathcal{N}(\mathbf{0},\mathbf{1})\) so that the KL divergence has a simple analytic form.
Over the course of training, the KL divergence is computed for each channel \(U_{i}\), thereby providing a proxy quantity for the amount of information that is contained in the compression scheme. Although the KL divergence can be used for a qualitative sense of information allocation to features [15], it is a rather poor estimate of the mutual information. Because the encoded distributions \(p(u_{i}|x_{i})\) have a known form, we can use the noise contrastive estimation (InfoNCE) lower bound and "leave one out" upper bound from Ref. [33] with a large number of samples to obtain tight bounds on the amount of mutual information in the learned compression schemes.
The lower and upper bounds on \(I(U_{i};X_{i})\) are based on likelihood ratios at points sampled from the dataset \(x_{i}\sim p(x_{i})\) and from the corresponding conditional distributions, \(u_{i}\sim p(u_{i}|x_{i})\). To be specific, the mutual information for each channel \(U=f(X)\) (dropping channel indices for simplicity) is lower bounded by
\[I(U;X)\geq\mathbb{E}\left[\frac{1}{K}\sum_{i}^{K}\log\frac{p(u_{i}|x_{i})}{ \frac{1}{K}\sum_{j}^{K}p(u_{i}|x_{j})}\right] \tag{4}\]
and upper bounded by
\[I(U;X)\leq\mathbb{E}\left[\frac{1}{K}\sum_{i}^{K}\log\frac{p(u_{i}|x_{i})}{ \frac{1}{K-1}\sum_{j\neq i}^{K}p(u_{i}|x_{j})}\right]. \tag{5}\]
The expectation values in both equations are taken over samples \(\{u_{i},x_{i}\}_{i=1}^{K}\) of size \(K\) extracted repeatedly from the joint distribution \(p(u,x)=p(x)p(u|x)\). We estimated with as large an evaluation batch size \(K\) as feasible given memory and time considerations, and then averaged over multiple batches to reduce the variance of the bound. Evaluation with a batch size of 1024, averaged over 8 draws, yielded bounds on the mutual information that were separated by on the order of 0.01 bits for the Boolean circuit and glass data. The size of the validation dataset for the glass and the size of the truth table of the Boolean circuit were both on the order of one thousand points; each evaluation batch therefore contains largely the same data points, and the benefit of averaging comes from repeated sampling of the latent representations.
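Given a matrix of encoder log-likelihoods, both bounds reduce to a few lines of numpy; the sketch below (illustrative, assuming the log-densities \(\log p(u_{i}|x_{j})\) have already been evaluated) returns the estimates in bits:

```python
import numpy as np
from scipy.special import logsumexp

def info_bounds(log_p):
    """InfoNCE lower bound and 'leave one out' upper bound on I(U; X), in bits.

    log_p[i, j] = log p(u_i | x_j), where u_i ~ p(u | x_i) and x_1..x_K ~ p(x).
    """
    K = log_p.shape[0]
    positives = np.diag(log_p)
    # Lower bound: the positive sample is included in the denominator average.
    lower = np.mean(positives - (logsumexp(log_p, axis=1) - np.log(K)))
    # Upper bound: leave the positive sample out and average over K - 1 terms.
    off_diag = log_p.copy()
    np.fill_diagonal(off_diag, -np.inf)
    upper = np.mean(positives - (logsumexp(off_diag, axis=1) - np.log(K - 1)))
    return lower / np.log(2), upper / np.log(2)
```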
We show in Fig. 5 the performance of the mutual information bounds for compression schemes that encode up to several bits of information. \(X\) is a discrete random variable that is uniformly distributed over its support and has one to six bits of entropy; for each \(X\) a fixed dataset of size \(1024\) was sampled for mutual information estimation according to the following method of compression. Each outcome \(x\) was encoded to a normal distribution with unit variance in \(32\)-dimensional space, \(p(u|x)=\mathcal{N}(\mathbf{\mu},\mathbf{1})\). The encoded distributions were placed along orthogonal axes a distance \(d\) from the origin; in the limits of \(d=0\) and \(d\gg 1\) the information transmitted by the compression scheme is \(0\) and \(H(X)\), respectively.
A Monte Carlo estimate of the mutual information sampled \(2\times 10^{5}\) points from \(p(u,x)\) to compute \(\mathbb{E}_{p(u,x)}[\log p(u|x)/p(u)]\). The "leave one out" upper and InfoNCE lower bounds were computed with different evaluation batch sizes \(K\), and averaged over \(4096\) sampled batches. The standard deviation of the bounds is displayed as the shaded region around each trace, and is left out of the plots for the residual (the difference between the bound and the Monte Carlo estimate) for all but the evaluation batch size of \(1024\).
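A compact version of this Monte Carlo check, with the number of outcomes, sample count, and batch handling simplified for illustration, is:

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def mc_mutual_information(d, n_outcomes=8, dim=32, n_samples=200_000):
    """Monte Carlo estimate (in bits) of I(U; X) for a uniform discrete X whose
    outcomes are encoded as unit-variance Gaussians placed a distance d from
    the origin along orthogonal axes of the latent space."""
    means = np.zeros((n_outcomes, dim))
    means[np.arange(n_outcomes), np.arange(n_outcomes)] = d
    x = rng.integers(n_outcomes, size=n_samples)
    u = means[x] + rng.standard_normal((n_samples, dim))
    # log p(u | x') up to a constant that cancels between conditional and marginal
    log_cond = -0.5 * ((u ** 2).sum(1, keepdims=True)
                       - 2.0 * u @ means.T + (means ** 2).sum(1))
    log_marg = logsumexp(log_cond, axis=1) - np.log(n_outcomes)
    return np.mean(log_cond[np.arange(n_samples), x] - log_marg) / np.log(2)

for d in (0.0, 1.0, 3.0, 10.0):
    print(d, round(mc_mutual_information(d), 3))
```

As expected, the estimate interpolates between zero information at \(d=0\) and the full entropy of \(X\) (here three bits) at large \(d\).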
When the information contained in the compression is less than about two bits--as was the case for the majority of the experiments of the main text--the bounds are tight in expectation for even the smallest evaluation batch size. The variance is reducible by averaging over multiple batches. As the transmitted information grows, the benefit of increasing the evaluation batch size grows more pronounced, though bounds with a range of less than \(0.1\) bits can still be achieved for up to six bits of transmitted information.
### Information transmitted per particle
For the per-particle measurement scheme on the amorphous plasticity data, a single compression channel \(U\) was used for all particles. The information conveyed by the channel \(I(U;X)\) may be estimated as above, with \(X\) being the particle position and type. Note that we are particularly interested in the information cost for specific particle positions and for each particle type. The outer summation of the bounds (Eqn. 4 and 5) serves to average over the measurement outcomes \(x_{i}\) in a random sample; we use the summand corresponding to \(\{x_{i},u_{i}\}\) as the information contribution for the specific outcome \(x_{i}\). To generate the information heatmaps of Fig. 4b, we randomly sampled \(512\) neighborhoods from the dataset, corresponding to an evaluation batch size \(K=512\) neighborhoods \(\times\,50\) particles / neighborhood \(=25,600\) particles (data points), and averaged over \(100\) such batches. A probe particle with specified particle type and position (one for each point in the grid) was inserted into the batch, and then the corresponding summand for the lower and upper information bounds served to quantify the information transmitted per particle. To be specific,
\[I(X=x;U)\geq\mathbb{E}\left[\log\frac{p(u|x)}{\frac{1}{K}\sum_{j}^{K}p(u|x_{j} )}\right], \tag{6}\]
with the expectation taken over \(u\sim p(u|x)\) and samples \(\{x_{i}\}_{i=1}^{K}\sim\prod_{i}^{K}p(x)\). The upper bound differed only by inclusion of the distribution \(p(u|x)\) corresponding to the probe point in the denominator's sum.
## Appendix B: Implementation specifics
All experiments were implemented in TensorFlow and run on a single computer with a \(12\) GB GeForce RTX \(3060\) GPU. Computing mutual information bounds repeatedly throughout an optimization run contributed the most to running time. Including the information estimation, the Boolean circuit optimization took about half an hour, and the glass experiments took several hours.
### Boolean circuit
Each input may take only one of two values (\(0\) or \(1\)), allowing the encoders to be extremely simple. Trainable scalars \((\vec{\mu}_{i},\log\vec{\sigma}_{i}^{2})\) were used to encode \(p(u_{i}|x_{i})=\mathcal{N}((2x_{i}-1)\times\vec{\mu}_{i},\vec{\sigma}_{i}^{2})\). The decoder was a multilayer perceptron (MLP) consisting of three fully connected layers with \(256\) Leaky ReLU units (\(\alpha=0.3\)) each. We increased the value of \(\beta\) logarithmically from \(5\times 10^{-4}\) to \(5\) in \(5\times 10^{4}\) steps, with a batch size of \(512\) input-output pairs sampled randomly from the entire \(1024\)-element truth table. The Adam optimizer was used with a learning rate of \(10^{-3}\).
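In code, such an encoder amounts to a pair of trainable vectors per input; a sketch written as a Keras layer (illustrative, mirroring the description above rather than reproducing the released code) is:

```python
import tensorflow as tf

class BinaryInputEncoder(tf.keras.layers.Layer):
    """Compression channel for one binary circuit input: the two values of x_i
    are mapped to Gaussians at +/- mu with a trainable spread, i.e.
    p(u_i | x_i) = N((2 x_i - 1) * mu, sigma^2)."""

    def __init__(self, dim_u=32):
        super().__init__()
        self.mu = self.add_weight(shape=(dim_u,), initializer="random_normal", name="mu")
        self.log_var = self.add_weight(shape=(dim_u,), initializer="zeros", name="log_var")

    def call(self, x):
        # x: (batch, 1) with entries in {0, 1}
        mean = (2.0 * x - 1.0) * self.mu
        u = mean + tf.exp(0.5 * self.log_var) * tf.random.normal(tf.shape(mean))
        # KL( N(mean, sigma^2) || N(0, 1) ), the per-input information cost
        kl = 0.5 * tf.reduce_sum(
            tf.square(mean) + tf.exp(self.log_var) - self.log_var - 1.0, axis=-1)
        return u, kl
```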
### Amorphous plasticity
The simulated glass data comes from Ref. [38]: 10,000 particles in a two-dimensional cell with Lees-Edwards boundary conditions interact via a Lennard-Jones potential, slightly modified to be twice differentiable [63]. Simple shear was applied with energy minimization after each step of applied strain. The critical mode was identified as the eigenvector--existing in the \(2N\)-dimensional configuration space of all the particles' positions--of the Hessian whose eigenvalue crossed zero at the onset of global shear stress decrease. The particle that was identified as the locus of the rearrangement event had the largest contribution to the critical mode [38].
We used data from the gradual quench ("GQ") and rapid quench (high temperature liquid, "HTL") protocols. Following Ref. [44], we considered only neighborhoods with type A particles (the smaller particles) at the center. We used all of the events in the dataset: 7,255 for the gradually quenched and 10,178 for the rapidly quenched glasses. For each rearrangement event with a type A particle as the locus, we selected at random another region from the same system state with a type A particle at the center to serve as a negative example. 90% of all rearrangement events with type A particles as the locus were used for the training set and the remaining 10% were used as the validation set; the regions and specific training and validation splits used in this work can be found on the project webpage.
#### Radial density measurement scheme
For the radial density measurements (Figs. 2, 3), the local neighborhood of each sample was processed into 50 radial density structure functions for each particle type, evenly spaced over the interval \(r=[0.5,4]\). Specifically, for particle \(i\) at the center and the set of neighboring particles \(\mathcal{S}_{A}\) of type A,
\[G_{A}(i;r,\delta)=\sum_{j\in\mathcal{S}_{A}}\exp(-\frac{(R_{ij}-r)^{2}}{2 \delta^{2}}), \tag{7}\]
where \(R_{ij}\) is the distance between particles \(i\) and \(j\). The same expression was used to compute \(G_{B}\), the structure functions for the type B particles in the local neighborhood. The width parameter \(\delta\) was equal to 50% of each radius interval.
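A vectorized numpy implementation of these structure functions for a single neighborhood might look as follows (an illustrative sketch; variable names are ours):

```python
import numpy as np

def radial_structure_functions(center, neighbors, radii, widths):
    """Radial density structure functions of Eq. (7) for one neighborhood.

    center:    (2,) position of the central particle
    neighbors: (M, 2) positions of surrounding particles of one type
    radii:     (50,) shell radii; widths: (50,) Gaussian widths delta
    """
    distances = np.linalg.norm(neighbors - center, axis=1)            # R_ij
    return np.exp(-(distances[:, None] - radii[None, :]) ** 2
                  / (2.0 * widths[None, :] ** 2)).sum(axis=0)

radii = np.linspace(0.5, 4.0, 50)                   # 50 radii on [0.5, 4]
widths = np.full(50, 0.5 * (radii[1] - radii[0]))   # delta = 50% of the spacing
```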
After computing the 100 values summarizing each local neighborhood, the training and validation sets were normalized with the mean and standard deviation of each structure function across the training set. The best validation results from a logarithmic scan over values for the \(C\) parameter were used for the value of the SVM accuracy in Fig. 2.
For the distributed IB, each of the 100 scalar values for the structure functions was input to its own MLP consisting of 2 layers of 128 units with tanh activation. The embedding dimension of each \(U_{i}\) was 32. The 100 embeddings were then concatenated for input to the predictive model, an MLP consisting of 3 layers of 256 units with tanh activation. The output was a single logit to classify whether the particle at the center is the locus of imminent rearrangement. We increased \(\beta\) in equally spaced logarithmic steps from \(10^{-6}\) to 1 over 250 epochs (an epoch is one pass through the training data). The batch size was 256. The Adam optimizer was used with a learning rate of \(10^{-4}\).
#### Per-particle measurement scheme
For the per-particle measurements, the nearest 50 particles to the center of each region were compressed by the same encoder, an MLP with two layers of 128 Leaky ReLU activation (\(\alpha\)=0.1), to a 32-dimensional latent space. The only information available to the encoder was the particle's position and type, though the values were preprocessed before input to the encoder to help with optimization: for each particle position \(\vec{r}=(x,y)\), we concatenated \(x^{2}\), \(y^{2}\), \(r=|\vec{r}|\), \(\log r\), \(\log x^{2}\), \(\log y^{2}\), and \(\vec{r}/r\). All were positionally encoded (i.e., before being passed to the MLP, inputs were mapped to \(x\leftarrow(x,\sin\omega_{1}x,\sin\omega_{2}x,...)\)) with frequencies \(\omega_{k}=2^{k}\), with \(k\in\{1,2,3,4,5\}\)[64; 15].
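The per-particle preprocessing can be written compactly; in the sketch below the small epsilon guarding the logarithms at the origin is an assumption of this illustration:

```python
import numpy as np

def particle_descriptors(xy, eps=1e-8):
    """Derived per-particle features prior to positional encoding:
    (x, y, x^2, y^2, r, log r, log x^2, log y^2, x/r, y/r)."""
    x, y = xy[..., 0:1], xy[..., 1:2]
    r = np.sqrt(x ** 2 + y ** 2) + eps
    return np.concatenate([x, y, x ** 2, y ** 2, r, np.log(r),
                           np.log(x ** 2 + eps), np.log(y ** 2 + eps), x / r, y / r], axis=-1)

def positional_encoding(features, n_freqs=5):
    """Append sin(omega_k * feature) with omega_k = 2^k, k = 1..n_freqs."""
    frequencies = 2.0 ** np.arange(1, n_freqs + 1)
    return np.concatenate([features] + [np.sin(w * features) for w in frequencies], axis=-1)
```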
After compression, the 50 representations (one for each particle) were input to a set transformer [49], a permutation-invariant architecture that is free to learn how to relate different particles via self-attention. We used 6 multi-head attention (MHA) blocks with 12 heads each, and a key dimension of 128. Following Ref. [49], each MHA block adds the output of multi-head attention to a skip connection of the block's input, and applies layer normalization to the sum. This intermediate output is passed through an MLP (a single layer with 128 ReLU units, in our case) and added to itself (another skip connection) before a second round of layer normalization. After the MHA blocks, the 50 particle representations were mean-pooled and passed through a final fully connected layer of 256 units with Leaky ReLU activation (\(\alpha\)=0.1) before outputting a logit for prediction.
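A functional-API sketch of this permutation-invariant predictor follows; it is illustrative only, and the feed-forward layer here maps back to the token width so that the second residual addition is well defined, a simplification of the single 128-unit layer described above:

```python
import tensorflow as tf

def attention_block(tokens, num_heads=12, key_dim=128):
    # Multi-head self-attention with a residual connection and layer norm,
    # followed by a position-wise feed-forward layer with a second residual.
    attn = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim)(tokens, tokens)
    h = tf.keras.layers.LayerNormalization()(tokens + attn)
    ff = tf.keras.layers.Dense(h.shape[-1], activation="relu")(h)
    return tf.keras.layers.LayerNormalization()(h + ff)

particles = tf.keras.Input(shape=(50, 32))       # 50 compressed particle representations
h = particles
for _ in range(6):                               # six attention blocks
    h = attention_block(h)
pooled = tf.keras.layers.GlobalAveragePooling1D()(h)        # mean-pool over particles
hidden = tf.keras.layers.LeakyReLU(0.1)(tf.keras.layers.Dense(256)(pooled))
logit = tf.keras.layers.Dense(1)(hidden)
model = tf.keras.Model(particles, logit)
```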
Training proceeded for 25,000 training steps, and the learning rate was ramped linearly from zero to \(10^{-4}\) over the first 10% of training. Over the duration of training, \(\beta\) increased logarithmically from \(3\times 10^{-8}\) to \(3\times 10^{-3}\). The batch size was 64.
## Appendix D Citation Diversity Statement
Science is a human endeavour and consequently vulnerable to many forms of bias; the responsible scientist identifies and mitigates such bias wherever possible. Meta-analyses of research in multiple fields have measured significant bias in how research works are cited, to the detriment of scholars in minority groups [65; 66; 67; 68; 69]. We use this space to amplify studies, perspectives, and tools that we found influential during the execution of this research [70; 71; 72; 73].
|
2306.13100 | Cryptanalysis on Secure ECC based Mutual Authentication Protocol for
Cloud-Assisted TMIS | The creation of TMIS (Telecare Medical Information System) makes it simpler
for patients to receive healthcare services and opens up options for seeking
medical attention and storing medical records with access control. With
Wireless Medical Sensor Network and cloud-based architecture, TMIS gives the
chance to patients to collect their physical health information from medical
sensors and also upload this information to the cloud through their mobile
devices. The communication is held through internet connectivity, therefore
security and privacy are the main motive aspects of a secure cloud-assisted
TMIS. However, because very sensitive data is transmitted between patients and
doctors through the cloud server, thus security protection is important for
this system. Recently, Kumar et al designed a mutual authentication protocol
for cloud-assisted TMIS based on ECC [2]. In this paper, we revisited this
scheme and traced out that their scheme has some significant pitfalls like
health report revelation attack, and report confidentiality. In this study, we
will provide the cryptanalysis of the scheme developed by Kumar et al. | Diksha, Meenakshi | 2023-06-13T06:58:23Z | http://arxiv.org/abs/2306.13100v1 | # Cryptanalysis on Secure ECC based Mutual Authentication Protocol for Cloud-Assisted TMIS
###### Abstract
The creation of TMIS (Telecare Medical Information System) makes it simpler for patients to receive healthcare services and opens up options for seeking medical attention and storing medical records with access control. With Wireless Medical Sensor Network and cloud-based architecture, TMIS gives the chance to patients to collect their physical health information from medical sensors and also upload this information to the cloud through their mobile devices. The communication is held through internet connectivity, therefore security and privacy are the main motive aspects of a secure cloud-assisted TMIS. However, because very sensitive data is transmitted between patients and doctors through the cloud server, thus security protection is important for this system. Recently, Kumar et al designed a mutual authentication protocol for cloud-assisted TMIS based on ECC [2]. In this paper, we revisited this scheme and traced out that their scheme has some significant pitfalls like health report revelation attack, and report confidentiality. In this study, we will provide the cryptanalysis of the scheme developed by Kumar et al.
TMIS, Cloud Computing, Digital signature, Cryptanalysis, ECC. 1
## 1 Introduction:
With the advancement in technology, patients can receive care from medical professionals through the internet. The inaccessibility of remote locations and unavailability of facilities makes modern healthcare facilities difficult and these healthcare facilities are as expert advice, proper diagnosis, clinical tests, etc. Due to poor return on investments, no one is interested to invest in these areas and also doctors are not interested to serve in those areas which are under development. This leads the patients to have to travel long distances and spend a lot of money to get medical treatment. Some patients leave their hopes on their fates or lived with treatments from local health workers. In this case, a platform Telecare Medical Information System (TMIS) facilitates the patients and doctors with the communication between them and provides medical assistance in the patient's home. At the moment, everyone is paying close attention to cloud computing. It exhibits a significant potential for providing medical services online in TMIS due to its profitable specialties including on-demand self-service, more resilience, and resource sharing.
TMIS can gain various financial and functional advantages from cloud-based architecture, including flexible medical data storage, lower costs, better accessibility, and higher standards of care. But it also faces a lot of problems like reliability, privacy, security, and many others. Since the patient's data is transmitted between the entities over an insecure public channel. Therefore data security, confidentiality, and also its authenticity are major priorities in cloud-based TMIS. |
2305.13450 | A Framework for Fine-Grained Synchronization of Dependent GPU Kernels | Machine Learning (ML) models execute several parallel computations including
Generalized Matrix Multiplication, Convolution, Dropout, etc. These
computations are commonly executed on Graphics Processing Units (GPUs), by
dividing the computation into independent processing blocks, known as tiles.
Since the number of tiles are usually higher than the execution units of a GPU,
tiles are executed on all execution units in one or more waves. However, the
number of tiles is not always a multiple of the number of execution units.
Thus, tiles executed in the final wave can under-utilize the GPU.
To address this issue, we present cuSync, a framework for synchronizing
dependent kernels using a user-defined fine-grained synchronization policy to
improve the GPU utilization. cuSync synchronizes tiles instead of kernels,
which allows executing independent tiles of dependent kernels concurrently. We
also present a compiler to generate diverse fine-grained synchronization
policies based on dependencies between kernels. Our experiments found that
synchronizing CUDA kernels using cuSync reduces the inference times of four
popular ML models: MegatronLM GPT-3 by up to 15%, LLaMA by up to 14%, ResNet-38
by up to 22%, and VGG-19 by up to 16% over several batch sizes. | Abhinav Jangda, Saeed Maleki, Maryam Mehri Dehnavi, Madan Musuvathi, Olli Saarikivi | 2023-05-22T19:49:36Z | http://arxiv.org/abs/2305.13450v3 | # A Framework for Fine-Grained Synchronization of Dependent GPU Kernels
###### Abstract
Machine Learning (ML) models contain highly-parallel computations, such as, Matrix Multiplication, Convolutions, Dropout, etc. These computations are commonly executed on Graphics Processing Units (GPUs), by dividing the computation in independent processing blocks, known as _tiles_. Since the number of tiles are usually higher than the execution units of a GPU, tiles are executed on all execution units in _waves_. However, the tiles executed in the last wave can under-utilize the execution units because tiles are not always a multiple of execution units. This under-utilization can be reduced by executing multiple _independent_ kernels concurrently on a GPU, but is not currently possible for _dependent_ kernels.
In this paper, we present cuSync, a framework to write custom fine-grained synchronization policies for dependent kernels to improve GPU utilization. cuSync's synchronizes tiles instead of kernels, which allows executing tiles of multiple dependent kernels. Using cuSync we expressed several synchronization policies in a few lines of code and reduced the inference times of GPT-3 and ResNet-38 by up to 1.19\(\times\) and 1.16\(\times\) respectively.
CUDA, GPU, Synchronization, Machine Learning
## I Introduction
The trend of large Machine Learning (ML) models has delivered remarkable results in multiple domains. These results have suddenly expanded the demand of ML models in innumerable applications. As a result, the infrastructure required to serve such large models has significantly increased. Therefore, even small improvements in the execution of these models can get us huge savings in terms of cost and energy.
ML models typically consist of embarrassingly parallel common operations, such as Generalized Matrix Multiplication (GeMM), 2-D Convolution (Conv2D) etc. Thus, massively parallel processors such as Graphics Processing Units (GPUs) are ideal for running ML models. Utilizing the parallelism of GPUs requires breaking down the computations into multiple independent blocks, known as _tiles_. Each tile is executed by a _thread block_, which runs on an execution unit of the GPU known as _Simultaneous Multiprocessor_ (SM). Each SM can execute one or more thread blocks parallel depending on the resource requirements of the thread blocks. Often times, not all thread blocks can be executed simultaneously on the limited number of parallel SMs. Therefore, the thread blocks are executed in one or more _waves_ where each wave utilizes all the SMs. When thread blocks are not a multiple of SMs, the last wave utilizes less SMs leading to a lower utilization of the GPU. This under-utilization is prevalent in large ML models.
As an example, consider the utilization of two dependent GeMMs in MegatronLM [9] GPT-3 145 Billion parameter model inference. Table I shows that two dependent GeMM CUDA kernels of GPT-3 achieves 60-80% utilization on NVIDIA Tesla V100 because the number of tiles are not the multiple of SMs. Due to the dependence between the two GeMMs, the consumer kernel is synchronized with the producer kernel across all thread blocks and therefore, making it impossible to overlap the tile executions of the two kernels. Alternatively, a common approach is to find independent kernels and execute them simultaneously [7]. However, inferencing large ML models do not contain enough such kernels [4, 9].
A recent method by Stream-K [8] for GeMM mitigates the lower utilization of the last wave by further partitioning the tiles in the last wave in order to utilize more SMs. However, this partitioning requires extra memory accesses by all thread blocks computing the same tile and not extendable to other tile based computations such as Matrix Dot Product, Dropout, and Softmax. As we will discuss in Section VI, this method does not improve the utilization of inferencing large ML models.
In this paper, we show that the fine-grained synchronization of dependent tiles of dependent kernels allows the simultaneous execution of all thread blocks, hence, improving GPU utilization. Furthermore, we show that diverse synchronization techniques provide best performance on different computation kernels and sizes. To this end, we present, cuSync, a framework to design efficient synchronization techniques for dependent tile based CUDA kernels. In cuSync users can quickly write and experiment with various synchronization
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Batch** & **GeMM** & **TBs** & **TBs per SM** & **Waves** & **Utilization** \\ \hline
256 & Producer & [1, 96, 2] & 2 & 1.2 & 60\% \\ & Consumer & [1, 96, 1] & 2 & 0.6 & 60\% \\ \hline
512 & Producer & [2, 48, 2] & 2 & 1.2 & 60\% \\ & Consumer & [2, 96, 1] & 2 & 1.2 & 60\% \\ \hline
1024 & Producer & [4, 48, 1] & 2 & 1.2 & 60\% \\ & Consumer & [4, 96, 1] & 2 & 2.4 & 80\% \\ \hline \end{tabular}
\end{table} TABLE I: Number of thread blocks, thread blocks per wave, waves and GPU utilization of two dependent GeMMs in MegatronLM GPT-3 [9] on several batch sizes when executing on an NVIDIA Tesla V100 containing 80 SMs. Each SM can execute 2 thread blocks, i.e., maximum of 160 thread blocks can execute per wave. Both GeMM kernels achieve low utilization because number of thread blocks are not a multiple of SMs.
techniques, such as synchronizing dependent tiles of \(n\)-D computations and row synchronization of 2-D computations. cuSync contains novel mechanisms to: (i) ensure that all thread blocks of a producer kernel are executed before the consumer kernel (Section IV-A), (ii) ensure that the dependence between tiles of producer and consumer-kernels is ensured using semaphores and memory fences (Section IV-D), and (iii) ensure that a processing order of producer and consumer tiles that minimizes the wait time(Section IV-C). We used cuSync to write novel synchronization techniques for diverse CUDA kernels, such as, GeMM, 2-D Convolutions, and Dot Product, within few lines of code. cuSync integrated CUDA kernels reduced the inference time of GPT-3 [9] by up to 1.19\(\times\) and of ResNet-38 [4] by up to 1.16\(\times\) (Section VI).
## II Background
In this section, we discuss NVIDIA GPUs, computations done by Machine Learning models, and the process of executing computations on an NVIDIA GPU.
### _NVIDIA Graphics Processing Units and CUDA_
A CUDA kernel executes a multiple concurrent _threads_ organized in a 3-dimensional _grid_, where each thread of the grid has a 3-dimensional unique identifier. All threads of the kernel are grouped into equally sized _thread blocks_, with each thread block having a 3-dimensional index in the grid. CUDA defines 3-dimensional indices and sizes with values for \(x\), \(y\), and \(z\)-dimension. Thus, the grid has thread blocks in \(x\), \(y\), and \(z\)-dimensions and the number of thread blocks in a grid is product of thread blocks in all dimensions.
A Graphics Processing Unit (GPU) contains multiple Streaming Multiprocessors (SMs), each of which executes one or more thread blocks. Since each SM contains a fixed amount of registers and shared memory, there are a fixed number of thread blocks that can execute in parallel on an SM. This number of thread blocks per SM, known as _occupancy_, depends on the CUDA kernel's register and shared memory usage, and the number of threads in thread block and number of thread blocks in the CUDA kernel's grid.
Thread block Wave ExecutionA GPU executes all thread blocks of a 3-dimensional grid in a _row major order_, i.e., first the \(x\)-dimension, then \(y\)-dimension, and finally \(z\)-dimension. A GPU executes thread blocks on its SMs in one or more _waves_, where each wave executes at maximum \(occupancy\times Number\_of\_SMs\) thread blocks to all SMs. Therefore, the number of waves are \(\lceil\frac{Number\_of\_TB\_sim_in\_Grid_}{occupancy\times Number\_of\_SMs}\rceil\). For instance, consider a CUDA kernel with occupancy of two thread blocks per SM and the kernel is invoked with 240 thread blocks, then executing the kernel on a NVIDIA Tesla V100 containing 80 SMs requires \(\lceil\frac{240}{2\times 80}\rceil=2\) waves. The first wave executes 160 thread blocks and the second wave executes the remaining 80 thread blocks.
Stream SynchronizationA CUDA _stream_ contains a sequence of CUDA operations that execute in the order they were issued. When two CUDA kernels with a producer-consumer relationship are invoked on the same stream, the consumer-kernel is not started before all thread blocks of the producer-kernel has finished their execution. We call this synchronization as _stream synchronization_. However, independent CUDA kernels can be invoked on different streams to execute the kernels concurrently. A stream also have an associated priority value. The CUDA driver ensures that operations on a higher priority stream are issued first before the operations on a lower priority stream.
### _Computations in Large ML Models_
Contemporary ML models contains embarrassingly parallel computations, such as Dot product, Generalized Matrix Multiplication (GeMM), 2-D Convolution (Conv2D), Dropout, and Softmax. These parallel computations are executed on NVIDIA GPUs as CUDA kernels. In this paper, we consider two widely used machine learning models: GPT and ResNet. Below we briefly explain the computations involved in these models and their execution on large clusters of GPUs.
#### Ii-B1 GPT Models
Generative Pre-trained Transformers (GPT) is a class of natural language models that consists of multiple Multi-Layer Perceptron (MLP) and Self-Attention blocks. Both MLP and Self-Attention contains two weight matrices and takes an input matrix, \(\mathbb{X}\). With model parallelism these weight matrices are divided among all GPUs [9]. Figure 1 shows the computations of GPT with model parallelism of 8 GPUs, which is the commonly used setting. Both MLP
Fig. 1: Architecture of two building blocks of GPT models: Multi-Layer Perceptron (MLP) and Self-Attention. With model parallelism on 8 GPUs, the weight matrices of both layers are equally divided among all 8 GPUs. B is the Batch size and H is the hidden dimension. In GPT-3, H is 12288.
and Self-Attention perform the first GeMM of \(\mathbb{X}\) with their first weight matrix. Then, they perform several pointwise operations, such as GeLU, Softmax, and Dropout on the result of first GeMM. State-of-the-art implementations for MLP fuses GeLU activation with the first GeMM (line 3 in Figure 0(a)) and for Self-Attention the dot product and activations are fused in a single CUDA kernel (line 9 in Figure 0(b)). Finally, they perform the second GeMM with the second weight matrix.
#### Ii-B2 ResNet
Residual Network (ResNet) is a computer vision model consisting of multiple convolution layers. Each convolution layer perform two Conv2D operations with same kernel size and channels. Table II shows the details of each layer.
### _Tile based Computations_
Efficient CUDA kernels of common computations, including GeMM and Conv2D, divides the computation into multiple _tiles_. Each tile is computed by one or more thread blocks and all threads of a thread block compute one or more elements of the tile. This decomposition enables computing tiles independent of each other, which leads to high-parallelism.
Figure 1(a) shows an example workflow of a computing GeMM, \(C_{[M,N]}=A_{[M,K]}\times B_{[K,N]}\), with tile size \([T_{M},T_{N}]\) on a GPU. A tile, \(C^{xy}\), represents a 2-D sub-matrix of \(C\) starting at row \(x\) and column \(y\) of \(C\), and is of the same size as the tile size, i.e., \([T_{M},T_{N}]\). Therefore, the number of tiles in the row dimension are \(\frac{M}{T_{M}}\) and in column dimension are \(\frac{N}{T_{N}}\). Each tile is computed by one thread block and the CUDA kernel is invoked with a 3-D grid \(\left[\frac{M}{T_{M}},\frac{N}{T_{N}}\right]\), which is same as the number of tiles in row and column dimensions. To compute \(C^{xy}\), the thread block multiplies \(T_{M}\) rows starting at \(x\) row of \(A\) and \(T_{N}\) columns starting at \(y\) column of \(B\).
High-performance CUDA libraries, such as NVIDIA CUT-LASS and CUBLAS, uses the above tile based methodology for GeMM. Additionally, these libraries computes Convolutions using GeMM to utilize the optimized GeMM kernels. In this paper, we focus on these tile based computations and their implementation in NVIDIA CUTLASS.
## III Motivation
In this section, we show how the traditional stream synchronization can lead to under-utilization and how our _fine-grained synchronization_ can ensure maximum utilization of GPU resources.
Let us consider two dependent GeMMs on three matrices: \(A\) of shape \([12,8]\), \(B\) of shape \([8,8]\), and \(D\) of shape \([8,8]\):
producer-gemm: \[C_{[12,8]}=A_{[12,8]}\times B_{[8,8]}\] consumer-gemm: \[E_{[12,8]}=C_{[12,8]}\times D_{[8,8]}\]
The producer-gemm computes \(C\) by multiplying \(A\) and \(B\), and the consumer-gemm computes \(E\) by multiplying \(C\) and \(D\). Let us also assume that the CUDA kernel for both GeMMs has a tile size of \([4,4]\) and each tile is computed by one thread block. Thus, both gemm kernels are invoked with a 3-D grid of size \(\left[\frac{12}{4},\frac{8}{8}\right]\), i.e., six thread blocks.
Figure 1(b) shows the execution of thread blocks of both kernels on a GPU with four SMs. The GPU executes all six thread blocks of both kernels in \(\left[\frac{6}{4}\right]=2\) waves. The producer-gemm kernel executes four thread blocks in the first wave and after these thread blocks are finished it executes the remaining two thread blocks in the second wave. Similarly, the consumer-gemm kernel executes its thread blocks in two waves. With stream synchronization between kernels, thread blocks of the consumer-gemm kernel are executed only after the producer-gemm kernel is finished. Hence, in the second wave of both kernels only SM-0 and SM-1 are utilized while SM-2 and SM-3 are idle. In summary, the stream synchronization between producer- and consumer-kernels can lead to under-utilization of GPU resources.
In this paper, we present a novel _fine-grained synchronization_ of thread blocks, thus synchronizing dependent tiles, of producer- and consumer-kernels to improve the utilization of GPU resources. Figure 1(c) shows how we obtain full utilization in our synthetic example. We invoke both kernels on separate streams to execute thread blocks of both kernels simultaneously, and synchronize only the dependent thread blocks using a semaphore stored in GPU's memory. In our example, computing an \(i^{\text{th}}\) row of \(E\) requires computing the \(i^{\text{th}}\) row of \(C\), i.e., \(E^{i,0}\) and \(E^{i,1}\) depends on \(C^{i,0}\) and \(C^{i,1}\). To maintain this dependency, each thread block of the consumer-gemm waits until all thread blocks of the corresponding row of producer-gemm has been computed. In our example, both \(E^{i,0}\) and \(E^{i,1}\) waits until \(C^{i,0}\) and \(C^{i,1}\) has been computed. Since both kernels are running on different CUDA streams, thread blocks of both kernels are executed in only three waves as compared to four waves in stream synchronization. Hence, with fine-grained synchronization between thread blocks of producer and consumer-kernels can improve the utilization of all SMs on a GPU.
In summary, stream synchronization of CUDA kernels can suffer from under-utilization of GPU resources, while our fine-grained synchronization of tile based CUDA kernels improves the utilization of GPU resources. We now show how that executing ML models using stream synchronization leads to under utilization of GPU resources.
### _Under-utilization in Training of GPT-3_
The training and inference of large ML models suffer from under-utilization due to stream synchronization among dependent CUDA kernels. As a motivation, consider both
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Image Size** & **Kernel** & **Channels** & **Convs per Layer** & **Layers** \\ \hline \([56,56,64]\) & \([3,3]\) & 64 & 2 & 3 \\ \([28,28,128]\) & \([3,3]\) & 128 & 2 & 4 \\ \([14,14,256]\) & \([3,3]\) & 256 & 2 & 6 \\ \([7,7,512]\) & \([3,3]\) & 512 & 2 & 3 \\ \hline \end{tabular}
\end{table} TABLE II: Input/output image size, kernel size, and input/output channels for each pair of Conv2D in ResNet-38. The last column shows the number of layers for each Conv2D pair.
GeMMs of MLP in GPT-3 with float16 weights, hidden dimension H\(=12288\), and batch size B\(=1024\). Figure 0(a) shows the GeMM sizes in MLP for GPT-3 are: \(\texttt{XW}_{1}\) of shape \([1024,6144]\) and \(\texttt{XW}_{12}\) of shape \([1024,12288]\). Hand-written GeMM CUDA kernels in NVIDIA CUTLASS use the tile size for both GeMMs to \([256,128]\). The GeMM kernel has an occupancy of 2 thread blocks per SM and when executed on an NVIDIA Tesla V100 GPU with 80 SMs, each wave can execute up to 160 thread blocks.
Based on the tile sizes, the producer-gemm is invoked with \([4,48]=192\) thread blocks and requires \(\lceil\frac{192}{160}\rceil=2\) waves to execute all thread blocks. While the first wave executes 160 thread blocks, the second wave can only execute the remaining \(192\%160=32\) thread blocks. Thus, the second wave under-utilizes GPU resources. Similarly, the consumer-gemm is invoked with \([4,96]=384\) thread blocks and the last wave executes \(384\%160=64\) thread blocks. Therefore, thread block execution of both kernels is done in \(2+3=5\) waves using stream synchronization.
With our fine-grained synchronization, thread blocks of both kernels are executed in only \(\lceil\frac{192+384}{160}\rceil=4\) waves, which is one less than with stream synchronization. Moreover, with fine-grained synchronization the last wave improves utilization of GPU by executing \((192+384)\%160=96\) thread blocks, which is far more than the thread blocks executed in the last wave of both kernels. That is why, our fine-grained synchronization obtains higher utilization of the GPU and provides 1.31\(\times\) speedup over the stream synchronization.
In the next section, we present, cuSync, a framework to write synchronization techniques among diverse kernels. Using cuSync we show that diverse fine-grained synchronization techniques can improve the GPU utilization of ML models.
## IV Fine-Grained Synchronization using cuSync
Our fine-grained synchronization of dependent CUDA kernels consists of three mechanisms: (i) executing thread blocks of all kernels on SMs without coarse-grained stream synchronizations, (ii) adding fine-grained synchronization among the dependent tiles of every producer-consumer kernel pair, and (iii) controlling the order of tile processing in each kernel to minimize the wait time of synchronization. We have implemented these mechanisms in a header-only standalone CUDA library, we call as cuSync.
We explain these mechanisms through the MLP (Figure 0(a)) running example with two dependent GeMM kernels. Figure 2(a) shows an implementation of MLP that uses cuSync for fine-grained synchronization. The gemm function is a standard generalized matrix multiplication kernel (we use CUTLASS for our experiments) with additional code (shown as underlined) to call into cuSync. Specifically, cuSync associates each kernel with a CuStage object that supports fine-grained synchronization facilities among kernels. The MLP function creates these stage objects, declares dependencies between them, and invokes the kernels.
### _Invoke Dependent Kernels_
The first part of fine-grained synchronization is to eliminate the stream synchronization between kernels. cuSync achieves this by invoking the kernels on different CUDA streams. cuSync associates a CuStage object with each kernel. Lines 20 and 21 in Figure 2(a) creates a producer prod and consumer cons stage for the two GeMM kernels. These kernels are invoked on different streams associated with respective stages. Before doing so, the code declares the dependency between the two stages in line 23 specifying
Fig. 2: Thread block execution with existing stream synchronization and fine-grained fine-grained synchronization on 4 SMs for a GeMM kernel to compute two dependent GeMMs: \(C_{12\times 8}=A_{12\times 8}\times B_{8\times 8}\) and \(E_{12\times 8}=C_{12\times 8}\times B_{8\times 8}\).
that input A of the consumer depends on the output of the producer. Section IV-D describes how cuSync enforces this dependency.
### _Stage Processing Order_
CUDA runtime lacks a straightforward mechanism to enforce a scheduling order among kernels belonging to different streams. Therefore, there is a possibility that the consumer kernel is scheduled to execute on the GPU before the producer kernel. This can lead to poor performance as thread blocks of the consumer kernel occupy SMs without any useful work to do. In the worst case, this can lead to deadlocks if there are no SMs available for thread blocks of the producer kernel.
To avoid this, cuSync provides a mechanism for enforcing the scheduling order of kernels. In this example, the consumer stage has a wait-kernel invoked in line 28 in Figure 2(a). Wait kernels are invoked with a single thread associated with stream of the consumer stage. This thread waits on semaphores stored in global memory for each dependent kernel using a busy-wait while loop. cuSync requires each kernel to call the stage.start() method (line 4) that sets this semaphore. Once the wait kernel exits, the CUDA runtime will schedule the kernel associated with the stage. In our example, this mechanisms ensures none of the thread blocks of the consumer kernel are scheduled before at least one of the thread blocks of the producer kernel.
### _Tile Processing Order_
The CUDA runtime can schedule thread blocks to SMs in an arbitrary order, which can lead to unpredictable wait times in dependent kernels. Ideally, we want the thread blocks of the consumer kernel to be scheduled in the order producer kernels generate the tiles. cuSync enforces this order as follows. Each kernel calls into stage.tile() (line 5) to compute the tile it needs to compute next. Internally, cuSync maintains a global counter per stage that determines the tile computation order. In the example, the template parameter RowMajor in line 20 and line 21 ensures that the two kernels produce tiles in row major order independent of how the CUDA runtime schedules thread blocks. cuSync supports other tile orders such as column major and strided row major.
Fig. 3: Fine-grained synchronization of two GeMMs of MLP using TileSync and RowSync policies.
### _Synchronizing Dependent Tiles_
The second mechanism required for fine-grained synchronization is to ensure the dependence between tiles of producer- and consumer-kernels because we have now eliminated the stream synchronization between the kernels. This is accomplished by invoking the wait (line 7 and line 9) and post (line 13) methods of the CuStage object associated with the consumer kernel. To compute a tile of GeMM, the GeMM kernel depends on a row of tiles from input A and a column of tiles from input B. For the consumer kernel, the input row tiles are generated by the producer kernel. This dependency is declared in line 23. Accordingly, the wait before loading a tile of A waits for the corresponding post of the producer kernel. Since there are no depenences for loading the tiles of B, the corresponding wait becomes a no-op. Similarly, both waits are no-ops for the producer kernel. Figure 2(b) describes the implementation of the post and wait methods, which will be described in detail below.
### _Synchronization Policies_
cuSync allows implementation of diverse synchronization policies. Each synchronization policy is implemented as a CUDA class with three functions: (i) init allocates an array of semaphore, (ii) post increments the status of semaphore for given tile, and (iii) wait waits until the value of semaphore for the tile reaches an expected value. Below we discuss three policies we implemented in cuSync in few lines of code based on our workloads.
_TileSync:_ We refer to the policy of synchronizing on the semaphore of each tile of the producer-kernel as _TileSync_ policy. To minimize the wait time of each tile of the consumer-kernel, both kernels computes their tiles in a row major order. Figure 2(b) shows the implementation of this policy in cuSync in a few lines of code. Both wait and post methods works on a single semaphore for each tile (line 17 and 21). For example, in Figure 2(a) to compute a tile \(E^{xy}\), the TileSync policy requires waiting first on \(C^{x0}\) and \(C^{x1}\).
_RowSync:_ Computing tile \(E^{xy}\) in Figure 2(a) using TileSync requires waiting for 2 tiles of \(C\). In general, for two dependent GeMM, TileSync requires \(\frac{N}{T_{n}}\) number of synchronizations, where \(N\) is columns of \(C\) and \(T_{n}\) is the column tile. For sufficiently high value of \(N\), the high number of synchronizations can become a bottleneck. We can reduce the number of synchronizations for tile \(E^{ij}\) by synchronizing over dependent rows, i.e., share the semaphore of all tiles computing the \(i^{th}\) row of \(C\). Figure 2(b) shows the implementation of this policy in CUDA. The post function increments the semaphore for the row of given tile (line 32) and wait function only waits for the row of tile (line 28). To minimize the wait time of each tile of the consumer-kernel, both kernels computes their tiles in the row major order.
_StridedSync:_ Figure 4 shows the dependence of tiles between the three kernels of Self-Attention of GPT-3. The result of first GeMM, XQKV, is equally divided along the columns into three matrices: XQ, XK, and XV. The dot product kernel performs the dot product of XQ, XK, and XV, and softmax on the output of dot product, to return matrix XDot. Hence, the \(i^{th}\) column tile of XDot depends on the \(i^{th}\) column tile of XQ, XK, and XV. Moreover, each row tile of XW\({}_{12}\) depends on all tiles of XDot computing the same row.
Figure 3(b) shows the implementation of _StridedSync_ policy for synchronizing the first GeMM and the dot product kernel. In StridedSync, all tiles of the first GeMM that are stride apart shares the same semaphore. Moreover, the prodOrder function orders tiles of the first GeMM that are stride apart first and then next group of tiles. For Self-Attention, the value of stride is \(\frac{\texttt{Column~{}Tiles~{}of~{}XQKV}}{3}=\frac{H}{8\times T_{y}}\), where \(T_{y}\) is the tile size.
_Conv2DTileSync:_ The implicit GeMM kernel performs the convolution by converting the input image and convolution matrix to a matrix representation in shared memory. Our _Conv2DTileSync_ synchronizes a tile \([i,j]\) of the output matrix \(\left[i,\frac{j}{K\times K}\right]\) tile of input image, where \(K\times K\) is the size of convolution matrix.
_1) Synchronization Implementation:_ We now discuss the implementation of semaphores.
**Post**: The post method of a policy obtains a semaphore and then call post on the semaphore. In Figure 2(b), the post method first performs a _syncthreads to ensure that all threads of the thread block has computed the tile and has issued store instructions to write computed tile elements to the global memory (line 7). Then the method executes a memory fence to ensure that all global memory writes will be visible to threads of the other kernel (line 9). Finally, the method increments the semaphore value by 1.
**Wait**: The wait method of a policy obtains a semaphore and then call wait_till on the semaphore. In Figure 2(b), the wait_till function reads the value of semaphore in a while loop using only the first thread of the thread block (line 3). While the first thread is waiting on the semaphore, all other threads of thread block waits on the _syncthreads (line 4). When the value changes to the expected value, the first thread reaches the _syncthreads and all threads of the thread-block reads from the producer tile.
## V Optimizations
We present two optimizations that the user can do to improve the performance of fine-grained synchronization.
#### V-1 Avoid Wait Kernel
The wait-kernel is required to ensure that all thread blocks of the producer-kernel are scheduled on the GPU before the consumer-kernel. However, we can avoid invoking the wait-kernel if thread blocks of both producer- and consumer-kernel can be scheduled in a single wave. This condition is satisfied when the sum of thread blocks of both kernels is less than or equal to the minimum occupancy of both kernels.
_2) Reorder Tile Loads and Synchronization:_ The general workflow of tile based CUDA kernels is to load tile of each input and then perform operations on the tile. We can re-order the waiting of tile of one input with the loading of other tile, to overlap the waiting of one tile with the loading of the other input's tile.
For example, in Figure 2(a) the second GeMM kernel loads a tile of both inputs (A and B) and compute the tile of output matrix (C) (line 7-10). We can reorder the loading of B tile with the waiting on A tile, i.e., swap lines 7-8 with lines 9-10. Since there is no waiting for tile of B, loading B tile can overlap with waiting of A tile, thus, improving performance.
## VI Evaluation
In this section, we evaluate the performance of cuSync against state-of-the-art baselines on diverse computations.
Baseline and DatasetWe consider computations in two most widely used machine learning models: (i) MegatronLM's GPT-3 145 Billion [9] parameter model that contains GeMM, Dot Products, Softmax, and Dropout (Figure 1), and (ii) ResNet-38 model that contains different Conv2Ds (Table II). We consider multiple batch sizes from 1 to 2048 for GPT-3 and 1 to 32 for ResNet-38 [4]. We use the CUDA kernels of GeMM and Conv2D in NVIDIA CUTLASS 2.13 and we implemented a fused pointwise CUDA kernel that performs dot product, softmax, and dropout in Self-Attention. We modified these kernels to do fine-grained synchronization using cuSync and compare cuSync against two baselines:
* **StreamSync** the traditional uses stream synchronization between dependent kernels.
* **Stream-K**[8] divides GeMM workload of the last thread block wave equally among all SMs to improve utilization. Stream-K divides the GeMM computation into two kernels. The first kernel computes tiles for thread blocks executed in the full waves using blocked GeMM. The second kernel computes remaining tiles by partitioning the summation dimension across all SMs. Each thread block of the second kernel computes it tile-partition and update the partial tile stored in the global memory. Therefore, Stream-K performs extra memory accesses for last remaining tiles than the traditional GeMM.
Sync PoliciesWe consider following synchronization policies and optimizations cases:
* **RowSync** synchronizes dependent rows (Figure 2(b)).
* **TileSync** synchronizes dependent tiles (Figure 2(b)).
* **TileSync+** extends TileSync by avoiding the wait-kernel for kernels with low grid sizes (Section V-1).
* **TileSync+WR** extends TileSync+W by reordering the tile synchronization on first input with loads of other inputs (Section V-2).
* **Strided+TileSync+WR**, only for Self-Attention, synchronizes the first GeMM with dot product using StridedSync, and the dot product with second GeMM using TileSync (Figure 4). The policy also avoids wait-kernel and reorder tile loads.
* **Conv2DTileSync** synchronizes dependent tiles of Conv2Ds.
* **Conv2DTileSync+W** and **Conv2DTileSync+WR** apply avoid wait-kernel and reordering tile loads optimizations.
Experimental SetupWe run our experiments on a machine containing a 2.60GHz 12-core Intel Xeon CPU E5-2690 v4 with 448GB RAM and an NVIDIA Tesla V100 GPU with 32 GB memory. All kernels are compiled with CUDA 11.2. We report the average time of 20 executions after a warmup of 5 executions. The execution time only contains running time of CUDA kernels and do not include any host and device memory transfer time.
Fig. 4: Tile dependence between all three CUDA kernels of Self-Attention and the StridedTileSync policy in cuSync for synchronizing the first GeMM and dot product. The dot product and second GeMM are synchronized using TileSync.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Kernel** & **Implementation** & \multicolumn{2}{c|}{**Lines Changed**} \\ \cline{3-4} & & \multicolumn{2}{c|}{**Number**} & **Fraction** \\ \hline GeMM & CUTLASS & 25 & 0.5\% \\ Dot-Product & Ours & 5 & 1\% \\ Conv2D & CUTLASS & 22 & 0.6\% \\ \hline \end{tabular}
\end{table} TABLE III: Fraction of lines of code changed in GeMM, DotProduct, and Conv2D kernels to enable fine-grained synchronization using cuSync.
### _Ease of Programming_
Table III shows that the number of lines added and changed to support fine-grained synchronization of GeMM, Conv2D, and Dot Product kernels using cuSync are negligible compared to the lines of code of these kernels. Thus, cuSync approach enables diverse synchronizations of tile based computation kernels through few modifications.
### _GPT-3 Inference Results_
In this section, we evaluate reduction in inference times of GPT-3 using cuSync. We first show the speedup of cuSync's synchronization of computation kernels of MLP and Self-Attention, and then discuss the reduction in end-to-end inference time of GPT-3. Figure 5 shows our inference results of GPT-3 over StreamSync.
#### Iv-B1 MLP Results
Figure (a)a shows that synchronizing two dependent GeMMs using cuSync in MLP decreases the combined execution time of both GeMMs by 1.04\(\times\)-1.31\(\times\) for different batch sizes. We discuss these improvements using Table IV that shows the number of waves for each batch size for both StreamSync and cuSync. TileSync+WR performs best for 1 to 256 batch sizes because there is a single thread block in the \(x\)-dimension of grid as shown in Table IV. The speedup at batch size of 256 is higher than small batch sizes because the GPU can execute both GeMMs in 2 waves using TileSync+WR while StreamSync requires 3 waves. On small batch sizes, even though the number of waves are not decreased, TileSync+WR still provides a speedup of up to 1.07\(\times\) because the second GeMM can overlap the loading of W2 tile in shared memory with the computation of first GeMM.
RowSync performs best for sizes greater than 256 because synchronizing over a row once than multiple tiles reduces memory accesses, and multiple rows provide more opportunities for row synchronization. Moreover, increasing the number of rows also increases the speedup of RowSync from 1.13\(\times\) at 512 to 1.31\(\times\) at 1024. However, the speedup decreases to 1.12\(\times\) at 2048 because the fraction of waves reduced by cuSync decreases with more thread blocks in the grid.
We measured the time spent in invocation of a kernel is around 6\(\mu\)s. The difference in execution time of StreamSync and cuSync in Table IV is significantly higher than the kernel invocation. Therefore, the speedups are significantly higher than what would be only achieved by only overlapping the invocation of consumer-kernel with producer-kernel's execution.
Fig. 5: Speedup of cuSync’s policies for synchronizing CUDA kernels in MLP and Self-Attention over StreamSync, and speedup of GPT-3 inference for batch sizes 1–2048. Numbers shows the maximum speedup of all policies.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
**Batch** & \multicolumn{2}{c|}{**First GeMM**} & \multicolumn{2}{c|}{**Second GeMM**} & \multicolumn{2}{c|}{**StreamSync**} & \multicolumn{2}{c|}{**cuSync**} & \multicolumn{2}{c|}{**Speedup**} \\ \cline{2-11} & **Grid** & **Waves** & **Grid** & **Waves** & **Waves** & **Time(\(\mu\)s)** & **Waves** & **Policy** & **Time(\(\mu\)s)** & \\ \hline
1–64 & 1\(\times\)24\(\times\)3 & 0.3 & 1\(\times\)48\(\times\)1 & 0.2 & 2 & 445 & 0.5 & Tile & 411 & 1.04–1.07\(\times\) \\
128 & 1\(\times\)48\(\times\)2 & 0.4 & 1\(\times\)96\(\times\)1 & 0.4 & 2 & 780 & 0.8 & Tile & 730 & 1.07\(\times\) \\
256 & 1\(\times\)96\(\times\)2 & 1.2 & 1\(\times\)96\(\times\)1 & 0.6 & 3 & 951 & 1.8 & Tile & 840 & 1.12\(\times\) \\
512 & 2\(\times\)48\(\times\)2 & 1.2 & 2\(\times\)96\(\times\)1 & 1.2 & 4 & 1651 & 2.4 & Row & 1450 & 1.13\(\times\) \\
1024 & 4\(\times\)48\(\times\)1 & 1.2 & 4\(\times\)96\(\times\)1 & 2.4 & 5 & 2832 & 3.6 & Row & 2451 & 1.31\(\times\) \\
2048 & 8\(\times\)48\(\times\)1 & 2.4 & 8\(\times\)96\(\times\)1 & 4.8 & 8 & 4838 & 7.2 & Row & 4262 & 1.12\(\times\) \\ \hline \end{tabular}
\end{table} TABLE IV: Grid size and the number of waves for each GeMM, total waves and execution time in stream synchronization and in cuSync for both GeMMs of MLP for batch size from 1 to 2048. The grid \(x\) and \(y\)-dims are obtained by dividing the size of GeMM with the tile size and the \(z\)-dim is the number of thread blocks used for split-k.
Fig. 6: Speedup of best cuSync policy of Self-Attention and MLP over StreamK for different batch sizes.
#### Vi-B2 Self-Attention Results
Figure 4(b) shows that synchronizing both GeMMs and the dot-product kernel of Self-Attention using cuSync provides up to 1.12\(\times\) speedup over StreamSync for different batch sizes. Similar to the MLP, both tile synchronization policies, Strided+TileSync+WR and TileSync+WR, performs better than RowSync for batch sizes up to 256 and RowSync performs better than other policies for batch sizes more than 256. For batch sizes up to 256, Strided+TileSync+WR obtains 1.15\(\times\) speedup over StreamSync and performs better than TileSync+WR because the former policy combines synchronizations of three tiles into a single synchronization. RowSync achieves the maximum speedup of 1.12\(\times\) at 1024 batch size.
#### Vi-B3 End-to-End GPT-3 Inference Results
We integrated cuSync synchronized MLP and Self-Attention in Megatron-LM [9] to evaluate the reduction in inference times of GPT-3. Figure 4(c) shows that our fine-grained synchronization decreases the inference times by 1.05\(\times\)-1.19\(\times\).
#### Vi-B4 Comparison with Stream-K
Since Stream-K improves the utilization of individual GeMMs, we compare the performance of cuSync against Stream-K's GeMM kernels in MLP and Self-Attention. Figure 6 shows that best policy of cuSync obtains up to 1.31\(\times\) speedup over Stream-K.
The speedup of cuSync over Stream-K is due to three reasons. First, computing a tile using multiple thread blocks in Stream-K requires extra memory accesses than computing a tile using single thread block. Whereas cuSync perform a single write to post the status of computed tile and single read to wait on the status of producer-tile. Second, dividing the work of last wave among all SMs require smaller tile sizes that decreases the locality and hence, the performance of individual thread blocks, while cuSync the tile sizes are not needed to be changed. Third, Stream-K is not straightforward to apply to other tile based kernels other than GeMM, such as the dot-product kernel in Self-Attention, while cuSync is valid for any tile based kernels.
In summary, cuSync provides significant speedups over StreamSync and Stream-K in GPT-3, and there is no one policy that provides best speedup for all cases.
### _ResNet-38 Inference Results_
We now evaluate cuSync on reducing the inference times of ResNet-38 by synchronizing both Conv2D kernels of each layer of ResNet-38. Figure 7 shows the speedup of cuSync over StreamSync for synchronizing both Conv2D kernels of each layer in ResNet. We first discuss the performance of each pair of Conv2Ds using RowSync and TileSync policies, and then show the end-to-end ResNet-38 inference results.
#### Vi-C1 Individual Layer Results
Figure 6(a) shows that synchronizing two Conv2D kernels using cuSync provides 1.02\(\times\)-1.20\(\times\) speedup over StreamSync for different channels and batch sizes. We notice following two interesting trends, which we explain using the number of grids and waves for Conv2D with 128 channels shown in Table V.
First, for each channel, the speedup increases to a maximum possible value with larger batch size because larger batch size invokes more number of thread blocks, which leads to higher fraction of waves reduced by cuSync. However, the speedup decreases after the maximum value because further increasing the number of thread blocks eventually decreases the fraction of waves reduced by cuSync. For example, for 128 channels, Table V shows that RowSync obtains maximum reduction in the number of waves, i.e., 33%, which leads to maximum speedup of 1.12\(\times\) for batch size 16.
Second, similar to MLP and Self-Attention, Conv2DTileSync performs better than RowSync for smaller batch sizes while RowSync performs better for larger batch sizes. For 128 channels, Table V shows that batch size 8 and more invokes high number of thread blocks in \(x\)-dimension of grid leading to better speedup by RowSync. However, for 64 channels, both Conv2DTileSync and RowSync performs similarly because there is a single tile in the \(y\)-dimension.
\begin{table}
\begin{tabular}{|r|r|r|r|r|r|r|} \hline \multicolumn{1}{|c|}{**B**} & \multicolumn{1}{|c|}{**\(\mathbf{TBs}\)**} & \multicolumn{1}{|c|}{\multicolumn{1}{|c|}{**TileSync**}} & \multicolumn{1}{|c|}{\multirow{2}{*}{**\(\mathbf{Vails}\)**}} & \multicolumn{1}{|c|}{\multirow{2}{*}{**\(\mathbf{TilesSync+WR}\)**}} \\ \cline{5-6} \multicolumn{1}{|c|}{} & & & **Waves** & & **Time** & & **Wave** & **Policy** & **Time** \\ \hline
1 & 13 & 0.24 & 2 & 56 & 0.48 & Tile & 48 \\
4 & 49 & 0.61 & 2 & 93 & 1.22 & Tile & 77 \\
8 & 98 & 0.61 & 2 & 151 & 1.22 & Row & 132 \\
12 & 147 & 0.91 & 2 & 160 & 1.84 & Row & 150 \\
16 & 196 & 1.22 & 4 & 210 & 2.45 & Row & 196 \\
20 & 245 & 1.53 & 4 & 262 & 3.06 & Row & 234 \\
24 & 294 & 1.83 & 4 & 290 & 3.68 & Row & 267 \\
28 & 343 & 2.14 & 6 & 332 & 4.3 & Row & 316 \\
32 & 392 & 2.45 & 6 & 369 & 4.9 & Row & 350 \\ \hline \end{tabular}
\end{table} TABLE V: Number of \(x\)-dim thread blocks and waves of individual Conv2D kernels, waves and execution time in \(\mu\)s of StreamSync and of cuSync of two Conv2D with 128 channels for different batch sizes.
#### Vi-C2 End to End Inference
We integrated cuSync version of Conv2D kernels of each layer in ResNet-38 to evaluate the decrease in inference times with vanilla ResNet-38 on different batch sizes. Figure 6(b) shows that the integration of ResNet-38 obtains up to 1.16\(\times\) speedup over the vanilla version.
### _Impact of Optimizations_
We now discuss the performance improvements provided by the optimizations on the top of TileSync for ResNet and MLP. Table VIa shows that both TileSync+W and TileSync+WR has lower execution times than TileSync for kernels with low thread blocks. While the reordering of tile loads and waiting provides larger improvement, avoiding wait kernel also reduces some execution time.
## VII Related Work
Several works have focussed on efficient software-based synchronization between threads of the same CUDA kernel for irregular GPU applications [6, 11, 12]. Li et. al. [6] developed an approach for inter-thread synchronizations by reassembling the micro-instructions of shared memory atomic operations in an efficient way. Kai et. al. [11] presented a hierarchical approach to synchronization for irregular applications by synchronizing thread blocks using global memory and threads of a thread block using shared memory. Xu et. al. [12] present a lock design that uses lock stealing to avoid deadlocks. CoCoNet[5] performs synchronization between computation and communication kernel to overlap the communication transfers with computation. cuSync targets synchronization between threads of multiple CUDA kernels and provides an abstraction to easily design several synchronization policies, both of these are missing from above works.
Moreover, some works have focussed on hardware-supported synchronization primitives for inter-kernel threads. GLocks [1] is the first hardware supported implementation for highly-contented locks using message passing. HQL [13] is a hardware-accelerated fine-grained lock scheme for GPUs, which adds support for queuing locks in L1 and L2 caches and uses a customized communication protocol for faster lock transfer and reduced lock retries. EITantway [3] et. al. propose a hardware warp scheduling policy that reduces lock retries by de-prioritizing warps whose threads are spin waiting. They also propose a hardware mechanism for accurately detecting busy-wait synchronization on GPUs. Dalmia [2] et. al. designed multi-level barrier and priority mechanisms for semaphores for GPU based synchronization primitives. cuSync is a software solution for synchronization threads of multiple CUDA kernels and these hardware-supported mechanisms are complementary to cuSync.
Lingqi et. al. [14] studied the performance and pitfalls of several CUDA synchronization methods for reduction operation. Sinclair et. al. [10] presented a benchmark suite to measure the performance of synchronization primitives for different coherence protocols and consistency models.
Stream-K [8] is a GeMM implementation that improves the utilization of SMs of a GPU by dividing the workload among all SMs. However, Stream-K is not straightforward to apply to computations other than GeMMs. In contrast, cuSync fits thread blocks of multiple kernels in each wave and is applicable to any tile based computations.
## VIII Conclusion
Machine learning models contains several dependent computations. Each computation cannot completely utilize the GPU but it is not possible to independently execute dependent computations. In this paper, we show that fine-grained synchronization of dependent computations can improve the performance by higher utilization and more optimizations. In future, we will apply our techniques on better computation and communication overlap techniques.
|
2310.09866 | Federated Multi-Objective Learning | In recent years, multi-objective optimization (MOO) emerges as a foundational
problem underpinning many multi-agent multi-task learning applications.
However, existing algorithms in MOO literature remain limited to centralized
learning settings, which do not satisfy the distributed nature and data privacy
needs of such multi-agent multi-task learning applications. This motivates us
to propose a new federated multi-objective learning (FMOL) framework with
multiple clients distributively and collaboratively solving an MOO problem
while keeping their training data private. Notably, our FMOL framework allows a
different set of objective functions across different clients to support a wide
range of applications, which advances and generalizes the MOO formulation to
the federated learning paradigm for the first time. For this FMOL framework, we
propose two new federated multi-objective optimization (FMOO) algorithms called
federated multi-gradient descent averaging (FMGDA) and federated stochastic
multi-gradient descent averaging (FSMGDA). Both algorithms allow local updates
to significantly reduce communication costs, while achieving the {\em same}
convergence rates as those of their algorithmic counterparts in the
single-objective federated learning. Our extensive experiments also corroborate
the efficacy of our proposed FMOO algorithms. | Haibo Yang, Zhuqing Liu, Jia Liu, Chaosheng Dong, Michinari Momma | 2023-10-15T15:45:51Z | http://arxiv.org/abs/2310.09866v3 | # Federated Multi-Objective Learning
###### Abstract
In recent years, multi-objective optimization (MOO) emerges as a foundational problem underpinning many multi-agent multi-task learning applications. However, existing algorithms in MOO literature remain limited to centralized learning settings, which do not satisfy the distributed nature and data privacy needs of such multi-agent multi-task learning applications. This motivates us to propose a new federated multi-objective learning (FMOL) framework with multiple clients distributively and collaboratively solving an MOO problem while keeping their training data private. Notably, our FMOL framework allows a different set of objective functions across different clients to support a wide range of applications, which advances and generalizes the MOO formulation to the federated learning paradigm for the first time. For this FMOL framework, we propose two new federated multi-objective optimization (FMOO) algorithms called federated multi-gradient descent averaging (FMGDA) and federated stochastic multi-gradient descent averaging (FSMGDA). Both algorithms allow local updates to significantly reduce communication costs, while achieving the _same_ convergence rates as those of their algorithmic counterparts in the single-objective federated learning. Our extensive experiments also corroborate the efficacy of our proposed FMOO algorithms.
## 1 Introduction
In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent multi-task learning applications, such as training neural networks for multiple tasks [1], hydrocarbon production optimization [2], recommendation system [3], tissue engineering [4], and learning-to-rank [5; 6; 7]. MOO aims at optimizing multiple objectives simultaneously, which can be mathematically cast as:
\[\min_{\mathbf{x}\in\mathcal{D}}\mathbf{F}(\mathbf{x}):=[f_{1}(\mathbf{x}), \cdots,f_{S}(\mathbf{x})], \tag{1}\]
where \(\mathbf{x}\in\mathcal{D}\subseteq\mathbb{R}^{d}\) is the model parameter, and \(f_{s}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), \(s\in[S]\) is one of the objective functions. Compared to conventional single-objective optimization, one key difference in MOO is the coupling and potential conflicts between different objective functions. As a result, there may not exist a common \(\mathbf{x}\)-solution that minimizes all objective functions. Rather, the goal in MOO is to find a _Pareto stationary solution_ that is not improvable for all objectives without sacrificing some objectives. For example, in recommender system designs for e-commerce, the platform needs to consider different
customers with substantially conflicting shopping objectives (price, brand preferences, delivery speed, etc.). Therefore, the platform's best interest is often to find a Pareto-stationary solution, where one cannot deviate to favor one consumer group further without hurting any other group. MOO with conflicting objectives also has natural incarnations in many competitive game-theoretic problems, where the goal is to determine an equilibrium among the conflicting agents in the Pareto sense.
Since its inception dating back to the 1950s, MOO algorithm design has evolved into two major categories: gradient-free and gradient-based methods, with the latter garnering increasing attention in the learning community in recent years due to their better performances (see Section 2 for more detailed discussions). However, despite these advances, all existing algorithms in the current MOO literature remain limited to centralized settings (i.e., training data are aggregated and accessible to a centralized learning algorithm). Somewhat ironically, such centralized settings do _not_ satisfy the distributed nature and data privacy needs of many multi-agent multi-task learning applications, which motivates application of MOO in the first place. This gap between the existing MOO approaches and the rapidly growing importance of distributed MOO motivates us to make the first attempt to pursue a new **federated multi-objective learning** (FMOL) framework, with the aim to enable multiple clients to distributively solve MOO problems while keeping their computation and training data private.
So far, however, developing distributed optimization algorithms for FMOL with provable Pareto-stationary convergence remains uncharted territory. There are several key technical challenges that render FMOL far from being a straightforward extension of centralized MOO problems. First of all, due to the distributed nature of FMOL problems, one has to consider and model the _objective heterogeneity_ (i.e., different clients could have different sets of objective functions) that is unseen in centralized MOO. Moreover, with local and private datasets being a defining feature in FMOL, the impacts of _data heterogeneity_ (i.e., datasets are non-i.i.d. distributed across clients) also need to be mitigated in FMOL algorithm design. Last but not least, under the combined influence of objective and data heterogeneity, FMOL algorithms could be extremely sensitive to small perturbations in the determination of common descent direction among all objectives. This makes the FMOL algorithm design and the associated convergence analysis far more complicated than those of the centralized MOO. Toward this end, a fundamental question naturally arises:
_Under both objective and data heterogeneity in FMOL, is it possible to design effective and efficient algorithms with Pareto-stationary convergence guarantees?_
In this paper, we give an affirmative answer to the above question. Our key contribution is that we propose a new FMOL framework that captures both objective and data heterogeneity, based on which we develop two gradient-based algorithms with provable Pareto-stationary convergence rate guarantees. To our knowledge, our work is the first systematic attempt to bridge the gap between federated learning and MOO. Our main results and contributions are summarized as follows:
* We formalize the first federated multi-objective learning (FMOL) framework that supports both _objective and data heterogeneity_ across clients, which significantly advances and generalizes the MOO formulation to the federated learning paradigm. As a result, our FMOL framework becomes a generic model that covers existing MOO models and various applications as special cases (see Section 3.2 for further details). This new FMOL framework lays the foundation to enable us to systematically develop FMOO algorithms with provable Pareto-stationary convergence guarantees.
* For the proposed FMOL framework, we first propose a federated multi-gradient descent averaging (FMGDA) algorithm based on the use of local full gradient evaluation at each client. Our analysis reveals that FMGDA achieves a linear \(\mathcal{O}(\exp(-\mu T))\) and a sublinear \(\mathcal{O}(1/T)\) Pareto-stationary convergence rates for \(\mu\)-strongly convex and non-convex settings, respectively. Also, FMGDA employs a two-sided learning rates strategy to significantly lower communication costs (a key concern in the federated learning paradigm). It is worth pointing out that, in the single-machine special case where FMOL degenerates to a centralized MOO problem and FMGDA reduces to the traditional MGD method [8], our results improve the state-of-the-art analysis of MGD by eliminating the restrictive assumptions on the linear search of learning rate and extra sequence convergence. Thus, our results also advance the state of the art in general MOO theory.
* To alleviate the cost of full gradient evaluation in the large dataset regime, we further propose a federated stochastic multi-gradient descent averaging (FSMGDA) algorithm based on the use of stochastic gradient evaluations at each client. We show that FSMGDA achieves \(\tilde{\mathcal{O}}(1/T)\) and \(\mathcal{O}(1/\sqrt{T})\) Pareto-stationary convergence rate for \(\mu\)-strongly convex and non-convex settings, respectively. We establish our convergence proof by proposing a new (\(\alpha,\beta\))-Lipschitz continuous
stochastic gradient assumption (cf. Assumption 4), which relaxes the strong assumptions on first moment bound and Lipschitz continuity on common descent directions in [9]. We note that this new (\(\alpha,\beta\))-Lipschitz continuous stochastic gradient assumption can be viewed as a natural extension of the classical Lipschitz-continuous gradient assumption and could be of independent interest.
The rest of the paper is organized as follows. In Section 2, we review related works. In Section 3, we introduce our FMOL framework and two gradient-based algorithms (FMGDA and FSMGDA), which are followed by their convergence analyses in Section 4. We present the numerical results in Section 5 and conclude the work in Section 6. Due to space limitations, we relegate all proofs and some experiments to supplementary material.
## 2 Related work
In this section, we will provide an overview on algorithm designs for MOO and federated learning (FL), thereby placing our work in a comparative perspective to highlight our contributions and novelty.
**1) Multi-objective Optimization (MOO):** As mentioned in Section 1, since federated/distributed MOO has not been studied in the literature, all existing works we review below are centralized MOO algorithms. Roughly speaking, MOO algorithms can be grouped into two main categories. The first line of works are gradient-free methods (e.g., evolutionary MOO algorithms and Bayesian MOO algorithms [10; 11; 12; 13]). These methods are more suitable for small-scale problems but less practical for high-dimensional MOO models (e.g., deep neural networks). The second line of works focus on gradient-based approaches [14; 15; 8; 16; 9; 17], which are more practical for high-dimensional MOO problems. However, while having received increasing attention from the community in recent years, Pareto-stationary convergence analysis of these gradient-based MOO methods remains in its infancy.
Existing gradient-based MOO methods can be further categorized as i) multi-gradient descent (MGD) algorithms with full gradients and ii) stochastic multi-gradient descent (SMGD) algorithms. It has been shown in [8] that MGD methods achieve \(\mathcal{O}(r^{T})\) for some \(r\in(0,1)\) and \(\mathcal{O}(1/T)\) Pareto-stationary convergence rates for \(\mu\)-strongly convex and non-convex functions, respectively. However, these results are established under the unconventional linear search of learning rate and sequence convergence assumptions, which are difficult to verify in practice. In comparison, FMGDA achieves a linear rate without needing such assumptions. For SMGD methods, the Pareto-stationary convergence analysis is further complicated by the stochastic gradient noise. Toward this end, an \(\mathcal{O}(1/T)\) rate analysis for SMGD was provided in [9] based on rather strong assumptions on a first-moment bound and Lipschitz continuity of common descent direction. As a negative result, it was shown in [9] and [18] that the common descent direction needed in the SMGD method is likely to be a biased estimation, which may cause divergence issues.
In contrast, our FSMGDA achieves state-of-the-art \(\tilde{\mathcal{O}}(1/T)\) and \(\mathcal{O}(1/\sqrt{T})\) convergence rates for strongly-convex and non-convex settings, respectively, under a much milder assumption on Lipschitz continuous stochastic gradients. For easy comparisons, we summarize our results and the existing works in Table 1. It is worth noting recent works [18; 19; 20] established faster convergence rates in
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Strongly Convex} & \multicolumn{2}{c|}{Non-convex} \\ \cline{2-5} & Rate & Assumption\({}^{*}\) & Rate & Assumption\({}^{*}\) \\ \hline MGD [8] & \(\mathcal{O}(r^{T})\)\({}^{\#}\) & Linear search \(\&\) & \(\mathcal{O}(1/T)\) & Linear search \(\&\) \\ & & sequence convergence & & sequence convergence \\ \hline SMGD [9] & \(\mathcal{O}(1/T)\) & First moment bound \(\&\) Lipschitz & Not provided & Not provided \\ & & continuity of \(\lambda\) & & \\ \hline FMGDA & \(\mathcal{O}(\text{exp}(-\mu T))\)\({}^{\#}\) & Not needed & \(\mathcal{O}(1/T)\) & Not needed \\ \hline FSMGDA & \(\mathcal{O}(1/T)\) & \((\alpha,\beta)\)-Lipschitz continuous stochastic gradient & \(\mathcal{O}(1/\sqrt{T})\) & \((\alpha,\beta)\)-Lipschitz continuous stochastic gradient \\ \hline \end{tabular} \({}^{\#}\)Notes on constants: \(\mu\) is the strong convexity modulus; \(r\) is a constant depends on \(\mu\), \(\mathbf{x}\), \(r\in(0,1)\).
\({}^{*}\)Assumption short-hands: “Linear search”: learning rate linear search [8]; “Sequence convergence”: \(\{\mathbf{x}_{t}\}\) converges to \(\mathbf{x}^{*}\)[8]; “First moment bound” (Asm. 5.2(b) [9]): \(\mathbb{E}[\|\nabla f(\mathbf{x},\xi)-\nabla f(\mathbf{x})\|]\leq\eta(a+b\| \nabla f(\mathbf{x})\|)\);“Lipschitz continuity of \(\lambda\)” (Asm. 5.4 [9]): \(\|\mathbf{\lambda}_{k}-\mathbf{\lambda}_{t}\|\leq\beta\left\|\left[(\nabla f_{1}(\mathbf{ x}_{k})-\nabla f_{1}(\mathbf{x}_{t}))^{T},\ldots,(\nabla f_{S}(\mathbf{x}_{k})- \nabla f_{S}(\mathbf{x}_{t}))^{T}\right]\right\|\); “\((\alpha,\beta)\)-Lipschitz continuous stochastic gradient”: see Asm. 4.
\end{table}
Table 1: Convergence rate results (shaded parts are our results) comparisons.
the centralized MOO setting by using acceleration techniques, such as momentum, regularization and bi-level formulation. However, due to different settings and focuses, these results are orthogonal to ours and thus not directly comparable. Also, since acceleration itself is a non-trivial topic and could be quite brittle if not done right, in this paper, we focus on the basic and more robust stochastic gradient approach in FMOL. But for a comprehensive comparison on assumptions and main results of accelerated centralized MOO, we refer readers to Appendix A for further details.
**2) Federated Learning (FL):** Since the seminal work by [21], FL has emerged as a popular distributed learning paradigm. Traditional FL aims at solving single-objective minimization problems across a large number of clients with decentralized data. Recent FL algorithms enjoy both high communication efficiency and good generalization performance [21; 22; 23; 24; 25; 26]. Theoretically, many FL methods have the same convergence rates as their centralized counterparts under different FL settings [27; 28; 29; 30]. Recent works have also considered FL problems with more sophisticated problem structures, such as min-max learning [31; 32], reinforcement learning [33], multi-armed bandits [34], and bilevel and compositional optimization [35]. Although not directly related, classic FL has been reformulated in the form of MOO [36], which allows the use of an MGD-type algorithm instead of vanilla local SGD to solve the standard FL problem. We will show later that this MOO reformulation is a special case of our FMOL framework. So far, despite a wide range of applications (see Section 3.2 for examples), there remains a lack of a general FL framework for MOO. This motivates us to bridge the gap by proposing a general FMOL framework and designing gradient-based methods with provable Pareto-stationary convergence rates.
## 3 Federated multi-objective learning
### Multi-objective optimization: A primer
As mentioned in Section 1, due to potential conflicts among the objective functions in the MOO problem in (1), MOO adopts the notion of Pareto optimality:
**Definition 1** ((Weak) Pareto Optimality).: _For any two solutions \(\mathbf{x}\) and \(\mathbf{y}\), we say \(\mathbf{x}\) dominates \(\mathbf{y}\) if and only if \(f_{s}(\mathbf{x})\leq f_{s}(\mathbf{y}),\forall s\in[S]\) and \(f_{s}(\mathbf{x})<f_{s}(\mathbf{y}),\exists s\in[S]\). A solution \(\mathbf{x}\) is Pareto optimal if it is not dominated by any other solution. One solution \(\mathbf{x}\) is weakly Pareto optimal if there does not exist a solution \(\mathbf{y}\) such that \(f_{s}(\mathbf{x})>f_{s}(\mathbf{y}),\forall s\in[S]\)._
Similar to solving single-objective non-convex optimization problems, finding a Pareto-optimal solution in MOO is NP-Hard in general. As a result, it is often of practical interest to find a solution satisfying Pareto-stationarity (a necessary condition for Pareto optimality) stated as follows [14; 37]:
**Definition 2** (Pareto Stationarity).: _A solution \(\mathbf{x}\) is said to be Pareto stationary if there is no common descent direction \(\mathbf{d}\in\mathbb{R}^{d}\) such that \(\nabla f_{s}(\mathbf{x})^{\top}\mathbf{d}<0,\forall s\in[S]\)._
Note that for strongly convex functions, Pareto stationary solutions are also Pareto optimal. Following Definition 2, gradient-based MOO algorithms typically search for a common descent direction \(\mathbf{d}\in\mathbb{R}^{d}\) such that \(\nabla f_{s}(\mathbf{x})^{\top}\mathbf{d}\leq 0,\forall s\in[S]\). If no such common descent direction exists at \(\mathbf{x}\), then \(\mathbf{x}\) is a Pareto stationary solution. For example, MGD [15] searches for an optimal weight \(\mathbf{\lambda}^{*}\) of gradients \(\nabla\mathbf{F}(\mathbf{x})\triangleq\{\nabla f_{s}(\mathbf{x}),\forall s\in[S]\}\) by solving \(\mathbf{\lambda}^{*}(\mathbf{x})=\operatorname*{argmin}_{\mathbf{\lambda}\in C}\|\mathbf{\lambda}^{\top}\nabla\mathbf{F}(\mathbf{x})\|^{2}\). Then, a common descent direction can be chosen as: \(\mathbf{d}=\mathbf{\lambda}^{\top}\nabla\mathbf{F}(\mathbf{x})\). MGD performs the iterative update rule: \(\mathbf{x}\leftarrow\mathbf{x}-\eta\mathbf{d}\) until a Pareto stationary point is reached, where \(\eta\) is a learning rate. SMGD [9] also follows the same process except for replacing full gradients by stochastic gradients. For MGD and SMGD methods, it is shown in [8] and [18] that if \(\|\mathbf{\lambda}^{\top}\nabla\mathbf{F}(\mathbf{x})\|=0\) for some \(\mathbf{\lambda}\in C\), where \(C\triangleq\{\mathbf{y}\in[0,1]^{S},\sum_{s\in[S]}y_{s}=1\}\), then \(\mathbf{x}\) is a Pareto stationary solution. Thus, \(\|\mathbf{d}\|^{2}=\|\mathbf{\lambda}^{\top}\nabla\mathbf{F}(\mathbf{x})\|^{2}\) can be used as a metric to measure the convergence of non-convex MOO algorithms [8; 18; 19]. On the other hand, for more tractable strongly convex MOO problems, the optimality gap \(\sum_{s\in[S]}\lambda_{s}\left[f_{s}(\mathbf{x})-f_{s}(\mathbf{x}^{*})\right]\) is typically used as the metric to measure the convergence of an algorithm [9], where \(\mathbf{x}^{*}\) denotes the Pareto optimal point. We summarize and compare different convergence metrics as well as assumptions in MOO, detailed in Appendix A.
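As an illustration of this subproblem, the following minimal Python sketch (ours, not from any referenced implementation; the use of SciPy's SLSQP solver and all names are assumptions) computes the weights \(\mathbf{\lambda}^{*}\) over the simplex \(C\) and the resulting common descent direction from a stack of objective gradients:

```
# Hedged sketch of the MGD weight subproblem:
#   lambda* = argmin_{lambda in C} || lambda^T grad_F(x) ||^2,  C = probability simplex.
import numpy as np
from scipy.optimize import minimize

def common_descent_direction(grads):
    """grads: array of shape (S, d) stacking the S objective gradients at x."""
    S = grads.shape[0]

    def objective(lam):
        d = lam @ grads          # weighted combination of the S gradients
        return float(d @ d)      # squared Euclidean norm

    constraints = ({"type": "eq", "fun": lambda lam: lam.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * S
    res = minimize(objective, np.full(S, 1.0 / S), method="SLSQP",
                   bounds=bounds, constraints=constraints)
    lam_star = res.x
    return lam_star, lam_star @ grads  # (lambda*, common descent direction d)
```

One MGD iteration then updates \(\mathbf{x}\leftarrow\mathbf{x}-\eta\mathbf{d}\) with the returned direction; SMGD would pass stochastic gradients instead of full ones.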
### A general federated multi-objective learning framework
With the MOO preliminaries in Section 3.1, we now formalize our general federated multi-objective learning (FMOL) framework. For a system with \(M\) clients and \(S\) tasks (objectives), our FMOL framework can be written as:
\[\min_{\mathbf{x}} \operatorname{Diag}(\mathbf{FA}^{\top}), \tag{2}\] \[\mathbf{F}\triangleq\begin{bmatrix}f_{1,1}&\cdots&f_{1,M}\\ \vdots&\ddots&\vdots\\ f_{S,1}&\cdots&f_{S,M}\end{bmatrix}_{S\times M},\mathbf{A}\triangleq\begin{bmatrix} a_{1,1}&\cdots&a_{1,M}\\ \vdots&\ddots&\vdots\\ a_{S,1}&\cdots&a_{S,M}\end{bmatrix}_{S\times M},\]
where matrix \(\mathbf{F}\) groups all potential objectives \(f_{s,i}(\mathbf{x})\) for each task \(s\) at each client \(i\), and \(\mathbf{A}\in\{0,1\}^{S\times M}\) is a _binary_ objective indicator matrix, with each element \(a_{s,i}=1\) if task \(s\) is of client \(i\)'s interest and \(a_{s,i}=0\) otherwise. For each task \(s\in[S]\), the global objective function \(f_{s}(\mathbf{x})\) is the average of local objectives over all related clients, i.e., \(f_{s}(\mathbf{x})\triangleq\frac{1}{|R_{s}|}\sum_{i\in R_{s}}f_{s,i}(\mathbf{x})\), where \(R_{s}=\{i:a_{s,i}=1,i\in[M]\}\). Note that, for notational simplicity, here we use a simple average in \(f_{s}(\mathbf{x})\), which corresponds to the balanced dataset setting. Our FMOL framework can be directly extended to imbalanced dataset settings by using a weighted average proportional to the dataset sizes of the related clients. For a client \(i\in[M]\), its objectives of interest are \(\{f_{s,i}(\mathbf{x})\colon a_{s,i}\!=\!1,s\in[S]\}\), indexed by a subset of \([S]\).
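For concreteness, the bookkeeping behind \(\operatorname{Diag}(\mathbf{FA}^{\top})\) and \(f_{s}(\mathbf{x})\) can be sketched as follows (a minimal illustration with our own variable names, assuming every task is held by at least one client):

```
# Hedged sketch of assembling the S global objectives from local objective values.
import numpy as np

def global_objectives(F_vals, A):
    """F_vals, A: (S, M) arrays of local values f_{s,i}(x) and binary indicators a_{s,i}."""
    masked = F_vals * A                  # zero out tasks a client does not hold
    counts = A.sum(axis=1)               # |R_s| for each task s (assumed > 0)
    return masked.sum(axis=1) / counts   # f_s(x) = (1/|R_s|) * sum_{i in R_s} f_{s,i}(x)
```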
We note that FMOL generalizes MOO to the FL paradigm, which includes many existing MOO problems as special cases and corresponds to a wide range of applications.
* If each client has only one distinct objective, i.e., \(\mathbf{A}=\mathbb{I}_{M}\), \(S=M\), then \(\operatorname{Diag}(\mathbf{FA}^{\top})=[f_{1}(\mathbf{x}),\ldots,f_{S}(\mathbf{x})]^{\top}\), where each objective \(f_{s}(\mathbf{x}),s\in[S]\) is optimized only by client \(s\). This special FMOL setting corresponds to conventional multi-task learning and federated learning. Indeed, [1] and [38] formulated a multi-task learning problem as MOO and considered Pareto optimal solutions with various trade-offs. [36] also formulated FL as distributed MOO problems. Other examples of this setting include a bi-objective formulation of offline reinforcement learning [39] and decentralized MOO [40].
* If all clients share the same \(S\) objectives, i.e., \(\mathbf{A}\) is an all-one matrix, then \(\operatorname{Diag}(\mathbf{FA}^{\top})=\left[\frac{1}{M}\sum_{i\in[M]}f_{1,i}(\mathbf{x}),\ldots,\frac{1}{M}\sum_{i\in[M]}f_{S,i}(\mathbf{x})\right]^{\top}\). In this case, FMOL reduces to federated MOO problems with decentralized data that jointly optimize fairness, privacy, and accuracy [41; 42; 43], as well as MOO with decentralized data under privacy constraints (e.g., machine reassignment among data centres [44] and engineering problems [45; 46; 47; 48]).
* If each client has a different subset of objectives (i.e., objective heterogeneity), FMOL allows distinct preferences at each client. For example, each customer group on a recommender system in e-commerce platforms might have different combinations of shopping preferences, such as product price, brand, delivery speed, etc.
### Federated Multi-Objective Learning Algorithms
Upon formalizing our FMOL framework, our next goal is to develop gradient-based algorithms for solving large-scale high-dimensional FMOL problems with _provable_ Pareto stationary convergence guarantees and low communication costs. To this end, we propose two FMOL algorithms, namely federated multiple gradient descent averaging (FMGDA) and federated stochastic multiple gradient descent averaging (FSMGDA) as shown in Algorithm 1. We summarize our key notation in Table 3 in Appendix to allow easy references for readers.
As shown in Algorithm 1, in each communication round \(t\in[T]\), each client synchronizes its local model with the current global model \(\mathbf{x}_{t}\) from the server (cf. Step 1). Then each client runs \(K\) local steps based on local data for all effective objectives (cf. Step 2) with two options: i) for FMGDA, each local step performs local full gradient descent, i.e., \(\mathbf{x}_{s,i}^{t,k+1}=\mathbf{x}_{s,i}^{t,k}-\eta_{L}\nabla f_{s,i}(\mathbf{ x}_{s,i}^{t,k}),\forall s\in S_{i}\); ii) For FSMGDA, the local step performs stochastic gradient descent, i.e., \(\mathbf{x}_{s,i}^{t,k+1}=\mathbf{x}_{s,i}^{t,k}-\eta_{L}\nabla f_{s,i}(\mathbf{ x}_{s,i}^{t,k},\xi_{i}^{t,k}),\forall s\in S_{i}\), where \(\xi_{i}^{t,k}\) denotes a random sample in local step \(k\) and round \(t\) at client \(i\). Upon finishing \(K\) local updates, each client returns the accumulated update \(\Delta_{s,i}^{t}\) for each effective objective to the server (cf. Step 3). Then, the server aggregates all returned \(\Delta\)-updates from
the clients to obtain the overall updates \(\Delta_{s}^{t}\) for each objective \(s\in[S]\) (cf. Steps 4 and 5), which will be used in solving a convex quadratic optimization problem with linear constraints to obtain an approximate common descent direction \(\mathbf{d}_{t}\) (cf. Step 6). Lastly, the global model is updated following the direction \(\mathbf{d}_{t}\) with global learning rate \(\eta_{t}\) (cf. Step 7).
Two remarks on Algorithm 1 are in order. First, we note that a two-sided learning rates strategy is used in Algorithm 1, which decouples the update schedules of local and global model parameters at clients and server, respectively. As shown in Section 4 later, this two-sided learning rates strategy enables better convergence rates by choosing appropriate learning rates. Second, to achieve low communication costs, Algorithm 1 leverages \(K\) local updates at each client and infrequent periodic communications between each client and the server. By adjusting the two-sided learning rates appropriately, the \(K\)-value can be made large to further reduce communication costs.
```
At Each Client \(i\):
1. Synchronize local models \(\mathbf{x}_{s,i}^{t,0}=\mathbf{x}_{t},\forall s\in S_{i}\).
2. Local updates: for all \(s\in S_{i}\), for \(k=1,\ldots,K\), (FMGDA): \(\mathbf{x}_{s,i}^{t,k}=\mathbf{x}_{s,i}^{t,k-1}-\eta_{L}\nabla f_{s,i}( \mathbf{x}_{s,i}^{t,k-1})\), (FSMGDA): \(\mathbf{x}_{s,i}^{t,k}=\mathbf{x}_{s,i}^{t,k-1}-\eta_{L}\nabla f_{s,i}( \mathbf{x}_{s,i}^{t,k-1},\xi_{i}^{t,k})\).
3. Return accumulated updates to server \(\{\Delta_{s,i}^{t},s\in S_{i}\}\): (FMGDA): \(\Delta_{s,i}^{t}=\sum_{k\in[K]}\nabla f_{s,i}(\mathbf{x}_{s,i}^{t,k})\). (FSMGDA): \(\Delta_{s,i}^{t}=\sum_{k\in[K]}\nabla f_{s,i}(\mathbf{x}_{s,i}^{t,k},\xi_{i}^ {t,k})\). At the Server:
4. Receive accumulated updates \(\{\Delta_{s,i}^{t},\forall s\!\in\!S_{i},\forall i\!\in\![M]\}\).
5. Compute \(\Delta_{s}^{t}=\frac{1}{|R_{s}|}\sum_{i\in R_{s}}\Delta_{s,i}^{t},\forall s\in [S]\), where \(R_{s}=\{i:a_{s,i}=1,i\in[M]\}\).
6. Compute \(\boldsymbol{\lambda}_{t}^{*}\in[0,1]^{S}\) by solving \[\min_{\boldsymbol{\lambda}_{t}\geq\boldsymbol{0}}\left\|\sum\nolimits_{s\in[S ]}\lambda_{s}^{t}\Delta_{s}^{t}\right\|^{2},\quad\text{s.t.}\sum\nolimits_{s \in[S]}\lambda_{s}^{t}=1.\] (3)
7. Let \(\mathbf{d}_{t}=\sum_{s\in[S]}\lambda_{s}^{t,*}\Delta_{s}^{t}\) and update the global model as: \(\mathbf{x}_{t+1}=\mathbf{x}_{t}-\eta_{t}\mathbf{d}_{t}\), with a global learning rate \(\eta_{t}\).
```
**Algorithm 1** Federated (Stochastic) Multiple Gradient Descent Averaging (FMGDA/FSMGDA).
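For illustration, the sketch below mirrors one communication round of Algorithm 1 in Python. It is a hedged rendering with our own names, not the authors' implementation: `grad_fn(s, x)` stands in for a local full (FMGDA) or minibatch (FSMGDA) gradient oracle, and `qp_solve` for a solver of the simplex QP in (3), e.g. as sketched in Section 3.1.

```
# Hedged sketch of one FMGDA/FSMGDA round; assumes every task is held by >= 1 client.
import numpy as np

def client_update(x_global, tasks_i, grad_fn, eta_L, K):
    """Steps 1-3: K local gradient steps per effective objective; returns Delta_{s,i}."""
    deltas = {}
    for s in tasks_i:
        x = x_global.copy()
        acc = np.zeros_like(x)
        for _ in range(K):
            g = grad_fn(s, x)        # FMGDA: full local gradient; FSMGDA: stochastic gradient
            acc += g
            x -= eta_L * g
        deltas[s] = acc
    return deltas

def server_round(x_t, client_deltas, S, eta, qp_solve):
    """Steps 4-7: average Delta_s over contributing clients, solve the simplex QP, update x."""
    Delta = []
    for s in range(S):
        contrib = [d[s] for d in client_deltas if s in d]   # clients with a_{s,i} = 1
        Delta.append(np.mean(contrib, axis=0))              # Delta_s^t
    Delta = np.stack(Delta)                                  # shape (S, d)
    lam, d_t = qp_solve(Delta)                               # lambda_t^*, d_t = sum_s lam_s Delta_s
    return x_t - eta * d_t                                   # global step with learning rate eta
```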
## 4 Pareto stationary convergence analysis
In this section, we analyze the Pareto stationary convergence performance for our FMGDA and FSMGDA algorithms in Sections 4.1 and 4.2, respectively, each of which include non-convex and strongly convex settings.
### Pareto stationary convergence of FMGDA
In what follows, we show FMGDA enjoys linear rate \(\tilde{\mathcal{O}}(\exp(-\mu T))\) for \(\mu\)-strongly convex functions and sub-linear rate \(\mathcal{O}(\frac{1}{T})\) for non-convex functions.
**1) FMGDA: The Non-convex Setting.** Before presenting our Pareto stationary convergence results for FMGDA, we first state several assumptions as follows:
**Assumption 1**.: _(L-Lipschitz continuous) There exists a constant \(L>0\) such that \(\|\nabla f_{s}(\mathbf{x})-\nabla f_{s}(\mathbf{y})\|\leq L\|\mathbf{x}- \mathbf{y}\|,\forall\mathbf{x},\mathbf{y}\in\mathbb{R}^{d},s\in[S]\)._
**Assumption 2**.: _(Bounded Gradient) The gradient of each objective at any client is bounded, i.e., there exists a constant \(G>0\) such that \(\|\nabla f_{s,i}(\mathbf{x})\|^{2}\leq G^{2},\forall s\in[S],i\in[M]\)._
With the assumptions above, we state the Pareto stationary convergence of FMGDA for non-convex FMOL as follows:
**Theorem 1** (FMGDA for Non-convex FMOL).: _Let \(\eta_{t}=\eta\leq\frac{3}{2(1+L)}\). Under Assumptions 1 and 2, if at least one function \(f_{s},s\in[S]\) is bounded from below by \(f_{s}^{\min}\), then the sequence \(\{\mathbf{x}_{t}\}\) output by FMGDA satisfies: \(\min_{t\in[T]}\|\bar{\mathbf{d}}_{t}\|^{2}\leq\frac{16(f_{s}^{0}-f_{s}^{\min})}{T\eta}+\delta\), where \(\delta\triangleq\frac{16\eta_{L}^{2}K^{2}L^{2}G^{2}(1+S^{2})}{\eta}\)._
In the non-convex setting, we use \(\left\|\bar{\mathbf{d}}_{t}\right\|^{2}\) as the convergence metric, where \(\bar{\mathbf{d}}_{t}=\boldsymbol{\lambda}_{t}^{T}\nabla(\operatorname{Diag}(\mathbf{FA}^{\top}))\) and \(\boldsymbol{\lambda}_{t}\) is calculated from the quadratic programming problem (3) based on accumulated (stochastic) gradients \(\Delta_{t}\). We compare different metrics for MOO in Appendix A. The convergence bound in Theorem 1 contains two parts. The first part is an optimization error, which depends on the initial point and vanishes as \(T\) increases. The second part is due to local update steps \(K\) and data heterogeneity \(G\), which can be mitigated by carefully choosing the local learning rate \(\eta_{L}\). Specifically, the following Pareto stationary convergence rate of FMGDA follows immediately from Theorem 1 with an appropriate choice of local learning rate \(\eta_{L}\):
**Corollary 2**.: _With a constant global learning rate \(\eta_{t}=\eta\), \(\forall t\), and a local learning rate \(\eta_{L}=\mathcal{O}(1/\sqrt{T})\), the Pareto stationary convergence rate of FMGDA is \((1/T)\sum_{t\in[T]}\|\bar{\mathbf{d}}_{t}\|^{2}=\mathcal{O}(1/T)\)._
Several interesting insights of Theorem 1 and Corollary 2 are worth pointing out: **1)** We note that FMGDA achieves a Pareto stationary convergence rate \(\mathcal{O}(1/T)\) for non-convex FMOL, which is the _same_ as the Pareto stationary rate of MGD for centralized MOO and the _same_ convergence rate of gradient descent (GD) for single objective problems. This is somewhat surprising because FMGDA needs to handle more complex objective and data heterogeneity under FMOL; **2)** The two-sided learning rates strategy decouples the operation of clients and server by utilizing different learning rate schedules, thus better controlling the errors from local updates due to data heterogeneity; **3)** Note that in the single-client special case, FMGDA degenerates to the basic MGD algorithm. Hence, Theorem 1 directly implies a Pareto stationary convergence bound for MGD by setting \(\delta=0\) due to no local updates in centralized MOO. This convergence rate bound is consistent with that in [8]. However, we note that our result is achieved _without_ using the linear search step for learning rate [8], which is much easier to implement in practice (especially for deep learning models); **4)** Our proof is based on standard assumptions in first-order optimization, while previous works require strong and unconventional assumptions. For example, a convergence of \(\mathbf{x}\)-sequence is assumed in [8].
**2) FMGDA: The Strongly Convex Setting.** Now, we consider the strongly convex setting for FMOL, which is more tractable but still of interest in many learning problems in practice. In the strongly convex setting, we have the following additional assumption:
**Assumption 3**.: _(\(\mu\)-Strongly Convex Function) Each objective \(f_{s}(\mathbf{x}),s\in[S]\) is a \(\mu\)-strongly convex function, i.e., \(f_{s}(\mathbf{y})\geq f_{s}(\mathbf{x})+\nabla f_{s}(\mathbf{x})(\mathbf{y}- \mathbf{x})+\frac{\mu}{2}\|\mathbf{y}-\mathbf{x}\|^{2}\) for some \(\mu>0\)._
For more tractable strongly-convex FMOL problems, we show that FMGDA achieves a stronger Pareto stationary convergence performance as follows:
**Theorem 3** (FMGDA for \(\mu\)-Strongly Convex FMOL).: _Let \(\eta_{t}=\eta\) such that \(\eta\leq\frac{3}{2(1+L)}\), \(\eta\leq\frac{1}{2L+\mu}\) and \(\eta\geq\frac{1}{\mu T}\). Under Assumptions 1-3, pick \(\mathbf{x}_{t}\) as the final output of the FMGDA algorithm with weights \(w_{t}=(1-\frac{\mu\eta}{2})^{1-t}\). Then, it holds that \(\mathbb{E}[\Delta_{Q}^{t}]\leq\|\mathbf{x}_{0}-\mathbf{x}_{*}\|^{2}\mu\exp(-\frac{\mu\eta T}{2})+\delta\), where \(\Delta_{Q}^{t}\triangleq\sum_{s\in[S]}\lambda_{s}^{t,*}\left[f_{s}(\mathbf{x}_{t})-f_{s}(\mathbf{x}_{*})\right]\) and \(\delta=\frac{8\eta_{L}^{2}K^{2}L^{2}G^{2}S^{2}}{\mu}+2\eta_{L}^{2}K^{2}L^{2}G^{2}\)._
Theorem 3 immediately implies following Pareto stationary convergence rate for FMGDA with a proper choice of local learning rate:
**Corollary 4**.: _If \(\eta_{L}\) is chosen sufficiently small such that \(\delta=\mathcal{O}(\mu\exp(-\mu T))\), then the Pareto stationary convergence rate of FMGDA is \(\mathbb{E}[\Delta_{Q}^{t}]=\mathcal{O}(\mu\exp(-\mu T))\)._
Again, several interesting insights can be drawn from Theorem 3 and Corollary 4. First, for strongly convex FMOL, FMGDA achieves a linear convergence rate \(\mathcal{O}(\mu\exp(-\mu T))\), which again matches those of MGD for centralized MOO and GD for single-objective problems. Second, compared with the non-convex case, the convergence bounds suggest FMGDA could use a larger local learning rate for non-convex functions thanks to our two-sided learning rates design. A novel feature of FMGDA for strongly convex FMOL is the randomly chosen output \(x_{t}\) with weight \(w_{t}\) from the \(\mathbf{x}_{t}\)-trajectory, which is inspired by the classical work in stochastic gradient descent (SGD) [49]. Note that, for implementation in practice, one does not need to store all \(\mathbf{x}_{t}\)-values. Instead, the algorithm can be implemented by using a random clock for stopping [49].
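As a small illustration of this output rule, a hedged sketch (ours, not part of the paper) of drawing the reported iterate index with probability proportional to \(w_{t}=(1-\frac{\mu\eta}{2})^{1-t}\) is:

```
# Hedged sketch: sample which iterate x_t to report, weighting later iterates more.
import numpy as np

def sample_output_index(T, mu, eta, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(1, T + 1)
    w = (1.0 - mu * eta / 2.0) ** (1.0 - t)   # unnormalised weights w_t
    return rng.choice(t, p=w / w.sum())       # index of the reported iterate
```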
### Pareto stationary convergence of FSMGDA
While enjoying strong performances, FMGDA uses local full gradients at each client, which could be costly in the large dataset regime. Thus, it is of theoretical and practical importance to consider the stochastic version of FMGDA, i.e., federated stochastic multi-gradient descent averaging (FSMGDA).
**1) FSMGDA: The Non-convex Setting.** A fundamental challenge in analyzing the Pareto stationarity convergence of FSMGDA and other stochastic multi-gradient descent (SMGD) methods stems from bounding the error of the common descent direction estimation, which is affected by both \(\mathbf{\lambda}_{t}^{*}\) (obtained by solving a quadratic programming problem) and the stochastic gradient variance. In fact, it is shown in [9] and [18] that the stochastic common descent direction in SMGD-type methods could be biased, leading to divergence issues. To address these challenges, in this paper, we propose to use a _new_ assumption on the stochastic gradients, which is stated as follows:
**Assumption 4** (\((\alpha,\beta)\)-Lipschitz Continuous Stochastic Gradient).: _A function \(f\) has (\(\alpha,\beta\))-Lipschitz continuous stochastic gradients if there exist two constants \(\alpha,\beta>0\) such that, for any two independent training samples \(\xi\) and \(\xi^{{}^{\prime}}\), \(\mathbb{E}[\|\nabla f(\mathbf{x},\xi)-\nabla f(\mathbf{y},\xi^{{}^{\prime}})\| ^{2}]\leq\alpha\|\mathbf{x}-\mathbf{y}\|^{2}+\beta\sigma^{2}\)._
In plain language, Assumption 4 says that the stochastic gradient estimation of an objective does not change too rapidly. We note that the (\(\alpha,\beta\))-Lipschitz continuous stochastic gradient assumption is a natural extension of the classic \(L\)-Lipschitz continuous gradient assumption (cf. Assumption 1) and generalizes several assumptions of SMGD convergence analysis in previous works. We note that Assumption 4 is not necessarily too hard to satisfy in practice. For example, when the underlying distribution of training samples \(\xi\) has a bounded support (typically a safe assumption for most applications in practice due to the finite representation limit of computing systems) and Assumption 1 holds (also a common assumption in the optimization literature), then for any given \(\mathbf{x}\) and \(\mathbf{y}\), the left-hand side of the inequality in Assumption 4 is bounded due to the L-smoothness in Assumption 1. In this case, there always exist sufficiently large \(\alpha\) and \(\beta\) such that the inequality in Assumption 4 holds. Please see Appendix A for further details. In addition, we need the following assumptions for the stochastic gradients, which are commonly used in standard SGD-based analyses [49; 50; 51; 52].
**Assumption 5**.: _(Unbiased Stochastic Estimation) The stochastic gradient estimation is unbiased for each objective among clients, i.e., \(\mathbb{E}[\nabla f_{s,i}(\mathbf{x},\xi)]=\nabla f_{s,i}(\mathbf{x}),\forall s \in[S],i\in[M]\)._
**Assumption 6**.: _(Bounded Stochastic Gradient) The stochastic gradients satisfy \(\mathbb{E}[\|\nabla f_{s,i}(\mathbf{x},\xi)\|^{2}]\leq D^{2},\forall s\in[S], i\in[M]\) for some constant \(D>0\)._
With the assumptions above, we now state the Pareto stationarity convergence of FSMGDA as follows:
**Theorem 5** (FSMGDA for Non-convex FMOL).: _Let \(\eta_{t}=\eta\leq\frac{3}{2(1+L)}\). Under Assumptions 4-6, if an objective \(f_{s}\) is bounded from below by \(f_{s}^{\min}\), then the sequence \(\{\mathbf{x}_{t}\}\) output by FSMGDA satisfies: \(\min_{t\in[T]}\mathbb{E}\left\|\hat{\mathbf{d}}_{t}\right\|^{2}\leq\frac{2S \left(f_{s}^{0}-f_{s}^{\min}\right)}{\eta T}+\delta\), where \(\delta=L\eta S^{2}D^{2}+S(\alpha\eta_{L}^{2}K^{2}D^{2}+\beta\sigma^{2})\)._
Theorem 5 immediately implies an \(\mathcal{O}(1/\sqrt{T})\) convergence rate of FSMGDA for non-convex FMOL:
**Corollary 6**.: _With a constant global learning rate \(\eta_{t}=\eta=\mathcal{O}(1/\sqrt{T})\), \(\forall t\) and a local learning rate \(\eta_{L}=\mathcal{O}\left(1/T^{1/4}\right)\), and if \(\beta=\mathcal{O}(\eta)\), the Pareto stationarity convergence rate of FSMGDA is \(\min_{t\in[T]}\mathbb{E}\|\hat{\mathbf{d}}_{t}\|^{2}=\mathcal{O}(1/\sqrt{T})\)._
**2) The Strongly Convex Setting:** For more tractable strongly convex FMOL problems, we can show that FSMGDA achieves stronger convergence results as follows:
**Theorem 7** (FSMGDA for \(\mu\)-Strongly Convex FMOL).: _Let \(\eta_{t}=\eta=\Omega(\frac{1}{\mu T})\). Under Assumptions 3, 5 and 6, pick \(\mathbf{x}_{t}\) as the final output of the FSMGDA algorithm with weight \(w_{t}=(1-\frac{\mu\eta}{2})^{1-t}\). Then, it holds that: \(\mathbb{E}[\Delta_{Q}^{t}]\leq\|\mathbf{x}_{0}-\mathbf{x}_{*}\|^{2}\mu\exp(-\frac{\eta}{2}\mu T)+\delta\), where \(\Delta_{Q}^{t}=\sum_{s\in[S]}\lambda_{s}^{t,*}\left[f_{s}(\mathbf{x}_{t})-f_{s}(\mathbf{x}_{*})\right]\) and \(\delta=\frac{1}{\mu}S^{2}(\alpha\eta_{L}^{2}K^{2}D^{2}+\beta\sigma^{2})+\frac{\eta S^{2}D^{2}}{2}\)._
The following Pareto stationary convergence rate of FSMGDA follows immediately from Theorem 7:
**Corollary 8**.: _Choose \(\eta_{L}=\mathcal{O}(\frac{1}{\sqrt{T}})\) and \(\eta=\Theta(\frac{\log(\max(1,\mu^{2}T))}{\mu T})\). If \(\beta=\mathcal{O}(\eta)\), then the Pareto stationary convergence rate of FSMGDA is \(\mathbb{E}[\Delta_{Q}^{t}]\leq\tilde{\mathcal{O}}(1/T)\)._
Corollary 8 says that, with proper learning rates, FSMGDA achieves an \(\tilde{\mathcal{O}}(1/T)\) Pareto stationary convergence rate (i.e., ignoring logarithmic factors) for strongly convex FMOL. Also, in the single-client special case with no local updates, FSMGDA reduces to the SMGD algorithm and \(\delta=\frac{4}{\mu}\beta S^{2}\sigma^{2}+\frac{\eta S^{2}D^{2}}{2}\) in this case. Then, Theorem 7 implies an \(\tilde{\mathcal{O}}(\frac{1}{T})\) Pareto stationarity convergence rate for SMGD for strongly convex MOO problems, which is consistent with previous works [9]. However, our convergence rate proof uses the more conventional \((\alpha,\beta)\)-Lipschitz stochastic gradient assumption, rather than the unconventional assumptions on the first moment bound and Lipschitz continuity of common descent directions in [9].
## 5 Numerical results
In this section, we show the main numerical experiments of our FMGDA and FSMGDA algorithms on different datasets, while relegating the experimental settings and details to the appendix.
**1) Ablation Experiments on Two-Task FMOL: _1-a) Impacts of Batch Size on Convergence:_** First, we compare the convergence results in terms of the number of communication rounds using the "MultiMNIST" dataset [53] with two tasks (L and R) as objectives. We test our algorithms with four different cases with batch sizes being \([16,64,128,256]\). To reduce computational costs in this experiment, the dataset size for each client is limited to \(256\). Hence, the batch size \(256\) corresponds to FMGDA and all other batch sizes correspond to FSMGDA. As shown in Fig. 1(a), under a non-i.i.d. data partition, both FMGDA and FSMGDA algorithms converge. Also, the convergence speed of the FSMGDA algorithm increases as the batch size gets larger. These results are consistent with our theoretical analyses as outlined in Theorems 1 and 5.
_1-b) Impacts of Local Update Steps on Convergence:_ Next, we evaluate our algorithms with different numbers of local update steps \(K\). As shown in Fig. 1(b) and Table 2, both algorithms converge faster as the number of local steps \(K\) increases. This is because both algorithms effectively run more iterative updates as \(K\) gets large.
_1-c) Comparisons between FMOL and Centralized MOO:_ Since this work is the first that investigates FMOL, it is also interesting to empirically compare the differences between FMOL and centralized MOO methods. In Fig. 2(a), we compare the training loss of FMGDA and FSMGDA with those of the centralized MGD and SMGD methods after 100 communication rounds. For fair comparisons, the centralized MGD and SMGD methods use \(\sum_{i}^{M}|S_{i}|\) batch-sizes and run \(K\times T\) iterations. Our results indicate that FMGDA and MGD produce similar results, while the performance of FSMGDA is slightly worse than that of SMGD due to FSMGDA's sensitivity to objective and data heterogeneity in stochastic settings. These numerical results confirm our theoretical convergence analysis.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline & \multicolumn{2}{c|}{i.i.d.} & \multicolumn{2}{c|}{non-i.i.d.} \\ \cline{2-5} & Task L & Task R & Task L & Task R \\ \hline \(K=1\) & 82 & 84 & 96 & 82 \\ \hline \(K=5\) & 18(4.6\(\times\)) & 20(4.2\(\times\)) & 24(4.0\(\times\)) & 20(4.1\(\times\)) \\ \hline \(K=10\) & 10(8.2\(\times\)) & 9(9.3\(\times\)) & 13(7.4\(\times\)) & 10(8.2\(\times\)) \\ \hline \(K=20\) & 5(16.4\(\times\)) & 5(16.8\(\times\)) & 6(16.0\(\times\)) & 5(16.4\(\times\)) \\ \hline \end{tabular}
\end{table}
Table 2: Communication rounds needed for \(10^{-2}\) loss.
Figure 1: Training loss convergence comparison.
**2) Experiments on Larger FMOL:** We further test our algorithms on FMOL problems of larger sizes. In this experiment, we use the River Flow dataset [54], which contains _eight_ tasks. To better visualize the 8 different tasks, we illustrate the normalized loss in radar charts in Fig. 2(b). In this 8-task setting, we can again verify that more local steps \(K\) and a larger training batch size lead to faster convergence. In the appendix, we also verify the effectiveness of our FMGDA and FSMGDA algorithms on CelebA [55] (_40 tasks_), along with other hyperparameter tuning results.
## 6 Conclusion and discussions
In this paper, we proposed the first general framework to extend multi-objective optimization to the federated learning paradigm, which considers both objective and data heterogeneity. We showed that, even under objective and data heterogeneity, both of our proposed algorithms enjoy the same Pareto stationary convergence rate as their centralized counterparts. In future work, we will aim to go beyond a limitation in the analysis of stochastic MOO, namely the need for an extra assumption on the stochastic gradients (and \(\mathbf{\lambda}\)). In this paper, we have proposed a weaker assumption (Assumption 4); we conjecture that acceleration techniques, e.g., momentum, variance reduction, and regularization, could relax such an assumption and achieve better convergence rates, which is a promising direction for future work. In addition, MOO in distributed learning gives rise to substantial communication costs, which scale linearly with the number of clients and the number of objectives at each client. Developing communication-efficient MOO beyond typical gradient compression methods for distributed learning is also a promising direction for future work.
## Acknowledgments and Disclosure of Funding
This work has been supported in part by NSF grants CAREER CNS-2110259 and CNS-2112471.
Figure 2: Training losses comparison |
2306.05762 | Real-time COVID-19 hospital admissions forecasting with leading
indicators and ensemble methods in England | Hospitalisations from COVID-19 with Omicron sub-lineages have put a sustained
pressure on the English healthcare system. Understanding the expected
healthcare demand enables more effective and timely planning from public
health. We collect syndromic surveillance sources, which include online search
data, NHS 111 telephonic and online triages. Incorporating this data we explore
generalised additive models, generalised linear mixed-models, penalised
generalised linear models and model ensemble methods to forecast over a
two-week forecast horizon at an NHS Trust level. Furthermore, we showcase how
model combinations improve forecast scoring through a mean ensemble, weighted
ensemble, and ensemble by regression. Validated over multiple Omicron waves, at
different spatial scales, we show that leading indicators can improve
performance of forecasting models, particularly at epidemic changepoints. Using
a variety of scoring rules, we show that ensemble approaches outperformed all
individual models, providing higher performance at a 21-day window than the
corresponding individual models at 14-days. We introduce a modelling structure
used by public health officials in England in 2022 to inform NHS healthcare
strategy and policy decision making. This paper explores the significance of
ensemble methods to improve forecasting performance and how novel syndromic
surveillance can be practically applied in epidemic forecasting. | Jonathon Mellor, Rachel Christie, Robert S Paton, Rhianna Leslie, Maria Tang, Martyn Fyles, Sarah Deeny, Thomas Ward, Christopher E Overton | 2023-06-09T08:56:03Z | http://arxiv.org/abs/2306.05762v3 | **Real-time COVID-19 hospital admissions forecasting with leading indicators and ensemble methods in England**
**Real-time COVID-19 hospital admissions forecasting with leading indicators and ensemble methods in England**
Jonathon Mellor1*, Rachel Christie1, Robert S Paton1, Rhianna Leslie1, Maria Tang1, Martyn Fyles1, Sarah Deeny1, Thomas Ward1, Christopher E Overton1,2
1. UK Health Security Agency, Data Analytics and Science, Noble House, London, United Kingdom
2. University of Liverpool, Department of Mathematical Sciences, Liverpool, United Kingdom
*Corresponding Author: [email protected]
**Abstract**
## Background
Hospitalisations from COVID-19 with Omicron sub-lineages have put a sustained pressure on the English healthcare system. Understanding the expected healthcare demand enables more effective and timely planning from public health.
## Methods
We collect syndromic surveillance sources, which include online search data, NHS 111 telephonic and online triages. Incorporating this data we explore generalised additive models, generalised linear mixed-models, penalised generalised linear models and model ensemble methods to forecast over a two-week forecast horizon at an NHS Trust level. Furthermore, we showcase how model combinations improve forecast scoring through a mean ensemble, weighted ensemble, and ensemble by regression.
## Results
Validated over multiple Omicron waves, at different spatial scales, we show that leading indicators can improve performance of forecasting models, particularly at epidemic changepoints. Using a variety of scoring rules, we show that ensemble approaches outperformed all individual models, providing higher performance at a 21-day window than the corresponding individual models at 14-days.
## Interpretation
We introduce a modelling structure used by public health officials in England in 2022 to inform NHS healthcare strategy and policy decision making. This paper explores the significance of ensemble methods to improve forecasting performance and how novel syndromic surveillance can be practically applied in epidemic forecasting.
## 1 Introduction
Over the course of 2022 there were over 390,000 hospitalisations due to COVID-19 in England, an increase of approximately 90,000 from the year before [1]. This was a consequence of a reduction in non-pharmaceutical interventions and the high infectivity of the Omicron sub-lineages compared to previous variants [2]. The burden on healthcare systems remains high, and hospital admissions with COVID-19 are a key metric for monitoring the SARS-CoV-2 pandemic. While the infection hospitalisation risk has reduced since 2021 [3], the higher transmission of Omicron and its emergent sub-lineages has sustained epidemic waves of admissions from COVID-19 in England and worldwide. These admissions are primarily in older age groups [4] and those with comorbidities [5]. Once admitted, patients with COVID-19 occupied beds for a median of 7.0 days in 2022 [6], with variation due to regional heterogeneity, risk factors and the patient pathways taken [7].
Due to the healthcare burden of COVID-19, system leaders request hospital admissions forecasts to inform management and policy decisions. There is a range of existing COVID-19 forecasting approaches and models [8], as for epidemiological forecasting more generally [9], though they have limitations for our specific policy problem. Mechanistic or transmission models rely on parametric values, such as relative susceptibility in a population [10], which are often unknowable for new variant-driven waves and can change substantially over time. On the other hand, purely time series models, such as ARIMAs, will not be able to anticipate turning points such as epidemic peaks [11], which are the periods where accurate forecasts are crucial. To enhance performance, leading indicators such as incidence can be incorporated to help predict changes in hospital metrics [12], though each data stream is subject to its own biases and sources of error and may have a changing relationship with hospitalisations over time [13]. With universal testing in the community ending in 2022 [14], there is a greater reliance on non-clinical leading indicators and novel syndromic surveillance in order to anticipate hospital admissions. There has been significant work on the analysis of leading indicators of COVID-19 activity [15, 16], but limited exploration across Omicron epidemic waves. There is a significant body of work showing that forecasting accuracy can be improved by bringing together a range of model structures in an ensemble [17], for example using an unweighted average of candidate forecasts [18].
In this paper we introduce multiple model structures used operationally in UKHSA to forecast hospital admissions in England throughout 2022 and into 2023, which we validated across multiple epidemic waves. These models rely on a single time series or utilise leading indicators to forecast admissions at National Health Service (NHS) Trust level - a collection of hospitals. These projections are produced at NHS Trust, NHS Commissioning Region and national levels in England. We both combine data for individual models and combine models in ensembles [19], using three different methods. This reduces the bias of individual models to improve predictive performance. Importantly, we show how these models score over time and contrast the different approaches and their performance throughout the epidemic wave, using proper scoring rules [20].
## 2 Methodology
#### 2.1.1 Hospital Admissions
NHS England (NHSE) COVID-19 data is provided by individual acute NHS Trusts in England, which deliver a daily situation report (SitRep) covering the previous 24 hours on metrics relating to patients, beds, and staff [21]. The data records the number of new patients and inpatients in the past 24 hours with a laboratory-confirmed positive COVID-19 test [22]. We define a COVID-19 admission as any patient who tested positive before admission or within their first 2 days of arrival - we are interested in community-acquired admissions, so our definition excludes expected hospital-acquired infections.
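As a concrete illustration of this case definition, a minimal Python sketch is given below (the operational pipeline is not reproduced here and the column names are assumptions, not the SitRep schema):

```
# Hedged sketch: flag community-acquired COVID-19 admissions, defined as a positive
# test taken before admission or within the first 2 days of the hospital stay.
import pandas as pd

def flag_covid_admissions(df):
    """df: one row per admission with 'admission_date' and 'first_positive_test_date'."""
    window_end = df["admission_date"] + pd.Timedelta(days=2)
    is_covid = (df["first_positive_test_date"].notna()
                & (df["first_positive_test_date"] <= window_end))
    return df.assign(covid_admission=is_covid)
```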
#### 2.1.2 Geographic Structure
The NHS in England is structured hierarchically, with national oversight from NHS England and seven commissioning regions. The hospitals within each commissioning region are managed as organisational units called NHS Trusts; each Trust with secondary care responsibility may have one or many acute / emergency hospitals. The NHS Trusts cross administrative boundaries, with nearby Trusts serving overlapping populations. This hierarchical structure can be incorporated into modelling and is shown visually in _Supplementary Figure A_.
#### 2.1.3 Leading Indicators
Healthcare seeking behaviour may not lead hospitalisation at an individual linkable level, but we expect population level behaviour to lead aggregate admissions. For example, increases in Google Searches for "what are COVID symptoms" correlate with increased transmission in an area, which should cause increased hospitalisations in the nearby Trusts following some time delay. A probabilistic population mapping was created linking patient discharge locations in a lower tier local authority (LTLA) to a service provider (NHS trust), in a similar manner to the _covid19.nhs.data_ R package [23]. We can then map trends in local populations healthcare seeking behaviour (recorded in administrative boundaries) to nearby NHS Trusts, as well as their population catchment sizes.
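As an illustration of how such a mapping can be applied, the sketch below (a simplified Python rendering; it assumes the mapping is available as a Trust-by-LTLA weight matrix, which is our assumption about its form) aggregates an LTLA-level indicator time series to Trust level:

```
# Hedged sketch: map an LTLA-level leading-indicator signal onto NHS Trusts.
import numpy as np

def ltla_signal_to_trusts(W, ltla_signal):
    """W: (n_trusts, n_ltlas) probabilistic catchment weights;
    ltla_signal: (n_ltlas, n_days) indicator time series.
    Returns a (n_trusts, n_days) Trust-level series."""
    return W @ ltla_signal
```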
Candidate leading indicators were evaluated both for the strength of their statistical relationship with admissions and for the likelihood of being operationalisable [13]. Ultimately, the Google Trends syndromic search terms and the NHS 111 Pathways telephonic triage (calls and online) were selected due to strong correlations with localised clinical risk - originally explored in [24]. For Google, the individual search terms monitored were combined by topic to increase robustness of the signal. The NHS 111 Pathways were separated into online and calls data sources and aggregated by type of triage and age group.
### 2.2 Models
As there are multiple models discussed and combined in this manuscript, the high-level implementations of the models used are outlined in Table 1.
#### 2.2.1 Univariate
We use two univariate (hospital admissions time series as the only predictor) models in this study. The first, "Univariate HGAM", is a Hierarchical Generalised Additive Model, which estimates and extrapolates the local growth rate per hospital Trust, with splines through time at both Trust and NHS Region levels. The second, "Univariate baseline", has a similar structure, but is not spatially hierarchical, instead fitting splines through time for each Trust independently. As a simple-to-apply statistical model, we use the baseline GAM model throughout to compare with other methods. The models are fit regionally for computational efficiency, and the GAMs are fit using the _mgcv_ R package [25].
To forecast admissions, we need to model how the daily admission counts are changing over time, \(H(t)\). On short timescales, epidemics can often be described using an exponential structure, where the incidence at time \(t\) is a function of some initial incidence and exponential growth/decay for \(t\) days. Assuming hospital admissions are linearly related to incidence, we have
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Model & Model structure & Target data & Leading indicator data \\ \hline Univariate Baseline & Generalised additive model & Hospital admissions & None \\ \hline Univariate HGAM & Hierarchical generalised additive model & Hospital admissions & None \\ \hline Google Trends & Penalised generalised linear model, input into generalised linear mixed model & Hospital admissions & Google Trends syndromic search terms \\ \hline 111 Calls & Penalised generalised linear model, input into generalised linear mixed model & Hospital admissions & NHS 111 telephonic triage \\ \hline 111 Online & Penalised generalised linear model, input into generalised linear mixed model & Hospital admissions & NHS 111 online triage \\ \hline Combined Indicator & Penalised generalised linear model, input into generalised linear mixed model & Hospital admissions & Google Trends syndromic search terms and NHS 111 telephonic and online triage \\ \hline \end{tabular}
\end{table}
Table 1: High-level implementation of the models used.
\[H(t)=H(0)e^{rt},\]
where \(r\) is the exponential growth rate. Over an epidemic, the growth rate is rarely constant. This model can be generalised using a smooth function of time \(s(t)\) rather than \(rt\) in the exponent, i.e.
\[H(t)=H(0)e^{s(t)}.\]
By fitting such a model to time-series data on hospital admissions, one can generate short-term forecasts by assuming that for all \(t>t_{\max}\) the exponential growth rate remains constant, i.e., \(s(t)=s(t_{\max})+(t-t_{\max})s_{1}\). Here \(s_{1}\) is the instantaneous exponential growth rate at \(t=t_{\max}\). Assuming the smooth function \(s(t)\) is known, \(s_{1}\) is approximately the first derivative of \(s(t)\), evaluated at \(t_{\max}\). This can be shown by taking a Taylor expansion of the smooth function,
\[s(t_{\max}+h)=s(t_{\max})+h\frac{ds}{dt}\Big{|}_{t_{\max}}+\cdots\approx s(t_{ \max})+hs_{1}.\]
Substituting this back into our hospital admissions formula gives
\[H(t_{\max}+h)=H(0)e^{s(t_{\max})+hs_{1}}=H(t_{\max})e^{hs_{1}}.\]
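For illustration, this extrapolation can be sketched numerically as follows (a minimal Python example with simulated admissions, where a moving average of log admissions stands in for the fitted smooth \(s(t)\)):

```python
import numpy as np

# Illustrative daily admissions with a rising trend plus noise (not real data).
rng = np.random.default_rng(1)
t = np.arange(60)
admissions = rng.poisson(20 * np.exp(0.03 * t))

# Stand-in for the fitted smooth s(t): a 7-day moving average of log admissions.
log_smooth = np.convolve(np.log(admissions + 1), np.ones(7) / 7, mode="valid")

# Instantaneous growth rate s1 ~ ds/dt at t_max, via a finite difference.
s1 = log_smooth[-1] - log_smooth[-2]

# Forecast h days ahead assuming the growth rate stays constant:
# H(t_max + h) = H(t_max) * exp(h * s1).
h = np.arange(1, 15)
forecast = np.exp(log_smooth[-1]) * np.exp(h * s1)
print(np.round(forecast, 1))
```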
Hospital admissions data are noisy integer-valued counts, with stochasticity from both the epidemic spread and the likelihood of requiring medical care after infection. To model this integer-valued noise, we assume that observed hospital admissions are samples from a negative binomial distribution, with expected value \(H(t)\). To fit this model, we use a Generalised Additive Model with logarithmic link function and negative binomial error structure. Under this, we obtain
\[log\left(H_{\mathrm{trust}_{i}}(t)\right)\sim\beta_{0}+R_{\mathrm{trust}_{i}}+s_{\mathrm{trust}_{i}}(t)+R_{\mathrm{wday}}(t),\tag{1}\]
where \(\beta_{0}\) is an intercept, \(R_{\mathrm{trust}_{i}}\) is a random effect on Trust \(i\), \(s_{\mathrm{trust}_{i}}(t)\) is a penalised cubic regression spline, and \(R_{\mathrm{wday}}(t)\) is a random effect on the day-of-week at \(t\). Using the penalised spline, the out-of-sample prediction for future dates assumes a linear relationship with time, with gradient equal to the first derivative of the spline at \(t_{\max}\). Therefore, we can use out-of-sample prediction from the GAM to forecast admissions under a continued exponential trend.
Baseline GAM
The baseline GAM model is obtained by fitting Equation (1) to data from individual NHS Trusts independently. This leads to a unique spline for each Trust.
Univariate hierarchical GAM
The baseline GAM leads to very high uncertainty at Trust level and assumes each Trust \(i\) has an independent trend, which is typically not the case for epidemics, where spatial correlation is usually strong. Therefore, we instead construct a hierarchical GAM that accounts for correlation between Trusts nested within NHS Regions. We consider the structure
\[log\left(H_{trust_{i}}(t)\right)\sim\beta_{0}+R_{\mathrm{trust_{i}}}+s_{\mathrm{ trust_{i}}}(t)+s_{\mathrm{region_{i}}}(t)+R_{\mathrm{wday}}(t).\]
We run the model for each region independently. For the Trust splines, we use a hierarchical structure based on [26]. Since the regional models are independent, this nests the Trusts within regions. The regional spline captures the average trend across the region, with the Trust level splines and random effect \(R_{\mathrm{trust_{i}}}\) adding trust level variation.
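The hierarchical decomposition can be illustrated by simulating counts from such a structure (a Python sketch; the coefficients and smooths below are illustrative stand-ins, and the actual hierarchical GAM is fitted in R with _mgcv_):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(100)
n_trusts = 6

# Shared regional smooth trend plus Trust-level deviations and random effects
# (all simulated for illustration).
s_region = 0.03 * t - 0.0002 * t**2                   # regional smooth s_region(t)
R_trust = rng.normal(0, 0.3, n_trusts)                 # Trust random effects
s_trust = 0.1 * np.sin(np.outer(rng.uniform(0.05, 0.1, n_trusts), t))  # deviations

log_mu = 2.0 + R_trust[:, None] + s_region[None, :] + s_trust
admissions = rng.poisson(np.exp(log_mu))               # counts, shape (n_trusts, 100)
print(admissions[:, :5])
```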
#### 2.2.2 Leading indicator models
Each leading indicator model ("Google Trends", NHS "111 Calls" and "111 Online") uses a penalised generalised linear model (pen-GLM) to fit a smoothed admissions response variable with the leading indicators as predictors, followed by a generalised linear mixed effect model (GLMM) fitted directly to the data using the pen-GLM output as a predictor. We do this to capture the trends within the highly stochastic indicator and admissions data at fine spatial scales; in initial exploration this performed better than modelling the data directly within a single model.
The leading indicators, denoted by \(x_{t}\), are noisy at fine spatial scales, as are hospital admissions; the pen-GLM therefore uses indicators smoothed via LOESS, denoted \(u(x_{t})\), to predict smoothed admissions. The relationship between the leading indicator time series and admissions is estimated at national and regional levels, to allow for spatial variation in leading relationships as well as national trends.
To construct the regression, a fixed lag equal to the forecast horizon of \(h\) days was introduced between the indicator and admissions. This allows a prediction of admissions \(H(t_{\max}+h)\) using leading indicators at \(x_{t_{\max}}\). As the optimal time-delay aligning the indicator and admissions series is unknown a priori during an epidemic wave, we add further lags \(l\) between the two series, at \(t-h-l\), up to a maximum plausible lag \(l_{\max}\). This inclusion of further lags gives a higher chance of capturing a correlation in the model, though it comes at the cost of a highly autocorrelated regression. Across the \(J\) indicators indexed by \(j\) and the catchment population size of the Trust, \(p_{i}\), the model across the country becomes
\[log\big{(}u(H_{\mathrm{trust}_{i}}(t))\big{)}\sim\beta_{0}+log(p_{i})+\sum_{j=1}^{J}\sum_{l=0}^{l_{\max}}\beta_{j,l}\,u\big{(}x_{j}(t-h-l)\big{)}\]
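A minimal sketch of this lagged-indicator regression is given below (Python, on simulated series; the original analysis uses a penalised GLM in R, whereas here an L1-penalised linear model on the log scale stands in for it, and the indicator, delay, lag window and penalty are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
N, h, l_max, true_delay = 200, 14, 7, 17   # series length, horizon, extra lags, delay

# Illustrative smoothed leading indicator (stand-in for the LOESS-smoothed u(x_t)).
x = 100 + 40 * np.sin(np.arange(N) / 20) + rng.normal(0, 1, N)

# Smoothed admissions follow the indicator with a delay of `true_delay` days.
H = 5 + 0.1 * np.roll(x, true_delay)

# Lagged design: to predict H(t) we use x(t - h - l) for l = 0, ..., l_max.
t_idx = np.arange(h + l_max, N)
X = np.column_stack([x[t_idx - h - l] for l in range(l_max + 1)])
y = np.log(H[t_idx])

pen_glm = Lasso(alpha=0.01).fit(X, y)
print(np.round(pen_glm.coef_, 3))   # non-zero weights flag the informative lags
```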
However, this does not predict admission counts directly - only the smoothed trend, which introduces bias and does not allow generation of prediction intervals. Therefore, we use the output of this model, denoted by \(\widehat{H}(t)\) as a covariate in a GLMM with admission counts as the response variable, using the structure
\[log\left(H_{\mathrm{trust}_{i}}(t)\right)=\beta_{0}+\beta_{\mathrm{trust}_{i}}+\beta_{1}\;1_{\{t<t_{\max}-c\}}(t)\;log\left(\widehat{H}(t)\right)+\beta_{2}\;1_{\{t\geq t_{\max}-c\}}(t)\;log\left(\widehat{H}(t)\right)+\mbox{wday}(t).\]
where \(1_{\{X\}}\) is an indicator function. This allows different coefficients of \(log\left(\widehat{H}(t)\right)\) to be estimated, allowing for a correction near \(t_{\max}\), where \(c\) is an integer number of days, taken as \(c=14\). The package _mgcv_ is used for this GLMM, as with the GAMs; a negative binomial error structure is assumed and the model is fitted at NHS Commissioning Region level.
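The structure of this second-stage model can be illustrated as follows (a simplified Python sketch using statsmodels in place of the R GLMM; the Trust and day-of-week random effects are omitted and all inputs are simulated):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T, c = 120, 14
t = np.arange(T)
t_max = T - 1

# Illustrative pen-GLM output H_hat(t) and observed admission counts (not real data).
H_hat = 20 * np.exp(0.02 * t)
obs = rng.negative_binomial(10, 10 / (10 + H_hat))     # NB2 counts with mean ~ H_hat

# Two regime terms: log(H_hat) before and after t_max - c.
recent = (t >= t_max - c).astype(float)
X = np.column_stack([
    np.ones(T),                      # intercept (Trust random effects omitted here)
    (1 - recent) * np.log(H_hat),    # beta_1 term, t <  t_max - c
    recent * np.log(H_hat),          # beta_2 term, t >= t_max - c
])

fit = sm.GLM(obs, X, family=sm.families.NegativeBinomial(alpha=0.1)).fit()
print(np.round(fit.params, 3))
```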
#### 2.2.3 Prediction intervals
To calculate prediction intervals from the fitted GAM and GLMM models, we need to capture both parameter uncertainty and the uncertainty in the error structure of the data generating process. From the fitted model, we capture uncertainty in the parameter estimates by assuming the coefficients are distributed according to a multivariate normal with mean equal to the central estimates and variance-covariance matrix taken from the fitted model, and drawing 2000 samples from this distribution. For each parameter sample we produce a model forecast, giving estimates of the uncertainty in the mean of the forecast. These forecasts, however, do not capture the noise in the data generating process. To capture this, we take each forecast sample and simulate from a negative binomial distribution with the forecast sample as the mean and the theta parameter taken from the fitted GAM/GLMM. Therefore, for each posterior sample of the coefficients, we have a corresponding sample from the data generating process. Aggregating these sampled trajectories, we can calculate prediction intervals.
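A compact sketch of this sampling scheme (Python, with illustrative coefficient estimates, covariance matrix and dispersion standing in for a fitted model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-ins for the fitted GAM/GLMM summaries (all values illustrative).
beta_hat = np.array([3.0, 0.02])           # central coefficient estimates
cov_hat = np.array([[1e-3, 0.0],           # variance-covariance matrix
                    [0.0, 1e-6]])
theta = 12.0                                # NB dispersion from the fitted model
h = np.arange(1, 15)                        # days ahead being forecast
X_future = np.column_stack([np.ones_like(h, dtype=float), h])

# 1) Parameter uncertainty: sample coefficients from a multivariate normal.
betas = rng.multivariate_normal(beta_hat, cov_hat, size=2000)
mu = np.exp(betas @ X_future.T)             # 2000 samples of the forecast mean

# 2) Observation noise: NB2 samples with mean mu and dispersion theta
#    (numpy parameterisation: n = theta, p = theta / (theta + mu)).
draws = rng.negative_binomial(theta, theta / (theta + mu))

# Prediction intervals from the sampled trajectories.
lower, upper = np.percentile(draws, [2.5, 97.5], axis=0)
print(np.column_stack([h, lower, upper]))
```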
Since the GAM and GLMM model use a hierarchical structure nesting Trusts within Regions, we can produce calibrated prediction intervals at both Trust and Region level. At Trust level, we generate forecast samples for each Trust and then simulate the negative binomial noise. At Region level, we aggregate the forecast samples from each nested Trust to Region level. Taking the Region level forecast samples, we then simulate the negative binomial noise using this as the expected value. Since each Region is run independently, we do not have calibrated prediction intervals at Nation level. For operational purposes, we can aggregate
the prediction intervals across each Region, but we do not score these results since these will be uncalibrated.
#### 2.2.4 Ensembles
There is substantial literature on how forecasting, and specifically epidemic forecasting, can be improved by ensembling multiple models together. In this manuscript we compare three such methods. We use a common ensemble method, "ensemble by mean", comparing it to two methods which leverage past predictive performance with the aim of improving accuracy above this simple average. Ensembles have been shown to improve predictive performance in large national multi-team forecasting efforts [28][29], though in contrast the models we use are all data driven and produced by a single team. The models selected for inclusion were chosen to each tackle specific shortcomings of the other candidate models. The Univariate HGAM, as a growth rate extrapolation model, performs well in epidemic growth and decline phases, but the extrapolation fails at first order turning points. The leading indicator models do not capture the epidemic dynamics as well but can anticipate short term changes at turning points. However, as the indicators are not consistent, multiple data sources are used to minimise the risk of spurious prediction and increase operational resilience.
The first method "ensemble by mean", for Trust \(i\) at time \(t\), is given by
\[\hat{y}_{i,t}=\frac{1}{M}\sum_{m=1}^{M}\hat{y}_{i,t,m}\]
where each individual model is indexed by \(m\) and there are \(M\) models in the ensemble.
The second ensemble approach, "ensemble by score", utilises the scores of forecasts run on historic data to weight predictions; in this case the forecasts are run weekly. For a forecast into the future, we index the current week by \(b=0\); for a given week we can therefore use how each model performed in the preceding week \(b-1\). We can, of course, only determine a weighting using information that would be available at \(b=0\), so for a forecast horizon of \(h=14\) made at \(b=-1\), we must truncate to \(t\leq 7\) for the weekly case. To create this ensemble for week \(b\), we evaluate model performance at \(b-1\) and produce a weighting from the relative score of each candidate model in the ensemble. We use the Weighted Interval Score, calculated through the _scoringutils_ R package [30], to measure historic performance; we use this package to evaluate all forecasts throughout this manuscript.
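For reference, the weighted interval score underlying this weighting can be computed as below (a Python sketch following the standard definition used by _scoringutils_; the quantile levels and forecast values are illustrative):

```python
import numpy as np

def interval_score(lower, upper, y, alpha):
    """Interval score for a central (1 - alpha) prediction interval."""
    return ((upper - lower)
            + (2 / alpha) * np.maximum(lower - y, 0)
            + (2 / alpha) * np.maximum(y - upper, 0))

def weighted_interval_score(median, lowers, uppers, alphas, y):
    """WIS: weighted absolute error of the median plus weighted interval scores."""
    score = 0.5 * np.abs(y - median)
    for lo, up, a in zip(lowers, uppers, alphas):
        score += (a / 2) * interval_score(lo, up, y, a)
    return score / (len(alphas) + 0.5)

# Illustrative forecast for one Trust-day: median 100 with 50% and 95% intervals.
wis = weighted_interval_score(
    median=100, lowers=[90, 70], uppers=[110, 140], alphas=[0.5, 0.05], y=130)
print(round(float(wis), 2))   # 16.7
```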
The aggregated weighted interval score for a model is then given, using the weighted interval score function \(\mathit{wis}(\cdot)\), by
\[q_{m,b}=\sum_{t=0}^{7}\sum_{i=1}^{I}\mathit{wis}(\hat{y}_{i,t,m,b-1})\]
We get the weighting of each model as
\[w_{m,b}=\frac{q_{m,b}}{\sum_{m^{\prime}=1}^{M}q_{m^{\prime},b}}\]
And therefore, a prediction of
\[\hat{y}_{i,t,b}=\sum_{m=1}^{M}w_{m,b}\,\hat{y}_{i,t,m,b}\]
The final ensemble method, "ensemble by regression", seeks to find the optimal combination of models through, in the simplest case, an ordinary least squares regression. This structure is similar to the ensemble by score, as it uses the historic model performance in week \(b-1\); however, instead of directly scoring each model, the regression finds a linear combination of the individual models' predictions that best estimates the historic data. Using the idea that past performance can inform the best future weighting, we define a regression of
\[y_{i,t,b-1}=\sum_{m=1}^{M}\beta_{m,b-1}\,\hat{y}_{i,t,m,b-1},\]
fitted across all Trusts \(i=1,\ldots,I\) and days \(t=0,\ldots,7\) of week \(b-1\).
From this we can create a weighted ensemble for week \(b\) extracting the regression coefficients \(\beta_{m,b-1}\) for each model
\[\hat{y}_{i,t,b}=\sum_{m=1}^{M}\beta_{m,b-1}\,\hat{y}_{i,t,m,b}\]
We extend this further to the Bayesian case, where we place prior estimates on the values of \(\beta_{m,b-1}\). We take a normal prior on the weighting of the models with mean \(\beta_{m,b-1}=\frac{1}{M}\), i.e. 0.25 in this ensemble. This Bayesian framework allows us to encode a prior belief in the best ensemble weighting, reconciling it with the best combination according to the data. For our model ensemble a prior scale of 0.01 is used, with the sensitivity to this choice shown overall for BA.4/5 in Supplementary Figure B and over time in Supplementary Figure C. Where models rely on the previous week's predictions to score, the first week of data in the time series cannot be used. For this reason, the first week of data is excluded from all scoring.
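Both stacking variants can be sketched as follows (Python, on simulated model predictions; the noise scales, the prior mean of \(1/M\), the prior scale and the closed-form MAP solution are illustrative stand-ins for the fitted regression):

```python
import numpy as np

rng = np.random.default_rng(6)
M, n = 4, 300                        # number of models, observations in week b-1

# Illustrative historic predictions from M models and the observed admissions.
truth = rng.poisson(50, size=n).astype(float)
preds = np.column_stack([truth + rng.normal(0, s, n) for s in (2, 5, 10, 20)])

# Ordinary least squares stacking: weights that best combine the models.
beta_ols, *_ = np.linalg.lstsq(preds, truth, rcond=None)

# Bayesian analogue: a normal prior beta ~ N(1/M, prior_scale^2) gives a
# ridge-style MAP estimate that shrinks the weights towards equal weighting.
prior_mean, prior_scale, sigma = 1.0 / M, 0.01, 5.0
lam = (sigma / prior_scale) ** 2
A = preds.T @ preds + lam * np.eye(M)
b = preds.T @ truth + lam * prior_mean * np.ones(M)
beta_map = np.linalg.solve(A, b)
print(np.round(beta_ols, 3), np.round(beta_map, 3))

# Week-b ensemble forecast: weighted combination of the models' new predictions.
new_preds = preds[:5]                # stand-in for the models' week-b predictions
print(np.round(new_preds @ beta_map, 1))
```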
## 3 Results
### Epidemic Curves
In this study we showcase the modelling approach focusing on the 2022 Omicron BA.4/5 wave, with further investigation of the 2022/23 Winter wave provided in the supporting documentation. The start and end dates, and how they were defined, for the BA.4/5 and Winter 2022 waves are given in _Table 2_. The national shapes of the epidemic waves are shown in _Figure 1_. Regional breakdowns of the epidemic curves are shown in _Supplementary Figure D_.
\begin{tabular}{|l|l|l|l|} \hline
**Wave** & **Phase** & **Week Start** & **Week End** \\ \hline BA.4/5 & Growth & 2022-05-15 & 2022-06-05 \\ \cline{2-4} & Peak & 2022-06-05 & 2022-06-19 \\ \cline{2-4} & Decline & 2022-06-19 & 2022-09-11 \\ \hline Winter 2022 & Growth & 2022-11-13 & 2022-12-18 \\ \cline{2-4} & Peak & 2022-12-18 & 2023-01-01 \\ \cline{2-4} & Decline & 2023-01-01 & 2023-01-22 \\ \hline \end{tabular}
Table 2: Start and end dates defining the growth, peak and decline phases of the BA.4/5 and Winter 2022 waves.
Figure 1: The hospitalisation epidemic curves for the Omicron BA.4/5 and Winter 2022 waves. The BA.4/5 wave peaked at over 1,200 admissions per day and lasted approximately 14 weeks from trough to trough, compared to 600 admissions per day and 10 weeks respectively for the Winter wave. The BA.4/5 wave rises fast following its low turning point compared to its slow decay. The Winter wave's smaller peak rises slowly from a baseline of around 250 daily admissions. Both waves show strong day-of-week effects in reported counts.
### Forecast performance over time
Example forecasts for the BA.4/5 wave are shown in Figure 2. At a national level, we can see that the models based on hospital admissions alone (Univariate baseline, Univariate HGAM) over predict at the national peak. This effect is much smaller for the leading indicator-based models, which mostly avoid overprediction at the peak - though they do struggle to increase fast enough in the growth phase of the wave. All models appear to predict the decline phase well. In the ensemble models we see a mix of the univariate behaviour of over-predicting at peaks, though this effect is muted to different degrees. Similar effects are shown in _Supplementary Figure E_ for the Winter 2022/23 wave, though the leading indicator models do not rise as fast in the weeks preceding the peak.
Figure 2: The example forecasts of the different model structures for the BA.4/5 wave for each week period. The regional forecasts from the GAMs are aggregated to national level to show the epidemic curve and represent forecasts. The corresponding figure for the Winter 2022/23 wave is given in Supplementary Figure E.
It is important to analyse forecasts not just overall, but at specific time points, to understand when they perform well or poorly, particularly when the underlying trend is an epidemic. The metrics over time for individual (non-ensembled) models are shown in _Figure 3_. The first metric, the interval score, gives a measure of model error, sharpness, and calibration - for this metric lower values indicate better performance. The second, bias, indicates whether the model is over- or under-predicting on average - the closer to zero, the better the performance. The final metric, 95% coverage, tells us what proportion of the true values are contained within the prediction intervals of the models, so the closest to 95% is best. The weekly national hospitalisation ratio \(\frac{H(t)}{H(t-7)}\) is shown to indicate growth / decline phases. In _Figure 3_, we can see that across all models the interval score is highest for forecasts that include the peak, and broadly follows the pattern of the hospitalisation ratio. All individual models perform similarly in the decline phase, which from the ratio has a constant decline rate. The univariate models have the highest interval scores at the epidemic peak, and we can see from their bias that they are substantially overshooting the turning point - as expected due to their model structure. The bias in the growth phase differs between the indicator and univariate models, with the indicators underpredicting and the univariates overpredicting, indicating performance could be improved by ensembling.
We extend these individual models using a variety of ensemble methods, and as shown in _Figure 4_, all the ensembles' interval scores (top metric) outperform the individual models across all time periods. Across time the ensemble by mean and ensemble by score perform similarly in all metrics with near identical interval scores, as expected. This is because the score-based weighting approaches equal weighting when models perform similarly. This is shown in _Figure 4_ in the top metric - the black and green lines are at the same position. The bias (middle metric) for the different ensemble approaches falls between the Univariate HGAM and Combined Indicator, and is generally closer to zero, showing the ensembles are effectively reducing bias from the individual models. The ensemble by regression has a higher interval score (poorer performance) than the other ensemble approaches at the peak, due to higher weighting of the univariate model, shown in further detail in _Supplementary Figure F_. The 95% coverage of the ensemble models drops slightly in the growth phase; however, this drop is small in comparison to the individual models.
Figure 3: Performance of individual models (non-ensembled) over time for the BA.4/5 wave. The epidemic curve (top) and hospitalisation ratio (bottom), the admissions divided by the admissions seven days prior, are shown to contextualise scores. The prediction start date represents the first date of prediction, where the predictions will be on the subsequent h=14 days. Supplementary Figure G contains the equivalent metrics for the Winter 2022/23 wave.
Figure 4: Performance of ensemble models over time for the BA.4/5 wave, the Univariate HGAM and Combined Indicators model are included to compare performance. The epidemic curve (top) and hospitalisation ratio (bottom), the admissions divided by the admissions seven days prior, are shown to contextualise scores. The prediction start date represents the first date of prediction, where the predictions will be on the subsequent h=14 days. Supplementary Figure H contains the equivalent metrics for the Winter 2022/23 wave.
### Overall performance and forecast horizons
In _Table 3_ we show how the models scored overall in a wave. Whilst instantaneous performance within a wave is important, we also need to understand how models performed overall, stratified by the length of the forecast horizon, \(h\). This is shown for NHS Trust level forecasts for the BA.4/5 wave, with the corresponding Region level scores shown in _Supplementary Tables A and B_. In _Table 3_, for the interval score and median absolute error the ensemble models outperform all other models, though there is no clear best approach among them. Unexpectedly, the Combined Indicators model performs worse than the individual indicators. This is perhaps due to the larger feature space of indicator variables with high collinearity - which the penalisation parameter may not be tuned strongly enough to adjust for. The model may struggle to select an optimal combination of features, putting too much weight on non-informative variables. The ensemble models perform well on interval score and error even in comparison to individual models run at shorter forecast horizons. We can see from the table that on average the univariate models overpredict while the leading indicator models underpredict, particularly in the growth phase of a wave. For the Winter 2022/23 wave the interval scores and median absolute errors are lower than for the BA.4/5 wave, shown in _Table 4_, though the waves have different epidemic shapes - the Winter 2022/23 wave may be easier to predict due to its flatness and smaller peak - shown in _Figure 1_.
\begin{tabular}{|l|c|c|c|c|c|c|} \hline \multicolumn{7}{|c|}{**BA.4/5 - Trust geography**} \\ \hline **model** & **forecast horizon** & **interval score** & **95\% coverage** & **median absolute error** & **underprediction** & **overprediction** \\ \hline Combined Indicators & 7 & 1.02 & 0.945 & 2.31 & 0.533 & 0.323 \\ \hline Univariate baseline & 7 & 1.08 & 0.940 & 2.45 & 0.441 & 0.461 \\ \hline Ensemble by mean & 7 & 0.875 & 0.953 & 2.00 & 0.436 & **0.280*** \\ \hline Ensemble by regression & 7 & **0.870*** & 0.953 & **1.99*** & **0.391*** & 0.314 \\ \hline Ensemble by score & 7 & 0.873 & **0.950*** & 2.00 & 0.433 & 0.281 \\ \hline Google Trends & 7 & 0.955 & 0.948 & 2.17 & 0.522 & **0.280*** \\ \hline
111 Calls & 7 & 0.979 & 0.951 & 2.22 & 0.506 & 0.312 \\ \hline
111 Online & 7 & 0.974 & 0.955 & 2.21 & 0.503 & 0.302 \\ \hline Univariate HGAM & 7 & 0.897 & 0.955 & 2.04 & 0.399 & 0.344 \\ \hline Combined Indicators & 14 & 1.070 & 0.938 & 2.42 & 0.556 & 0.350 \\ \hline Univariate baseline & 14 & 1.330 & 0.913 & 2.95 & 0.492 & 0.635 \\ \hline Ensemble by mean & 14 & 0.919 & 0.944 & **2.10*** & 0.456 & 0.302 \\ \hline Ensemble by regression & 14 & 0.918 & 0.945 & **2.10*** & **0.405*** & 0.346 \\ \hline Ensemble by score & 14 & **0.916*** & 0.941 & **2.10*** & 0.454 & 0.301 \\ \hline Google Trends & 14 & 1.020 & 0.933 & 2.28 & 0.585 & **0.279*** \\ \hline
111 Calls & 14 & 1.040 & 0.940 & 2.35 & 0.540 & 0.343 \\ \hline
111 Online & 14 & 1.030 & **0.947*** & 2.33 & 0.541 & 0.321 \\ \hline Univariate HGAM & 14 & 1.020 & 0.939 & 2.30 & 0.420 & 0.435 \\ \hline Combined Indicators & 21 & 1.17 & 0.920 & 2.60 & 0.621 & 0.387 \\ \hline Univariate baseline & 21 & 1.75 & 0.885 & 3.77 & 0.564 & 0.936 \\ \hline Ensemble by mean & 21 & **0.986*** & 0.934 & 2.25 & 0.477 & 0.342 \\ \hline Ensemble by regression & 21 & 0.994 & **0.935*** & 2.27 & **0.418*** & 0.398 \\ \hline Ensemble by score & 21 & 0.976 & 0.932 & **2.22*** & 0.474 & 0.334 \\ \hline \hline \end{tabular}
Table 3: Scores of each individual and ensemble model across a range of forecast horizons averaged over the BA.4/5 waves, shown for predictions at NHS Trust level. The same scores are shown for regional predictions in Supplementary Table A. Best performing models within forecast horizon and metric are denoted with an asterisk (*).

## 4 Discussion
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{**Winter 2022/23 - Trust geography**} \\ \hline \multirow{2}{*}{**model**} & \multirow{2}{*}{**forecast**} & \multirow{2}{*}{**interval**} & \multirow{2}{*}{**95\%**} & \multirow{2}{*}{**median**} & \multirow{2}{*}{**underprediction**} & \multirow{2}{*}{**overprediction**} \\ & & & & & & \\ \cline{5-6} & \multirow{2}{*}{**horizon**} & \multirow{2}{*}{**score**} & \multirow{2}{*}{**coverage**} & & & & \\ \hline Combined Indicators & 7 & 0.746 & 0.956 & 1.69 & 0.384 & 0.242 \\ Univariate baseline & 7 & 0.801 & 0.945 & 1.81 & 0.353 & 0.32 \\ \hline Ensemble by mean & 7 & 0.674 & 0.952 & **1.54*** & 0.334 & 0.224 \\ Ensemble by regression & 7 & 0.674 & 0.955 & **1.54*** & **0.301*** & 0.252 \\ \hline Ensemble by score & 7 & **0.672*** & **0.951*** & **1.54*** & 0.334 & 0.223 \\ \hline Google Trends & 7 & 0.680 & 0.959 & **1.54*** & 0.384 & **0.189*** \\ \hline
**111 Calls** & 7 & 0.759 & 0.955 & 1.72 & 0.364 & 0.275 \\ \hline
**111 Online** & 7 & 0.692 & 0.970 & 1.58 & 0.347 & 0.222 \\ Univariate HGAM & 7 & 0.697 & 0.953 & 1.58 & 0.327 & 0.257 \\ \hline Combined Indicators & 14 & 0.831 & 0.938 & 1.85 & 0.443 & 0.269 \\ \hline Univariate baseline & 14 & 1.000 & 0.923 & 2.23 & 0.391 & 0.465 \\ Ensemble by mean & 14 & **0.729*** & 0.939 & **1.66*** & 0.375 & 0.239 \\ \hline Ensemble by regression & 14 & 0.735 & 0.943 & 1.68 & **0.336*** & 0.277 \\ \hline Ensemble by score & 14 & **0.729*** & 0.937 & **1.66*** & 0.374 & 0.240 \\ \hline Google Trends & 14 & 0.745 & 0.946 & **1.66*** & 0.429 & **0.207*** \\ \hline
**111 Calls** & 14 & 0.801 & 0.940 & 1.78 & 0.436 & 0.250 \\ \hline
**111 Online** & 14 & 0.744 & **0.951*** & **1.66*** & 0.408 & 0.221 \\ \hline Univariate HGAM & 14 & 0.828 & 0.938 & 1.85 & 0.353 & 0.353 \\ \hline Combined Indicators & 21 & 0.978 & 0.904 & 2.07 & 0.542 & 0.317 \\ \hline Univariate baseline & 21 & 1.380 & 0.891 & 2.95 & 0.448 & 0.734 \\ \hline Ensemble by mean & 21 & **0.795*** & 0.921 & 1.77 & 0.415 & 0.263 \\ \hline Ensemble by regression & 21 & 0.813 & 0.924 & 1.82 & **0.376*** & 0.311 \\ \hline Ensemble by score & 21 & 0.796 & 0.917 & 1.77 & 0.415 & 0.264 \\ \hline Google Trends & 21 & 0.824 & **0.932*** & 1.79 & 0.458 & 0.254 \\ \hline
**111 Calls** & 21 & 0.820 & 0.927 & 1.77 & 0.512 & **0.202*** \\ \hline
**111 Online** & 21 & 0.808 & 0.929 & **1.73*** & 0.497 & 0.203 \\ \hline Univariate HGAM & 21 & 1.060 & 0.911 & 2.30 & 0.395 & 0.523 \\ \hline \end{tabular}
\end{table}
Table 4: Scores of each individual and ensemble model across a range of forecast horizons averaged over the Winter 2022/23 wave, shown for predictions at NHS Trust level. The same scores are shown for regional predictions in Supplementary Table B. Best performing models within forecast horizon and metric are denoted with an asterisk (*).
To improve forecasting capability at both local and national level in England, we developed a novel forecasting framework, based on generalised additive models. These models are flexible and fast, allowing them to be used in real-time and rapidly adjust to changing variant/immunological dynamics. In addition to just modelling time series trends, our framework allows the incorporation of syndromic surveillance data as a leading indicator, including Google Trends data and NHS 111 (non-emergency) data, which we show improves forecasting performance at epidemic changepoints.
The primary strength of our methods came from the ensemble approach. Using multiple models allows different leading indicators and model structures to be aggregated into an ensemble forecast, which we have shown to outperform the individual models. This structure allows the ensemble to compensate for limitations in individual models. We validated this model across multiple waves of the COVID-19 pandemic, using proper scoring rules. We show that ensemble models out-score individual models even with a week longer forecast horizon, with the ensemble models having the better interval score and median absolute errors across horizon, wave, and geography.
Throughout the COVID-19 pandemic, hospital forecasting has been an essential part of the public health response worldwide [31][8][32]. Models have been developed for a plethora of use cases, from hospital scale workload planning [33] to national level policy making [32]. The range of use cases and challenging landscape of the pandemic have led to a range of methods being developed. Disease transmission models have been widely used for exploring potential scenarios, such as the roadmap out of lockdown in the UK [34]. However, the complex immune landscape and mixture of variants with different transmissibility and immune evasion have led to transmission models becoming increasingly hard to parameterise in real-time, particularly as the sparsity of high-resolution data increases. Therefore, statistical forecasting models have also seen widespread use, such as time series models [35].
Typical time series models for forecasting respiratory healthcare pressures included ARIMA models [36] and deep learning time series approaches [37]. For influenza, as an example, the power in these methods comes from repeated qualitative behaviour across years, where winter surges are seen over similar periods and following similar shapes. However, in the case of COVID-19, regular seasonal behaviour has not yet become established, with waves driven by a mixture of behavioural changes, waning immunity, and viral evolution. Therefore, the main strength of these typical methods is substantially reduced, limiting their forecasting potential. During the 2022/2023 Winter period in England, this reduced performance was also observed for influenza, due to the perturbations to the typical dynamics caused by the COVID-19 non-pharmaceutical interventions [38].
The main limitation of this model is the quality of the leading indicator data. To be used as a leading indicator, syndromic surveillance data must have a stable relationship with hospital admissions. Whilst this has sometimes been the case [13], variants with a different disease severity profile could cause a step-change in the relationship. Additionally, the quality of syndromic surveillance data relies on individuals in the population mentioning
the right terms, which could be biased by public health messaging or the presence of co-circulating infections with similar symptom profiles.
Another limitation is the limited number of models included in the ensemble. Future work should focus on combining the individual methods presented here with other forecasting models in an improved ensemble. Additionally, we have only considered a small sample of possible ensembling techniques. More complex techniques, such as Bayesian stacking [39], could lead to stronger ensemble performance.
## Conclusion
In this manuscript we present a set of models used to forecast hospitalisations across the 2022 Omicron waves within UKHSA to guide public health policy. We show that ensembling methods can improve performance at epidemic peaks using purely data driven models, and that combining GLMs incorporating leading indicators with hierarchical GAMs fitted to admissions improves predictive performance overall and over time. We robustly compare the predictive performance of the modelling approaches and validate the methods against two Omicron waves. We show that models based on leading indicators can help anticipate turning points, while other approaches can supplement performance in different epidemic phases.
## Conflict of Interest
The authors have declared that no competing interests exist. The authors were employed by the UKHSA but received no specific funding for this study.
## Data Availability Statement
UKHSA operates a robust governance process for applying to access protected data that considers:
* the benefits and risks of how the data will be used
* compliance with policy, regulatory and ethical obligations
* data minimisation
* how the confidentiality, integrity, and availability will be maintained
* retention, archival, and disposal requirements
* best practice for protecting data, including the application of 'privacy by design and by default', emerging privacy conserving technologies and contractual controls
Access to protected data is always strictly controlled using legally binding data sharing contracts.
UKHSA welcomes data applications from organisations looking to use protected data for public health purposes.
To request an application pack or discuss a request for UKHSA data you would like to submit, contact [email protected].
## References
* [1] United Kingdom Health Security Agency, "Coronavirus (COVID-19) in the UK," May 2020. [Online]. Available: [https://coronavirus.data.gov.uk/details/cases?areaType=nation&areaName=England](https://coronavirus.data.gov.uk/details/cases?areaType=nation&areaName=England).
* [2] L. B. Shrestha, C. Foster, W. Rawlinson, N. Tedla and R. A. Bull, "Evolution of the SARS-CoV-2 omicron variants BA. 1 to BA. 5: Implications for immune escape and transmission," _Reviews in Medical Virology_, vol. 32, no. 5, p. e2381, 2022.
* [3] U. H. S. Agency, "SARS-CoV-2 variants of concern and variants under investigation in England: Technical briefing 50," UK Health Security Agency, February 2023. [Online]. Available: [https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1138007/variant-technical-briefing-50-10-february-2023.pdf](https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1138007/variant-technical-briefing-50-10-february-2023.pdf). [Accessed May 2023].
* [4] A. Booth, A. B. Reed, S. Ponzo, A. Yassaee, M. Aral, D. Plans, A. Labrique and D. Mohan, "Population risk factors for severe disease and mortality in COVID-19: A global systematic review and meta-analysis," _PloS one_, vol. 16, no. 3, p. E0247461, 2021.
* [5] A. Sanyaoglu, C. Okorie, A. Marinkovic, R. Patidar, K. Younis, P. Desai, Z. Hosein, I. Padda, J. Mangat and M. Altaf, "Comorbidity and its impact on patients with COVID-19," _SN comprehensive clinical medicine_, vol. 2, no. 8, pp. 1069-1076, 2020.
* [6] J. M. Wolf, H. Petek, J. G. Maccari and L. A. Nasi, "COVID-19 pandemic in Southern Brazil: Hospitalizations, intensive care unit admissions, lethality rates, and length of stay between March 2020 and April 2022," _Journal of Medical Virology_, vol. 94, no. 10, pp. 4839-4849, 2022.
* [7] Q. J. Leclerc, N. M. Fuller, R. H. Keogh, K. Diaz-Ordaz, R. Sekula, M. G. Semple, K. E. Atkins, S. R. Procter and others, "Importance of patient bed pathways and length of stay differences in predicting COVID-19 hospital bed occupancy in England," _BMC health services research_, vol. 21, no. 1, p. 566, 2021.
* [8] J. Paireau, A. Andronico, N. Hoze, M. Layan, P. Crepey, A. Roumagnac, M. Lavielle, P.-Y. Boelle and S. Cauchemez, "An ensemble model based on early predictors to forecast COVID-19 health care demand in France," _Proceedings of the National Academy of Sciences_, vol. 119, no. 18, p. e2103302119, 2022.
* [9] J.-P. Chretien, D. George, J. Shaman, R. A. Chitale and F. E. McKenzie, "Influenza Forecasting in Human Populations: A Scoping Review," _PloS one_, vol. 9, no. 4, p. e94130, 2014.
* [10] P. Birrell, J. Blake, E. Van Leeuwen, N. Gent and D. De Angelis, "Real-time nowcasting and forecasting of COVID-19 dynamics in England: the first wave," _Philosophical Transactions of the Royal Society B_, vol. 376, no. 1829, p. 20200279, 2021.
* [11] A. K. Sahai, N. Rath, V. Sood and M. P. Singh, "ARIMA modelling & forecasting of COVID-19 in top five affected countries," _Diabetes & Metabolic Syndrome: Clinical Research & Reviews_, vol. 14, no. 5, pp. 1419-1427, 2020.
* [12] H. M. Nguyen, P. J. Turk and A. D. McWilliams, "Forecasting covid-19 hospital census: A multivariate time-series model based on local infection incidence," _JMIR Public Health and Surveillance_, vol. 7, no. 8, p. e28195, 2021.
* [13] J. Mellor, C. E. Overton, M. Fyles, L. Chawner, J. Baxter, T. Baird and T. Ward, "Understanding the leading indicators of hospital admissions from COVID-19 across successive waves in the UK," Cornell University, 21 March 2023. [Online]. Available: [https://arxiv.org/abs/2303.12037](https://arxiv.org/abs/2303.12037).
* [14] Cabinet Office, "Guidance COVID-19 Response: Living with COVID-19," 6 May 2022. [Online]. Available: [https://www.gov.uk/government/publications/covid-19-response-living-with-covid-19/covid-19-response-living-with-covid-19](https://www.gov.uk/government/publications/covid-19-response-living-with-covid-19/covid-19-response-living-with-covid-19).
* [15] N. E. Kogan, L. Clemente, P. Liautaud, J. Kaashoek, N. B. Link, A. T. Nguyen, F. S. Lu, P. Huybers, B. Resch, C. Havas and others, "An early warning approach to monitor COVID-19 activity with multiple digital traces in near real time," _Science Advances_, vol. 7, no. 10, p. eabd6989, 2021.
* [16] D. J. McDonald, J. Bien, A. Green, A. J. Hu, N. Defries, S. Hyun, N. L. Oliveira, J. Sharpack, J. Tang, R. Tibshirani and others, "Can auxiliary indicators improve COVID-19 forecasting and hotspot prediction?," _Proceedings of the National Academy of Sciences_, vol. 118, no. 51, p. e2111453118, 2021.
* [17] S. Meakin, S. Abbott, N. Bosse, J. Munday, H. Gruson, J. Hellewell, K. Sherratt and S. Funk, "Comparative assessment of methods for short-term forecasts of COVID-19 hospital admissions in England at the local level," _BMC medicine_, vol. 20, no. 1, pp. 1-15, 2022.
* [18] J. Paireau, A. Andronico, N. Hoze, M. Layan, P. Crepey, A. Roumagnac, M. Lavielle, P.-Y. Boelle and S. Cauchemez, "An ensemble model based on early predictors to forecast COVID-19 health care demand in France," _Proceedings of the National Academy of Sciences_, vol. 119, no. 18, p. e2103302119, 2022.
* [19] H. Wu and D. Levinson, "The ensemble approach to forecasting: a review and synthesis," _Transportation Research Part C: Emerging Technologies_, vol. 132, p. 103357, 2021.
* [20] T. Gneiting and M. Katzfuss, "Probabilistic forecasting," _Annual Review of Statistics and Its Application_, vol. 1, pp. 125-151, 2014.
* [21] NHS, "COVID-19 Hospital Activity," 2023. [Online]. Available: [https://www.england.nhs.uk/statistics/statistical-work-areas/covid-19-hospital-activity/](https://www.england.nhs.uk/statistics/statistical-work-areas/covid-19-hospital-activity/). [Accessed May 2023].
* [22] N. England, "Process and definitions for the daily situation report," 2021. [Online]. Available: [https://www.england.nhs.uk/publication/process-and-definitions-for-the-daily-situation-report/](https://www.england.nhs.uk/publication/process-and-definitions-for-the-daily-situation-report/). [Accessed November 2022].
* [23] S. Meakin, S. Abbott and S. Funk, "NHS trust level Covid-19 data aggregated to a range of spatial scales," 2021.
* [24] T. Ward, A. Johnsen, S. Ng and F. Chollet, "Forecasting SARS-CoV-2 transmission and clinical risk at small spatial scales by the application of machine learning architectures to syndromic surveillance data," _Nature Machine Intelligence_, vol. 4, no. 10, pp. 814-827, 2022.
* [25] S. Wood and M. S. Wood, "Package "mgcv"," _R package version_, vol. 1, no. 29, p. 729, 2015.
* [26] E. J. Pedersen, D. L. Miller, G. L. Simpson and N. Ross, "Hierarchical generalized additive models in ecology: an introduction with mgcv," _PeerJ_, vol. 7, p. e6876, 2019.
* [27] J. Friedman, T. Hastie and R. Tibshirani, "Regularization paths for generalized linear models via coordinate descent," _Journal of statistical software_, vol. 33, no. 1, p. 1, 2021.
* [28] J. Bracher, D. Wolffram, J. Deuschel, K. Gorgen, J. L. Ketterer, A. Ullrich, S. Abbott, M. V. Barbarossa, D. Bertsimas, S. Bhatia and others, "A pre-registered short-term forecasting study of COVID-19 in Germany and Poland during the second wave," _Nature communications_, vol. 12, no. 1, p. 5173, 2021.
* [29] E. Y. Cramer, E. L. Ray, V. K. Lopez, J. Bracher, A. Brennen, A. J. Castro Rivadeneira, A. Gerding, T. Gneiting, K. H. House, Y. Huang and others, "Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States," _Proceedings of the National Academy of Sciences_, vol. 119, no. 15, p. e2113561119, 2022.
* [30] N. I. Bosse, H. Gruson, A. Cori, E. van Leeuwen, S. Funk and S. Abbott, "Evaluating forecasts with scoringutils in R," arXiv, May 2022. [Online]. Available: [https://arxiv.org/abs/2205.07090](https://arxiv.org/abs/2205.07090).
* [31] M. Bicher, M. Zuba, L. Rainer, F. Bachner, C. Rippinger, H. Ostermann, N. Popper, S. Thurner and P. Klimek, "Supporting COVID-19 policy-making with a predictive epidemiological multi-model warning system," _Nature Communications Medicine_, vol. 2, no. 1, p. 157, 2022.
* [32] M. Biggerstaff, R. B. Slayton, M. A. Johansson and J. C. Butler, "Improving Pandemic Response: Employing Mathematical Modeling to Confront Coronavirus Disease 2019," _Clinical Infectious Diseases_, vol. 74, no. 5, pp. 913-917, 2022.
* [33] M. G. Klein, C. J. Cheng, E. Lii, K. Mao, H. Mesbahi, T. Zhu, J. A. Muckstadt and N. Hupert, "COVID-19 models for hospital surge capacity planning: a systematic review," _Disaster medicine and public health preparedness_, vol. 16, no. 1, pp. 390-397, 2022.
* Roadmap Step 2, 31 March 2021," GOV.UK, 5 April 2021. [Online]. Available: [https://www.gov.uk/government/publications/spi-m-o-summary-of-further-modelling-of-easing-restrictions-roadmap-step-2-31-march-2021](https://www.gov.uk/government/publications/spi-m-o-summary-of-further-modelling-of-easing-restrictions-roadmap-step-2-31-march-2021).
* [35] S. Meakin, S. Abbott, N. Bosse, J. Munday, H. Gruson, J. Hellewell, K. Sherratt and S. Funk, "Comparative assessment of methods for short-term forecasts of COVID-19 hospital admissions in England at the local level," _BMC medicine_, vol. 20, no. 1, pp. 1-15, 2022.
* [36] Y. Wang, Z. Yan, D. Wang, M. Yang, Z. Li, X. Gong, D. Wu, L. Zhai, W. Zhang and Y. Wang, "Prediction and analysis of COVID-19 daily new cases and cumulative cases: times series
forecasting and machine learning models," _BMC Infectious Diseases_, vol. 22, no. 1, pp. 1-12, 2022.
* [37] A. Zeroual, F. Harrow, A. Dairi and Y. Sun, "Deep learning methods for forecasting COVID-19 time-Series data: A Comparative study," _Chaos, Solitons & Fractals_, vol. 140, p. 110121, 2020.
* [38] J. Mellor, R. Christie, C. E. Overton, R. S. Paton, R. Leslie, M. Tang, S. Deepy and T. Ward, "Forecasting influenza hospital admissions within English sub-regions using hierarchical generalised additive models," Arxiv, 23 February 2023. [Online]. Available: [https://arxiv.org/abs/2302.11904](https://arxiv.org/abs/2302.11904).
* [39] Y. Yao, A. Vehtari, D. Simpson and A. Gelman, "Using Stacking to Average Bayesian Predictive Distributions (with Discussion)," _Bayesian Analysis_, vol. 3, no. 13, pp. 917-1007, 2018.
* [40] UK Health Security Agency, "SARS-CoV-2 variants of concern and variants under investigation in England Technical briefing 43," 24 June 2022. [Online]. Available: [https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1103533/Technical-Briefing-43-24June2022.pdf](https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1103533/Technical-Briefing-43-24June2022.pdf).
* [41] S. Evans, E. Agnew, E. Vynnycky, J. Stimson, A. Bhattacharya, C. Rooney, B. Warne and J. Robotham, "The impact of testing and infection prevention and control strategies on within-hospital transmission dynamics of COVID-19 in English hospitals," _Philosophical Transactions of the Royal Society B_, vol. 376, no. 1829, p. 20200268, 2021.
* [42] D. S. Silk, V. E. Bowman, D. Semochkina, U. Dalrymple and D. C. Woods, "Uncertainty quantification for epidemiological forecasts of COVID-19 through combinations of model predictions," _Statistical Methods in Medical Research_, vol. 31, no. 9, 2022.
## Supplementary Section A
The exact start and end of an epidemic wave due to a COVID variant does not have a single correct date; this also extends to hospital admissions. To discretise the waves and allow comparative analysis we have defined start and end dates for the analysis of each wave in the study. This was done by selecting the lowest points between admissions peaks in England from the UK COVID-19 Dashboard, using the 7-day moving average. While the epidemic peak is of interest, so is the turning point before the growth phase; therefore we include an offset of X days, to capture some time before the turning point. To ensure alignment for weekly forecasts and consistent days of the week, for a start date we select the preceding Sunday, and for the end date the following Sunday. This produces the start and end dates for each admission wave given in Table 2.
_Supplementary Figure B. Sensitivity with interval score for ensemble regression approach over the BA.4/5 wave across a range of prior scale values with the prior normal(1/n_ models, prior scale). A non-informative prior (high value) performs worse than a strong prior (low value)._ |
2304.13779 | Simulation of the Earth's radio leakage from mobile towers as seen from
selected nearby stellar systems | Mobile communication towers represent a relatively new but growing
contributor to the total radio leakage associated with planet Earth. We
investigate the overall power contribution of mobile communication towers to
the Earth\'s radio leakage budget, as seen from a selection of different nearby
stellar systems. We created a model of this leakage using publicly available
data of mobile tower locations. The model grids the planet's surface into
small, computationally manageable regions, assuming a simple integrated
transmission pattern for the mobile antennas. In this model, these mobile tower
regions rise and set as the Earth rotates. In this way, a dynamic power
spectrum of the Earth was determined, summed over all cellular frequency bands.
We calculated this dynamic power spectrum from three different viewing points,
HD 95735, Barnard star, and Alpha Centauri A. Our preliminary results
demonstrate that the peak power leaking into space from mobile towers is $\sim
4$GW. This is associated with LTE mobile tower technology emanating from the
East Coast of China as viewed from HD 95735. We demonstrate that the mobile
tower leakage is periodic, direction dependent, and could not currently be
detected by a nearby civilization located within 10 light years of the Earth,
using instrumentation with a sensitivity similar to the Green Bank Telescope.
We plan to extend our model to include more powerful 5G mobile systems, radar
installations, ground based uplinks (including the Deep Space Network), and
various types of satellite services, including low Earth orbit constellations
such as Starlink and OneWeb. | Ramiro C. Saide, Michael A. Garrett, Nalini. Heeralall-Issur | 2023-04-26T18:51:32Z | http://arxiv.org/abs/2304.13779v1 | Simulation of the Earth's radio-leakage from mobile towers as seen from selected nearby stellar systems
###### Abstract
Mobile communication towers represent a relatively new but growing contributor to the total radio leakage associated with planet Earth. We investigate the overall power contribution of mobile communication towers to the Earth's radio leakage budget, as seen from a selection of different nearby stellar systems. We created a model of this leakage using publicly available data of mobile tower locations. The model grids the surface of the planet into small, computationally manageable regions, assuming a simple integrated transmission pattern for the mobile antennas. In this model, these mobile tower regions rise and set as the Earth rotates. In this way, a dynamic power spectrum of the Earth was determined, summed over all cellular frequency bands. We calculated this dynamic power spectrum from three different viewing points - HD 95735, Barnard's star, and Alpha Centauri A. Our preliminary results demonstrate that the peak power leaking into space from mobile towers is \(\sim 4\)GW. This is associated with LTE mobile tower technology emanating from the East Coast of China as viewed from HD 95735. We demonstrate that the mobile tower leakage is periodic, direction dependent, and could not currently be detected by a nearby civilisation located within 10 light years of the Earth, using instrumentation with a sensitivity similar to the Green Bank Telescope (GBT). We plan to extend our model to include more powerful 5G mobile systems, radar installations, ground based up-links (including the Deep Space Network), and various types of satellite services, including low-Earth orbit constellations such as Starlink and OneWeb.
keywords: Exoplanets - Earth - Astronomical instrumentation, methods, and techniques
## 1 Introduction
The goal of SETI (Search for Extraterrestrial Intelligence) is to discover evidence of intelligent life beyond the Earth by looking for so-called "techno-signatures" (artificially generated signals that are not produced by nature). Unfortunately, all signals detected by SETI radio experiments to date have not been attributable to an intelligent civilisation, other than our own (Enriquez et al., 2017; Pinchuk et al., 2019; Traas et al., 2021; Perez et al., 2020; Wlodarczyk-Sroka et al., 2020; Price et al., 2020; Harp et al., 2016; Heller and Pudritz, 2016). In principle, SETI surveys need to be sensitive to a wide range of parameter space - this is due to our ignorance regarding some very basic aspects of the signal we are looking for - including the timing of any transmissions, their location on the sky and their central frequency (Wright, 2021; Gray, 2020).
In parallel with the search for signs of intelligent life, the topic of exoplanets has had a major impact on the possible incidence of extraterrestrial life in the galaxy, as we understand more about the conditions on these planetary systems and their potential habitability (Wandel, 2017; Ribas et al., 2018; Robinson, 2017). Future advances in observing capabilities from space and on the ground have brought up new and intriguing prospects in the search for extraterrestrial life (Li et al., 2020; Shuch, 2011; Tarter, 2001).
Most SETI surveys are optimised to detect narrow-band signals from powerful beacons (Harp et al., 2011; Siemion et al., 2011; Tarter, 2001). It is usually assumed that the detection of fainter, broadband leakage signals is only relevant to very nearby stellar systems. The possibility of "eavesdropping" on the every-day radio transmissions inadvertently leaking into space from other technical civilisations was first considered by Sullivan et al. (1978). They considered the specific case of our planet Earth, and concluded that the most detectable signals were associated with military radar systems and television stations. They created a model of the Earth's leakage radiation, and demonstrated that variations in the signal power would be observed by an external observer as different regions of the Earth rotated in and out of view.
The nature of the Earth's radio leakage has changed significantly since the pioneering work of Sullivan et al. (1978) was published over 40 years ago. For example, powerful TV transmissions are no longer a major contributor to the Earth's leakage radiation with the rise of cable TV and the internet. In addition, mobile communication systems were unknown until the 1990s, and they currently represent a new and still growing component of the Earth's human-generated radio emission. According to Statista (2022), the current number of mobile phone users is 7.26 billion, which suggests in excess of
\(\sim 91.00\%\) of people in the world are cell phone owners. Individual handsets are serviced by a huge network of mobile tower systems that are spread across the landmass of the planet. Although each of these towers generates radio transmissions with relatively low power levels (\(\sim\) Hundreds of watts), the directivity of these antennas and the sheer numbers involved, make them a significant component worthy of further study. It should also be noted that these mobile towers transmit at frequencies within or close to L-band, a major band for radio astronomy that includes the "water hole" defined by the natural emissions from HI and OH at \(\lambda=21\)cm and \(\lambda=18\)cm (Oliver and Billingham, 1971; Siemion et al., 2010).
As far as we know, no previous research has investigated the cumulative effect of the mobile tower emissions and the implication for eavesdropping and SETI more generally. Other studies have demonstrated how other Earth techno-signatures would be detectable from space at near-IR and optical wavelengths (Beatry, 2021), and from the use of power beaming to transfer energy and accelerate spacecraft (Benford and Benford, 2016; Benford and Matloff, 2019). Here we propose to investigate how the leakage of radiation from mobile towers on Earth would appear if viewed by an extraterrestrial civilisation using instrumentation similar to our current radio telescope technology. More specifically, we calculate the overall radio power spectrum of the Earth and the associated detection limits for a number of different exoplanet systems. We also attempt to analyse the future evolution of our mobile tower radio leakage using 5G towers as a proxy. Our study provides some insight on what we might expect if there is a human-like civilisation located elsewhere in the Milky Way with similar or indeed more advanced levels of radio telescope technology.
The paper is organised as follows. In section 2, we provide a brief overview of the main radio communication systems currently deployed on the Earth, and details of how we model mobile communication towers in particular. We determine the integrated power spectrum of mobile towers following the methods of Sullivan et al. (1978). Our main results are presented in section 3, including various power spectra for different observers located on exoplanet systems. In section 4 we explore the detectability of these emissions assuming a range of different instrumentation, including next generation telescopes such as the SKA. Our conclusions are given in section 5, including future plans to extend our model to include additional sources of leakage radiation.
## 2 Radio Leakage from the Earth
The earlier work of Sullivan et al. (1978) identified TV transmitters and military radar systems, in particular "early-warning" radar systems, to be the most likely form of radio leakage to be detected by another intelligent civilisation observing the Earth. In particular, the detection by another civilisation of individual narrow-band TV and/or radio transmissions could be used to infer properties of our planet such as details of its rotation rate and orbital motion, information about the planet's ionosphere and troposphere, the global distribution of transmitters and possibly some cultural aspects of our civilisation.
In this section, we consider how the Earth's radio leakage has changed in recent times, arguing that a new and important component of that emission is associated with mobile communication systems, and in particular, mobile communication towers. We have attempted to model this new component of leakage, in order to determine the radio power spectrum profile as a function of time for a specific time of the year, location of the transmitter (latitude and longitude), and celestial coordinates/distance of an outside observer (receiver).
### Earth radio mobile communication
The radio spectrum is used by a wide range of different services - these include: radio and television broadcasting, radio navigation and position determination, military and civilian radar systems, space and satellite communications, remote sensing applications, and so on. Over the last few decades, the development and use of mobile services by land, maritime, aeronautical, and satellite applications have grown enormously. The frequency range used by most of these services is typically confined to 3 kHz to 30 GHz, and the power levels transmitted can span many orders of magnitude (\(1-10^{6}\) W) (Bianchi and Meloni, 2007). Each service is allocated a range of operating frequencies by the International Telecommunication Union (ITU) (Intven, 2000).
Mobile telecommunication services have become an essential part of our modern lives, permitting us to exchange information, including video, across the planet almost instantaneously. Cellphone communication is part of a wider wireless communication service that has experienced exceptional growth in the last few decades. Since the early development of cell phones in 1973 (Murphy, 2013), the number of mobile device connections has now surpassed the number of people on the planet, making it the fastest-growing human-made technology phenomenon ever. In addition, this technology is spread across the major land areas on the planet where people live. There are now over 10.98 billion mobile connections worldwide, outnumbering the current world population of 7.978 billion; according to United Nations digital analyst estimates, there are 3.002 billion more mobile connections than people on Earth (Statista, 2022).
By comparison, the number of TV stations has declined in recent years, with a massive shift towards viewing TV via cable or online streaming internet services. While TV and radio transmitters are still an important source of Earth leakage radiation at frequencies around \(40-700\) MHz, there can be little doubt that mobile technologies also represent a significant part of the overall leakage budget at frequencies between \(400-3000\) MHz. Currently, the most powerful sources of leakage remain military radar systems, as originally identified by Sullivan et al. (1978).
### Mobile Towers
Mobile phones communicate by transmitting radio waves through a network of fixed antennas called mobile towers. The handsets operate at frequencies between 450 and 3000 MHz transmitting isotropically with peak powers in the range of only 0.1 to 2W (Bianchi and Meloni, 2007). By comparison, a mobile tower generates peak powers of 100-2000W, and the antennas are directional with significant forward gain (\(\sim 50\times\)) towards the horizon (see figure 1).
A mobile tower is a physical tower or pole for the placement of cellular radio equipment used to transmit or receive telecommunication broadcasts. The type of mobile tower is related to its purpose. Micro cell towers are smaller mobile towers that provide mobile connectivity in a small area. The distance between each micro tower is about \(400-800\) m, whereas the distance between each macro tower is typically about \(2-4\) km. The number of mobile towers in a country is dependent on the area they are required to cover, the population density and the cellular technology being deployed, e.g. GSM (Global System for Mobile Communications), UMTS (Universal Mobile Telecommunications Service) and LTE (Long Term Evolution). Most countries presently deploy LTE.
#### 2.2.1 Mobile network frequency bands
The radio-frequency mobile spectrum is mainly divided into the following bands and deployed technologies:
1. GSM base station antennas transmit in the frequency range of \(935-960\) MHz. This frequency band of \(25\) MHz is divided into twenty sub-bands of \(1.2\) MHz, which are allocated to various operators (Kumar, 2010). There may be several carrier frequencies (1 to 5) allotted to one operator with an upper limit of \(6.2\) MHz bandwidth. Each carrier frequency may transmit \(10\) to \(20\)W of power. So, one operator may transmit \(50\) to \(100\)W of power and there may be \(3\) - \(4\) operators on the same roof top or tower. Total transmitted power levels are therefore in the region of \(200\) to \(400\)W. In addition, directional antennas are used, which typically may have a gain of around \(17\) dB, so effectively, several kW of power may be transmitted in the main beam direction (Kumar, 2010). The radiation pattern of directional antennas is something which is very critical in the whole transmission process (see figure 1).
2. UMTS is defined as the third-generation (3G) mobile network built on the global GSM standard, and transmits in the frequency ranges of \(1920-1980\) MHz and \(2110-2170\) MHz; these frequency bands are allocated for use in Europe and Asia. UMTS yields channel bandwidths of \(5\) MHz mainly1 but \(10\) MHz and \(20\) MHz are also possible. UMTS presents almost the same transmitting power characteristics as GSM.
Footnote 1: [https://www.umtsworld.com/technology/frequencies.htm](https://www.umtsworld.com/technology/frequencies.htm)
3. LTE yields more bands, from around \(600-3000\) MHz, with a bandwidth variation from \(10\) MHz to greater than \(100\) MHz. LTE permits the following channel bandwidths: \(1.4, 3, 5, 10, 15, 20\) MHz (Sautter, 2010). The signal strength values are defined by several different measurements. Shi et al. (2012) showed that LTE produces more powerful radiation than UMTS and GSM.
This mobile technology is prominently seen as radio interference in radio observations, including recent SETI surveys, e.g. Smith et al. (2021). In Fig.2 of Smith et al. (2021), we can see the signature of the interference associated with cellular bands across the observing bandwidth. This is what an extraterrestrial observer with a very sensitive radio telescope might detect if they eavesdrop on the Earth. See also Fig.3 in subsection 2.3.
### Mobile towers Geo-location data
We created a mobile tower database drawn from the world's largest Open Database of GPS mobile towers, OpenCellID2. OpenCellID enables public access to this database via an Application Programming Interface (API). The OpenCellID database is published under an open content license with the intention of promoting free use and redistribution of the data. The data contain different parameters, but the ones that are useful for this study include mobile tower latitude, longitude and deployed technology (GSM, UMTS and LTE).
Footnote 2: [https://www.opencellid.org](https://www.opencellid.org)
The OpenCellID database contains more than \(30\) million data points and it is updated on a daily basis. The database is populated via voluntary crowd-sourcing reports collected automatically by registered users via various mobile phone applications. There are known issues with respect to the quality of the data, in particular the data source is not complete (especially in developing countries) and may contain errors (Lv et al., 2019; Ricciato et al., 2015). Nevertheless, this is considered the best and largest, publicly available mobile tower database and it has been used in many other scientific studies (Johnson et al., 2021; Ulm et al., 2015; Werner and Porczek, 2019). We adopt it here, recognising that it may be incomplete in terms of geographical coverage but that it represents the best information that is currently available.
To visualize the mobile towers geographically, we used the software QGIS - a free and open-source package - enabling us to create, edit, visualize, and analyze geo-spatial information. QGIS supports different types of data, such as vector, raster, delimited text, mesh layers, and many others (QGIS Development Team, 2009; Kurt Menke et al., 2016).
Fig.3 shows the distribution of mobile towers on the Earth. The background map in purple is the Earth land coverage. A granular view of the mobile towers can be seen in Fig.4. This sample represents the distribution of mobile towers in the city of Lisbon, Portugal.
A closer look at the distribution of mobile towers across the Earth reflects the most densely populated areas and cities around the world, and provides a good sense of the granular nature of this particular form of leakage radiation. Dhaka (Bangladesh) is the world's most densely populated city (2021), with \(36{,}941\) residents per square kilometre. The distribution of mobile towers in the city can be seen in Fig.5. There are a total of \(82{,}833\) mobile towers - this includes \(45{,}356\) GSM towers, \(35{,}076\) UMTS towers, and \(2{,}401\) LTE towers.
### Estimate of the total emitted power of mobile towers
Mobile towers do not operate with constant output power; the power that they transmit depends on what they need to achieve. On average, a mobile tower covering a large rural area will emit more power than a small mobile tower in the city centre.
Estimating the total power emitted by mobile towers in great detail would require the emission characteristics of each individual tower to be known. Given the amount of data involved, performing beam tracking for each individual mobile tower is computationally very expensive. In order to circumvent this problem, we divided each continent into square grids of \(\sim 20\) degrees by \(20\) degrees (see fig.6). The grid size was effectively determined by the computing resource available to us. This approach reduced the number of mobile transmission beams that needed to be calculated but included enough granularity for the effects of the irregular geographical distribution of mobile towers to be retained in our results (see Section
Figure 1: Radiation Pattern of a typical Mobile Tower Antenna.
3). Fig.6 shows that some grid cells contain fewer than 2,000 towers while others contain more than 200,000 towers. Each central grid point became the equivalent location of the mobile tower transmitter, integrating the output of many towers into a single response. We calculated the coordinates (latitude and longitude) of the centre point of each grid - the centre of the grid encapsulated all the power of the towers within that grid, and this combined power is used as the peak of the equivalent transmitter's beam.
We calculated the contribution of each mobile tower technology deployed within each grid cell. The total power was then calculated based on the single-tower Equivalent Radiated Power (ERP): we counted all the towers of each technology within each grid and multiplied by the power emitted by a single cell tower. Cellular facilities typically use a few hundred watts of EIRP per channel, depending on the purpose at any given time and the number of service providers co-located at any given tower (Levitt & Lai, 2010). For simplicity, we assumed the following output power values for each type of mobile technology: GSM 100 W, UMTS 100 W, and LTE 200 W.
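For illustration, the grid-aggregation step described above can be sketched as follows (a minimal sketch assuming a pandas data frame extracted from OpenCellID with columns named `lon`, `lat` and `radio`; these column names, the use of the cell's lower-left corner, and the fixed centroid offset are illustrative assumptions rather than details of our actual pipeline):

```python
# Minimal sketch: sum the assumed per-tower powers within each 20x20 degree
# cell and represent each cell by a single equivalent transmitter.
import numpy as np
import pandas as pd

TOWER_POWER_W = {"GSM": 100.0, "UMTS": 100.0, "LTE": 200.0}  # assumed values from the text
GRID_DEG = 20.0                                              # grid cell size in degrees

def grid_power(towers: pd.DataFrame) -> pd.DataFrame:
    """Aggregate tower powers into one equivalent transmitter per grid cell."""
    df = towers.copy()
    df["cell_lon"] = np.floor(df["lon"] / GRID_DEG) * GRID_DEG
    df["cell_lat"] = np.floor(df["lat"] / GRID_DEG) * GRID_DEG
    df["power_w"] = df["radio"].map(TOWER_POWER_W).fillna(0.0)
    cells = df.groupby(["cell_lon", "cell_lat"], as_index=False)["power_w"].sum()
    # the cell centre acts as the location of the combined transmitter
    cells["centroid_lon"] = cells["cell_lon"] + GRID_DEG / 2.0
    cells["centroid_lat"] = cells["cell_lat"] + GRID_DEG / 2.0
    return cells
```

In the full calculation the centroids follow the grid geometry shown in Fig.6 rather than the fixed offset used in this sketch.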
A simplified model of a beam pattern of a mobile tower was adopted using a Gaussian function - we assigned all the beam patterns as omnidirectional in azimuth and of Gaussian shape in the elevation angle above the horizon. Note that our analysis does not take into consideration reflections from buildings, mountains or other structures that reflect radio waves.
For each of the grid centroids, we calculated the elevation and azimuth of an extraterrestrial observer (see section 3) for a particular
Figure 3: This map shows the geographic distribution of the mobile towers of planet Earth, represented by red dots. The map contains more than 30 million individual data points, most of which overlap at this resolution.
Figure 2: A histogram of total hits (signals likely to be associated with radio frequency interference) as a function of frequency (Smith et al., 2021).
day of the year (see for example, Fig.7). We used the elevation angle to calculate the gain of the Gaussian beam pattern with the amplitude being the sum of the power from the towers within the respective grid cell. Finally, we calculated the total power and plotted this against Greenwich Mean Sidereal Time (GMST).
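As a concrete illustration of this step, the sketch below uses astropy (acknowledged in section 5) to compute the elevation of Barnard's star from one grid centroid over 24 hours and converts it into a relative Gaussian gain peaking at the horizon; the 10-degree beamwidth and the sampling in UTC rather than GMST are illustrative assumptions:

```python
# Minimal sketch: elevation of Barnard's star from a grid centroid in Japan
# (as in Fig.7) converted into a relative horizon-peaked Gaussian gain.
import numpy as np
from astropy import units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

def horizon_gain(elevation_deg, fwhm_deg=10.0):
    """Gaussian beam in elevation, with its peak at 0 deg (the horizon)."""
    sigma = fwhm_deg / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * (np.asarray(elevation_deg) / sigma) ** 2)

centroid = EarthLocation.from_geodetic(lon=132.786 * u.deg, lat=34.033 * u.deg)
barnard = SkyCoord(ra="17h57m48.5s", dec="+04d41m36s", frame="icrs")
times = Time("2022-01-01 00:00:00") + np.linspace(0.0, 24.0, 289) * u.hour

altaz = barnard.transform_to(AltAz(obstime=times, location=centroid))
above_horizon = altaz.alt.deg > 0.0
relative_power = np.where(above_horizon, horizon_gain(altaz.alt.deg), 0.0)
```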
Mobile towers emit the maximum power towards the horizon, therefore the extraterrestrial observer will detect the maximum power when the mobile towers are rising or setting, in other words, when the observer is on the local horizon of the mobile tower (see Fig.7 and Fig.8). Nevertheless, other locations also contribute to the overall detectable power - our calculations also take this aspect into account.
## 3 Results
Our sample of stars was chosen primarily to be located nearby, and also in terms of which stellar coordinates could receive the maximum detectable leakage. We chose one star with a declination near the equator (Barnard's star), one in the northern hemisphere (HD 95735), and another in the southern hemisphere (Alpha Centauri A). Northern stars will detect more leakage radiation than southern stars due to the number of transmitters in the northern hemisphere illuminating these stars. Last but not least, we also considered systems with potentially habitable planets, which is the case for Barnard's star (Ribas et al., 2018).
Following Sullivan et al. (1978), we first determined the radio power spectrum from the Earth using the Barnard star as an extraterrestrial location. This star is a red dwarf at a distance of \(\sim 6\) light-years from Earth in the constellation of Ophiuchus with a Right Ascension: \(17^{h}57^{m}48.49803^{s}\) and Declination: \(+04^{\circ}41^{\prime}36.2072^{\prime\prime}\) (J2000). It is the fourth-nearest-known individual star to the Sun.
We determined the total power separately for LTE, GSM, and UMTS mobile technologies. Our preliminary results demonstrate that for Barnard's star, the peak power from mobile towers with LTE technology is of the order of \(\sim 3.2\) GW, out-powering the other two mobile technologies, which have peak powers of 1.3 GW (GSM) and 2.7 GW (UMTS).
From figure 9 we can see that the contribution of these power levels comes mainly from Western Europe, in particular the regions of France, Belgium, the United Kingdom, the north of Spain, Germany, and Denmark. East Asia follows, with the main contribution coming from China, Japan, North and South Korea, Vietnam, and Australia. We also have significant leakage from the West of North America. The peaks represent the times at which Barnard's star rises and sets from these locations. From the star's frame of reference, the peaks occur when either Western Europe or East Asia first comes into view
| **Continent** | **upper-left** | **bottom-left** | **bottom-right** | **upper-right** |
| --- | --- | --- | --- | --- |
| Africa | (-25.3604, 37.3452) | (-25.3604, -46.9657) | (51.4170, -46.9657) | (51.4170, 37.3452) |
| NA | (-179.1435, 83.6341) | (-179.1435, -0.3887) | (179.7809, -0.3887) | (179.7809, 83.6341) |
| SA | (-109.4537, 15.7029) | (-109.4537, -55.9185) | (-28.8770, -55.9185) | (-28.8770, 15.7029) |
| Asia | (25.6632, 55.4345) | (25.6632, -12.1999) | (153.9856, -12.1999) | (153.9856, 55.4345) |
| Europe | (-27.3746, 84.5116) | (-27.3746, 16.3208) | (179.9843, 16.3208) | (179.9843, 84.5116) |
| Oceania | (-180.0, 20.5554) | (-180.0, -54.7504) | (180.0, -54.7504) | (180.0, 20.5554) |

Table 1: Polygon coordinates used to create the grids for each continent; each entry is a pair of (longitude, latitude). NA and SA represent North America and South America respectively.
Figure 4: The distribution of mobile towers in Lisbon, Portugal. This is a small sample of the OpenCellID database available on the global scale.
Figure 5: The distribution of the different types of mobile towers in the most densely populated city in the world, Dhaka (Bangladesh). The GSM, LTE and UMTS mobile transmitters are represented by red, yellow and purple dots respectively.
Figure 6: This image shows the towers within each grid cell for the African continent. Note that the grids are 20 degrees by 20 degrees. The centroids are calculated based on the area of the grid in the map.
Figure 7: The position of Barnard star over 24 hours GMST from one of the grid centres located in Japan (longitude = 132.786, latitude = 34.033).
or finally disappears around the limb of the Earth (the edge of the Earth's disk) as seen from the star.
The second hypothetical extraterrestrial observer that we chose was HD 95735, a red dwarf with Right Ascension: \(11^{h}03^{m}20.1948^{s}\) and Declination: \(+35^{\circ}58^{\prime}11.5761^{\prime\prime}\) (J2000). This star is located \(\sim 8.3\) light years away from the Earth (Gatewood, 1974). Fig.10 represents the power structure of leakage caused by LTE mobile technology. From Fig.10, we can see that the contribution of these power levels comes mainly from the East Coast of China, followed by the West and East Coast of North America. The peak power leaking in the direction of HD 95735 is \(\sim 4\) GW. For the total power from GSM mobile tower emissions, we see the maximum power is \(\sim 1.6\) GW, and for UMTS it is \(\sim 3.3\) GW.
The third hypothetical observer was placed at Alpha Centauri A, with coordinates (RA = \(14^{h}39^{m}36.4940^{s}\) and Dec = \(-60^{\circ}50^{\prime}02.3737^{\prime\prime}\)) (J2000). This star is located \(\sim 4.2\) light years away from Earth (Zhao et al., 2017) and we can see from figure 11 that the contribution of the peak power levels comes mainly from the west of Asia and Central Europe, with East Africa and Australia also making significant contributions. Note that the main contribution from Australia occurs at lower culmination when this circumpolar star reaches its minimum altitude. The peak power leaking in the direction of Alpha Centauri A is approximately 3.5 GW. The total power from GSM and UMTS mobile towers is \(\sim 1.4\) GW and \(\sim 2.9\) GW respectively.
## 4 Discussion
Our findings are similar to the results presented earlier by Sullivan et al. (1978), demonstrating that the Earth's radio leakage remains periodic, now including the contribution made by mobile communication towers. This is not unexpected, since the Earth rotates and the physical distribution of towers across the surface of the planet is non-uniform in nature. The distributed location of mobile towers on the Earth leads to a very complex variation in the planet's radio leakage signature, and from our results we can see that it is also very dependent on the location of the hypothetical observer. In particular, the maximum leakage emission would be detected from northern stars (e.g. see HD 95735) - this is due to the fact that most mobile towers are located in the northern hemisphere. We also note that for stars at very low (far southern) declinations only a few transmitters on the Earth illuminate the extraterrestrial observer. For example, for an observer with Dec = -89.2291 deg (e.g. HD 136509) (Gaia Collaboration, 2018) and below, no transmitter beam directly illuminates the observer; nevertheless, the observer would still receive some amount of radiation. The same is true for stars with declinations close to +90 degrees.
Our study also draws attention to the fact that the leakage signature of the Earth has evolved quite rapidly over relatively short time scales. The radio emission Sullivan et al. (1978) estimated from TV transmitters and radar systems was dominated by a few thousand very powerful transmitters located in specific areas of the planet. We deal with millions of mobile towers, operating at higher frequencies and employing much more modest powers. But perhaps the most interesting difference is that these towers are much more geographically distributed than the TV transmitters Sullivan et al. (1978) considered. For the first time, our results (see Fig. 11) highlight the significant leakage contributions being made by the rise of developing countries on the African continent, as well as by countries such as Japan, Vietnam and China.
In this study, we have focused on leakage radiation from mobile towers. We have not included the leakage emission from mobile handsets themselves. The emission from handsets varies significantly, depending on whether they are actively transmitting or not. When a mobile phone is active the operator's network controls the output power down to levels as low as 1 mW (Lonn et al., 2004). However, the sheer number of handsets active around the world at any given time, suggests this component should not be neglected. Since most mobiles are being used in densely populated areas, the bulk of handsets will be operating at fairly low power levels. We estimate the total background leakage emission from mobile handsets located around the Earth to be around an order of magnitude less powerful than the peak leakage produced by mobile towers presented here. Since mobile handsets radiate isotropically, they add a background component to the leakage that will be less variable than mobile towers since the latter are beamed towards the horizon. For the moment, we have not included mobile handsets in the analysis presented here. Our simulations therefore present a lower limit on the leakage radiation emanating from the Earth due to mobile communication systems.
We note that by analysing the flux variation of our planet as a function of time, it should be possible for an extraterrestrial civilisation to generate a simple model of our planet that reproduces regions that are dominated by land, vegetation, and oceans/ice.
### Detectability range
In order to test whether these signals can be detected by an external observer, we first assume that they possess the same radio observing capabilities as we do. The overall likelihood of detecting our signals from space depends on the frequency, transmitted power, bandwidth, the sensitivity of the telescope, the distance of the observer, the persistence of the leakage, etc. (Grimaldi and Marcy, 2018). For a given radio telescope, the minimum detectable flux density \(S_{min}\) is given by:

\[S_{min}=SNR_{min}\frac{2kT_{sys}}{A_{eff}\sqrt{n_{pol}\,\Delta t_{obs}\,\Delta\nu}} \tag{1}\]

where \(SNR_{min}\) is the signal-to-noise threshold value, \(\Delta t_{obs}\) is the observing time, \(\Delta\nu\) is the bandwidth, and \(n_{pol}\) is the number of polarizations. The term \(2kT_{sys}/A_{eff}\) is also known as the SEFD. The SEFD is expressed in Jy (\(1\,Jy=10^{-26}\,Wm^{-2}Hz^{-1}\)).
As a proxy to what our potential observer might have at their disposal, we will use in the first instance the Green Bank telescope
Figure 8: An extraterrestrial observer will detect the maximum radiation from a mobile tower as it rises or sets. This illustration shows a tower rising across the horizon.
(GBT). For the GBT at L-band, the SEFD is approximately 10 Jy (Enriquez et al., 2017).
The minimum EIRP that can be detected is then:
\[EIRP_{min}=4\pi d^{2}F_{min} \tag{2}\]
where \(d\) is the distance to the transmitter, and \(F_{min}\) is the minimum flux (in units of \(Wm^{-2}\)) that can be detected by the observing system. The minimum detectable flux is \(F_{min}=S_{min}\,\delta\nu_{t}\), where \(S_{min}\) is the minimum detectable flux density and \(\delta\nu_{t}\) is the bandwidth of our transmitted signal (Price et al., 2020). Assuming \(SNR_{min}=7\), \(n_{pol}=2\), \(\delta\nu_{t}=20\) MHz and \(\Delta t_{obs}=5\) minutes, at a distance of 10 light-years the minimum \(EIRP\) that can be detected is \(1.436\times 10^{13}\,W\).
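As a quick numerical check, the sketch below evaluates equations (1) and (2) with the values quoted above (GBT SEFD of 10 Jy, \(SNR_{min}=7\), \(n_{pol}=2\), a 20 MHz bandwidth, a 5-minute integration and a 10 light-year distance) and recovers the threshold of \(\sim 1.4\times 10^{13}\) W:

```python
# Numerical check of the detection threshold using the values quoted in the text.
import numpy as np

sefd_jy = 10.0                 # GBT L-band SEFD [Jy]
jy = 1e-26                     # 1 Jy in W m^-2 Hz^-1
snr_min, n_pol = 7.0, 2
t_obs = 5 * 60.0               # observing time [s]
bw = 20e6                      # transmitted / observed bandwidth [Hz]
d = 10 * 9.461e15              # 10 light-years [m]

s_min = snr_min * sefd_jy * jy / np.sqrt(n_pol * t_obs * bw)   # equation (1)
f_min = s_min * bw                                             # W m^-2
eirp_min = 4.0 * np.pi * d**2 * f_min                          # equation (2)
print(f"EIRP_min = {eirp_min:.3e} W")                          # ~1.4e13 W
```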
For the Square Kilometre Array (SKA), the expected SEFD of SKA1-mid is \(\sim 1.55\) Jy (Pellegrini et al., 2021), and we can assume the full SKA (SKA Phase 2) would be a factor of 10 better than that. These next generation radio telescopes would therefore be able to detect leakage radiation that is 1-2 orders of magnitude fainter than the GBT currently can. While these detection levels are still several orders of magnitude short of being able to detect our own peak leakage levels associated with current generation mobile systems, we might expect those leakage levels to increase in the future - for example, mobile towers for 5G communication systems are expected to operate at power levels similar to 4G systems, but these transmissions will occupy a much larger part of the radio spectrum in terms of the bandwidth they use. The flux density measured by an extraterrestrial observer with a broad-band receiver system will therefore be significantly greater. Our results also show that the regional distribution of mobile towers suggests that employing telescope integration times significantly greater than 5 minutes is appropriate. Integration times of several hours are warranted from the plots presented in section 3, further increasing the sensitivity level by a factor of \(\sim 5\). Taking all of this into account, the full SKA should be able to detect levels of \(EIRP_{min}\sim 3\times 10^{11}\,W\) - not so far removed from the power levels associated with our current 4G peak leakage signature of \(\sim 4\times 10^{9}\) W.
One aspect of detectability we have not yet considered is whether the leakage radiation from mobile towers can be distinguished against the powerful broad-band radio emission produced by the Sun. The Sun is a powerful source of radio waves over a wide range of different frequencies (Raulin et al., 2004; Raulin & Pacini, 2005; Tapping, 2013). The solar radiation in the radio region can vary significantly when the Sun is active. During active periods, the solar flux can reach 10,000 solar flux units or more (sfu - corresponding to \(10^{-22}\,Wm^{-2}Hz^{-1}\)) at approximately 3 GHz. For an extraterrestrial observer, solar radio emission can be several orders of magnitude more powerful than the mobile tower emission (Muratore et al., 2022). So a single dish radio telescope operated by a distant civilisation would struggle to distinguish between the two components of emission using broadband radiometer measurements. An interferometer, however, with baselines of several thousands of km would be able to resolve the Earth and Sun spatially out to distances of \(\sim 1\) kpc. Our hypothetical observer located at Barnard's star would spatially resolve the Sun and Earth with an L-band interferometer of baseline length 100 km. In this case an interferometer with sufficient sensitivity would be able to detect a mobile leakage signal independently from the solar radio background, which would also be resolved.
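The numbers behind this resolution argument can be sketched as follows, comparing the Earth-Sun angular separation as seen from Barnard's star (\(\sim 6\) light-years) with the approximate \(\lambda/B\) diffraction limit of an L-band interferometer with a 100 km baseline:

```python
# Rough sketch: Earth-Sun angular separation from Barnard's star versus the
# resolution of a 100 km L-band baseline.
import numpy as np

au = 1.496e11          # astronomical unit [m]
ly = 9.461e15          # light-year [m]
wavelength = 0.21      # L-band (HI line) [m]
baseline = 100e3       # [m]

separation = au / (6.0 * ly)          # small-angle approximation [rad]
resolution = wavelength / baseline    # ~lambda/B [rad]

rad_to_arcsec = np.degrees(1.0) * 3600.0
print(f"Earth-Sun separation at 6 ly: {separation * rad_to_arcsec:.2f} arcsec")    # ~0.5 arcsec
print(f"lambda/B for a 100 km baseline: {resolution * rad_to_arcsec:.2f} arcsec")  # ~0.4 arcsec
```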
Figure 9: Total Power of LTE mobile tower’s leakage radiation plotted over a sidereal day in the direction of Barnard’s star.
### Short-term evolution of mobile tower leakage
It is expected that 5G mobile technology will account for over half of total mobile connections in developed regions of Asia and North America by 2025 (Statista, 2022b). Leakage power levels will increase but the services will also expand towards higher frequencies - in particular, 5G technology will operate in the following frequency bands: low-band (600, 800, and 900 MHz); mid-band (2.5, 3.5, and 3.7 - 4.2 GHz); and high-band (24, 26, 28, 37, 39, 42 and 47 GHz). Frequencies above 59 GHz are not yet licensed (Gultekin and Siegel, 2020). With the anticipated rapid expansion of 5G technology, the amount of wireless devices is also expected to increase - resulting in a high density of infrastructure and broadband emissions. Higher frequency emissions have shorter ranges, thus a higher density of mobile towers will be needed, and this will increase the overall mobile tower leakage, as well as change its spectral profile. The rules adopted by the FCC allow a 5G base station operating in the millimeter range to emit an effective radiated power of up to 30,000 watts per 100 MHz of spectrum 3. The nature of 5G leakage radiation will become clearer as it continues to be developed and deployed.
Footnote 3: shorturl.at/afKOP
## 5 Conclusions and future work
The main goal of the current study was to determine the power spectrum of mobile towers on Earth as observed by a hypothetical civilisation located at interstellar distances. Our findings show that the leakage radiation from mobile towers is variable in intensity and periodic in nature due to their non-uniform distribution on the Earth's surface and the rotation of our planet. We have simulated these effects in detail taking into account the position of mobile tower transmitters and the coordinates of the observer. Our results demonstrate that the maximum power generated by LTE mobile tower technology is of the order of 4 GW for HD 95735. By comparison, the UMTS mobile tower technology generates a power level of order 3.3 GW for HD 95735. The second most powerful emission, of order 3.5 GW, leaks in the direction of Alpha Centauri A and also corresponds to LTE mobile technology.
In terms of detectability, we conclude that any nearby civilization located within 10 light years of Earth and equipped with a receiving system comparable to the GBT would not detect the Earth's mobile tower leakage. Reciprocally we conclude that any existing extraterrestrial civilization leaking radio signals from mobile towers at the same power levels would not be detected by the GBT. Next generation telescopes such as the SKA can do better but are still some way off detecting these low levels of leakage emission. However, mobile systems are in their infancy, and the future development of this technology (e.g. 5G systems and beyond) suggests that this component of the Earth's leakage will continue to increase in power over time. If the leakage can be detected, an extraterrestrial observer would be able to discern various details of the nature of our planet and the distribution of technology on its surface.
In the future, we plan to develop our model of mobile tower leakage to include these more powerful and broader band 5G emissions. In addition, we would like to update and add other sources of human-made radio emission to our model, including both military and civilian radar systems, Deep Space Network (DSN) transmissions, com
Figure 10: Total Power of LTE mobile tower’s leakage radiation plotted over a sidereal day in the direction of HD 95735 star.
munication satellites in both geostationary and low-earth-orbit, in particular the widely anticipated satellite constellations now planned e.g. Starlink and OneWeb (McDowell, 2020; Henri, 2020). Last but not least, we would like to validate our results by comparing our simulations with real data - this would include radio telescopes observing reflections from the Moon and satellites that monitor radio emissions from the Earth.
## Acknowledgements
Research reported is supported by a Newton Fund project, DARA (Development in Africa with Radio Astronomy), and awarded by the UK's Science and Technology Facilities Council (STFC) - grant reference ST/R001103/1.
This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018).
## Data Availability
The data underlying this article are available in [https://github.com/mirosaide/Qgis_selectbylocation.git](https://github.com/mirosaide/Qgis_selectbylocation.git) and [https://github.com/mirosaide/Mobile-Towers-Power-Calculation.git](https://github.com/mirosaide/Mobile-Towers-Power-Calculation.git), at [https://dx.doi.org/](https://dx.doi.org/)[doi]. The datasets were derived from sources in the public domain: OpenCelliD, at [https://www.opencelliid.org](https://www.opencelliid.org) and Natural Earth at [https://www.naturalearthdata.com/downloads/](https://www.naturalearthdata.com/downloads/).
|
2307.12137 | Bayesian Fractional Polynomial Approach to Quantile Regression and
Variable Selection with Application in the Analysis of Blood Pressure among
US Adults | Hypertension is a highly prevalent chronic medical condition and a strong
risk factor for cardiovascular disease (CVD), as it accounts for more than
$45\%$ of CVD. The relation between blood pressure (BP) and its risk factors
cannot be explored clearly by standard linear models. Although the fractional
polynomials (FPs) can act as a concise and accurate formula for examining
smooth relationships between response and predictors, modelling conditional
mean functions observes the partial view of a distribution of response
variable, as the distributions of many response variables such as BP measures
are typically skew. Then modelling 'average' BP may link to CVD but extremely
high BP could explore CVD insight deeply and precisely. So, existing mean-based
FP approaches for modelling the relationship between factors and BP cannot
answer key questions in need. Conditional quantile functions with FPs provide a
comprehensive relationship between the response variable and its predictors,
such as median and extremely high BP measures that may be often required in
practical data analysis generally. To the best of our knowledge, this is new in
the literature. Therefore, in this paper, we employ Bayesian variable selection
with quantile-dependent prior for the FP model to propose a Bayesian variable
selection with parametric nonlinear quantile regression model. The objective is
to examine a nonlinear relationship between BP measures and their risk factors
across median and upper quantile levels using data extracted from the 2007-2008
National Health and Nutrition Examination Survey (NHANES). The variable
selection in the model analysis identified that the nonlinear terms of
continuous variables (body mass index, age), and categorical variables
(ethnicity, gender and marital status) were selected as important predictors in
the model across all quantile levels. | Sanna Soomro, Keming Yu | 2023-07-22T18:07:30Z | http://arxiv.org/abs/2307.12137v1 | Bayesian Fractional Polynomial Approach to Quantile Regression and Variable Selection with Application in the Analysis of Blood Pressure among US Adults
###### Abstract
Hypertension is a highly prevalent chronic medical condition and a strong risk factor for cardiovascular disease (CVD), as it accounts for more than 45% of CVD. The relation between blood pressure (BP) and its risk factors cannot be explored clearly by standard linear models. Although the fractional polynomials (FPs) can act as a concise and accurate formula for examining smooth relationships between response and predictors, modelling conditional mean functions observes the partial view of a distribution of response variable, as the distributions of many response variables such as BP measures are typically skew. Then modelling 'average' BP may link to CVD but extremely high BP could explore CVD insight deeply and precisely. So, existing mean-based FP approaches for modelling the relationship between factors and BP cannot answer key questions in need. Conditional quantile functions with FPs provide a comprehensive relationship between the response variable and its predictors, such as median and extremely high BP measures that may be often required in practical data analysis generally. To the best of our knowledge, this is new in the literature. Therefore, in this paper, we employ Bayesian variable selection with quantile-dependent prior for the FP model to propose a Bayesian variable selection with parametric nonlinear quantile regression model. The objective is to examine a nonlinear relationship between BP measures and their risk factors across median and upper quantile levels using data extracted from the 2007-2008 National Health and Nutrition Examination Survey (NHANES). The variable selection in the model analysis identified that the nonlinear terms of continuous variables (body mass index, age), and categorical variables (ethnicity, gender and marital status) were selected as important predictors in the model across all quantile levels.
_Keywords:_ Bayesian Inference, Fractional Polynomials, Nonlinear Quantile Regression, Quantile Regression, Parametric Regression, Variable Selection
## 1 Introduction
Over the past three decades, the number of adults aged 30-79 with hypertension has increased from 648 million to 1.278 billion globally (Zhou et al. (2021)). Hypertension is a highly prevalent chronic medical condition and a strong modifiable risk factor for cardiovascular disease (CVD), as it contributes to more than 45% of cardiovascular disease and 51% of stroke deaths (World Health Organization (2013)). The risk of CVD in individuals rises sharply with increasing BP (Ettehad et al. (2016); Bundy et al. (2017); Prospective Studies Collaboration (2002); Navar et al. (2016); Clark et al. (2019)).
Continuous BP measurement has proven to be an effective means of incident prevention. This implies that BP is an essential physiological indicator of the human body. When the heart beats, it pumps blood into the arteries, resulting in changes of BP during the process. When the heart contracts, BP in the vessels reaches its maximum, which is known as systolic BP (SBP). When the heart rests, BP reduces to its minimum, which is known as diastolic BP (DBP).
Linear regression and polynomial regression analyses have been used in assessing the association between BP and risk factors contributing to various diseases (Koh et al. (2022); Liu et al. (2022); Yeo et al. (2022)). It is evident that polynomial regression models fit the data accurately in some research studies owing to their ability to capture nonlinearity, but they may require high-order polynomial approximation. The fractional polynomials (FPs) proposed by Royston and Altman (1994) act as concise and accurate formulae for examining smooth relationships between response and predictors, and offer a compromise between precision and generalisability. FPs are parametric in nature and thus intuitive for the interpretation of the analysis results. The FP approach has clearly established a role in nonlinear parametric methodology, especially with application by clinicians from various research fields, such as obstetrics and gynecology (Tilling et al. (2014)), gene expression studies in clinical genetics (Tan et al. (2011)) and cognitive function of children (Ryoo et al. (2017)), and other medical applications (Wong et al. (2011); Ravaghi et al. (2020); Frangou et al. (2021) and among others).
However, modelling conditional mean functions gives only a partial view of the distribution of the response variable, as the distributions of many response variables, such as the BP measures, are typically skewed. While 'average' BP may be linked to CVD, extremely high BP could provide deeper and more precise insight into CVD. So, existing mean-based FP approaches for modelling the relationship between risk factors and BP cannot answer key questions of interest. It is attractive to model conditional quantile functions with FPs, which accommodate skewness very easily. Quantile regression, introduced by Koenker and Bassett (1978), provides a comprehensive description of the relationship between the response variable and its predictors, such as the median and extremely high BP measures that are often required in practical data analysis.
Zhan et al. (2021) suggested quantile regression with FP as a suitable approach for an application, such as age-specific reference values of discrete scales, in terms of model consistency, computational cost and robustness. This approach is also used to derive reference curves and reference intervals in several applications (Chitty and Altman (2003); Bell et al. (2010); Bedogni et al. (2012); Kroon et al. (2017); Casati et al. (2019); Cai et al. (2020); Loef et al. (2020)), which allow quantiles to be estimated as a function of covariates without requiring parametric distributional assumptions. This is essential for data that do not assume normality, linearity and constant variance. Recently, reasonable amount of nonlinear quantile regression analyses have been conducted in medical data analysis (Maidman and Wang (2018); Huang et al. (2023); Wu et al. (2023) and among others).
However, Bayesian approach to quantile regression has advantages over the frequentist approach, as it can lead to exact inference in estimating the influence of risk factors on the upper quantiles of the conditional distribution of BP compared to the asymptotic inference of the frequentist approach (Yu et al. (2005)). It also provides estimation that incorporates parameter uncertainty fully (Yu and Moyeed (2001); Yu et al. (2005)). Some comparison studies have been conducted for both Bayesian and
frequentist approaches, such as the analysis of risk factors for female CVD patients in Malaysia (Juhan et al. (2020)) and the analysis of risk factors of hypertension in South Africa (Kuhudzai et al. (2022)). The former revealed that the Bayesian approach has smaller standard errors than that of the frequentist approach. The latter also revealed that credible intervals of the Bayesian approach are narrower than confidence intervals of the frequentist approach. These findings suggest that the Bayesian approach provides more precise estimates than the frequentist approach.
Variable selection in Bayesian quantile regression has been widely studied in the literature (Li et al. (2010); Alhamzawi et al. (2012); Alhamzawi and Yu (2013a); Chen et al. (2013); Adlouni et al. (2018); Alhamzawi et al. (2019); Dao et al. (2022) and among others). It plays an important role in building a multiple regression model, provides regularisation for good estimation of effects, and identifies important variables. Sabanes Bove and Held (2011) combine variable selection and the 'parsimonious parametric modelling' of Royston and Altman (1994) to formulate a Bayesian multivariate FP model with variable selection that efficiently selects the best-fitting FP model via a stochastic search algorithm. However, at present, no research studies have been conducted on variable selection in Bayesian parametric nonlinear quantile regression for medical applications, even though there is a limited number of studies in the case of non-regularised models, such as mixed effect models (Wang (2012); Yu and Yu (2023)).
Therefore, in this paper, we explore a new quantile regression model using FPs and employ Bayesian variable selection with quantile-dependent prior for a more accurate representation of the risk factors on BP measures. The three-stage computational scheme of Dao et al. (2022) is employed as a variable selection method due to its fast convergence rate, low approximation error and guaranteed posterior consistency under model misspecification. So, we propose a Bayesian variable selection with nonlinear quantile regression model to assess how body mass index (BMI) among United States (US) adults influences BP measures, including SBP and DBP. The objective of this paper is to examine a nonlinear relationship between BP measures and their risk factors across median and upper quantile levels. The dataset used in this paper is the 2007-2008 National Health and Nutrition Examination Survey (NHANES), including the information on BP measurements, body measures and sociodemographic questionnaires.
The remainder of this paper is as follows. Section 2 presents the concept of FPs (Royston and Altman (1994)), quantile regression (Koenker and Bassett (1978)) and Bayesian variable selection with quantile-dependent prior (Dao et al. (2022)). The details of the NHANES 2007-2008 dataset used for the analysis are provided in Section 3. Section 4 applies the proposed method to the analysis of the NHANES 2007-2008 dataset, performs comparative analysis with two quantile regression methods and provides all the findings. Section 5 concludes this paper.
## 2 Methodology
Regression analysis is a technique that quantifies the relationship between a response variable and predictors. Quantile regression, introduced by Koenker and Bassett (1978), is a method to estimate the quantiles of a conditional distribution of a response variable and such it permits a more accurate portrayal of the relationship between the response variable and predictors. Unlike linear regression analysis,
quantile regression analysis gives a better idea about the distribution of the data because it is robust to outliers.
### Quantile Regression
Let \(\tau\) be the proportion of the sample having data points below the \(\tau^{th}\) quantile. Given a dataset \(\{x_{i},y_{i}\}_{i=1}^{n}\) and fixed \(\tau\), the \(\tau^{th}\) quantile regression model is represented as
\[y_{i}=x_{i}^{T}\beta(\tau)+\epsilon_{i}(\tau)\,,\quad i=1,\ldots,n\,, \tag{1}\]
where \(\tau\) is in the range between 0 and 1, and \(\beta(\tau)\) is the vector of unknown parameters of interest and \(\epsilon(\tau)\) is the model error term for the \(\tau^{th}\) quantile. For the sake of notation simplification, we omit \(\tau\) from these parameters.
We wish to estimate the unknown parameters, \(\beta\) as \(\hat{\beta}\) for each \(\tau^{th}\) quantile, which can be done by minimising the check function over \(\beta\):
\[\sum_{i=1}^{n}\rho_{\tau}(y_{i}-x_{i}^{T}\beta)\,, \tag{2}\]
with the check function \(\rho_{\tau}(\Delta)=\Delta\left[\tau\cdot\mathbb{I}_{\Delta\geq 0}-(1-\tau) \cdot\mathbb{I}_{\Delta<0}\right]\) where \(\mathbb{I}_{\Delta\geq 0}\) represents the value 1 if \(\Delta\) belongs to the set \([0,\infty)\), and the value 0 otherwise.
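For illustration, the estimation criterion in (2) can be sketched as follows (a minimal sketch on simulated data rather than the NHANES data analysed later; in practice the minimisation is usually performed by linear programming rather than the generic optimiser assumed here):

```python
# Minimal sketch of the check (pinball) loss in equation (2) and the objective
# minimised over beta for a chosen quantile level tau.
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """rho_tau(u) = u * (tau - I(u < 0))."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0.0))

def quantile_objective(beta, X, y, tau):
    return np.sum(check_loss(y - X @ beta, tau))

# Illustrative use on simulated data.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = 1.0 + 2.0 * X[:, 1] + rng.standard_t(df=3, size=500)
fit = minimize(quantile_objective, x0=np.zeros(2), args=(X, y, 0.9), method="Nelder-Mead")
beta_hat = fit.x   # estimate of beta(0.9)
```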
Minimising (2) is the same as maximising a likelihood function. An asymmetric Laplace distribution (ALD) is employed, which is the common choice for quantile regression analysis (Yu and Moyeed (2001); Yu et al. (2003)). We assume that \(\epsilon_{i}\sim\mathcal{AL}(0,\sigma,\tau),i=1,\ldots,n\), where \(\mathcal{AL}\) denotes the ALD with density
\[f_{AL}(\epsilon_{i})=\frac{\tau(1-\tau)}{\sigma}\exp\left\{-\frac{\rho_{\tau} (\epsilon_{i})}{\sigma}\right\}\,.\]
Here, \(\rho_{\tau}(\epsilon_{i})\) denotes the usual check loss function of Koenker and Bassett (1978).
We are interested in selecting a subset of important predictors which has adequate explanatory and predictive capability. One of the common procedures for simultaneously facilitating the parameter estimation and variable selection is to impose penalty function on the likelihood to arrive at the penalised loss function,
\[\sum_{i=1}^{n}\rho_{\tau}(y_{i}-x_{i}^{T}\beta)+P(\beta,\delta)\,, \tag{3}\]
which is minimised to obtain the \(\tau^{th}\) quantile regression estimator. Here, \(P(\beta,\delta)\) is a regularisation penalty function and \(\delta\) is a penalty parameter that controls the level of sparsity. Typically, Bayesian regularised quantile regression is formulated through the relationship between the check function and the ALD.
Bayesian inference is one of the most popular approaches for the regression analysis since it provides with an entire posterior distribution of a parameter of interest as well as incorporation of parameter uncertainty and prior information about data. So, Bayesian analysis is preferable over frequentist analysis.
By using the identity of Andrews and Mallows (1974),
\[\exp(-|ab|)=\int_{0}^{\infty}\frac{a}{\sqrt{2\pi v}}\exp\left\{-\frac{1}{2}(a^{2}v+b^{2}v^{-1})\right\}dv\,,\]
for any \(a,b>0\), letting \(a=1/\sqrt{2\sigma}\ \&\ b=\epsilon/\sqrt{2\sigma}\) and multiplying a factor of \(\exp(-(2\tau-1)\epsilon/2\sigma)\), to express the probability density function (pdf) of the ALD errors as its scale mixture of Normals (SMN) representation,
\[f_{AL}(\epsilon_{i})=\int_{0}^{\infty}\frac{1}{\sqrt{4\pi\sigma^{3}v_{i}}}\exp \left\{-\frac{(\epsilon_{i}-(1-2\tau)v_{i})^{2}}{4\sigma v_{i}}-\frac{\tau(1- \tau)v_{i}}{\sigma}\right\}dv_{i}\,,\]
(Kozumi and Kobayashi (2011)). This representation can be utilised to facilitate Gibbs sampling algorithms (Kozubowski and Podgorski (2001); Geraci and Bottai (2007); Kozumi and Kobayashi (2011); Chen et al. (2013)).
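A small simulation check of this mixture representation is sketched below, assuming the equivalent parameterisation in which \(v_{i}\) is exponential with mean \(\sigma\) and \(\epsilon_{i}=\theta_{1}v_{i}+\sqrt{\theta_{2}\sigma v_{i}}\,z_{i}\), with \(\theta_{1}\) and \(\theta_{2}\) as defined for model (7) below; the \(\tau^{th}\) sample quantile of the simulated errors should then be close to zero:

```python
# Simulation check of the scale-mixture-of-Normals representation of the ALD
# (a sketch, assuming v_i ~ Exp with mean sigma); the tau-th quantile of the
# generated errors should be approximately 0.
import numpy as np

rng = np.random.default_rng(0)
tau, sigma, n = 0.9, 1.0, 200_000
theta1 = (1 - 2 * tau) / (tau * (1 - tau))
theta2 = 2.0 / (tau * (1 - tau))

v = rng.exponential(scale=sigma, size=n)       # latent mixing variables
z = rng.standard_normal(n)
eps = theta1 * v + np.sqrt(theta2 * sigma * v) * z

print(np.quantile(eps, tau))   # close to 0, up to Monte Carlo error
```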
Rather than the standard linear model, we will be using the FP model to develop the nonlinear model under Bayesian quantile regression and variable selection.
### Fractional Polynomials
Box and Tidwell (1962) introduced the transformation now known as the Box-Tidwell transformation,
\[x^{(a)}=\left\{\begin{array}{ll}x^{a},&\mbox{if}\quad a\neq 0\,,\\ \log(x),&\mbox{if}\quad a=0\,,\end{array}\right.\]
where \(a\) is a real number. Royston and Altman (1994) extend the classical polynomials to a class which they called FPs.
An FP of degree \(m\) with powers \(p_{1}\leq\ldots\leq p_{m}\) and respective coefficients \(\alpha_{1},\ldots,\alpha_{m}\) is
\[f^{m}(x;\mathbf{\alpha},\mathbf{p})=\sum_{j=1}^{m}\alpha_{j }h_{j}(x)\,,\]
where \(h_{0}(x)=1\) and
\[h_{j}(x)=\left\{\begin{array}{ll}x^{(p_{j})},&\mbox{if $p_{j}\neq p_{j-1}$}\,,\\ h_{j-1}(x)\log(x),&\mbox{if $p_{j}=p_{j-1}$}\,,\end{array}\right. \tag{4}\]
where \(j=1,\ldots,m\). Note that the definition of \(h_{j}(x)\) allows repeated powers. The bracket around the exponent denotes the Box-Tidwell transformation defined above. For \(m\leq 3\), Royston and Altman (1994) constrained the set of possible powers \(p_{j}\) to the set
\[{\cal S}=\left\{-2,-1,-\frac{1}{2},0,\frac{1}{2},1,2,3\right\}\,, \tag{5}\]
which encompasses the classical polynomial powers \(1,2,3\) but also offers square roots and reciprocals. Royston and Sauerbrei (2008) argue that this set is sufficient to approximate all powers in the interval
\([-2,3]\). The simple example of the FP model is as follows. An FP with \(m=3\) powers and its power vector \(\mathbf{p}=(p_{1},p_{2},p_{3})=\left(-\frac{1}{2},2,2\right)\) would be
\[f^{3}(x;\mathbf{\alpha},\mathbf{p})=\alpha_{1}x^{-1/2}+\alpha_{2}x^{2}+\alpha_{3}x^{2} \log(x)\,,\]
where the last term reflects the repeated power 2.
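A minimal sketch of how such an FP basis can be constructed is given below (the function and variable names are illustrative and not taken from any particular FP software; powers are assumed to be supplied in non-decreasing order, as in the definition above):

```python
# Minimal sketch of an FP basis: power 0 denotes log(x) (Box-Tidwell convention)
# and a repeated power multiplies the previous column by log(x), as in (4).
import numpy as np

def box_tidwell(x, a):
    return np.log(x) if a == 0 else x ** a

def fp_basis(x, powers):
    """Return the columns h_1(x), ..., h_m(x) for non-decreasing FP powers."""
    cols, prev = [], None
    for p in powers:
        if p == prev:
            cols.append(cols[-1] * np.log(x))   # repeated power: h_j = h_{j-1} * log(x)
        else:
            cols.append(box_tidwell(x, p))
        prev = p
    return np.column_stack(cols)

x = np.linspace(1.0, 10.0, 5)
B = fp_basis(x, powers=(-0.5, 2, 2))   # alpha1*x^-0.5 + alpha2*x^2 + alpha3*x^2*log(x)
```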
Generalisation to the case of multiple predictors:
\[\eta(\mathbf{x})=\sum_{l=1}^{k}f_{l}^{m_{l}}(x_{l};\alpha_{l},p_{l})=\sum_{l=1}^{k }\sum_{j=1}^{m_{l}}\alpha_{lj}h_{lj}(x_{l})\,. \tag{6}\]
This is called the multiple FP model. Suppose we examine \(k\) continuous predictors \(x_{1},\ldots,x_{k}\) and content ourselves with a maximum degree of \(m_{max}\leq 3\) for each \(f_{l}^{m_{l}}\), that is, \(0\leq m_{l}\leq m_{max}\) for \(l=1,\ldots,k\), where \(m_{l}=0\) denotes the omission of \(x_{l}\) from the model. From the power set \(\mathcal{S}\), \(m_{l}\) powers are chosen, which need not be distinct owing to the inclusion of logarithmic terms for repeated powers (4). We now employ the \(\tau^{th}\) nonlinear quantile regression model with the SMN representation of the ALD errors,
\[\mathbf{y}=\mathbf{B}\mathbf{\beta}+\theta_{1}\mathbf{v}+\sqrt{\theta_{2}\mathbf{v}\sigma^{2}}\bm {z}\,, \tag{7}\]
where the \((n\times D)\)-matrix \(\mathbf{B}\) is a function of predictors \(x_{l}\) of the \(i^{th}\) observations (\(i=1,\ldots,n\)), \(\mathbf{v}=(v_{1},\ldots,v_{n})^{T}\) is a vector of exponential random variables with a rate of \(\frac{\tau(1-\tau)}{\sigma}\), \(\mathbf{z}=(z_{1},\ldots,z_{n})^{T}\) is a vector of standard Normal random variables and \(z_{i}\perp\!\!\!\perp v_{i}\) for \(i=1,\ldots,n\), \(\theta_{1}=\frac{1-2\tau}{\tau(1-\tau)}\) and \(\theta_{2}=\frac{2}{\tau(1-\tau)}\). Each entry of matrix \(\mathbf{B}\) is a vector, \(\mathbf{B}_{id}=\mathbf{B}(x_{id})=(\alpha_{l1}h_{l1}(x_{il}),\ldots,\alpha_{lm_{l}}h _{lml}(x_{il}))^{T}\), for \(i=1,\ldots,n\), \(l=1,\ldots,k\) and \(d=1,\ldots,D\).
A special way of defining the matrix \(\mathbf{B}\) is through the use of FPs. In this case, the basis function \(B(x_{l})\) is chosen as the transformation \(h_{lj}\) in (6) (\(j=1,\ldots,m_{l}\)) and the unknown parameter \(\mathbf{\beta}=(\mathbf{\alpha}_{1},\ldots,\mathbf{\alpha}_{k})^{T}\), where \(\mathbf{\alpha}_{l}=(\alpha_{l1},\ldots,\alpha_{lm_{l}})\) for \(l=1,\ldots,k\). The transformation \(h_{j}\) are determined by the power vector \(\mathbf{p}_{1},\ldots,\mathbf{p}_{k}\) through their definition (4). Note that the \(\mathbf{p}_{l}\) is empty if the predictor \(x_{l}\) is not included in the model (\(m_{l}=0\)).
### Bayesian Approach and Variable Selection
Given the model in (7), the likelihood function conditional on \(\mathbf{\beta},\sigma,\mathbf{v}=(v_{1},\ldots,v_{n})^{T}\) can be written as
\[f(\mathbf{y}|\mathbf{\beta},\sigma,\mathbf{v},\mathbf{B})=\prod_{i=1}^{n}\frac{1}{\sqrt{4\pi \sigma^{3}v_{i}}}\exp\left\{-\frac{(y_{i}-\mathbf{B}(x_{i})^{T}\mathbf{\beta}-(1-2\tau )v_{i})^{2}}{4\sigma v_{i}}-\frac{\tau(1-\tau)v_{i}}{\sigma}\right\}\,.\]
We employ the three-stage algorithm of Dao et al. (2022) for Bayesian nonlinear quantile regression with variable selection. It can be summarised as follows.

The first stage is the expectation-maximisation (EM) algorithm, consisting of two main steps: the E-step and the M-step. Dempster et al. (1977) proposed the EM algorithm, which is an iterative method for obtaining maximum likelihood estimates in problems involving missing data.
Suppose the complete data \((\mathbf{y},\mathbf{v})\) are composed of the observed data \(\mathbf{y}=(y_{1},\ldots,y_{n})^{T}\) and the missing data \(\mathbf{v}=(v_{1},\ldots,v_{n})^{T}\), whereas \(\mathbf{B}(x_{i})\), \(i=1,\ldots,n\), are treated as functions of fixed predictors. Maximum likelihood estimates (MLE) can be obtained by maximising the log-likelihood function \(\log f(\mathbf{\beta},\sigma|\mathbf{y},\mathbf{v})\) of the complete data. The EM algorithm has the following two steps: the Expectation step (E-step) and the Maximisation step (M-step).
[E step] Given initial values \(\mathbf{\beta}^{(0)}\) and \(\sigma^{(0)}\), we denote \(\mathbf{\beta}^{(q-1)}\) and \(\sigma^{(q-1)}\) as the \((q-1)^{th}\) iteration values of the parameters \(\mathbf{\beta}\) and \(\sigma\) in the EM algorithm, and we define the conditional expectation of the complete-data log-likelihood as the Q-function
\[Q(\mathbf{\beta},\sigma|\mathbf{y},\mathbf{\beta}^{(q-1)},\sigma^{(q-1)})=\mathbb{E}_{\mathbf{v}|\mathbf{y},\mathbf{\beta}^{(q-1)},\sigma^{(q-1)}}[\log f(\mathbf{\beta},\sigma|\mathbf{y},\mathbf{v})]\,.\]
[M step] We obtain the updated values of \(\mathbf{\beta}^{(q)}\) and \(\sigma^{(q)}\) by maximising \(Q(\mathbf{\beta},\sigma|\mathbf{y},\mathbf{\beta}^{(q-1)},\sigma^{(q-1)})\) over parameters \(\mathbf{\beta}\) and \(\sigma\):
\[\mathbf{\beta}^{(q)}=(\mathbf{B}^{T}\mathbf{W}^{(q-1)}\mathbf{B})^{-1}\mathbf{B}^{T}\mathbf{W}^{(q-1)} (\mathbf{y}-\theta_{1}\mathbf{\Delta}\mathbf{3})\,,\]
where \(\mathbf{\Delta}\mathbf{3}=\left(\left|y_{1}-\mathbf{B}(x_{1})^{T}\mathbf{\beta}^{(q-1)}\right|,\ldots,\left|y_{n}-\mathbf{B}(x_{n})^{T}\mathbf{\beta}^{(q-1)}\right|\right)^{T}\) and \(\mathbf{W}^{(q-1)}=\text{diag}(1/\Delta 3_{1},\ldots,1/\Delta 3_{n})\), and
\[\sigma^{(q)}=\frac{1}{2(3n+2)}\left\{\sum_{i=1}^{n}\Delta 2_{i}+\sum_{i=1}^{n} \frac{(y_{i}-\mathbf{B}(x_{i})^{T}\mathbf{\beta}^{(q)})^{2}}{\Delta 3_{i}}-2\theta_{1}\sum_{i=1}^{n}(y_{i}-\mathbf{B} (x_{i})^{T}\mathbf{\beta}^{(q)})\right\}\,,\]
where \(\Delta 2_{i}=\left|y_{i}-\mathbf{B}(x_{i})^{T}\mathbf{\beta}^{(q-1)}\right|+2\sigma^ {(q-1)}\) for \(i=1,\ldots,n\).
Repeat the E-step and M-step until the convergence criterion is met; the final iteration values of the EM algorithm are then set as the posterior modes of \(\mathbf{\beta}\) and \(\sigma\), denoted by \(\tilde{\mathbf{\beta}}\) and \(\tilde{\sigma}\), respectively.
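A minimal sketch of these E- and M-step updates is given below; the least-squares starting values and the small constant added to \(\Delta 3_{i}\) are numerical safeguards assumed for illustration rather than part of the stated algorithm:

```python
# Minimal sketch of the EM updates for the posterior modes of beta and sigma.
import numpy as np

def em_quantile(B, y, tau, n_iter=200, tol=1e-8, eps=1e-10):
    n, _ = B.shape
    theta1 = (1 - 2 * tau) / (tau * (1 - tau))
    beta = np.linalg.lstsq(B, y, rcond=None)[0]   # assumed starting value
    sigma = 1.0                                   # assumed starting value
    for _ in range(n_iter):
        resid = y - B @ beta
        d3 = np.abs(resid) + eps                  # Delta3_i (eps avoids division by zero)
        d2 = np.abs(resid) + 2.0 * sigma          # Delta2_i
        W = np.diag(1.0 / d3)
        beta_new = np.linalg.solve(B.T @ W @ B, B.T @ W @ (y - theta1 * d3))
        resid_new = y - B @ beta_new
        sigma = (d2.sum() + np.sum(resid_new**2 / d3)
                 - 2.0 * theta1 * resid_new.sum()) / (2.0 * (3 * n + 2))
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta, sigma   # posterior modes beta_tilde, sigma_tilde
```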
The second stage is the Gibbs sampling algorithm. The quantile-specific Zellner's \(g\)-prior (Alhamzawi and Yu (2013b)) is used for the prior specification and it is given by
\[\mathbf{\beta}|\sigma,\mathbf{V},\mathbf{B}\sim N\left(0,2\sigma g\mathbf{\Sigma}_{v}^{-1} \right)\quad\text{and}\quad p(\sigma)\propto\frac{1}{\sigma}\,, \tag{8}\]
where \(N(\cdot)\) is the multivariate Normal distribution, \(g\) is a scaling factor, \(\mathbf{V}=\text{diag}(1/v_{1},\ldots,1/v_{n})\) and \(\mathbf{\Sigma}_{v}=\mathbf{B}^{T}\mathbf{V}\mathbf{B}\). This prior specification has the advantage that it contains information dependent upon the quantile level, which increases posterior inference accuracy.
Given the posterior modes, \(\tilde{\mathbf{\beta}}\) and \(\tilde{\sigma}\) as the starting value, we denote \(\mathbf{\beta}^{(r-1)}\) and \(\sigma^{(r-1)}\) as the \((r-1)th\) iteration value of parameters \(\mathbf{\beta}\) and \(\sigma\) in the Gibbs sampling algorithm.
* Sample \(v_{i}^{(r)}\) from \[p(v_{i})\sim GIG\left(0,\frac{1}{2\sigma},\frac{(y_{i}-\mathbf{B}(x_{i})^{T}\mathbf{ \beta})^{2}+\frac{1}{g}\mathbf{\beta}^{T}\mathbf{B}(x_{i})\mathbf{B}(x_{i})^{T}\mathbf{\beta}}{2 \sigma}\right),\] based on \(\mathbf{\beta}^{(r-1)}\) and \(\sigma^{(r-1)}\) for \(i=1,\ldots,n\) and \(GIG(0,c,d)\) is the generalised inverse Gaussian with its density \[f_{\rm GIG}(v)=\frac{1}{2K_{0}(\sqrt{cd})}v^{-1}\exp\left(-\frac{1}{2}(cv+dv^{ -1})\right),\quad v>0\,,\] where \(K(\cdot)\) is the modified Bessel function of the third kind (Barndorff-Nielsen and Shephard (2001)).
* Sample \(\sigma^{(r)}\) from \[p(\sigma|\mathbf{y},\mathbf{v}^{(r)})\sim IG\left(\frac{3n}{2},\frac{1}{4}(\mathbf{y}- \theta_{1}\mathbf{v})^{T}\mathbf{V}\mathbf{H}_{v}(\mathbf{y}-\theta_{1}\mathbf{v})+\frac{2}{\theta _{2}}\sum_{i=1}^{n}v_{i}\right),\] where \(IG(\cdot)\) is the inverse Gamma distribution, \(\mathbf{H}_{v}=\mathbf{I}_{n}-\frac{g}{g+1}\mathbf{B}\mathbf{\Sigma}_{v}^{-1}\mathbf{B}^{T}\mathbf{V}\).
* Sample \(\mathbf{\beta}^{(r)}\) from \[p(\mathbf{\beta}|\mathbf{y},\mathbf{v}^{(r)},\sigma^{(r)})\sim N\left(\frac{g}{g+1}\mathbf{ \Sigma}_{v}^{-1}\mathbf{B}^{T}\mathbf{V}(\mathbf{y}-\theta_{1}\mathbf{v}),\frac{2\sigma g}{g+1 }\mathbf{\Sigma}_{v}^{-1}\right)\,.\]
* Calculate the importance weights \[w^{(r)}=\frac{p(\mathbf{\beta}^{(r)},\sigma^{(r)},\mathbf{v}^{(r)}|\mathbf{y})}{p(\mathbf{ \beta}^{(r)}|\sigma^{(r)},\mathbf{v}^{(r)},\mathbf{y})p(\sigma^{(r)}|\mathbf{v}^{(r)},\mathbf{ y})p(\mathbf{v}^{(r)})}\,,\] based on \(\mathbf{v}^{(r)},\sigma^{(r)},\mathbf{\beta}^{(r)}\). This is to adjust for the GIG approximation of the marginal posterior of \(\mathbf{v}\) given \(\mathbf{y}\), which is given by its unnormalised density function \[\pi(\mathbf{v}|\mathbf{y})\propto\frac{p(\mathbf{v}|\tilde{\mathbf{\beta}},\tilde{\sigma},\mathbf{ y})}{p(\tilde{\mathbf{\beta}}|\mathbf{y},\mathbf{v},\tilde{\sigma})p(\tilde{\sigma}|\mathbf{y},\mathbf{v})}\,,\] where \(p(\mathbf{v}|\tilde{\mathbf{\beta}},\tilde{\sigma},\mathbf{y})\) is an importance sampling density in the importance sampling algorithm. The importance weights will be used to determine the acceptance probability of each \(\{\mathbf{\beta}^{(r)},\sigma^{(r)},\mathbf{v}^{(r)}\}\).
The algorithm iterates until it reaches the final MCMC iteration, indexed by \(R\), and the burn-in period is then discarded.
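The three sampling steps above can be rendered schematically in Python as follows; this is only a sketch (the paper's computations were done in R), the mapping of the GIG law onto scipy's `geninvgauss` parametrisation and the scale convention used for the inverse gamma are our assumptions, and \(\theta_{1}\), \(\theta_{2}\) and \(g\) denote the constants appearing in the text.

```python
import numpy as np
from scipy.stats import geninvgauss, invgamma, multivariate_normal

def sample_gig0(c, d, rng):
    """Draw from GIG(0, c, d), density prop. to v^{-1} exp(-(c v + d / v) / 2);
    the reparametrisation b = sqrt(c d), scale = sqrt(d / c) is our assumption."""
    b = np.sqrt(c * d) + 1e-12
    return geninvgauss.rvs(0.0, b, scale=np.sqrt(d / c), random_state=rng)

def gibbs_sweep(B, y, beta, sigma, g, theta1, theta2, rng):
    """One sweep of the second-stage sampler: latent v, then sigma, then beta."""
    n, _ = B.shape
    fitted = B @ beta
    # (y_i - B_i' beta)^2 + beta' B_i B_i' beta / g, elementwise
    quad = (y - fitted) ** 2 + fitted ** 2 / g + 1e-12
    v = np.array([sample_gig0(1.0 / (2.0 * sigma), q / (2.0 * sigma), rng) for q in quad])
    V = np.diag(1.0 / v)
    Sigma_v = B.T @ V @ B
    # sigma | y, v  ~ inverse gamma (scale parametrisation assumed)
    H_v = np.eye(n) - (g / (g + 1.0)) * B @ np.linalg.solve(Sigma_v, B.T @ V)
    resid = y - theta1 * v
    scale = 0.25 * resid @ V @ H_v @ resid + (2.0 / theta2) * v.sum()
    sigma = invgamma.rvs(1.5 * n, scale=scale, random_state=rng)
    # beta | y, v, sigma ~ multivariate normal
    mean = (g / (g + 1.0)) * np.linalg.solve(Sigma_v, B.T @ V @ resid)
    cov = (2.0 * sigma * g / (g + 1.0)) * np.linalg.inv(Sigma_v)
    beta = multivariate_normal.rvs(mean=mean, cov=cov, random_state=rng)
    return beta, sigma, v
```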
Finally, the third stage is the importance re-weighting step. The \(S\) samples are drawn according to the importance weights without replacement, where \(S<R\) is the number of importance re-weighting steps. A random indicator vector \(\mathbf{\gamma}=(\gamma_{1},\ldots,\gamma_{D})^{T}\) is introduced into the nonlinear model
\[\mathbf{M}_{\mathbf{\gamma}}:\mathbf{y}=\mathbf{B}_{\mathbf{\gamma}}\mathbf{\beta}_{\mathbf{\gamma}}+\mathbf{\epsilon}\,,\]
where \(\mathbf{B_{\gamma}}\) is the \((n\times D_{\mathbf{\gamma}})\) matrix consisting of important predictors and \(\mathbf{\beta_{\gamma}}\) of length \(D_{\mathbf{\gamma}}\) is the non-zero parameter vector. The same prior specification in (8) is employed along with a prior on \(\gamma_{d}\), \(d=1,\ldots,D\), and a beta prior on \(\pi\):
\[p(\mathbf{\gamma}|\pi)\propto\pi^{\sum_{d=1}^{D}\gamma_{d}}(1-\pi)^{D-\sum_{d=1}^{ D}\gamma_{d}}\quad\text{and}\quad p(\pi)\sim\text{Beta}\left(\frac{1}{2},\frac{1}{ 2}\right)\,,\]
where \(\pi\in[0,1]\) is the prior probability of randomly including predictor in the model. Then \(\pi\) is marginalised out from \(p(\mathbf{\gamma}|\pi)\) resulting as
\[p(\mathbf{\gamma})\propto\text{Beta}\left(\sum_{d=1}^{D}\gamma_{d}+\frac{1}{2},D- \sum_{d=1}^{D}\gamma_{d}+\frac{1}{2}\right)\,.\]
The marginal likelihood of \(\mathbf{y}\) under the model \(\mathbf{M_{\gamma}}\) is then obtained by integrating out \(\mathbf{\beta}\) and \(\sigma\) resulting as
\[p(\mathbf{y}|\mathbf{\gamma},\mathbf{v})\sim t_{2n}\left((1-2\tau)\mathbf{v},\frac{4\sum_{i=1 }^{n}v_{i}}{\sigma\theta_{2}}\left(\mathbf{V}-\frac{g}{g+1}\mathbf{V}\mathbf{B_{\gamma}}\bm {\Sigma}_{v}(\mathbf{\gamma})^{-1}\mathbf{B_{\gamma}}^{T}\mathbf{V}\right)^{-1}\right)\,,\]
where \(t_{2n}(\cdot)\) is the multivariate Student t-distribution with \(2n\) degrees of freedom. The posterior probability of \(\mathbf{M_{\gamma}}\) is therefore given by \(p(\mathbf{\gamma}|\mathbf{y},\mathbf{v})\propto p(\mathbf{y}|\mathbf{\gamma},\mathbf{v})p(\mathbf{\gamma})\). Lastly, the independent samples of \(\mathbf{v}\) from the second stage algorithm are drawn based on the \(S\) samples and the important re-weighting step is iterated until the \(S\) samples of \(\mathbf{\gamma}\) are obtained. Then the posterior inclusion probability is estimated, as follows
\[\hat{p}(\gamma_{d}=1|\mathbf{y},\mathbf{v})=\frac{1}{\tilde{S}}\sum_{s=1}^{\tilde{S}} \gamma_{d}^{(s)}\,,\quad d=1,\ldots,D\,,\]
where \(\tilde{S}\) is the number of iterations after discarding the burn-in period.
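The estimator above reduces to an average of the retained indicator draws; a minimal sketch, using the 0.9 cutoff adopted later in the data analysis, could read:

```python
import numpy as np

def inclusion_probabilities(gamma_draws, cutoff=0.9):
    """gamma_draws: (S_tilde, D) array of 0/1 indicators after burn-in.

    Returns the marginal inclusion probability of each predictor and the
    indices of the predictors exceeding the cutoff."""
    mip = gamma_draws.mean(axis=0)            # \hat p(gamma_d = 1 | y, v)
    return mip, np.flatnonzero(mip >= cutoff)
```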
## 3 Data Preparation and Data Analysis
This study is based on data from the National Health and Nutrition Examination Survey (NHANES) conducted during 2007-2008. The survey, conducted by the National Center for Health Statistics of the Centers for Disease Control and Prevention, used a complex, stratified, multistage sampling design to select a representative sample of the noninstitutionalized civilian population of the United States to participate in a series of comprehensive health-related interviews and examinations. In total, 12,943 people participated in NHANES 2007-2008.
The study variables included SBP and DBP as the response variables. The BP measurements were taken as follows. After a resting period of 5 minutes in a sitting position and determination of the maximal inflation level, three consecutive BP readings were recorded. A fourth reading was recorded if a BP measurement was interrupted or incomplete. All the measurements were taken in the Mobile Examination Center. BP measurements are essential for hypertension screening and disease management, since hypertension is an important risk factor for cardiovascular and renal disease. In this study, SBP and DBP were selected as response variables, each averaged over the second and third readings. The predictor variables were BMI, age, ethnicity, gender and marital status.
We initially included 9,762 participants who had completed both the BP and body measure examinations. From these 9,762 participants, those who had not undergone the required examinations were excluded. Then, among the remaining 4,612 participants, we further excluded those who refused to reveal their marital status. Finally, 4,609 participants were included for analysis in this study.
The NHANES protocols were approved by the National Center for Health Statistics research ethics review boards, and informed consent was obtained from all participants. The research adhered to the tenets of the Declaration of Helsinki.
R version 4.2.2 was used to conduct both the frequentist and Bayesian analyses. The 'quantreg' and 'Brq' R packages were employed to fit the frequentist and Bayesian quantile regression models with FPs, respectively. The source R code to fit the Bayesian quantile regression with variable selection and FPs via the three-stage algorithm was provided by the main author.
This study considers two quantile models, one for SBP and one for DBP, each fitted at the \(50^{th}\), \(75^{th}\) and \(95^{th}\) percentiles. When modelling hypertension, it is preferable to model both the median and the extremely high values of SBP and DBP, which correspond to the median and upper parts of their distributions, respectively (Kuhudzai et al. (2022)). The following two quantile models will be used for the analysis, for a fixed value of \(\tau\):
\[\text{SBP}_{i}=\text{BMI}_{i}\beta_{1}+\text{BMI}^{0.5}\beta_{2 }+\text{Age}_{i}\beta_{3}+\text{Age}_{i}^{0.5}\beta_{4}+\text{Ethnicity}_{i} \beta_{5}+\text{Gender}_{i}\beta_{6}+\text{MaritalStatus}_{i}\beta_{7}\,,\] \[\text{DBP}_{i}=\text{BMI}_{i}\beta_{1}+\text{BMI}^{0.5}\beta_{2 }+\text{Age}_{i}\beta_{3}+\text{Age}_{i}^{0.5}\beta_{4}+\text{Ethnicity}_{i} \beta_{5}+\text{Gender}_{i}\beta_{6}+\text{MaritalStatus}_{i}\beta_{7}\,,\]
for \(i=1,\ldots,4609\).
The power of 0.5 was chosen for the continuous variables, BMI and age. The remaining variables were entered linearly because they are categorical. Similar fractional polynomial models have been employed to model BP within the linear regression framework (Dong et al. (2016), Takagi and Umemoto (2013), Thompson et al. (2009), among others).
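As an illustration of how these two models can be assembled, the sketch below builds the FP design matrix (linear and square-root terms for BMI and age, categorical predictors entered linearly) and fits the frequentist analogue with statsmodels' QuantReg; the paper itself used the R packages cited above, and the file name and the numeric coding of the categorical variables are hypothetical.

```python
import numpy as np
import pandas as pd
from statsmodels.regression.quantile_regression import QuantReg

def fp_design(df):
    """Fractional-polynomial design with power 0.5 for the continuous variables."""
    return pd.DataFrame({
        "BMI": df["BMI"], "BMI_sqrt": np.sqrt(df["BMI"]),
        "Age": df["Age"], "Age_sqrt": np.sqrt(df["Age"]),
        "Ethnicity": df["Ethnicity"],          # assumed numerically coded
        "Gender": df["Gender"],
        "MaritalStatus": df["MaritalStatus"],
    })

# Hypothetical usage (frequentist QR-FP analogue of the R 'quantreg' fits):
# df = pd.read_csv("nhanes_2007_2008.csv")
# X = fp_design(df)
# for tau in (0.50, 0.75, 0.95):
#     print(tau, QuantReg(df["SBP"], X).fit(q=tau).params)
```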
## 4 Results
In this section, both descriptive and model analyses are provided for the NHANES 2007-2008 dataset using the proposed model. To evaluate the performance of the proposed model, we included two existing methods, namely the frequentist quantile regression and the Bayesian quantile regression with the FP model, for a fair comparative analysis. The model comparison is discussed, outlining the advantages of the proposed model over these two methods. All the results are provided in this section through tables and figures for each regression analysis.
\begin{table}
\begin{tabular}{l l r r r} \hline
 & & Normal BP & Pre-Hypertension & Hypertension \\
 & & (\(<\) 120 mmHg) & (120-139 mmHg) & (\(\geq\) 140 mmHg) \\ \hline
BMI & Underweight & 37 (56.92\%) & 16 (24.62\%) & 12 (18.46\%) \\
 & Healthy & 734 (60.31\%) & 343 (28.18\%) & 140 (11.50\%) \\
 & Overweight & 781 (49.49\%) & 565 (35.80\%) & 232 (14.70\%) \\
 & Obese & 415 (41.71\%) & 414 (41.61\%) & 166 (16.68\%) \\
 & Very obese & 201 (42.68\%) & 187 (39.70\%) & 83 (17.62\%) \\
 & Morbidly obese & 106 (37.46\%) & 116 (40.99\%) & 61 (21.55\%) \\
 & & \multicolumn{3}{c}{P-value \(<\) 0.01 (Cramer's V = 0.1106)} \\
Age & 20-29 years & 493 (73.36\%) & 164 (24.40\%) & 15 (2.23\%) \\
 & 30-39 years & 543 (65.66\%) & 251 (30.35\%) & 33 (3.99\%) \\
 & 40-49 years & 460 (55.89\%) & 285 (34.63\%) & 78 (9.48\%) \\
 & \(\geq\) 50 years & 778 (34.02\%) & 941 (41.15\%) & 568 (24.84\%) \\
 & & \multicolumn{3}{c}{P-value \(<\) 0.01 (Cramer's V = 0.2535)} \\
Ethnicity & Mexican American & 456 (54.29\%) & 279 (33.21\%) & 105 (12.50\%) \\
 & Other Hispanic & 286 (53.16\%) & 186 (34.57\%) & 66 (12.27\%) \\
 & Non-Hispanic white & 1006 (47.61\%) & 793 (37.53\%) & 314 (14.86\%) \\
 & Non-Hispanic black & 425 (45.31\%) & 324 (34.54\%) & 189 (20.15\%) \\
 & Other non-Hispanic race & 101 (56.11\%) & 59 (32.78\%) & 20 (11.11\%) \\
 & & \multicolumn{3}{c}{P-value \(<\) 0.01 (Cramer's V = 0.0665)} \\
Gender & Male & 999 (43.28\%) & 957 (41.46\%) & 352 (15.25\%) \\
 & Female & 1275 (55.41\%) & 684 (29.73\%) & 342 (14.86\%) \\
 & & \multicolumn{3}{c}{P-value \(<\) 0.01 (Cramer's V = 0.1310)} \\
Marital & Married & 1219 (48.39\%) & 927 (36.80\%) & 373 (14.81\%) \\
Status & Widowed & 84 (30.11\%) & 103 (36.92\%) & 92 (32.97\%) \\
 & Divorced & 226 (44.14\%) & 182 (35.55\%) & 104 (20.31\%) \\
 & Separated & 89 (52.05\%) & 57 (33.33\%) & 25 (14.62\%) \\
 & Never married & 468 (58.87\%) & 256 (32.20\%) & 71 (8.93\%) \\
 & Living with partner & 188 (56.46\%) & 116 (34.83\%) & 29 (8.71\%) \\
 & & \multicolumn{3}{c}{P-value \(<\) 0.01 (Cramer's V = 0.1251)} \\ \hline \end{tabular}
\end{table}
Table 1: SBP among United States Adults by BMI and Sociodemographic Characteristics.
\begin{table}
\begin{tabular}{l l r r r} \hline \hline
 & & Normal BP & Pre-Hypertension & Hypertension \\
 & & (\(<\) 80 mmHg) & (80-89 mmHg) & (\(\geq\) 90 mmHg) \\ \hline
BMI & Underweight & 49 (75.38\%) & 12 (18.46\%) & 4 (6.15\%) \\
 & Healthy & 1025 (84.22\%) & 148 (12.16\%) & 44 (3.62\%) \\
 & Overweight & 1265 (80.16\%) & 243 (15.40\%) & 70 (4.44\%) \\
 & Obese & 772 (77.59\%) & 168 (16.88\%) & 55 (5.53\%) \\
 & Very obese & 356 (75.58\%) & 78 (16.56\%) & 37 (7.86\%) \\
 & Morbidly obese & 217 (76.68\%) & 47 (16.61\%) & 19 (6.71\%) \\
 & & \multicolumn{3}{c}{P-value \(<\) 0.01 (Cramer's V = 0.0587)} \\
Age & 20-29 years & 619 (92.11\%) & 47 (6.99\%) & 6 (0.89\%) \\
 & 30-39 years & 681 (82.35\%) & 118 (14.27\%) & 28 (3.39\%) \\
 & 40-49 years & 584 (70.96\%) & 173 (21.02\%) & 66 (8.02\%) \\
 & \(\geq\) 50 years & 1800 (78.71\%) & 358 (15.65\%) & 129 (5.64\%) \\
 & & \multicolumn{3}{c}{P-value \(<\) 0.01 (Cramer's V = 0.1118)} \\
Ethnicity & Mexican American & 699 (83.21\%) & 116 (13.81\%) & 25 (2.98\%) \\
 & Other Hispanic & 444 (82.53\%) & 70 (13.01\%) & 24 (4.46\%) \\
 & Non-Hispanic white & 1687 (79.84\%) & 327 (15.48\%) & 99 (4.69\%) \\
 & Non-Hispanic black & 711 (75.80\%) & 154 (16.42\%) & 73 (7.78\%) \\
 & Other non-Hispanic race & 143 (79.44\%) & 29 (16.11\%) & 8 (4.44\%) \\
 & & \multicolumn{3}{c}{P-value \(<\) 0.01 (Cramer's V = 0.0569)} \\
Gender & Male & 1732 (75.04\%) & 423 (18.33\%) & 153 (6.63\%) \\
 & Female & 1952 (84.83\%) & 273 (11.86\%) & 76 (3.30\%) \\
 & & \multicolumn{3}{c}{P-value \(<\) 0.01 (Cramer's V = 0.1244)} \\
Marital & Married & 2017 (80.07\%) & 385 (15.28\%) & 117 (4.64\%) \\
Status & Widowed & 231 (82.80\%) & 38 (13.62\%) & 10 (3.58\%) \\
 & Divorced & 386 (75.39\%) & 87 (16.99\%) & 39 (7.62\%) \\
 & Separated & 133 (77.78\%) & 26 (15.20\%) & 12 (7.02\%) \\
 & Never married & 656 (82.52\%) & 103 (12.96\%) & 36 (4.53\%) \\
 & Living with partner & 261 (78.38\%) & 57 (17.12\%) & 15 (4.50\%) \\
 & & \multicolumn{3}{c}{P-value = 0.0516 (Cramer's V = 0.0444)} \\ \hline \hline \end{tabular}
\end{table}
Table 2: DBP among United States Adults by BMI and Sociodemographic Characteristics.
\begin{table}
\begin{tabular}{l r r r} \hline Quantile Regression & & & \\ \hline & \(\tau\) & 0.50 & 0.75 & 0.95 \\ BMI & -2.856 (-3.278, -2.280) & -2.198 (-3.040, -1.715) & -2.024 (-3.141, -0.798) \\ BMI\({}^{0.5}\) & 36.085 (29.932, 40.529) & 29.210 (23.907, 38.130) & 29.113 (15.239, 42.302) \\ Age & 0.510 (0.130, 0.785) & 0.317 (-0.003, 0.885) & 0.710 (-0.220, 1.630) \\ Age\({}^{0.5}\) & -1.758 (-5.430, 3.339) & 3.297 (-4.116, 7.654) & 2.300 (-9.906, 14.672) \\ Ethnicity & 0.626 (0.154, 1.040) & 0.995 (0.366, 1.495) & 1.214 (0.199, 2.642) \\ Gender & -4.323 (-5.302, -3.512) & -3.813 (-5.231, -2.506) & -3.278 (-6.147, -0.762) \\ Marital Status & 0.894 (0.612, 1.155) & 1.327 (0.916, 1.746) & 1.400 (0.650, 2.037) \\ \hline \multicolumn{4}{l}{Bayesian Quantile} & & \\ Regression & & & \\ \hline & \(\tau\) & 0.50 & 0.75 & 0.95 \\ BMI & -2.818 (-3.208, -2.447) & -2.255 (-2.669, -1.889) & -2.120 (-2.603, -1.685) \\ BMI\({}^{0.5}\) & 35.628 (31.653, 39.794) & 29.825 (25.763, 34.419) & 30.191 (25.146, 35.809) \\ Age & 0.484 (0.233, 0.734) & 0.364 (0.103, 0.664) & 0.768 (0.428, 1.142) \\ Age\({}^{0.5}\) & -1.366 (-4.737, 2.002) & 2.735 (-1.237, 6.249) & 1.446 (-3.550, 6.077) \\ Ethnicity & 0.640 (0.288, 0.979) & 0.957 (0.561, 1.359) & 1.341 (0.839, 1.829) \\ Gender & -4.376 (-5.138, -3.645) & -3.809 (-4.784, -2.823) & -3.346 (-4.397, -2.190) \\ Marital Status & 0.888 (0.656, 1.125) & 1.347 (1.055, 1.637) & 1.354 (1.041, 1.649) \\ \hline \multicolumn{4}{l}{Bayesian Quantile} & & \\ Regression Fractional & & & \\ Polynomials \& & & \\ Variable Selection & & & \\ \hline & \(\tau\) & 0.50 & 0.75 & 0.95 \\ BMI & -2.812 (-3.164, -2.468) & -2.581 (-2.974, -2.168) & -2.426 (-2.813, -2.027) \\ BMI\({}^{0.5}\) & 35.547 (31.789, 39.269) & 33.335 (28.817, 37.747) & 33.335 (28.815, 37.784) \\ Age & 0.459 (0.226, 0.680) & 0.537 (0.274, 0.806) & 0.945 (0.643, 1.256) \\ Age\({}^{0.5}\) & -1.129 (-4.197, 2.029) & -0.051 (-3.717, 3.536) & -1.382 (-5.473, 2.680) \\ Ethnicity & 0.571 (0.258, 0.898) & 0.843 (0.484, 1.212) & 1.152 (0.753, 1.616) \\ Gender & -4.577 (-5.300, -3.899) & -4.291 (-5.053, -3.518) & -4.343 (-5.301, -3.351) \\ Marital Status & 0.828 (0.632, 1.033) & 1.139 (0.893, 1.381) & 1.331 (1.052, 1.617) \\ \hline \end{tabular}
\end{table}
Table 3: One Frequentist and Two Bayesian Quantile Regression Analyses for Relationship between SBP and Risk Factors.
\begin{table}
\begin{tabular}{l r r r} \hline \hline Quantile Regression & & & \\ \hline \(\tau\) & 0.50 & 0.75 & 0.95 \\ BMI & 1.174 (0.705, 1.496) & 0.761 (0.507, 1.096) & 0.582 (0.022, 1.572) \\ BMI\({}^{0.5}\) & -12.200 (-15.675, -7.071) & -7.179 (-10.821, -4.242) & -3.995 (-13.869, 2.247) \\ Age & -2.266 (-2.477, -1.979) & -2.018 (-2.252, -1.832) & -1.852 (-2.418, -1.418) \\ Age\({}^{0.5}\) & 31.329 (27.308, 34.170) & 28.298 (25.758, 31.451) & 26.918 (21.199, 34.557) \\ Ethnicity & 0.561 (0.203, 0.841) & 0.712 (0.411, 1.030) & 1.264 (0.345, 2.013) \\ Gender & -3.345 (-4.160, -2.651) & -3.619 (-4.337, -2.976) & -4.592 (-5.769, -3.047) \\ Marital Status & 0.210 (-0.041, 0.448) & 0.368 (0.171, 0.549) & 0.466 (0.143, 0.934) \\ \hline Bayesian Quantile & & & \\ Regression & & & \\ Regression & & & \\ \hline \(\tau\) & 0.50 & 0.75 & 0.95 \\ BMI & 1.153 (0.836, 1.433) & 0.798 (0.539, 1.056) & 0.656 (0.345, 0.974) \\ BMI\({}^{0.5}\) & -11.923 (-15.007, -8.505) & -7.554 (-10.406, -4.748) & -4.624 (-7.981, -1.332) \\ Age & -2.253 (-2.431, -2.058) & -2.040 (-2.224, -1.863) & -1.870 (-2.064, -1.663) \\ Age\({}^{0.5}\) & 31.131 (28.434, 33.566) & 28.594 (26.243, 31.077) & 27.176 (24.467, 29.773) \\ Ethnicity & 0.536 (0.291, 0.777) & 0.706 (0.455, 0.966) & 1.328 (0.981, 1.667) \\ Gender & -3.391 (-3.999, -2.778) & -3.635 (-4.169, -3.109) & -4.498 (-5.086, -3.924) \\ Marital Status & 0.220 (0.030, 0.408) & 0.374 (0.222, 0.533) & 0.484 (0.304, 0.667) \\ \hline \hline Bayesian Quantile & & & \\ Regression Fractional & & & \\ Polynomials \& & & \\ Variable Selection & & & \\ \hline \(\tau\) & 0.50 & 0.75 & 0.95 \\ BMI & 1.101 (0.823, 1.381) & 0.808 (0.568, 1.041) & 0.874 (0.584, 1.147) \\ BMI\({}^{0.5}\) & -11.299 (-14.374, -8.289) & -7.620 (-10.207, -4.940) & -7.217 (-10.158, -4.080) \\ Age & -2.217 (-2.397, -2.033) & -2.031 (-2.203, -1.867) & -2.018 (-2.206, -1.821) \\ Age\({}^{0.5}\) & 30.603 (28.089, 33.030) & 28.381 (26.127, 30.639) & 29.063 (26.415, 31.577) \\ Ethnicity & 0.505 (0.278, 0.727) & 0.630 (0.391, 0.868) & 1.043 (0.747, 1.319) \\ Gender & -3.401 (-3.934, -2.888) & -3.733 (-4.219, -3.233) & -4.436 (-5.032, -3.827) \\ Marital Status & 0.193 (0.033, 0.347) & 0.371 (0.222, 0.523) & 0.454 (0.270, 0.628) \\ \hline \hline \end{tabular}
\end{table}
Table 4: One Frequentist and Two Bayesian Quantile Regression Analyses for Relationship between DBP and Risk Factors.
### Descriptive Analysis
For this analysis, continuous variables were collapsed into categorical variables, including SBP, DBP, BMI and age. According to the guidelines of Whelton et al. (2018), the BP variables are divided into three groups: normal (\(<120\) mmHg for SBP, \(<80\) mmHg for DBP), pre-hypertension (\(120-139\) mmHg for SBP, \(80-89\) mmHg for DBP) and hypertension (\(\geq 140\) mmHg for SBP, \(\geq 90\) mmHg for DBP). The BMI variable is also divided into six groups: underweight (\(<18.5\)), healthy (\(18.5-24.9\)), overweight (\(25-29.9\)), obese (\(30-34.9\)), very obese (\(35-39.9\)) and morbidly obese (\(\geq 40\)) (Centers for Disease Control and Prevention (2022)).
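For instance, the grouping just described can be reproduced with pandas as follows (the column names are hypothetical):

```python
import numpy as np
import pandas as pd

def categorise(df):
    """Collapse SBP, DBP and BMI into the categories used in Tables 1-2."""
    out = df.copy()
    bp_labels = ["Normal", "Pre-hypertension", "Hypertension"]
    out["SBP_group"] = pd.cut(df["SBP"], [-np.inf, 120, 140, np.inf],
                              right=False, labels=bp_labels)
    out["DBP_group"] = pd.cut(df["DBP"], [-np.inf, 80, 90, np.inf],
                              right=False, labels=bp_labels)
    out["BMI_group"] = pd.cut(df["BMI"], [-np.inf, 18.5, 25, 30, 35, 40, np.inf],
                              right=False,
                              labels=["Underweight", "Healthy", "Overweight",
                                      "Obese", "Very obese", "Morbidly obese"])
    return out
```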
Tables 1-2 present SBP and DBP proportions among US adults by demographic and lifestyle characteristics, including BMI, age, ethnicity, gender and marital status. The Cramer's V value is used to measure the magnitude of the association between SBP, DBP, the sociodemographic characteristics and the BMI of the participants. These values, together with the corresponding p-values, are also presented in Tables 1-2 and compared with the guidelines given by Rea and Parker (2014): 0.00 to under 0.10 = very weak association, 0.10 to under 0.20 = weak association, 0.20 to under 0.40 = moderate association and 0.40 and above = strong association.
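For reference, Cramer's V and the accompanying chi-square p-value can be computed from a contingency table as in the sketch below (illustrative only, not the authors' code):

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x, y):
    """Cramer's V and chi-square p-value for two categorical series."""
    table = pd.crosstab(x, y)
    chi2, p_value, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    k = min(table.shape) - 1                  # min(rows, cols) - 1
    return np.sqrt(chi2 / (n * k)), p_value
```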
It is evident from Tables 1-2 that hypertension was more prevalent in underweight, very obese and morbidly obese participants for both BP measures, with the very obese and morbidly obese having the highest prevalence for the DBP and SBP measures, respectively. The same trend was observed in the proportions of elevated BP for the DBP measure. It was clear that healthy participants had the highest
Figure 1: Trace, density and autocorrelation plots for the risk factors of SBP at three quantile levels (\(\tau=0.5,0.75,0.95\)) under the Bayesian quantile regression model with FPs.
prevalence of normal BP for both BP measures.
Concerning age, the prevalence of both elevated BP and hypertension increased with age, with the 40-49 years age group having the highest proportions for the DBP measure and the 50 years and above age group for the SBP measure. With regard to ethnicity, the non-Hispanic Black participants had the highest prevalence of hypertension compared to the other races for both BP measures.
Table 1-2 also showed that men had the highest prevalence of both elevated BP and hypertension for both BP measures. Participants who were separated or divorced and those who became widowed had the highest prevalence of hypertension for DBP and SBP measures, respectively.
Lastly, at the 1% significance level, Table 1-2 exhibited very weak to weak associations between BP measures, BMI and sociodemographic characteristics among the US adults. However, there is a moderate association between SBP measure and age. There is no statistically significant association between DBP measure and marital status at the 5% level.
### Model Analysis
Tables 3-4 provide the coefficients of the predictors relating to the SBP and DBP responses for three quantile regression models with FPs at three quantile levels (\(\tau=0.50,0.75,0.95\)): one frequentist and two Bayesian approaches, one of which uses variable selection. The Bayesian estimates are posterior means. The 95% confidence intervals are provided for the frequentist approach, whilst the 95% credible intervals are provided for the Bayesian approaches. A 95% credible interval contains the parameter with 95% posterior probability, whereas a 95% confidence interval reflects the long-run coverage of the interval-estimation procedure.
Figure 2: Trace, density and autocorrelation plots for the risk factors of DBP at three quantile levels (\(\tau=0.5,0.75,0.95\)) under the Bayesian quantile regression model with FPs.
We denote the frequentist approach as the QR-FP model, and the two Bayesian approaches as the BQR-FP and BQRVS-FP models, where the latter uses variable selection.
For the BQR-FP model, the algorithm was run for 10,000 MCMC iterations, of which 1,000 were discarded as a burn-in period. For the BQRVS-FP model, the first-stage algorithm ran for 1,000 EM iterations and was repeated for 2 replications. Then 5,000 MCMC iterations were drawn for the second-stage algorithm, discarding 2,500 MCMC iterations as a burn-in period. Finally, the last algorithm ran for 1,250 importance re-weighting steps, of which 500 were discarded as a burn-in period. The value of \(g\) was set to 1,000 for all implementations of the variable selection model.
It is evident from Table 3 that, under the QR-FP model, all the risk factors except both the linear and nonlinear terms of age had statistically significant associations with SBP across the two upper quantile levels, their 95% confidence intervals containing no zero value. At the median level, the linear term of age was also associated with SBP under the same approach. For the BQR-FP and BQRVS-FP models, only the nonlinear term of age did not have a statistically significant association at any quantile level. On the other hand, Table 4 shows that all the risk factors, including the nonlinear terms, had statistically significant associations with DBP across all quantile levels for all model approaches, except that, at the median level under the QR-FP model, marital status did not have a statistically significant association.
Table 3 also shows that BMI, the nonlinear term of age and gender have negative associations with SBP, whilst the nonlinear term of BMI, age and gender have negative associations with DBP from
Figure 3: Trace, density and autocorrelation plots for the risk factors of SBP at three quantile levels (\(\tau=0.5,0.75,0.95\)) under the Bayesian quantile regression model with FPs and variable selection.
Figure 4: Trace, density and autocorrelation plots for the risk factors of DBP at three quantile levels (\(\tau=0.5,0.75,0.95\)) under the Bayesian quantile regression model with FPs and variable selection.
Table 4 for all three model approaches. Under the SBP model, the coefficients of BMI, ethnicity, gender and marital status increased as the quantile level increased. The same trend was observed for the coefficients of BMI's nonlinear term, age, ethnicity and marital status under the DBP model. Observing the coefficient of age's nonlinear term, all models showed a reverse U-shaped trend under the SBP model, while, under the DBP model, both the QR-FP and BQR-FP models had a decreasing trend and the BQRVS-FP model had a U-shaped trend. Lastly, the coefficient of BMI's nonlinear term under the SBP model followed a decreasing trend for the QR-FP model, a U-shaped trend for the BQR-FP model and a square-root trend for the BQRVS-FP model.
Convergence of both Bayesian approaches was assessed using trace plots, density plots and autocorrelation plots. It is essential to employ various diagnostic tools for assessing convergence (Sinharay (2003)). The convergence diagnostics are useful to check the stationarity of the Markov chain, or good chain mixing, and to verify the accuracy of the posterior estimates (Lesaffre and Lawson (2012)). The trace plot is a time series plot indicating whether the chain reaches stationarity or not. The density plot represents the stationary distribution of the posterior samples, approximating the posterior distribution of interest. The autocorrelation plot reports the correlation of the posterior samples at each step of the chain with previous estimates of the same variable, lagged by a given number of iterations. A decreasing trend indicates that the stationary distribution is more random and less dependent on the initial values of the chain (Hamra et al. (2013)).
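A generic sketch of how such trace, density and autocorrelation plots can be produced for a single chain is given below (the paper's figures were produced in R; this is only an illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

def diagnostics(chain, name, max_lag=50):
    """Trace, density (histogram) and autocorrelation plots for one posterior chain."""
    fig, axes = plt.subplots(1, 3, figsize=(12, 3))
    axes[0].plot(chain)
    axes[0].set_title(f"Trace: {name}")
    axes[1].hist(chain, bins=50, density=True)
    axes[1].set_title(f"Density: {name}")
    centred = chain - chain.mean()
    acf = [1.0] + [np.corrcoef(centred[:-lag], centred[lag:])[0, 1]
                   for lag in range(1, max_lag)]
    axes[2].bar(range(max_lag), acf)
    axes[2].set_title(f"Autocorrelation: {name}")
    plt.tight_layout()
    return fig
```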
Figures 1-2 present the trace, density and autocorrelation plots for each risk factor of SBP and DBP, respectively, under the BQR-FP model. Looking at the trace plots across all the quantile levels, they exhibit stationarity, since each plot has a relatively constant mean and variance; thus, they indicate a good Markov chain mixing rate. Looking at the density plots across all the quantile levels, they reflect a smooth distribution with a single peak at the mode, indicating good convergence. Figures 1-2 also show that, for each risk factor of SBP and DBP and across all the quantile levels, the stationary posterior distribution becomes increasingly random, although at the \(95^{th}\) percentile the autocorrelation decreases at a slower rate.
Figures 3-4 also present the trace, density and autocorrelation plots for each risk factor of SBP and DBP, respectively, under the BQRVS-FP model. All the plots show stationarity, a good Markov chain mixing rate and good convergence. Each autocorrelation plot indicates that the stationary distribution becomes random and less correlated with the initial values at a faster rate.

Figures 5-6 display, for the SBP and DBP models respectively, the risk factors having a marginal inclusion probability (MIP) of at least 0.9 under the BQRVS-FP model. The selected risk factors lie above the MIP cutoff of 0.9. Across all the quantile levels, the same important risk factors were consistently selected for both the SBP and DBP models: the DBP model selected all the risk factors, including the nonlinear terms, except marital status at the median level, while the SBP model selected all except the nonlinear term of age. This mostly agrees with the findings on the 95% credible intervals from Tables 3-4.
Figure 5: The selected predictors and cutoff thresholds (dashed lines) of the NHANES dataset for the SBP model via the BQRVS-FP approach at \(\tau=0.50\), \(\tau=0.75\) and \(\tau=0.95\).
### Model Comparison
Comparing the 95% confidence intervals of the frequentist approach with the 95% credible intervals of the two Bayesian approaches in Tables 3-4, the BQRVS-FP model has the tightest intervals, whereas the QR-FP model has the widest.

Another finding from the diagnostic plots is that the autocorrelation plots of the BQRVS-FP model decrease at a faster rate across all the quantile levels, whereas those of the BQR-FP model decrease more slowly. This indicates that the BQRVS-FP model has more random stationary posterior distributions of interest.

Looking at Tables 3-4 and Figures 5-6, the BQRVS-FP model selected important predictors that coincide with the statistically significant associations between SBP, DBP and their risk factors based on the 95% credible intervals.

These findings suggest that the Bayesian variable selection approach to the quantile regression model with FPs obtains more precise estimates than the frequentist and standard Bayesian approaches. The nonlinear terms were selected as important variables in both the SBP and DBP models, indicating that the FP model was necessary to examine the nonlinear relationships between SBP, DBP and the risk factors.
## 5 Conclusion
In this paper, we conducted a data analysis of the impact of body mass index (BMI) on the blood pressure (BP) measures, namely systolic and diastolic BP, using data extracted from the 2007-2008
Figure 6: The selected predictors and cutoff thresholds (dashed lines) of the NHANES dataset for the DBP model via the BQRVS-FP approach at \(\tau=0.50\), \(\tau=0.75\) and \(\tau=0.95\).
National Health and Nutrition Examination Survey (NHANES). The descriptive analysis showed that the prevalence of hypertension increases with age and that hypertension is highly prevalent among very obese and morbidly obese participants. In particular, it is more prevalent in men than in women. Moreover, there is a statistically significant moderate association between SBP and age based on the Cramer's V value, whilst the remaining associations were weaker for both BP measures. However, there is no statistically significant association between DBP and marital status.
The analysis motivates a new Bayesian nonlinear quantile regression model based on fractional polynomials (FPs) and variable selection with a quantile-dependent prior, where the quantile regression analysis investigates how the relationships differ across the median and upper quantile levels. The use of FPs allows the relationships to be modelled nonlinearly in a parametric fashion. The variable selection identifies, via the Bayesian paradigm, the important predictors that contribute to the nonlinear relationships. The model analysis suggested that the proposed model provides better estimates: in comparison with the two existing methods, the frequentist and Bayesian approaches to the quantile regression model with FPs, its 95% credible intervals were narrower and its autocorrelation plots showed a faster decrease of the correlation between posterior samples. The analysis of the data showed that nonlinear relationships do exist, because the proposed model identified the nonlinear terms of the continuous variables, BMI and age, as important predictors across all the quantile levels, with the exception of the nonlinear term of age, which was not selected under the SBP model. Marital status was not selected as an important risk factor for the DBP model at the median level. This agreed with the findings of both the descriptive and model analyses. Moreover, the data analysis suggested that the quantile-based FP approaches compare favourably with mean-based FP approaches in terms of goodness of fit. Thus, the nonlinear quantile model with FPs is important for the modelling of BP measures.
## 6 Acknowledgments
This work is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant 2295266 to Brunel University London for Doctoral Training.
|
2305.03643 | Isoperimetric sets in nonnegative scalar curvature and their role
through various concepts of mass | We review some recent results about the relations among isoperimetric sets,
Penrose inequalities and related concepts in the analysis of $3$-manifolds of
nonnegative scalar curvature. We also show that if the isoperimetric sets of
big volume have connected boundaries, the equivalence among suitable notions of
mass hold. | Luca Benatti, Mattia Fogagnolo | 2023-05-05T16:02:31Z | http://arxiv.org/abs/2305.03643v2 | # Isoperimetric sets in nonnegative scalar curvature and their role through various concepts of mass
###### Abstract.
We review some recent results about the relations among isoperimetric sets, Penrose inequalities and related concepts in the analysis of \(3\)-manifolds of nonnegative scalar curvature. We also show that if the isoperimetric sets of big volume have connected boundaries, the equivalence among suitable notions of mass holds.
MSC (2020): 53C21, 53E10, 83C99, 49J45.
Keywords: isoperimetric sets, Penrose inequality, positive mass theorem, isoperimetric mass, isocapacitary mass, inverse mean curvature flow, nonnegative scalar curvature.
## 1. Introduction
The topics we will deal with will gravitate around isoperimetric properties of Riemannian manifolds of dimension \(3\) with nonnegative scalar curvature that are Asymptotically Flat in some suitable sense, usually endowed with a closed, minimal and outermost boundary. With the latter adjective, we indicate that no other closed, minimal surface exists enclosing \(\partial M\). We will occasionally refer to boundaries with these properties with the word _horizon_, or _horizon boundary_.
One of the classical results in this class of manifolds is the Riemannian Penrose Inequality. Leaving all the discussion and the details to the main body of the work, we just point out that Penrose inequalities read as bounds from above on the area of the minimal boundary of \((M,g)\) in terms of suitable global geometric invariants, which are interpreted as physical global _masses_. In [1] we show an isoperimetric version of this inequality holding in a very large class of manifolds. We will review such a result, focusing on its relation with the isoperimetric sets and on the techniques that led to their existence in this context [12]. We will also deal with the equivalence of various, apparently very different, notions of mass. Particular attention will be paid to being as sharp as possible in the decay requirements for the asymptotic flatness of the manifolds considered.
In Section 2, we review the main properties of the Inverse Mean Curvature Flow we are going to employ, mainly obtained by Huisken-Ilmanen [13]. In Section 3, we review and detail the beautiful proof of the existence of isoperimetric sets of any volume in \(3\)-manifolds with nonnegative scalar curvature, obtained by Carlotto-Chodosh-Eichmair [12] after the fundamental insight of Shi [14]. In doing so, we substantially weaken the decay assumptions on the metric to \(\mathscr{C}^{0}\)-Asymptotic Flatness. In Section 4, after having discussed Huisken's notion of Isoperimetric mass [15], we review the proof of the related Isoperimetric Penrose inequality obtained in collaboration with Mazzieri [1]. In Section 5, we analyze the relations with other notions of mass, most notably the classical ADM mass [1]. This also gives us the occasion for discussing the physical relevance of the concepts involved. The other notions of mass that will be taken into account are the Isocapacitary masses [16, 1]. We show, as partly new results, relations among the connectedness of isoperimetric sets of large volume and the equivalence of such notions of mass. We conclude with Section 6, where we gather various
directions of research that may be undertaken in connection with the topics studied in this paper.
### Acknowledgements
Part of this work has been carried out during the authors' attendance to the _Thematic Program on Nonsmooth Riemannian and Lorentzian Geometry_ that took place at the Fields Institute in Toronto. The authors warmly thank the staff, the organizers and the colleagues for the wonderful atmosphere and the excellent working conditions set up there. L.B. is supported by the European Research Council's (ERC) project n.853404 ERC VaReg - _Variational approach to the regularity of the free boundaries_, financed by the program Horizon 2020, by PRA_2022_11 and by PRA_2022_14. F.M. has been supported by the European Union - NextGenerationEU and by the University of Padova under the 2021 STARS Grants@Unipd programme "QuASAR". The authors are members of Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni (GNAMPA), which is part of the Istituto Nazionale di Alta Matematica (INdAM), and are partially funded by the GNAMPA project "Problemi al bordo e applicazioni geometriche".
The authors are grateful to G. Antonelli, S. Hirsch, L. Mazzieri, F. Oronzio and M. Pozzetta for countless, precious and pleasureful discussions on topics related to this work. Moreover, M. F. thanks A. Pluda, V. Franceschi and G. Saracco for having organized, with the support of INdAM, the wonderful workshop "Anisotropic Isoperimetric Problems & Related Topics" in Rome. This paper rises as a conference proceeding of that event.
## 2. The Inverse Mean Curvature Flow and the Hawking mass
A fundamental tool to understand some intimate geometric properties of 3-manifolds with nonnegative scalar curvature is the evolution of surfaces through the inverse of the mean curvature. It evolves an immersion \(F_{0}:\Sigma\hookrightarrow M\) through
\[\frac{\partial}{\partial t}F_{t}(\Sigma)=\frac{1}{\mathrm{H}_{t}}\nu_{t}, \tag{2.1}\]
where \(\nu_{t}\) is the normal pointing towards infinity and \(\mathrm{H}_{t}\) the mean curvature. It is clear that, at least at points where the mean-curvature tends to zero, such flow must develop singularities1. Representing \(F_{t}(\Sigma)\) as a level set \(\{w=t\}\), it is readily checked that then
Footnote 1: Singularities happen in this case only [10, Corollary 2.3]
\[\mathrm{div}\left(\frac{\mathrm{D}w}{|\mathrm{D}w|}\right)=|\mathrm{D}w|.\]
Indeed, the left-hand side coincides with the mean curvature of the level sets of \(w\), while the right-hand side with the inverse of the velocity of the level set flow.
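As a quick consistency check, not taken from the references, one can verify this level-set formulation on the expanding spheres of flat \(\mathbb{R}^{3}\): the function \(w(x)=2\log|x|\) satisfies

\[\mathrm{D}w=\frac{2x}{|x|^{2}},\qquad|\mathrm{D}w|=\frac{2}{|x|},\qquad\mathrm{div}\left(\frac{\mathrm{D}w}{|\mathrm{D}w|}\right)=\mathrm{div}\left(\frac{x}{|x|}\right)=\frac{2}{|x|}=|\mathrm{D}w|,\]

so that its level sets \(\{|x|=\mathrm{e}^{t/2}\}\) are round spheres whose areas grow like \(\mathrm{e}^{t}\), as expected for a flow with normal speed \(1/\mathrm{H}\).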
Building on this, Huisken-Ilmanen [10] were able to define and fully describe a weak notion of Inverse Mean Curvature Flow, which is a weak (in a suitable, original sense) solution to (2.1) starting at \(\Sigma=\partial\Omega\), and crucially prescribing a procedure to _jump_ through singularities. Namely, for any closed \(\Omega\subset M\) with smooth boundary homologous to \(\partial M\), it consists in a _proper_ function \(w\in\mathrm{Lip}_{\mathrm{loc}}(M\smallsetminus\mathrm{Int}\,\Omega)\) solving
\[\left\{\begin{aligned} \mathrm{div}\left(\frac{\mathrm{D}w}{| \mathrm{D}w|}\right)&=\,|\mathrm{D}w|\qquad\text{ on }M\smallsetminus\Omega,\\ w&=\,0\qquad\qquad\text{ on }\partial\Omega,\\ w&\rightarrow+\infty\qquad\quad\text{ as }\mathrm{d}(x,o) \rightarrow+\infty.\end{aligned}\right. \tag{2.2}\]
The set \(\Omega\) being closed allows us to consider also the case \(\partial\Omega=\partial M\).
Leaving the rigorous definition of weak solution to (2.2) to the original source [11], we just heuristically describe what happens when a singular time is approached. As long as \(\{w=t\}\) does not _fatten_, meaning that it does not develop positive full-dimensional Lebesgue measure, the weak flow consists in a foliation of \(\mathscr{C}^{1,\alpha}\)-hypersurfaces dictated by the inverse of a suitable \(L^{2}\)-weak version of mean curvature. In this case, it turns out that \(|\partial\Omega_{t}|\), \(|\Omega_{t}|\) and also \(\int_{\partial\Omega_{t}}\mathrm{H}^{2}\) are continuous, where \(\Omega_{t}=\{w\leq t\}\). The fattening of a level set at a time \(\overline{t}\) (jump time) is a jump from \(\big\{w<\overline{t}\big\}\) to its _strictly outward minimizing hull_, which is defined as the set \(E_{t}\supseteq\big\{w<\overline{t}\big\}\) of maximal volume among those minimizing the perimeter from the outside. Jump times are exactly those when \(|E_{t}|-|\{w<t\}|>0\) but \(|\partial E_{t}|=|\partial\{w<t\}|\). The (closed) hull \(E_{t}\) will be given by \(\{w\leq t\}\). A precise study of minimizing hulls (also in connection with IMCF) is performed in [11], after [10].
A first decisive result of Huisken-Ilmanen's work [11] is a characterization of the existence of the proper, global weak flow in terms of the mere existence of a global subsolution. We are not describing this result in its full generality, but we just point out its application to \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifolds, together with some basic properties we are going to explicitly need. The statement will follow the precise definition of \(\mathscr{C}^{k}_{\tau}\)-Asymptotic Flatness.
**Definition 2.1** (Asymptotically Flat Riemannian manifolds).: _A Riemannian \(3\)-manifold \((M,g)\) with (possibly empty) boundary is said to be \(\mathscr{C}^{k}_{\tau}\)-Asymptotically Flat, with \(k\in\mathbb{N}\) and \(\tau\geq 0\), if the following conditions are satisfied._
1. _There exists a compact set_ \(K\subseteq M\) _such that_ \(M\smallsetminus K\) _is diffeomorphic to_ \(\mathbb{R}^{3}\smallsetminus\{|x|\leq R\}\)_, through a map_ \((x^{1},x^{2},x^{3})\) _whose components are called_ asymptotically flat coordinates_._
2. _In the chart_ \((M\smallsetminus K,(x^{1},x^{2},x^{3}))\) _the metric tensor is expressed as_ \[g=g_{ij}\mathrm{d}x^{i}\otimes\mathrm{d}x^{j}=(\delta_{ij}+\eta_{ij})\mathrm{ d}x^{i}\otimes\mathrm{d}x^{j}\] _with_ \[\sum_{i,j=1}^{3}\sum_{|\beta|=0}^{k}|x|^{|\beta|+\tau}|\partial_{\beta}\eta_{ ij}|=O(1)\text{ ($=o(1)$ resp.)}\qquad\qquad\text{ as }|x|\to+\infty.\]
_We will denote the \(\mathscr{C}^{1}_{0}\)-Asymptotically Flat condition simply with \(\mathscr{C}^{1}\)-Asymptotically Flat._
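A standard example to keep in mind, recalled here only for illustration, is the spatial Schwarzschild metric of mass \(m>0\) written in isotropic coordinates,

\[g=\left(1+\frac{m}{2|x|}\right)^{4}\delta_{ij}\,\mathrm{d}x^{i}\otimes\mathrm{d}x^{j}\qquad\text{on }\left\{|x|\geq\frac{m}{2}\right\}\subset\mathbb{R}^{3},\]

which is \(\mathscr{C}^{k}_{1}\)-Asymptotically Flat for every \(k\), since \(\eta_{ij}=O(|x|^{-1})\) with correspondingly decaying derivatives, and whose boundary \(\{|x|=m/2\}\) is a minimal outermost sphere.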
The following statement substantially gathers [10, Proposition 3.2] and [11, Connectedness Lemma 4.2], see also [1, p.9-10] and [12, Lemma 2.1] for the connectedness part.
**Theorem 2.2** (Existence and basic properties of the weak IMCF).: _Let \((M,g)\) be a Riemannian \(3\)-manifold possibly with boundary. Suppose that \((M,g)\) is \(\mathscr{C}^{0}\)-Asymptotically Flat. Then, for any closed \(\Omega\) homologous to \(\partial M\) with smooth boundary there exists a weak solution \(w\) to (2.2). If \(\partial\Omega\) is connected, and \(H_{2}(M,\partial M,\mathbb{Z})=\{0\}\), then \(\partial\{w\leq t\}\) is connected for any \(t\in[0,+\infty)\)._
### The monotonicity of the Hawking mass
The weak Inverse Mean Curvature Flow is key in the analysis of nonnegative scalar curvature in three dimensions due to the monotonicity of the _Hawking mass_,
\[\mathfrak{m}_{H}(\partial\Omega)=\frac{|\partial\Omega|^{\frac{1}{2}}}{16\pi^ {\frac{3}{2}}}\left(4\pi-\int_{\partial\Omega}\frac{\mathrm{H}^{2}}{4}\, \mathrm{d}\sigma\right) \tag{2.3}\]
along this evolution by level sets. This quantity was essentially conceived in [14], while Geroch [1] showed it to be nondecreasing along the smooth IMCF, and devised it as a tool to prove the Positive Mass Theorem. We are discussing positive mass results below, in connection also with isoperimetry.
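As a quick sanity check on (2.3), for a centred round sphere of radius \(r\) in flat \(\mathbb{R}^{3}\) one computes

\[|\partial B_{r}|=4\pi r^{2},\qquad\mathrm{H}=\frac{2}{r},\qquad\int_{\partial B_{r}}\frac{\mathrm{H}^{2}}{4}\,\mathrm{d}\sigma=4\pi,\qquad\text{so that}\qquad\mathfrak{m}_{H}(\partial B_{r})=0,\]

consistently with the interpretation of the Hawking mass as a quasi-local measure of the mass enclosed by the surface.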
The computation along the smooth classical evolution of _connected_ hypersurfaces is straightforward. It relies on classical evolution equations and on the Gauss-Bonnet Theorem, which is the reason why connectedness is needed along the evolution. The monotonicity calculation reads as follows. Denote \(\Sigma_{t}=F_{t}(\Sigma)\) and \(\mathrm{d}\sigma_{t}\) the area measure on \(\Sigma_{t}\). We are just employing well-known evolution equations (see e.g. [10, Theorem 3.2]). We have
\[\frac{\mathrm{d}}{\mathrm{d}t}|\Sigma_{t}|=\int_{\Sigma_{t}}\frac{\partial}{ \partial t}(\mathrm{d}\sigma_{t})=\int_{\Sigma}\,\mathrm{d}\sigma_{t}=|\Sigma_ {t}|, \tag{2.4}\]
immediately implying \(|F_{t}(\Sigma)|=\mathrm{e}^{t}\,|\Sigma|\). Hence, we get
\[(16\pi)^{\frac{3}{2}}\frac{\mathrm{d}}{\mathrm{d}t}\mathfrak{m}_{ H}(\Sigma_{t}) =|\Sigma_{t}|^{\frac{1}{2}}\left(8\pi-\int_{\Sigma_{t}}\frac{ \mathrm{H}_{t}^{2}}{2}\,\mathrm{d}\sigma_{t}-\frac{\mathrm{d}}{\mathrm{d}t} \int_{\Sigma_{t}}\mathrm{H}_{t}^{2}\,\,\mathrm{d}\sigma_{t}\right)\] \[=|\Sigma_{t}|^{\frac{1}{2}}\left(8\pi-\int_{\Sigma_{t}}\frac{3\, \mathrm{H}_{t}^{2}}{2}\,\mathrm{d}\sigma_{t}-2\int_{\Sigma_{t}}\mathrm{H}_{t} \,\frac{\partial H_{t}}{\partial t}\,\mathrm{d}\sigma_{t}\right)\] \[=|\Sigma_{t}|^{\frac{1}{2}}\left(8\pi-\int_{\Sigma_{t}}\frac{3\, \mathrm{H}_{t}^{2}}{2}\,\mathrm{d}\sigma_{t}+2\int_{\Sigma_{t}}\mathrm{H}_{t} \left(\Delta_{\Sigma_{t}}\frac{1}{\mathrm{H}_{t}}+\frac{|\mathrm{h}_{t}|^{2}+ \mathrm{Ric}(\nu_{t},\nu_{t})}{\mathrm{H}_{t}}\right)\,\mathrm{d}\sigma_{t} \right),\]
where \(\mathrm{h}_{t}\) is the second fundamental form of \(\Sigma_{t}\). Integrating by parts and using the classical (traced) Guass-Codazzi equation, we are left with
\[(16\pi)^{\frac{3}{2}}\frac{\mathrm{d}}{\mathrm{d}t}\mathfrak{m}_{ H}(\Sigma_{t}) =|\Sigma_{t}|^{\frac{1}{2}}\left(8\pi-\int_{\Sigma_{t}}\frac{3\, \mathrm{H}_{t}^{2}}{2}\,\mathrm{d}\sigma_{t}+\int_{\Sigma_{t}}2\frac{|\nabla_ {\Sigma_{t}}\,\mathrm{H}_{t}|^{2}}{\mathrm{H}_{t}^{2}}+\mathrm{R}-\mathrm{R} ^{\Sigma_{t}}+|\mathrm{h}_{t}|^{2}+\mathrm{H}_{t}^{2}\,\,\mathrm{d}\sigma_{t}\right)\] \[=|\Sigma_{t}|^{\frac{1}{2}}\left(8\pi-\int_{\Sigma_{t}}\mathrm{R} ^{\Sigma_{t}}\,\mathrm{d}\sigma_{t}+\int_{\Sigma_{t}}2\frac{|\nabla_{\Sigma_{ t}}\,\mathrm{H}_{t}|^{2}}{\mathrm{H}_{t}^{2}}+\mathrm{R}+\left|\mathring{\mathrm{h} }_{t}\right|^{2}\mathrm{d}\sigma_{t}\right),\]
where \(\mathring{\mathrm{h}}_{t}\) is the trace-less second fundamental form of \(\Sigma_{t}\). Coupling \(\mathrm{R}\geq 0\) with the Gauss-Bonnet theorem applied to the connected surface \(\Sigma_{t}\), reading
\[\int_{\Sigma_{t}}\mathrm{R}^{\Sigma_{t}}\,\mathrm{d}\sigma_{t}=4\pi\chi( \Sigma_{t})\leq 8\pi, \tag{2.5}\]
ensures that the derivative of the Hawking mass is nonnegative.
The analysis of the monotonicity in the general, weak formulation constitutes a central part of Huisken-Ilmanen's work. However, once the heuristic description of the level set flow given above is accepted, one realizes that, at jump times, part of the evolving boundary should be replaced by a piece of minimal surface, so that the \(L^{2}\)-norm of the mean curvature can only decrease, while the area is continuous. At jump times, hence, the Hawking mass should only increase. This is in fact what happens.
**Theorem 2.3** (Geroch Monotonicity along the weak IMCF [11]).: _Let \((M,g)\) be a \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold possibly with boundary, satisfying \(H_{2}(M,\partial M,\mathbb{Z})=\{0\}\). Then, the Hawking mass (2.3) is well defined and monotone nondecreasing along the weak IMCF of any outward minimizing connected closed \(\Omega\) homologous to the boundary \(\partial M\), as long as the flow is contained in a region of nonnegative scalar curvature._
## 3. Isoperimetry in nonnegative scalar curvature
### Nonnegative scalar curvature and the reverse Isoperimetric inequality
To see plainly the role of the monotonicity of the Hawking mass in isoperimetric issues, we focus on the case of manifolds with no boundary. It is not difficult to conceive that, progressively shrinking an initial set \(\Omega\) around a point, one can build through a limiting procedure a weak IMCF
originating from a point \(o\). This can actually be obtained with a similar argument as in [11, Proposition 7.2]. Moreover, arguing as in [11, Blowdown Lemma 7.1], \(w(x)\) can be shown to behave as the Euclidean model \((n-1)\log(\mathrm{d}(x,o))\) around \(o\). In particular, one immediately observes that \(\mathfrak{m}_{H}(\partial\Omega_{t})\to 0\) when approaching \(o\), that is, as \(t\to-\infty\), where \(\Omega_{t}=\{w\leq t\}\). Consequently, \(\mathfrak{m}_{H}\) being nondecreasing along the flow by Theorem 2.3, the level sets of \(w\) satisfy the _reverse Willmore inequality_, meaning
\[\int_{\partial\Omega_{t}}\mathrm{H}^{2}\,d\sigma\leq 16\pi. \tag{3.1}\]
This immediately suggests that the isoperimetric quotient of such sets is smaller than the Euclidean one; in the smooth flow case, this can be seen as follows.
Let \(v(t)=|\Omega_{t}|\). Then, by coarea formula and equation solved by \(w\) (2.2), one has
\[v^{\prime}(t)=\int_{\partial\Omega_{t}}\frac{1}{\mathrm{H}}\,\mathrm{d}\sigma. \tag{3.2}\]
On the other hand, by Hölder's inequality
\[|\partial\Omega_{t}|=\int_{\partial\Omega_{t}}\mathrm{H}^{\alpha}\,\mathrm{H }^{-\alpha}\ \mathrm{d}\sigma\leq\left(\int_{\partial\Omega_{t}}\mathrm{H}^{\alpha p}\ \mathrm{d}\sigma\right)^{\frac{1}{p}}\left(\int_{ \partial\Omega_{t}}\mathrm{H}^{-\alpha\frac{p}{(p-1)}}\ \mathrm{d}\sigma\right)^{\frac{p-1}{p}}\]
applied with \(\alpha=2/3\) and \(p=3\), one finds out that
\[\left(\int_{\partial\Omega_{t}}\frac{1}{\mathrm{H}}\,\mathrm{d}\sigma\right)^ {-1}\leq\frac{\left(\int_{\partial\Omega_{t}}\mathrm{H}^{2}\ \mathrm{d}\sigma\right)^{\frac{1}{2}}}{|\partial\Omega_{t}|^{\frac{3}{2}}}. \tag{3.3}\]
Now, the numerator in the right-hand side is estimated by (3.1), while, as far as the denominator is concerned, recall that the evolution forces the area \(|\partial\Omega_{t}|\) to equal \(\mathrm{e}^{t}\) times a constant. This is due to (2.4) (see [11, Lemma 1.6] for the computation in the setting of weak solutions). By the asymptotic behaviour at the pole pointed out above, the constant is in fact \(4\pi\), the area of the round unit \(2\)-sphere, so that
\[|\partial\Omega_{t}|=4\pi\mathrm{e}^{t}. \tag{3.4}\]
Consequently, combining these pieces of information with (3.3) and (3.2) leaves us with
\[v^{\prime}(t)\geq 2\pi\mathrm{e}^{\frac{3}{2}t}.\]
Integrating it from \(t\to-\infty\), which corresponds to the pole, where both the volume and the area of the level sets vanish, gives \(|\Omega_{t}|\geq\frac{4\pi}{3}\mathrm{e}^{\frac{3}{2}t}\); taking into account again (3.4), this yields
\[|\partial\Omega_{t}|\leq(36\pi)^{\frac{1}{3}}|\Omega_{t}|^{\frac{2}{3}}. \tag{3.5}\]
In other words, the level sets of \(w\) satisfy a reverse sharp Euclidean Isoperimetric inequality, at least when the evolution is smooth. The following result of Shi [11], subsumed in particular by [11, (15)], constitutes the general statement. Its less transparent formulation is ultimately due to the fact that not every volume is attained along the weak evolution, where jumps are allowed, and actually occur.
**Theorem 3.1** (Shi's reverse Isoperimetric Inequality).: _Let \((M,g)\) be a \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold, satisfying \(H_{2}(M,\partial M,\mathbb{Z})=\{0\}\). For \(o\in M\), let \(w_{o}\) be the weak IMCF issuing from \(o\), and let, for \(v>0\),_
\[t(v)=\inf\{\tau\,|\,|\{w_{o}\leq\tau\}|\geq v\}.\]
_Then, as long as \((\{w_{o}\leq t(v)\},g)\) has nonnegative scalar curvature, we have_
\[|\partial\{w_{o}\leq t(v)\}|\leq(36\pi)^{\frac{1}{3}}v^{\frac{2}{3}}. \tag{3.6}\]
The scalar curvature will rule the existence of isoperimetric sets substantially by means of the above result. In fact, in light of the general principle first devised by [15], the only way a minimizing sequence may fail to provide an isoperimetric set would consist in losing part of its volume at infinity. However, by the \(\mathscr{C}^{0}\)-Asymptotic Flatness, such volume would be fated to converge to a round ball in \(\mathbb{R}^{3}\). By Theorem 3.1, one can then safely replace this ball with a sublevel set of the IMCF enclosing the same drifted away volume, obtaining a competitor that is not worse.
**Remark 3.2** (Comparison with nonnegative Ricci curvature).: _A scheme to isoperimetric existence like the one being illustrated here works in any dimension if nonnegative scalar curvature is promoted to nonnegative Ricci curvature, as a consequence of the more basic Laplacian Comparison and Bishop-Gromov Theorems. Such classical results respectively state that, in a suitable weak sense, the mean curvature of a geodesic ball of radius \(r\) is bounded from above by \((n-1)/r\), while its area \(|\partial B(r)|\) is controlled from above by \(|\mathbb{S}^{n-1}|r^{n-1}\). We stick now to dimension \(3\), in order to facilitate the comparison with (3.5). Fix a point \(o\in M\), a manifold endowed with a metric of nonnegative Ricci curvature \(g\). Letting \(r(V)\) be the radius of the ball \(B(r(V))\) centered in \(o\) of volume \(V\), it is easily computed through classical variation formulas that_
\[r^{\prime}(V)=\frac{1}{|\partial B(r(V))|}\geq\frac{1}{4\pi r(V)^{2}}.\]
_Integrating such inequality, one gets_
\[r(V)\geq\left(\frac{3}{4\pi}V\right)^{\frac{1}{3}}. \tag{3.7}\]
_On the other hand, we have_
\[I(r(V))^{\prime}=\frac{1}{I(r(V))}\int_{\partial B(r(V))}\mathrm{H}\ \mathrm{d}\sigma\leq\frac{2}{r(V)}\leq\frac{2}{V^{\frac{1}{3}}}\left(\frac{4\pi}{3}\right)^{\frac{1}{3}},\]
_where \(I(r(V))\) denotes \(|\partial B(r(V))|\), the prime denotes differentiation with respect to \(V\), and we plugged (3.7) in. Integrating this other differential inequality, we conclude_
\[|\partial B(r)|\leq(36\pi)^{\frac{1}{3}}|B(r)|^{\frac{2}{3}}\]
_for any radius \(r\geq 0\), that fully mirrors (3.5), but for a different exhaustion. It should be clear that no particular features of the dimension have been exploited here, contrarily to the Gauss-Bonnet Theorem (2.5) utilized to infer the monotonicity of the Hawking mass, that in turn led to (3.1) and thus to (3.5). The reverse isoperimetric inequality in nonnegative Ricci curvature was first pointed out by Morgan-Johnson [10], and it was first exploited to infer the existence of isoperimetric sets under suitable asymptotic assumptions by Mondino-Nardulli [10]._
### Isoperimetric analysis on manifolds with nonnegative scalar curvature
The following is a version of Nardulli's Generalized Compactness Principle, crafted for the Asymptotically Flat framework. It substantially asserts that the runaway volume in an isoperimetric minimizing sequence is fully recovered with a ball in \(\mathbb{R}^{3}\). The sublevel sets of the IMCF, satisfying, as just explained, a reverse Isoperimetric inequality, can thus be exploited to replace such lost volume and provide an isoperimetric set.
**Theorem 3.3** (Asymptotic Decomposition of Isoperimetric minimizing sequences).: _Let \((M,g)\) be a smooth, \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian manifold with Ricci curvature bounded from below, and let \(I\) be its isoperimetric profile. Then, for any \(V>0\), there exist a possibly empty, bounded \(E\subset M\) and a ball \(B\) in \(\mathbb{R}^{n}\) such that \(V=|E|+|B|\) and_
\[I(V)=|\partial E|+|\partial B|.\]
The version above is actually a consequence of the far more general [1, Theorem 1.1].

Clearly, the number of drifted-away balls being at most one is due to the fact that a union of two or more balls in \(\mathbb{R}^{3}\) is manifestly isoperimetrically less convenient than a single ball of the same total volume.
**Remark 3.4**.: _Nardulli's earlier work has been recently vastly exploited and empowered in the context of possibly nonsmooth metric spaces with Ricci lower bounds, [1, 1, 2]. It led to far-reaching consequences regarding the existence of isoperimetric sets, isoperimetric inequalities and sharp properties of the isoperimetric profile in such regime._
There is a last issue to be taken into account before safely running the argument sketched above for the existence of Isoperimetric sets. Indeed, as already pointed out, the weak IMCF can jump, and in particular there could be some value \(V\) such that \(\{w\leq t\}\) encloses a volume \(V\) for no \(t\). This may cause trouble in case such volume is exactly the volume of the ball at infinity that we would like to replace. This problem is bypassed by the strict monotonicity of the Isoperimetric profile, a property that is intimately related to the outermost property of the minimal boundary.
**Proposition 3.5**.: _Let \((M,g)\) be a \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold with Ricci curvature bounded from below and endowed with a closed, outermost minimal boundary. Then, its isoperimetric profile \(I\) is strictly increasing._
Proof.: We first of all recall that the isoperimetric profile is continuous in this case, see [13]. Fix a volume \(V\). Then, by Theorem 3.3, we know that
\[I(V)=|\partial E|+|\partial B|_{\mathbb{R}^{3}}\]
for a possibly empty \(E\subset M\) and for a Euclidean ball \(B\) satisfying \(V=|E|+|B|_{\mathbb{R}^{3}}\). Clearly, \(E\) must be isoperimetric for its own volume, i.e. \(|\partial E|=I(|E|)\), and in particular it is smooth. Assume \(E\) is nonempty. Then, performing an inward variation of \(\partial E\) supported in \(\partial E\setminus\partial M\), giving rise, for any \(\varepsilon>0\), to \(E_{\varepsilon}\subset E\) of volume \(|E|-\varepsilon\), we get
\[\liminf_{\varepsilon\to 0^{+}}\frac{I(V)-I(V-\varepsilon)}{\varepsilon}\geq \liminf_{\varepsilon\to 0^{+}}\frac{|\partial E|-|\partial E_{\varepsilon}|}{ \varepsilon}=\mathrm{H}_{E}, \tag{3.8}\]
where \(\mathrm{H}_{E}\) is the constant mean curvature of \(\partial E\setminus\partial M\). In the inequality in (3.8), we used
\[I(V-\varepsilon)\leq|\partial E_{\varepsilon}|+|\partial B|_{\mathbb{R}^{3}},\]
in force, since the sets \(E_{\varepsilon}\cup B_{j}\), with \(B_{j}\subset M\) approaching \(B\) in the \(\mathscr{C}^{0}\)-topology, form a valid family of competitors for the isoperimetric problem of volume \(V-\varepsilon\). The key step consists thus in showing that the constant \(\mathrm{H}_{E}\) is strictly positive. To see this, we can argue as done for [1, Lemma 2.8], after [12, Remark, p. 394]. Namely, one can flow a geodesic ball in the asymptotic region, that is mean-convex by [1, Lemma 4.3], through the Mean Curvature Flow of mean-convex surfaces with surgery in Riemannian \(3\)-manifolds [12]; as proved in such paper, this flow is fated to smoothly converge to the outermost minimal boundary \(\partial M\). One can then find a surface in the evolution touching \(\partial E\). If \(\mathrm{H}_{E}\) were nonpositive, this would result in a contradiction with the Maximum Principle. This completes the proof in neighbourhoods of volumes such that \(\partial E\) is nonempty.
In case \(E\) were empty, deforming \(B\subset\mathbb{R}^{3}\) inwardly as above yields (3.8), this time in terms of the mean curvature of \(B\subset\mathbb{R}^{3}\), which is obviously strictly positive. This concludes the proof.
With Theorem 3.1, Theorem 3.3 and Proposition 3.5 at hand, we can prove that isoperimetric sets exist in any \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold with nonnegative scalar curvature and horizon boundary, for any volume, provided some lower bound on the Ricci curvature is in force. This is a refinement of [11, Proposition K.1]. Useful insights about the strategy employed were actually proposed by Brendle-Chodosh [1], including the key computation leading to Theorem 3.1.
**Theorem 3.6** (Existence of Isoperimetric sets in nonnegative scalar curvature).: _Let \((M,g)\) be a \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold with nonnegative scalar curvature and with Ricci curvature bounded from below, endowed with a closed, minimal outermost boundary. Then, for any \(V>0\), there exists an isoperimetric set of volume \(V\)._
Proof.: Let \(V>0\). As above, we have
\[I(V)=\left|\partial E\right|+\left|\partial B\right|_{\mathbb{R}^{3}} \tag{3.9}\]
for a possibly empty \(E\subset M\) and for a Euclidean ball \(B\) satisfying \(V=\left|E\right|+\left|B\right|_{\mathbb{R}^{3}}\). We can assume that \(\left|B\right|_{\mathbb{R}^{3}}>0\), otherwise \(E\) is already the isoperimetric set of volume \(V\) sought for.
Let now \(o\in M\) be far away from the boundary. We can assume that there exists a weak IMCF \(w_{o}\) issuing from \(o\), although this is not obtained through flows homologous to \(\partial M\), as Theorem 2.2 would require. In fact, we can attach a handlebody to \(\partial M\), and extend smoothly the metric \(g\) to this new complete manifold [13]. Then, Theorem 2.2, coupled with the limiting procedure already mentioned above, yields a weak IMCF \(w_{o}\) issuing from \(o\). By the known topological structure of manifolds with nonnegative scalar curvature and minimal outermost boundary, see e.g. [1], we have \(H_{2}(M,\partial M,\mathbb{Z})=\{0\}\). Consequently, this is also fulfilled by the extended manifold without boundary, and so, again as stated in Theorem 2.2, the surfaces \(\partial\{w_{o}\leq t\}\) remain connected.
By [16, Theorem 1.3], there exist functions \(f_{1},f_{2}\colon[0,+\infty)\to\mathbb{R}\) diverging at infinity such that
\[f_{1}(\mathrm{d}(x,o))\leq w_{o}(x)\leq f_{2}(\mathrm{d}(x,o)) \tag{3.10}\]
for any \(o\in M\). In fact, such a result requires a Ricci lower bound and the validity of a global, possibly weighted Sobolev inequality. In the \(\mathscr{C}^{0}\)-Asymptotically Flat case this is in force with no weight, as a direct consequence of the uniform Euclidean-like Isoperimetric inequality that can be directly obtained outside some sufficiently big compact set, and then, by [13, Theorem 3.2], on the whole manifold.
In light of (3.10), given \(v>0\) we can always find \(o\) sufficiently far in space such that \(\{w_{o}\leq t(v)\}\) is disjoint from the bounded \(E\) and \(\partial M\). Since such sublevel set is contained in a nonnegative scalar curved region, Shi's reverse Isoperimetric Inequality (3.6) holds for \(t(v)\leq t\). Choose now \(v=\left|B\right|\), and \(o\) as above. Let \(F=\{w_{o}\leq t(v)\}\), and observe that, by definition, \(\left|F\right|\geq v\). Now, if \(\left|F\right|=v\), then by (3.6) the set \(E\cup F\) is isoperimetric of volume \(V\). If instead \(\left|F\right|>v\), then by the strict monotonicity of \(I\) shown in Proposition 3.5 and (3.9) we have
\[I(\left|E\right|+\left|F\right|)>I(\left|E\right|+v)=I(V)=\left|\partial E\right|+(36\pi)^{\frac{1}{3}}v^{\frac{2}{3}}. \tag{3.11}\]
On the other hand, we also have
\[I(\left|E\right|+\left|F\right|)\leq\left|\partial E\right|+\left|\partial F \right|\leq\left|\partial E\right|+(36\pi)^{\frac{1}{3}}v^{\frac{2}{3}}, \tag{3.12}\]
where we have used (3.6) again. Chaining (3.11) with (3.12) leads to a contradiction, which ends the proof.
We find it worth spending some words on the arguably unnecessary assumption of a Ricci lower bound.
**Remark 3.7** (On the Ricci lower bound assumption).: _We exploited the Ricci lower bound in two places. The first one is in the fundamental Generalized Compactness statement of Theorem 3.3; in fact, both in [1] and in the subsequent, improved [1], some possibly synthetic bound on the Ricci tensor is required. On the other hand, it does not seem difficult to realize that disposing, as in our case, of global Asymptotic Flatness should be enough to guarantee the availability of all that is needed to carry out the arguments._
_A maybe more serious issue arises in connection with (3.10). In fact, at least the barrier \(f_{2}\) constructed in [13] strongly depends on the bound on the Ricci curvature. However, such a bound does not exploit the Asymptotic Flatness at all, which should allow one to provide suitable barriers modelled on the Euclidean model solutions. The difficulties seem to lie in ensuring the validity of the double bounds up to \(o\) and uniformly in \(o\). Without these specifications, a similar goal was achieved for different aims in [1, Proposition 3.2]._
## 4. The Isoperimetric mass and the Isoperimetric Penrose inequality
So far, we did not mention any particular example of a \(3\)-manifold with nonnegative scalar curvature and minimal, outermost boundary. Let us stick now to the archetypal one, which will actually constitute the model for the geometric inequalities that we are going to present. The (\(3\)-dimensional) Schwarzschild manifold of (positive) mass \(\mathfrak{m}\) is the space \(\mathbb{R}^{3}\smallsetminus\{|x|<\mathfrak{m}/2\}\) endowed with the rotationally symmetric metric
\[g=\left(1+\frac{\mathfrak{m}}{2|x|}\right)^{4}\delta_{ij}\,\mathrm{d}x^{i} \otimes\mathrm{d}x^{j}. \tag{4.1}\]
This Riemannian manifold is scalar flat, the boundary \(\partial M=\{|x|=\mathfrak{m}/2\}\) is minimal, and since any other level set of \(|x|\) has constant positive mean curvature, such a boundary is also outermost: the presence of any other closed minimal surface would result in a contradiction with the Maximum Principle.
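These assertions can be checked by hand: writing \(u=1+\mathfrak{m}/(2|x|)\), the scalar curvature of the conformally flat metric \(g=u^{4}\delta\) is \(\mathrm{R}_{g}=-8u^{-5}\Delta_{\delta}u=0\), since \(u\) is \(\delta\)-harmonic away from the origin, while the coordinate sphere \(\{|x|=r\}\) has mean curvature
\[\mathrm{H}(r)=\frac{2}{u^{2}}\left(\frac{1}{r}+\frac{2u^{\prime}}{u}\right)=\frac{2\left(r-\frac{\mathfrak{m}}{2}\right)}{r^{2}\,u(r)^{3}},\]
which vanishes exactly at \(r=\mathfrak{m}/2\) and is positive for every \(r>\mathfrak{m}/2\).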
It is known from early work of Bray [1], later generalized in various directions [1, 2, 13], that the isoperimetric sets in this warped product are precisely the coordinate annuli \(E_{R}=\{\mathfrak{m}/2\leq|x|\leq R\}\). Coherently with Shi's reverse Isoperimetric inequality, and with the fact that the coordinate spheres evolve by Inverse Mean Curvature Flow, one checks that
\[|E_{R}|-\frac{|\partial E_{R}|^{\frac{3}{2}}}{6\sqrt{\pi}}\geq 0.\]
Moreover, this quantity is seen to grow like \(R^{2}\) as \(R\to+\infty\). Interestingly, one computes that the mass parameter \(\mathfrak{m}\) is in fact recovered through the following limit
\[\mathfrak{m}=\limsup_{R\to+\infty}\frac{2}{|\partial E_{R}|}\left(|E_{R}|- \frac{|\partial E_{R}|^{\frac{3}{2}}}{6\sqrt{\pi}}\right).\]
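Indeed, with \(u=1+\mathfrak{m}/(2|x|)\), a direct expansion gives \(|\partial E_{R}|=4\pi R^{2}u(R)^{4}=4\pi R^{2}+8\pi\mathfrak{m}R+O(1)\), \(|E_{R}|=\int_{\mathfrak{m}/2}^{R}4\pi r^{2}u(r)^{6}\,\mathrm{d}r=\tfrac{4\pi}{3}R^{3}+6\pi\mathfrak{m}R^{2}+O(R)\) and \(|\partial E_{R}|^{3/2}/(6\sqrt{\pi})=\tfrac{4\pi}{3}R^{3}+4\pi\mathfrak{m}R^{2}+O(R)\), so that
\[|E_{R}|-\frac{|\partial E_{R}|^{\frac{3}{2}}}{6\sqrt{\pi}}=2\pi\mathfrak{m}R^{2}+O(R)\qquad\text{and}\qquad\frac{2}{|\partial E_{R}|}\left(|E_{R}|-\frac{|\partial E_{R}|^{\frac{3}{2}}}{6\sqrt{\pi}}\right)\longrightarrow\mathfrak{m}\]
as \(R\to+\infty\).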
The above observations can serve as a motivation for the notion of Isoperimetric mass, a concept conceived by Huisken [14].
**Definition 4.1** (Isoperimetric mass).: _Let \((M,g)\) be a Riemannian \(3\)-manifold possibly with boundary, with infinite volume. Then, its Isoperimetric mass is defined as_
\[\mathfrak{m}_{\mathrm{iso}}=\sup_{(\Omega_{j})_{j\in\mathbb{N}}}\limsup_{j\to+\infty}\frac{2}{|\partial\Omega_{j}|}\left(|\Omega_{j}|-\frac{|\partial\Omega_{j}|^{\frac{3}{2}}}{6\sqrt{\pi}}\right)\]
_where the supremum is taken among all exhaustions \((\Omega_{j})_{j\in\mathbb{N}}\) consisting of domains with \(\mathscr{C}^{1,\alpha}\)-boundary._
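As a basic example, in flat \(\mathbb{R}^{3}\) the Euclidean Isoperimetric Inequality gives
\[|\Omega|-\frac{|\partial\Omega|^{\frac{3}{2}}}{6\sqrt{\pi}}\leq 0\]
for any bounded \(\Omega\) with smooth boundary, with equality on balls, so that \(\mathfrak{m}_{\mathrm{iso}}(\mathbb{R}^{3})=0\).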
It is immediately checked from the above definition that, if an exhaustion consists of isoperimetric sets, this automatically realizes the required supremum. In particular, in nonnegative scalar curvature, the following holds.
**Lemma 4.2**.: _Let \((M,g)\) be a \(3\)-dimensional \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian manifold with nonnegative scalar curvature and Ricci curvature bounded from below, endowed with a closed, outermost minimal boundary. Then, we have_
\[\mathfrak{m}_{\mathrm{iso}}=\limsup_{V\to+\infty}\frac{2}{|\partial E_{V}|} \left(|E_{V}|-\frac{|\partial E_{V}|^{\frac{3}{2}}}{6\sqrt{\pi}}\right),\]
_where \(E_{V}\) is isoperimetric of volume \(V>0\)._
We should take into account the following _caveat_. The sequences of isoperimetric sets \((E_{V_{j}})_{j\in\mathbb{N}}\) of increasing volume \(V_{j}\) we have at hand do not a priori form an exhaustion. However, the useful [11, Proposition 37] asserts that, in the \(\mathscr{C}^{0}\)-Asymptotically Flat regime, one can relax the definition of \(\mathfrak{m}_{\mathrm{iso}}\) in order to replace the requirement for \((\Omega_{j})_{j\in\mathbb{N}}\) to form an exhaustion with \(|\partial\Omega_{j}|\to+\infty\) as \(j\to+\infty\), certifying the validity of the above Lemma.
Turning back to the observation of the Schwarzschild model, as a consequence of Lemma 4.2 we immediately deduce that \(\mathfrak{m}_{\mathrm{iso}}=\mathfrak{m}\). Moreover, just by computing the area of \(\partial M=\{|x|=\mathfrak{m}/2\}\) from the expression for \(g\) in (4.1), one sees that
\[\sqrt{\frac{|\partial M|}{16\pi}}=\mathfrak{m}_{\mathrm{iso}}. \tag{4.2}\]
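Indeed, the boundary \(\partial M=\{|x|=\mathfrak{m}/2\}\) has area \(\left(1+\tfrac{\mathfrak{m}}{2\cdot\mathfrak{m}/2}\right)^{4}4\pi\left(\tfrac{\mathfrak{m}}{2}\right)^{2}=2^{4}\,\pi\mathfrak{m}^{2}=16\pi\mathfrak{m}^{2}\), so that both sides of (4.2) equal \(\mathfrak{m}\).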
Together with Mazzieri, we proved that, in the general case considered in the above Section, and actually with no Ricci lower bound required, the left-hand side of (4.2) is controlled from above by the right-hand side [1, Theorem 1.3]. Inequalities bounding a suitable notion of mass in terms of the horizon boundary are usually denominated Penrose inequalities.
**Theorem 4.3** (Isoperimetric Penrose inequality).: _Let \((M,g)\) be a \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold with nonnegative scalar curvature and closed, minimal outermost connected boundary. Then,_
\[\sqrt{\frac{|\partial M|}{16\pi}}\leq\mathfrak{m}_{\mathrm{iso}}. \tag{4.3}\]
_Moreover, equality holds if and only if \((M,g)\) is a Schwarzschild \(3\)-manifold with \(\mathfrak{m}_{\mathrm{iso}}=\mathfrak{m}\)._
Additional discussion on Penrose inequalities, including the physical motivation behind them, will become clearer in the next Section, where the Isoperimetric mass will be compared with other concepts of mass, most notably with the classical ADM mass. For the time being, we just point out that the latter is well defined only under stronger asymptotic assumptions on the metric decaying towards the flat one, and that these two notions of mass do actually coincide in these cases.
The general strategy for the proof of (4.3) follows Huisken-Ilmanen's [10], where the inequality was proved in terms of the ADM mass. It consists in exploiting again the monotonicity of the Hawking mass, this time along the weak Inverse Mean Curvature Flow of the minimal boundary. Since the mean curvature H vanishes on \(\partial M\), the initial value of this quantity becomes
\[\mathfrak{m}_{H}(\partial M)=\sqrt{\frac{|\partial M|}{16\pi}}.\]
Hence, since \(\mathfrak{m}_{H}\) is nondecreasing along the flow by Theorem 2.3, to prove (4.3) it suffices to show that
\[\lim_{t\to+\infty}\mathfrak{m}_{H}(\partial\Omega_{t})\leq\liminf_{t\to+ \infty}\frac{2}{|\partial\Omega_{t}|}\left(|\Omega_{t}|-\frac{|\partial\Omega _{t}|^{\frac{3}{2}}}{6\sqrt{\pi}}\right)\leq\mathfrak{m}_{\mathrm{iso}}. \tag{4.4}\]
This last step is completely different from Huisken-Ilmanen's asymptotic comparison with \(\mathfrak{m}_{\mathrm{ADM}}\). Notably, it is by far computationally easier, and far more general in the asymptotic assumptions needed.
**Remark 4.4**.: _In fact, we do not even require any kind of Asymptotic Flatness in order to reach (4.3); we just have to assume the existence of proper weak IMCF's. The issue of the existence of weak IMCF's is a very interesting topic per se, related to deep metric properties of the underlying manifold, see [10, 11]._
We provide a sketch of the proof of the crucial asymptotic comparison (4.4), in order to highlight the similarities with the computations leading to (3.3) above.
Proof of (4.4).: We directly assume, for simplicity, that, as \(t\to+\infty\),
\[\int_{\partial\Omega_{t}}\mathrm{H}^{2}\ \mathrm{d}\sigma\to 16\pi \tag{4.5}\]
We actually proved in [13] that this can always be assumed along the subsequence we will be interested in; otherwise \(\mathfrak{m}_{\mathrm{iso}}\) would be infinite, making (4.4) trivial. We can apply a version of de l'Hôpital's rule to get
\[\liminf_{t\to+\infty}\frac{2}{|\partial\Omega_{t}|}\left(|\Omega_{t}|-\frac{| \partial\Omega_{t}|^{\frac{3}{2}}}{6\sqrt{\pi}}\right)\geq\liminf_{t\to+ \infty}\frac{2}{|\partial\Omega_{t}|}\left(\,\int_{\partial\Omega_{t}}\frac{ 1}{\mathrm{H}}\,\mathrm{d}\sigma-\frac{|\partial\Omega_{t}|^{\frac{3}{2}}}{4 \sqrt{\pi}}\right).\]
The same application of the Hölder inequality (3.3) then gives
\[\liminf_{t\to+\infty}\frac{2}{|\partial\Omega_{t}|}\left(|\Omega_ {t}|-\frac{|\partial\Omega_{t}|^{\frac{3}{2}}}{6\sqrt{\pi}}\right) \geq\liminf_{t\to+\infty}2\left(\frac{|\partial\Omega_{t}|}{\int_ {\partial\Omega_{t}}\mathrm{H}_{t}^{2}\ \mathrm{d}\sigma}\right)^{\frac{1}{2}}\left(1- \frac{1}{4\sqrt{\pi}}\left(\,\int_{\partial\Omega_{t}}\mathrm{H}^{2}\ \mathrm{d}\sigma \right)^{\frac{1}{2}}\right)\] \[=\liminf_{t\to+\infty}2\left(\frac{|\partial\Omega_{t}|}{\int_{ \partial\Omega_{t}}\mathrm{H}_{t}^{2}\ \mathrm{d}\sigma}\right)^{\frac{1}{2}}\frac{1- \frac{1}{16\pi}\int_{\partial\Omega_{t}}\mathrm{H}^{2}\ \mathrm{d}\sigma}{1+( \frac{1}{16\pi}\int_{\partial\Omega_{t}}\mathrm{H}^{2}\ \mathrm{d}\sigma)^{\frac{1}{2}}}\] \[=\liminf_{t\to+\infty}\frac{2\mathfrak{m}_{H}(\partial\Omega_{t}) }{(\frac{1}{16\pi}\int_{\partial\Omega_{t}}\mathrm{H}^{2}\ \mathrm{d}\sigma)^{\frac{1}{2}}+\frac{1}{16\pi}\int_{\partial\Omega_{t}} \mathrm{H}^{2}\ \mathrm{d}\sigma}\,.\]
The proof is completed by resorting to (4.5) and the monotonicity of \(\mathfrak{m}_{H}\) along the weak IMCF.
The inequality (4.3) states that nonnegatively scalar curved manifolds with minimal outermost boundary always allow confining more volume in a region of a prescribed area than the Schwarzschild model would do, in the scaling invariant limit given by the isoperimetric mass. Once the notion of ADM mass is explained, this will give a direct link between isoperimetry (for large volumes) in nonnegative scalar curvature and energy/matter concepts in general relativity.
## 5. The ADM mass and equivalence between masses
In order to better understand the heuristics behind the definition of \(\mathfrak{m}_{\mathrm{ADM}}\) below, we start by recalling why manifolds fulfilling the assumptions considered above are so natural in the context of General Relativity. This popular, groundbreaking physical theory is formulated in the context of Lorentzian Geometry, and specifically in a \(4\)-manifold \((L,\mathfrak{g})\) satisfying the Einstein Field Equations
\[\mathrm{Ric}_{\mathfrak{g}}-\frac{1}{2}\mathrm{R}_{\mathfrak{g}}\,\mathfrak{g}=8\pi\,\mathrm{T}, \tag{5.1}\]
where \(T\) is named _stress-energy_ tensor and should be thought of as a datum. A largely accepted assumption on \(\mathrm{T}\) is the _dominant energy condition_, amounting simply to \(\mathrm{T}(V,V)\geq 0\) for any
_timelike_ \(V\), that is \(\mathfrak{g}(V,V)<0\). Consider now a _spacelike_ \(3\)-hypersurface \((M,g)\), with \(g\) induced by \(\mathfrak{g}\). By spacelike, we just mean \(g(X,X)>0\) for any \(X\in T_{p}M\), for any \(p\in M\). We also consider the simplest case of so-called _time symmetry_, consisting in the vanishing of the second fundamental form of the immersion \(M\hookrightarrow L\). By means of the Gauss-Codazzi equations, (5.1) coupled with the dominant energy condition \(\mathrm{T}\geq 0\) induces on \((M,g)\) the identity
\[\mathrm{R}_{g}=16\pi\rho\geq 0, \tag{5.2}\]
where \(\rho=\mathrm{T}(V,V)\) for some timelike \(V\), so that the inequality follows from the dominant energy condition. We address the reader to [1, Section 1.1] for a detailed derivation of (5.2).
The _event horizon_ of a _black hole_ in \((L,\mathfrak{g})\) manifests in \((M,g)\) as a minimal outermost boundary. Finally, when modeling an _isolated gravitational system_, it becomes natural to assume some kind of Asymptotic Flatness, that in this note was only at \(\mathscr{C}^{0}\)-level, at least so far. Physically speaking, this condition says that the gravitational field is not influenced by the presence of some mass "at infinity". We address the interested reader to [1, 10] for a comprehensive treatment of the relativistic concepts involved.
### The ADM mass
To get an idea of the ADM mass, we briefly stick with the more familiar Newtonian framework. Suppose we have a mass density \(\rho\) on a time snapshot of an isolated gravitational system, represented by \((M,g)\). Newton's formulation models the gravitational field by a function \(V\) called _gravitational potential_, which satisfies the relation \(\Delta V=4\pi\rho\). Hence, using the divergence theorem, one can reconstruct the total mass of the system by observing the effects of the gravitational potential at infinity, since
\[\mathfrak{m}=\int_{M}\rho\,\mathrm{d}\mu=\int_{M}\frac{\Delta V}{4\pi}\,\mathrm{d}\mu=\lim_{r\to+\infty}\frac{1}{4\pi}\int_{\partial B_{r}}\frac{\partial V}{\partial r}\,\mathrm{d}\sigma.\]
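For instance, for the potential \(V=-\mathfrak{m}/|x|\) of a point mass \(\mathfrak{m}\) sitting at the origin, one has \(\partial V/\partial r=\mathfrak{m}/r^{2}\), and the flux integral above equals \(\tfrac{1}{4\pi}\,4\pi r^{2}\,\tfrac{\mathfrak{m}}{r^{2}}=\mathfrak{m}\) on every \(\partial B_{r}\).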
On the other hand, we just recalled that, in Einstein's formulation, the gravitational field is modelled by a metric \(g\) on \(M\) which is only constrained (in the time-symmetric case) by the equation \(\mathrm{R}_{g}=16\pi\rho\). One would like to compute the system's total mass simply by knowing the metric in this case as well. A first idea may be integrating the quantity \(\mathrm{R}_{g}/16\pi\) as we did in the Newtonian case. However, this approach has at least two main issues. The first one is that, even in a special paradigmatic case such as Schwarzschild's, the total mass would be zero, which does not correspond to what we expect. The second problem is that the superposition principle does not hold, since the scalar curvature is not a linear operator of \(g\). In the case \((M,g)\) is close to the Euclidean flat space, one could try to circumvent this by replacing the scalar curvature with its first-order linearization, that is
\[\mathrm{R}_{g}\approx\mathrm{R}_{\delta}+\mathrm{DR}|_{\delta}(g-\delta)= \frac{\mathrm{d}}{\mathrm{d}\varepsilon}\mathrm{R}(\delta+\varepsilon(g- \delta))\big{|}_{\varepsilon=0}\]
since \(\mathrm{R}_{\delta}=0\). Observe that, in the above formula, \(h=\delta+\varepsilon(g-\delta)\) is still a metric for \(|\varepsilon|\) small enough, and thus it makes sense to compute the scalar curvature of \(h\), which is
\[\mathrm{R}_{h} =h^{ij}(\partial_{k}H^{k}_{ij}-\partial_{i}H^{k}_{kj}+H^{k}_{ij} H^{s}_{ks}-H^{k}_{is}H^{s}_{jk})\] \[=\frac{\varepsilon}{2}\delta^{ij}\delta^{ks}\partial_{k}(- \partial_{s}g_{ij}+\partial_{j}g_{is}+\partial_{i}g_{js})-\frac{\varepsilon} {2}\delta^{ij}\delta^{ks}\partial_{i}(-\partial_{s}g_{kj}+\partial_{j}g_{ks}+ \partial_{k}g_{js})+O_{1}(\varepsilon^{2})\] \[=\frac{\varepsilon}{2}\delta^{ij}\delta^{ks}\partial_{k}(- \partial_{s}g_{ij}+2\partial_{j}g_{is})-\frac{\varepsilon}{2}\delta^{ij} \delta^{ks}\partial_{i}\partial_{j}g_{ks}+O_{1}(\varepsilon^{2})\] \[=\varepsilon\delta^{ij}\delta^{ks}(\partial_{k}\partial_{i}g_{js }-\partial_{s}\partial_{k}g_{ij})+O_{1}(\varepsilon^{2})\]
where \(H\) are the Christoffel symbols of \(h\). Hence,
\[\mathrm{R}_{g}\approx\delta^{ij}\delta^{ks}(\partial_{k}\partial_{i}g_{js}- \partial_{k}\partial_{s}g_{ij}).\]
Using the divergence theorem one has
\[\frac{1}{16\pi}\int_{M}\mathrm{R}_{g}\,\mathrm{d}\mu_{g} \approx\frac{1}{16\pi}\int_{M}\delta^{ij}\delta^{ks}(\partial_{k} \partial_{i}g_{js}-\partial_{k}\partial_{s}g_{ij})\,\mathrm{d}\mu_{\delta} \tag{5.3}\] \[=\lim_{r\to+\infty}\frac{1}{16\pi}\int_{\partial B_{r}}\delta^{ ij}(\partial_{i}g_{jk}-\partial_{k}g_{ij})\frac{x^{k}}{|x|}\,\mathrm{d}\sigma_{\delta}\] \[=\lim_{r\to+\infty}\frac{1}{16\pi}\int_{\partial B_{r}}g^{ij}( \partial_{i}g_{jk}-\partial_{k}g_{ij})\nu^{k}\,\mathrm{d}\sigma_{g}.\]
The quantity appearing on the right-hand side is what will be defined as the Arnowitt-Deser-Misner mass \(\mathfrak{m}_{\mathrm{ADM}}\)[1]. If the metric \(g\) is sufficiently close to the flat metric, the above computation shows that the integral of the scalar curvature can be approximated by such a mass. Clearly, the two quantities are not, in general, the same, since the first one depends on the global behaviour of the metric, while the second one depends only on its behaviour at large distances. Now one can take the definition of the ADM mass and compare it to the integral of the scalar curvature. Consider then the vector field
\[Y^{k}=g^{kl}g^{ij}(\partial_{i}g_{jl}-\partial_{l}g_{ij})=g^{ij}\Gamma^{k}_{ij }-g^{ik}\Gamma^{j}_{ij}\]
and observe that its divergence computed with respect to \(g\) is
\[\partial_{k}Y^{k}+\Gamma^{l}_{lk}Y^{k} =\partial_{k}g^{ij}\Gamma^{k}_{ij}+g^{ij}\partial_{k}\Gamma^{k}_{ ij}-\partial_{k}g^{ik}\Gamma^{j}_{ij}-g^{ik}\partial_{k}\Gamma^{j}_{ij}+g^{ij} \Gamma^{l}_{lk}\Gamma^{k}_{ij}-g^{ik}\Gamma^{l}_{lk}\Gamma^{j}_{ij}\] \[=\mathrm{R}_{g}+\partial_{k}g^{ij}\Gamma^{k}_{ij}-\partial_{k}g^{ ik}\Gamma^{j}_{ij}.\]
Assume now that for some \(R\) there exists \(\mathrm{C}>0\) such that
\[\mathrm{C}^{-1}\delta\leq g\leq\mathrm{C}\delta\ \text{ in }M\smallsetminus B_{R},\qquad\int_{M\smallsetminus B_{R}}|\partial g|^{2}\,\mathrm{d}\mu_{\delta}<+\infty. \tag{5.4}\]
We want to show that the limit defining the ADM mass exists. Appealing again to the divergence theorem, and using that \(\mathrm{R}_{g}\geq 0\) while the remaining terms in \(\mathrm{div}\,Y\) are quadratic in \(\partial g\), we have
\[\int_{\partial B_{s}}\left\langle Y\,|\,\nu\right\rangle\mathrm{d}\sigma_{g}-\int_{\partial B_{r}}\left\langle Y\,|\,\nu\right\rangle\mathrm{d}\sigma_{g}=\int_{B_{s}\smallsetminus B_{r}}\mathrm{div}\,Y\,\mathrm{d}\mu_{g}\geq-\mathrm{C}\int_{B_{s}\smallsetminus B_{r}}|\partial g|^{2}\,\mathrm{d}\mu_{\delta}\]
for every \(R\leq r<s\). Hence taking the inferior limit as \(s\to+\infty\) we have
\[\liminf_{s\to+\infty}\int_{\partial B_{s}}\left\langle Y\,|\,\nu\right\rangle\mathrm{d}\sigma_{g}\geq\int_{\partial B_{r}}\left\langle Y\,|\,\nu\right\rangle\mathrm{d}\sigma_{g}-\mathrm{C}\int_{M\smallsetminus B_{r}}|\partial g|^{2}\,\mathrm{d}\mu_{\delta}.\]
Taking now the superior limit as \(r\to+\infty\), in view of our assumptions we get
\[\liminf_{s\to+\infty}\int_{\partial B_{s}}\left\langle Y\,|\,\nu\right\rangle \mathrm{d}\sigma_{g}\geq\limsup_{r\to+\infty}\int_{\partial B_{r}}\left\langle Y \,|\,\nu\right\rangle\mathrm{d}\sigma_{g},\]
proving the existence of the limit. Moreover, since
\[\left|16\pi\,\mathfrak{m}_{\mathrm{ADM}}-\int_{M\smallsetminus B_{R}}\mathrm{R}_{g}\,\mathrm{d}\mu_{g}\right|\leq\int_{\partial B_{R}}|Y|\,\mathrm{d}\sigma_{g}+\mathrm{C}\int_{M\smallsetminus B_{R}}|\partial g|^{2}\,\mathrm{d}\mu_{\delta}\]
the ADM mass is finite if and only if the scalar curvature is integrable on \(M\).
In order to complete the presentation of the ADM mass, we should ensure its independence of the chosen asymptotic chart at infinity, so that the rightmost-hand side of (5.3) provides a well-posed definition. This has been proved, slightly strengthening the conditions in (5.4), by Bartnik [1] and Chrusciel [1], that is, assuming \(g\) to be \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat, \(\tau>1/2\).
**Definition 5.1**.: _Let \((M,g)\) be a \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat Riemannian manifold, for \(\tau>1/2\). Then, the ADM mass \(\mathfrak{m}_{\mathrm{ADM}}\) is defined as_
\[\mathfrak{m}_{\mathrm{ADM}}=\lim_{r\to+\infty}\frac{1}{16\pi}\int_{\partial B_ {r}}g^{ij}(\partial_{i}g_{jk}-\partial_{k}g_{ij})\nu^{k}\,\mathrm{d}\sigma_{g}. \tag{5.5}\]
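As a consistency check, for the Schwarzschild metric (4.1) one has \(g_{ij}=u^{4}\delta_{ij}\) with \(u=1+\mathfrak{m}/(2|x|)\), so that, to leading order, \(g^{ij}(\partial_{i}g_{jk}-\partial_{k}g_{ij})\nu^{k}=(4\partial_{k}u-12\partial_{k}u)\nu^{k}+O(|x|^{-3})=-8\,\partial_{r}u+O(|x|^{-3})=4\mathfrak{m}/|x|^{2}+O(|x|^{-3})\). Integrating over \(\partial B_{r}\) and dividing by \(16\pi\), one recovers \(\mathfrak{m}_{\mathrm{ADM}}=\mathfrak{m}\), consistently with the value of the Isoperimetric mass computed before.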
It has actually been shown [10] that the requirement \(\tau>1/2\) is sharp, since one can provide \(\mathscr{C}^{1}_{1/2}\)-Asymptotically Flat metrics such that the right-hand side in (5.5) gives arbitrary values according to the selected chart at infinity.
### Equivalence between masses
A large amount of literature has dealt, so far, with geometric inequalities involving the ADM mass. Most notable for us are Schoen-Yau's Positive Mass Theorem [11, 12], ensuring under suitable decay conditions on the metric that \(\mathfrak{m}_{\mathrm{ADM}}\geq 0\) in nonnegatively scalar curved complete manifolds, and that it vanishes only on flat \(\mathbb{R}^{3}\). Solving a special case of a conjecture of Penrose [13], Huisken-Ilmanen [15] sharpened such a result in the connected horizon boundary case, obtaining the (Riemannian) Penrose Inequality
\[\sqrt{\frac{|\partial M|}{16\pi}}\leq\mathfrak{m}_{\mathrm{ADM}}. \tag{5.6}\]
With the description given above of \(\mathfrak{m}_{\mathrm{ADM}}\) as an actual (candidate) physical mass, the validity of the (Riemannian) Penrose inequality should appear more natural. In fact, the Schwarzschild metric, where the inequality is checked to hold as an equality, represents a space where the whole matter is shielded by the horizon \(\partial M\). Consequently, in a suitably restricted class, any other space should only increase the global mass, in relation with the area of the boundary, as (5.6) actually states.
Huisken-Ilmanen's proof of (5.6) exploits their Theorem 2.3 coupled with an involved asymptotic analysis resulting in \(\lim_{t\to+\infty}\mathfrak{m}_{H}(\partial\Omega_{t})\leq\mathfrak{m}_{\mathrm{ADM}}\) along the weak IMCF of the horizon boundary. Bray [1], with an amazing completely different proof based on an _ad hoc_ conformal flow of metrics, removed the assumption of connected boundary. In both approaches, decay assumptions strictly stronger than those required by Definition 5.1 to give a well-posed definition of \(\mathfrak{m}_{\mathrm{ADM}}\) are imposed, namely \(\mathscr{C}^{1}_{1}\)-Asymptotically Flat coupled with \(\mathrm{Ric}\geq-\mathrm{C}{|x|}^{-2}\) in [15] and \(\mathscr{C}^{2}_{1}\)-Asymptotically Flat in [1].
The question about the validity of (5.6) in the optimal \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat regime then arises naturally, and has been answered in collaboration with Mazzieri [1, Theorem 1.1]. Having then at hand the Isoperimetric Penrose Inequality (4.3) and (5.6), another even more natural (and more general) question regards the relation between \(\mathfrak{m}_{\mathrm{iso}}\) and \(\mathfrak{m}_{\mathrm{ADM}}\). Sharpening the earlier result envisioned by Huisken [15] and rigorously obtained (and strengthened) by Jauregui-Lee [10], we also obtain [1, Theorem 4.13] the stronger
\[\mathfrak{m}_{\mathrm{iso}}=\mathfrak{m}_{\mathrm{ADM}} \tag{5.7}\]
in \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat Riemannian manifolds with nonnegative scalar curvature and horizon boundary, for \(\tau>1/2\). Observe that such an identity also provides a completely different proof of the well-posedness of \(\mathfrak{m}_{\mathrm{ADM}}\) in this asymptotic regime, first due to Bartnik and Chrusciel [1, 1]. In fact, the Isoperimetric mass is manifestly independent of the choice of the coordinate chart.
The proofs of both the Penrose Inequality (5.6) and of (5.7) under optimal decay assumptions in [1] are based on new insights about the asymptotic behaviour of the weak IMCF. The sharpening of the decay assumptions in the Penrose inequality substantially stems from the asymptotic comparison (4.4) and from its relation with the asymptotic behaviour of harmonic functions, which takes advantage of the potential-theoretic version of the Hawking mass introduced in
[1]. The identification of the two masses (5.7) also fundamentally relies on a combination with the Huisken-Jauregui-Lee argument [14, 15] yielding, through a monotonicity formula along a (modified) Mean Curvature Flow, a direct link between upper bounds on Hawking masses and upper bounds on the Isoperimetric mass.
In what follows, we outline an alternative strategy devised by Chodosh-Eichmair-Shi-Yu [13] that can be useful to prove (5.7), relating to the isoperimetric sets discussed before.
Before going on, we point out that the inequality
\[\mathfrak{m}_{\mathrm{ADM}}\leq\mathfrak{m}_{\mathrm{iso}}\]
can be obtained in the optimal asymptotic regime through a direct yet nontrivial computation, which has been carried out in [10], and so we will deal with the reverse one only. A key observation is the following, deduced from the proof of [13, Theorem C.1]. When isoperimetric sets are known to exist, as in the setting of Theorem 3.6, they do realize, in the limit of infinite volume, the Isoperimetric mass, see Lemma 4.2, and since each one has obviously constant mean curvature, the asymptotic comparison argument carried out in [1] can be reversed in some sense, allowing one to estimate the isoperimetric mass _from above_ with the Hawking mass. This inspiring idea is expressed with the following computation.
\[\mathfrak{m}_{\mathrm{iso}} =\limsup_{V\to+\infty}\mathfrak{m}_{\mathrm{iso}}(E_{V})=\limsup _{V\to+\infty}\frac{2}{I(V)}\left(V-\frac{I(V)^{\frac{3}{2}}}{6\sqrt{\pi}}\right) \tag{5.8}\] \[\leq\limsup_{V\to+\infty}\frac{2}{I^{\prime}(V)}\left(1-\frac{ \sqrt{I(V)}I^{\prime}(V)}{4\sqrt{\pi}}\right)\] \[=\limsup_{V\to+\infty}\frac{2\sqrt{I(V)}\left(1-\frac{I(V)I^{ \prime}(V)^{2}}{16\pi}\right)}{\sqrt{I(V)}I^{\prime}(V)\left(1+\frac{\sqrt{I(V )}I^{\prime}(V)}{4\sqrt{\pi}}\right)}\] \[=\limsup_{V\to+\infty}\frac{32\pi\,\mathfrak{m}_{H}(\partial E_{ V})}{4\sqrt{\pi}I^{\prime}(V)\sqrt{I(V)}+I^{\prime}(V)^{2}I(V)},\]
where \(I(V)=|\partial E_{V}|\); recall that, at points of differentiability, \(I^{\prime}(V)\) coincides with the constant mean curvature of \(\partial E_{V}\), which is how the Hawking mass enters the last line. If the Hawking mass of the isoperimetric sets of large volume does satisfy the sharp bound \(\mathfrak{m}_{H}(\partial E_{V})\leq\mathfrak{m}_{\mathrm{ADM}}\), (5.8) in fact allows one to conclude the desired bound \(\mathfrak{m}_{\mathrm{iso}}\leq\mathfrak{m}_{\mathrm{ADM}}\). By our work with Mazzieri [1], in the optimal decay case this is fulfilled if the isoperimetric sets of large volume have connected boundaries.
**Remark 5.2**.: _Starting from the next proposition and for all results up to the end of Section 5, we will drop the outermost condition on the boundary. This is due to known properties of Asymptotically Flat \(3\)-manifolds of nonnegative scalar curvature, see the recapitulatory [1, Lemma 2.8], asserting in particular that, if the manifold contains minimal surfaces, then one can find a minimal, outermost \(\Sigma\) enclosing \(\partial M\) and the analysis is applied on the new manifold with minimal outermost boundary \(\Sigma\). The masses are obviously the same as those of the original manifold. If \((M,g)\) possesses no closed minimal surfaces, and is in particular boundaryless, then one can safely work in such complete manifold._
**Proposition 5.3**.: _Let \((M,g)\) be a complete \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat Riemannian \(3\)-manifold, \(\tau>1/2\), with nonnegative scalar curvature and (possibly empty) closed, minimal boundary. Assume that there exists \(V_{0}>0\) such that, for any \(V\geq V_{0}\), there exists an isoperimetric set \(E_{V}\) of volume \(V\) with connected boundary. Then,_
\[\mathfrak{m}_{\mathrm{iso}}=\mathfrak{m}_{\mathrm{ADM}}. \tag{5.9}\]
Proof.: As already pointed out, we just have to justify \(\mathfrak{m}_{\mathrm{iso}}\leq\mathfrak{m}_{\mathrm{ADM}}\). Assume \(\mathfrak{m}_{\mathrm{ADM}}<+\infty\), otherwise the proposition trivially follows. Assume also that \(\mathfrak{m}_{\mathrm{iso}}>0\), otherwise [1, Theorem 1.3] implies that \((M,g)\) is isometric to \(\mathbb{R}^{3}\) and the result trivially holds. Let \((V_{k})_{k\in\mathbb{N}}\) be a sequence realising the superior limit in (5.8). Since \(\mathfrak{m}_{\mathrm{iso}}>0\), \(\mathfrak{m}_{H}(\partial E_{V_{k}})\geq 0\) for large \(k\). In particular, \(I^{\prime}(V_{k})^{2}I(V_{k})\leq 16\pi\). Assume by contradiction that there exists a not relabeled subsequence such that \(I^{\prime}(V_{k})^{2}I(V_{k})\leq 16\pi-\varepsilon\) for some \(\varepsilon>0\). Then, we would have
\[\frac{\varepsilon}{32\pi}\sqrt{I(V_{k})}\leq\mathfrak{m}_{H}(\partial E_{V_{k }})\leq\mathfrak{m}_{\mathrm{ADM}}, \tag{5.10}\]
where the last inequality follows as in [1, Theorem 1.1], giving the contradiction. In particular, \(I^{\prime}(V_{k})^{2}I(V_{k})\to 16\pi\) as \(k\to+\infty\). Plugging this piece of information into (5.8) we get
\[\mathfrak{m}_{\mathrm{iso}}\leq\limsup_{k\to+\infty}\mathfrak{m}_{H}(\partial E _{V_{k}})\leq\mathfrak{m}_{\mathrm{ADM}}, \tag{5.11}\]
proving (5.9). We stress the fact that the connectedness of \(\partial E_{V_{k}}\) is required in order to infer the rightmost bound in (5.10) and in (5.11).
The authors of [11] can in fact count on connectedness, as they are working in \(\mathscr{C}_{1}^{2}\)-Asymptotically Flat manifolds, where isoperimetric sets of large volume are close to coordinate balls, thanks to the work of [13] (actually valid in \(\mathscr{C}_{\tau}^{2}\)-Asymptotically Flat manifolds, for \(\tau>1/2\)), which importantly weakens the assumptions of the earlier works of Eichmair-Metzger [1, 2], where the manifolds were assumed to be asymptotic to Schwarzschild. Moreover, in [11] the authors also rely on Huisken-Ilmanen's bound on the Hawking mass, and thus their analysis could be pushed to \(\mathscr{C}_{\tau}^{2}\)-Asymptotically Flat manifolds for \(\tau>1/2\) coupled with \(\mathrm{Ric}\geq-\mathrm{C}|x|^{-2}\). Through the ADM-Penrose inequality under optimal decay assumptions [1, Theorem 1.1] the assumption on the Ricci curvature can be dropped. Actually, in [11], the authors also take advantage of an _a priori_ knowledge of \(I(V)\) and \(I^{\prime}(V)\) as \(V\to+\infty\). The proof of Proposition 5.3 actually shows that this is not needed.
### Nonlinear masses
In this last section, we briefly introduce and discuss the nonlinear potential-theoretic counterpart of the Isoperimetric mass. To this end, we first introduce the following \(p\)-capacity of a compact set \(K\subset M\),
\[\mathfrak{c}_{p}(K)=\inf\Biggl{\{}\frac{1}{4\pi}\left(\frac{p-1}{3-p}\right)^ {p-1}\int_{M\smallsetminus K}|\mathrm{D}v|^{p}\,\mathrm{d}\mu\,\Bigg{|}\,v\in \mathscr{C}_{c}^{\infty}(M),\,v\geq 1\text{ on }K\Biggr{\}}.\]
As a consequence of the remarkable [14, Theorem 3.6], \(\mathscr{C}^{0}\)-Asymptotically Flat manifolds are \(p\)-nonparabolic, in particular implying that the above quantity is positive for any \(K=\partial\Omega\in\mathscr{C}^{1,\alpha}\), if \(1<p<3\). Inspired by the Isoperimetric mass, one can define the \(p\)-Isocapacitary mass as follows.
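For orientation, with this normalization a direct computation in flat \(\mathbb{R}^{3}\), using the radial capacitary potential \(v(x)=\min\{1,(R/|x|)^{\frac{3-p}{p-1}}\}\), gives
\[\mathfrak{c}_{p}(\overline{B}_{R})=R^{3-p},\]
so that \(\mathfrak{c}_{p}(\partial\Omega)^{\frac{1}{3-p}}\) plays the role of a capacitary radius; this explains the exponents appearing in the definition below.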
**Definition 5.4**.: _Let \((M,g)\) be a Riemannian \(3\)-manifold possibly with boundary, with infinite volume. Then, its \(p\)-Isocapacitary mass is defined as_
\[\mathfrak{m}_{\mathrm{iso}}^{(p)}=\sup_{(\Omega_{j})_{j\in\mathbb{N}}}\limsup_ {j\to+\infty}\frac{1}{2p\pi\mathfrak{c}_{p}(\partial\Omega_{j})^{\frac{2}{3-p }}}\left(|\Omega_{j}|-\frac{4\pi}{3}\mathfrak{c}_{p}(\partial\Omega_{j})^{ \frac{3}{3-p}}\right).\]
_where the supremum is taken among all exhaustions \((\Omega_{j})_{j\in\mathbb{N}}\) consisting of domains with \(\mathscr{C}^{1,\alpha}\)-boundary._
The above capacitary notion of mass has recently been considered for \(p=2\) by Jauregui [15] and, together with Mazzieri, in [1] for \(1<p<3\). After the discussion that took place in the last sections, one could naturally wonder about the relation of \(\mathfrak{m}_{\mathrm{iso}}^{(p)}\) with the Isoperimetric mass and with \(\mathfrak{m}_{\mathrm{ADM}}\). A first answer in this direction is the following, holding in the generality of \(\mathscr{C}^{0}\)-Asymptotically Flat \(3\)-manifolds.
**Proposition 5.5**.: _Let \((M,g)\) be a \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold. Then,_
\[\mathfrak{m}_{\mathrm{iso}}^{(p)}\leq\mathfrak{m}_{\mathrm{iso}}.\]
This is [1, Proposition 5.6]. Its proof is classical in nature and can be compared to the classical derivation of Isocapacitary inequalities from the Isoperimetric one (see [13] or, more directly, the arguments in [10] and in the proof of [1, Theorem 4.1]). More precisely, it builds on the Polya-Szego inequality, applied to the sharp asymptotic isoperimetric inequality
\[(6\sqrt{\pi}|\Omega|)^{\frac{2p}{3}}\leq|\partial\Omega|^{p}+2p\sqrt{\pi}( \mathfrak{m}_{\mathrm{iso}}+o(1))|\partial\Omega|^{\frac{2p-1}{2}},\]
that is actually a direct consequence of the definition of the Isoperimetric mass.
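Indeed, at least along families with \(|\partial\Omega_{j}|\to+\infty\), which suffice to compute \(\mathfrak{m}_{\mathrm{iso}}\) in this setting as recalled after Lemma 4.2, the very definition of the Isoperimetric mass yields
\[|\Omega_{j}|\leq\frac{|\partial\Omega_{j}|^{\frac{3}{2}}}{6\sqrt{\pi}}+\frac{\mathfrak{m}_{\mathrm{iso}}+o(1)}{2}\,|\partial\Omega_{j}|,\]
and raising this to the power \(2p/3\) and expanding for large \(|\partial\Omega_{j}|\) gives the displayed inequality.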
The reverse inequality is obtained in [1] in the setting of \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat Riemannian \(3\)-manifolds, \(\tau>1/2\), with nonnegative scalar curvature. As these assumptions should suggest, we strongly rely on the well-posedness of the ADM mass, appearing in an asymptotic computation along geodesic spheres as that in [13]. The overall argument consists in a strengthening of the one proposed for [10, Theorem 5] and in the application of a suitable, new generalization of a capacitary estimate of Bray-Miao [1]. This last estimate reads
\[\mathfrak{c}_{p}(\partial\Omega)\leq\left(\frac{|\partial\Omega|}{4\pi}\right) ^{\frac{3-p}{2}}\!\!\!_{2}F_{1}\left(\frac{1}{2},\frac{3-p}{p-1},\frac{2}{p-1} ;1-\frac{1}{16\pi}\int_{\partial\Omega}\mathrm{H}^{2}\ \mathrm{d}\sigma\right)^{-(p-1)}\!\!\!\!, \tag{5.12}\]
where \({}_{2}F_{1}\) denotes the hypergeometric function, and it has been provided for connected \(\partial\Omega\) homologous to \(\partial M\) in [1, Proposition 2.14]. The only feature of \({}_{2}F_{1}\) that we will need will be recalled below. Summing up, and crucially exploiting (5.7), we conclude in [1, Theorem 1.3] that in \(\mathscr{C}^{1}_{\tau}\)-Asymptotically Flat \(3\)-manifolds with nonnegative scalar curvature and closed minimal boundary
\[\mathfrak{m}_{\mathrm{iso}}^{(p)}=\mathfrak{m}_{\mathrm{iso}}=\mathfrak{m}_{\mathrm{ADM}}\]
for any \(1<p\leq 2\). This was already known for \(p=2\) only under the stronger assumption of harmonic flatness at infinity [10, Corollary 8], a locution standing for \(g=u^{4}\delta\) outside a suitable compact set, where \(u\) is harmonic with respect to the Euclidean Laplacian.
Here, we observe that the same assumption of connectedness of isoperimetric sets considered in Proposition 5.3 allows one to get the identification \(\mathfrak{m}_{\mathrm{iso}}^{(p)}=\mathfrak{m}_{\mathrm{iso}}\) for \(1<p\leq 2\), in \(\mathscr{C}^{0}\)-Asymptotically Flat manifolds with nonnegative scalar curvature and minimal boundary. This is particularly interesting due to the fact that, in this asymptotic regime, the notion of \(\mathfrak{m}_{\mathrm{ADM}}\) is not a priori available, and so one cannot a priori go through computations involving its expression.
**Proposition 5.6**.: _Let \((M,g)\) be a complete \(\mathscr{C}^{0}\)-Asymptotically Flat Riemannian \(3\)-manifold with nonnegative scalar curvature and (possibly empty) closed, minimal boundary. Assume that there exists \(V_{0}>0\) such that, for any \(V\geq V_{0}\), there exists an isoperimetric set \(E_{V}\) of volume \(V\) with connected boundary. Then,_
\[\mathfrak{m}_{\mathrm{iso}}=\mathfrak{m}_{\mathrm{iso}}^{(p)} \tag{5.13}\]
_for every \(1<p\leq 2\)._
Proof.: The inequality \(\mathfrak{m}_{\mathrm{iso}}\geq\mathfrak{m}_{\mathrm{iso}}^{(p)}\) has already been pointed out to hold more in general, so we focus on the reverse \(\mathfrak{m}_{\mathrm{iso}}\leq\mathfrak{m}_{\mathrm{iso}}^{(p)}\).
For every \(V>0\) large enough, denote \(E_{V}\) the isoperimetric set of volume \(V\). Assume that \(\mathfrak{m}_{\mathrm{iso}}^{(p)}<+\infty\). Assume also that \(\mathfrak{m}_{\mathrm{iso}}>0\), otherwise [1, Theorem 1.3] implies that \((M,g)\) is
isometric to \(\mathbb{R}^{3}\) and the theorem trivially holds. Along a sequence \((V_{k})_{k\in\mathbb{N}}\) realising the superior limit in (5.8), \(\mathfrak{m}_{H}(\partial E_{V_{k}})\geq 0\). Evolve \(E_{V_{k}}\) using the weak IMCF and denote by \(E_{V_{k}}^{t}\) its sublevels. We briefly point out that, since the isoperimetric sets are not known to be homologous to the boundary, the evolving hypersurfaces may in principle touch the boundary. If this happens, one should consider the weak IMCF _with jumps_, that is, the modification described in [11, Section 6]. By (4.4) and the Geroch monotonicity formula Theorem 2.3, we have
\[\mathfrak{m}_{H}(\partial E_{V_{k}})\leq\lim_{t\to+\infty}\mathfrak{m}_{H}( \partial E_{V_{k}}^{t})\leq\mathfrak{m}_{\text{iso}}. \tag{5.14}\]
Observe now that \(\mathfrak{m}_{H}(\partial E_{V_{k}}^{t})\geq 0\) and that, for large \(t\), \(E_{V_{k}}^{t}\) has a connected boundary which is homologous to \(\partial M\). Applying (5.12), we have
\[\mathfrak{c}_{p}(\partial E_{V_{k}}^{t})\leq\left(\frac{|\partial E_{V_{k}}^{t}|}{4\pi}\right)^{\frac{3-p}{2}}{}_{2}F_{1}\left(\frac{1}{2},\frac{3-p}{p-1},\frac{2}{p-1};1-\frac{1}{16\pi}\int_{\partial E_{V_{k}}^{t}}\mathrm{H}^{2}\ \mathrm{d}\sigma\right)^{-(p-1)}. \tag{5.15}\]
Appealing again to (5.12) and using Taylor's expansion of \({}_{2}F_{1}\) around \(0\), we get
\[\mathfrak{c}_{p}(\partial\Omega_{j}) \leq\left(\frac{|\partial\Omega_{j}|}{4\pi}\right)^{\frac{3-p}{2}}{ }_{2}F_{1}\left(\frac{1}{2},\frac{3-p}{p-1},\frac{2}{p-1};\frac{4\mathfrak{m}_ {\text{iso}}\sqrt{\pi}}{\sqrt{|\partial\Omega_{j}|}}(1+o(1))\right)^{-(p-1)} \tag{5.16}\] \[\leq\left(\frac{|\partial\Omega_{j}|}{4\pi}\right)^{\frac{3-p}{2} }\left(1+\frac{(3-p)\sqrt{\pi}}{\sqrt{|\partial\Omega_{j}|}}\mathfrak{m}_{ \text{iso}}(1+o(1))\right)^{-(p-1)}\] \[=\left(\frac{|\partial\Omega_{j}|}{4\pi}\right)^{\frac{3-p}{2}} \left(1-\frac{(p-1)(3-p)\sqrt{\pi}}{\sqrt{|\partial\Omega_{j}|}}\mathfrak{m}_ {\text{iso}}(1+o(1))\right).\]
We point out now that the \(p\)-Isocapacitary mass can in fact be computed through the equivalent formulation [1, Proposition 5.2]
\[\mathfrak{m}_{\text{iso}}^{(p)}=\sup_{(\Omega_{j})_{j\in\mathbb{N}}}\limsup_{j\to+\infty}\frac{2\mathfrak{c}_{p}(\partial\Omega_{j})^{\frac{p-2}{3-p}}}{p(3-p)}\left(\left(\frac{3|\Omega_{j}|}{4\pi}\right)^{\frac{3-p}{3}}-\mathfrak{c}_{p}(\partial\Omega_{j})\right).\]
Thus, (5.16) implies
\[\mathfrak{m}_{\text{iso}}^{(p)} \geq\limsup_{j\to+\infty}\mathfrak{m}_{\text{iso}}^{(p)}(\Omega_ {j})\geq\limsup_{j\to+\infty}\frac{2\mathfrak{c}_{p}(\partial\Omega)^{\frac{p- 2}{3-p}}}{p(3-p)}\left(\left(\frac{3|\Omega_{j}|}{4\pi}\right)^{\frac{3-p}{3}}- \mathfrak{c}_{p}(\partial\Omega_{j})\right)\] \[\geq\frac{p-1}{p}\mathfrak{m}_{\text{iso}}+\limsup_{j\to+\infty} \frac{2|\partial\Omega_{j}|^{\frac{p-2}{2}}}{p(3-p)(4\pi)^{\frac{p-2}{2}}} \left(\left(\frac{3|\Omega_{j}|}{4\pi}\right)^{\frac{3-p}{3}}-\left(\frac{| \partial\Omega_{j}|}{4\pi}\right)^{\frac{3-p}{2}}\right)\] \[=\frac{p-1}{p}\mathfrak{m}_{\text{iso}}+\limsup_{j\to+\infty} \frac{1}{p}\mathfrak{m}_{\text{iso}}(\Omega_{j})=\mathfrak{m}_{\text{iso}},\]
completing the proof.
For the same reasons pointed out after the proof of (5.9), the above provides an actual proof of (5.13) in \(\mathscr{C}_{\tau}^{2}\)-Asymptotically Flat manifolds with nonnegative scalar curvature and minimal boundary, when \(\tau>1/2\).
## 6. Questions and open problems
In this last Section, we collect some natural problems and questions in connection with the topics touched on so far.
1. _Connectedness of Isoperimetric sets._ As shown above, knowing that the isoperimetric sets of large volume have connected boundaries allows us to establish the equivalence among the Isocapacitary, Isoperimetric, and ADM masses. It would then be desirable to know that such a property holds at least on \(\mathscr{C}_{\tau}^{1}\)-Asymptotically Flat \(3\)-manifolds with nonnegative scalar curvature, for \(\tau>1/2\). It is likely that this can be accomplished by suitably reworking the computations of Nerz [15] so that they make sense without knowing the behaviour of second derivatives of the metric. Some of the insights contained in [1] about the asymptotic behaviour of the \(2\)-Hawking mass and its relation with the Hawking mass in this optimal regime could play a role.
2. _Higher dimensional analysis_. All the results presented are proved through computations that are very peculiar to dimension \(3\). In the end, they can all substantially be traced back to the application of the Gauss-Bonnet Theorem (2.5) in the monotonicity calculation performed in Section 2.1. On the other hand, the fundamental Positive Mass Theorem has been proved through Schoen-Yau's [19] amazing contradiction argument up to dimension 7 (see [14]), and the same holds for Bray's approach to the Penrose inequality [10]. It would be very interesting to understand whether, possibly with related arguments, the Existence Theorem 3.6 and the various results on the Isoperimetric/Isocapacitary masses can be proved in higher dimensions.
3. _A \(\mathscr{C}^{0}\)-notion of_ ADM _mass._ A weakened notion of ADM mass, which is well posed in particular on 3-manifolds that are \(\mathscr{C}^{0}_{\tau}\)-Asymptotically Flat with \(\tau>1/2\) and that admit a Ricci flow with nonnegative scalar curvature, has been recently devised by Burkhardt-Guim [1]. It would be nice to show that such a quantity still coincides with the Isoperimetric mass. The availability of isoperimetric sets in this class of potentially nonsmooth metrics would be of interest too.
4. _A conjecture of Huisken._ Strongly related to the previous point, we mention a famous and formidable conjecture by Huisken (see e.g. [12, p. 2221-2223]), about the availability of an Isoperimetric Positive Mass Theorem on \(\mathscr{C}^{0}\) manifolds of dimension 3 admitting some suitable notion of nonnegative scalar curvature. We believe that the asymptotic comparison between the Hawking mass and the Isoperimetric mass devised in [1] and discussed here may serve as a useful tool, as it strongly weakens the asymptotic requirement at least on the decay of the metric. Manifolds of nonnegative scalar curvature in the sense of Ricci flow, considered by Burkhardt-Guim [1], might be a good family of metrics in which to test Huisken's conjecture.
|
2310.16946 | How does Module Tracking for Agrivoltaics Differ from Standard
Photovoltaics? Performance & Technoeconomic Implications | Spatial-temporal sharing of sunlight between solar modules and crops needs to
be designed optimally in agrivoltaics (AV). For AV with fixed module tilts, the
sunlight balance is governed through the spatial density and elevation of the
modules which cannot be manipulated after the installation. For flexible
food-energy balancing across various seasons and crop rotations, modules with
single or dual axis mobility can be best suitable. AV tracking must be geared
towards ensuring a desired sunlight balance that may depend on many factors
including the crop type, module array density, socio-economic factors, and
local policies. Here, we explore single axis customized tracking (CT) for the
mobile AV using a techno-economic model that incorporates design parameters
including crop's shade sensitivity, module to land area ratio, and module
types, as well as the economic parameters including soft and hardware costs for
modules, feed-in-tariff, and crop income. CT is implemented through standard
tracking that tracks the sun around noon hours and its orthogonal, i.e.,
anti-tracking around sunrise and sunset. We evaluate the optimal CT schemes
that can maximize economic performance while ensuring the desired food-energy
yield thresholds. Economic feasibility for AV is evaluated in terms of the
ratio (ppr) of the price for the module system customizations to the
performance benefit due to the crop income. A case study for Punjab, Pakistan
shows that CT schemes for moderate shade sensitive crops and typically dense AV
module arrays can require 30 to 40 percent increase in the reference FIT to
ensure the food-energy yield threshold of 80 percent relative to standalone
food-energy farms for high and low value crops, respectively. CT schemes for a
lower crop yield threshold of 70 percent require the corresponding increase in
FIT to 10 to 20 percent, respectively. | Habeel Alam, Nauman Zafar Butt | 2023-10-25T19:30:05Z | http://arxiv.org/abs/2310.16946v1 | # How does Module Tracking for Agrivoltaics
###### Abstract
Spatial-temporal sharing of sunlight between solar modules and crops needs to be designed optimally in agrivoltaics (_AV_). For _AV_ with fixed module tilts, the sunlight balance is governed through the spatial density and elevation of the modules which cannot be manipulated after the installation. For a flexible food-energy balancing across various seasons and crop rotations, modules with single or dual axis mobility can be most suitable. _AV_ tracking must be geared towards ensuring a desired sunlight balance that may depend on many factors including the crop type, module array density, socio-economic factors, and local policies. Here, we explore single axis customized tracking (_CT_) for the mobile _AV_ using a techno-economic model that incorporates design parameters including the crop's shade sensitivity, module to land area ratio, and module types, as well as the economic parameters including soft and hardware costs for modules, feed-in-tariff, and crop income. _CT_ is implemented through standard tracking that tracks the sun around noon hours and its orthogonal, _i.e.,_ anti-tracking around sunrise/sunset. We evaluate the optimal _CT_ schemes that can maximize economic performance while ensuring the desired food-energy yield thresholds. Economic feasibility for _AV_ is evaluated in terms of the ratio (_ppr_) of the price for the module system customizations to the performance benefit due to the crop income. A case study for Punjab, Pakistan, shows that _CT_ schemes for moderate shade sensitive crops and typically dense _AV_ module arrays can require a 30 to 40% increase in the reference _FIT_ to ensure the food-energy yield threshold of 80% relative to standalone food-energy farms for high and low value crops, respectively. _CT_ schemes for a lower crop yield threshold of 70% require the corresponding increase in FIT to 10 to 20%, respectively. The proposed approach can be very effective for design and analysis of tracking schemes for _AV_ systems.
techno-economic model, agrivoltaics, feed-in-tariff, customized tracking, economics, energy yield, food yield
## I Introduction
The global pursuit of sustainable energy production has led to significant advancements in photovoltaic (_PV_) technology, making it a pivotal player in the transition to clean energy sources. Harnessing solar energy through _PV_ systems is crucial not only for addressing the escalating energy demand but also for mitigating climate change by reducing greenhouse gas emissions [1]. The rapid proliferation of ground-mounted photovoltaics (_GMPV_), while promising clean energy generation, has, however, ignited a pressing land-use conflict. Agricultural land, already under duress due to factors such as climate change impacts and urbanization, faces the additional challenge of accommodating expansive _PV_ installations [2-5]. In this context, an innovative solution that reconciles energy production and agriculture, ensuring food security and sustainable energy generation, is of paramount importance.
Agrivoltaics (_AV_) emerges as a compelling solution to this conundrum [4] by enabling the coexistence of agriculture and solar energy production on the same land [6-8]. This integrated approach holds promise in alleviating land-use conflicts, optimizing resource utilization, and reducing greenhouse gas emissions [9]. The concept of _AV_ was initially proposed by Goetzberger and Zastrow back in 1981 [10], but, owing to the poor efficiency of photovoltaics at that time, it only became popular in the last decade [11]: it was first explored in simulations at the start of the decade [8] and later implemented in many academic and commercial installations for different locations and crops across the globe [12-16].
Currently, more than 3000 _AV_ systems with a cumulative capacity greater than \(14\,GW_{p}\) have been installed across the globe [12-14, 17]. Design parameters including the modules' pitch, elevation, and tilt angle have been explored [18-25]. The results indicate many synergies in the food-energy-water nexus, including increased water use efficiency, higher crop yields for specific crops, improvement in the microclimate, and a cooling effect on solar modules resulting in enhanced energy production. These benefits prompted governments in different countries like Germany (1982), Japan (2004), US (2008), China (2016), India (2018) and South Korea (2019) to develop and adopt policies supporting the implementation of agrivoltaics [26, 27].
Like commercial _PV_ installations, agrivoltaic installations can employ either fixed-tilt or tracking technology. Although fixed-tilt _AV_ requires a lower capital cost compared to tracking systems, solar tracking for agrivoltaics can provide a higher flexibility to adjust the sunlight balance between crops and modules [28]. In the case of commercial _PV_ installations, single or double axis solar trackers dynamically adjust the module orientation to perpetually face the sun to maximize energy capture. Tracking solutions for _AV_, however, require models that can address the distinctive challenges and considerations to cater for a broad range of crop shade responses and food-energy yield requirements. The standard solar tracking (\(ST\)) for \(AV\) can result in a drastic reduction in the crop yield, which is contrary to the purpose of agrivoltaics. By using customized (also known as controlled or smart) tracking (\(CT\)), which utilizes both \(ST\) and its orthogonal, _i.e.,_ anti-tracking (\(AT\)), at different time intervals along the day, the crop and energy yield constraints are achievable through an optimized design of the tracking scheme.
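For concreteness, the following minimal sketch illustrates how such a \(CT\) schedule can be assembled from \(ST\) and \(AT\) rotations about a horizontal \(N/S\) tracker axis. The solar zenith/azimuth inputs, the \(60^{\circ}\) rotation limit, and the 10:00-14:00 noon window are illustrative assumptions and not values taken from this work.

```python
import numpy as np

def tracking_angles(zenith_deg, azimuth_deg, max_rotation=60.0):
    """Rotation angles (degrees from horizontal) about a horizontal N/S axis.

    ST points the module normal at the projection of the sun onto the
    East-Up plane; AT is its orthogonal, keeping the module plane parallel
    to the beam so that direct irradiance on the modules (and hence crop
    shading) is minimal. Azimuth is measured clockwise from North.
    """
    zen = np.radians(np.asarray(zenith_deg, dtype=float))
    azi = np.radians(np.asarray(azimuth_deg, dtype=float))
    s_east = np.sin(zen) * np.sin(azi)   # East component of the sun unit vector
    s_up = np.cos(zen)                   # Up component of the sun unit vector
    st = np.degrees(np.arctan2(s_east, s_up))        # standard tracking (ST)
    at = np.where(st >= 0.0, st - 90.0, st + 90.0)   # anti-tracking (AT)
    return (np.clip(st, -max_rotation, max_rotation),
            np.clip(at, -max_rotation, max_rotation))

def customized_tracking(local_hour, zenith_deg, azimuth_deg, st_window=(10.0, 14.0)):
    """CT schedule: ST inside the noon window, AT near sunrise/sunset."""
    st, at = tracking_angles(zenith_deg, azimuth_deg)
    local_hour = np.asarray(local_hour, dtype=float)
    in_window = (local_hour >= st_window[0]) & (local_hour <= st_window[1])
    return np.where(in_window, st, at)
```

In a complete model, such rotation profiles would feed the irradiance calculations on both the module plane and the ground, from which the energy yield, the crop yield and, ultimately, economic indicators such as _ppr_ are evaluated.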
Despite its importance, customized tracking for \(AV\) is relatively less explored and reported. Valle et al. introduced the concept of customized tracking (\(CT\)) briefly, and Elamri et al. explored the concept of controlled tracking (customized tracking) by developing a model which simulated the effects of fluctuating radiation and rain redistribution by the solar panels on crop growth, yield, and water consumption [28, 29]. Their \(CT\) scheme minimizes radiation interception in the morning and late-afternoon hours but shades the crops during the hot midday hours. Hussnain et al. optimized the design of \(AV\) systems, such as the spatial density, orientation, and tracking of the module arrays, according to the photosynthetic needs of different crops [30]. More recently, Willockx et al. explored the performance of a fixed vertical system and a dynamic single-axis tracker in Belgium with sugar beet cultivation [31], using theoretical modeling and field measurements over two growing seasons. The tracking system outperformed the fixed vertical system in both energy yield (+30%) and land use efficiency (+20%), mainly due to its ability to optimize the module position and shade levels for the crops based on time and location.
While the above-mentioned studies have validated the benefits of tracking for \(AV\) in terms of land usage efficiency, increased crop yield for specific crops, and efficient water utilization, the economic and financial modeling which is crucial for policy makers and social acceptance is rarely reported for tracking \(AV\) systems. Nevertheless, a few studies have been reported for fixed tilt \(AV\) economic modeling and field experiments. Schindele et al. reported a simple model based on the price-performance ratio and compared the economic performance of winter wheat and potatoes in an \(AV\) system with \(GMPV\)[12]. The higher \(LCOE\) for \(AV\) was considered the price, while the revenue from crops was considered the performance benefit. The study revealed that a high revenue from potatoes could offset the higher \(LCOE\) of \(AV\) and could make it profitable in comparison with \(GMPV\), even with a reduction in the biomass yield of potatoes to ~87% of the full-sun condition. Winter wheat, on the other hand, could not achieve economic feasibility due to lower profits from crops. Ryyan et al. present a numerical model using a performance indicator based on economics, not the land equivalent ratio (LER), to evaluate and optimize the \(AV\) system with paddy rice for six different locations across the globe [32]. It finds that \(AV\) can provide 22-132 times higher profit than conventional rice farming while maintaining 80-90% of rice production.
A recent field study in Germany provides an economic analysis of agrivoltaics in apple farming based on three pilot projects [33]. Using different calculation methods to assess the costs and benefits, the study finds that \(AV\) can reduce the investment and operational costs of the apple farming system by 26% and 8%, respectively. However, it can also decrease the apple quality and revenues by 10% and 8%, respectively. The investigation in [24] delves into the economic performance of \(AV\) relative to the rooftop and \(GMPV\) configurations. The study reveals that \(GMPV\) systems exhibit a cost advantage of approximately 33% over \(AV\) systems due to reduced expenditures, but the net present value (\(NPV\)) for \(AV\) systems may ultimately yield a higher level of profitability by the end of the project lifetime. In [34, 35], an economic framework (FEADPLUS) is presented to evaluate \(AV\) from the perspective of maintaining the profitability of the farmer. The framework, however, misses the impact of the land preservation cost on the profitability of the solar investor with respect to \(GMPV\).
Although the above-mentioned studies are useful, their focus is limited and does not incorporate the combined effect of varying the module design, land costs, crop rotations, and \(FIT\). In particular, the economic tradeoffs of various tracking options for \(AV\) modules for different types of crops, soft and hardware costs have not been investigated. A holistic model is needed to explore the effect of shade sensitivities of different crops on tracking schemes and varying module configurations for \(AV\) to meet the food and energy constraints. We have recently presented a technoeconomic model [36], which explores the aforementioned aspects for fixed \(AV\) modules, including \(N/S\) faced and vertical bifacial \(E/W\) faced configurations. In this paper, we extend the framework to tracking \(AV\) systems and explore the design of efficient tracking schemes in terms of the food-energy yield requirements and the economic performance. In addition, we evaluate the known economic parameters such as price and performance benefits in terms of system parameters including the hardware and soft costs, energy yields, land to module area ratio, and \(FIT\).
Specifically, we develop a techno-economic model that addresses the following questions for the design and performance of tracking \(AV\): (i) How does the variety of crop shade responses influence the \(CT\) schemes? (ii) What is the impact of energy and crop yield thresholds on the design of \(CT\) schemes? (iii) Which \(CT\) schemes can be economically feasible relative to standard standalone food-energy systems while ensuring the desired food-energy thresholds? (iv) How does the module array design, in terms of land to module area ratio, influence the techno-economics? (v) What are the required feed-in tariffs for mobile \(AV\) systems with \(CT\) for crops having different market values? (vi) How do \(CT\) schemes vary across various global locations for a given system design and food-energy thresholds?
The rest of the paper is arranged as follows: In section II, we report the methodology and mathematical modelling of this techno
economic framework, highlighting its major assumptions and components. In Section III, we apply the framework to assess the economic feasibility of \(AV\) for different tracking orientations (\(ST\), \(AT\) and \(CT\)) across two simulated crop rotations for Khanewal, located in Southern Punjab, Pakistan, and present the results and discussion addressing questions (i)--(vi) listed in the preceding paragraph. Finally, Section IV reports conclusions and limitations.
## II Mathematical modelling
### _Customized Tracking (CT) Model_
Agrivoltaics systems can be categorized into two types based on their configuration: a) fixed tilt systems, which include the \(N/S\) faced fixed tilt system and the vertically tilted bifacial \(E/W\) faced system, and b) tracking systems, which incorporate trackers and use tracking strategies such as standard tracking (\(ST\)) and anti-tracking (\(AT\)). For the \(N/S\) faced fixed tilt orientation, the modules are elevated at a height of 3-5 m and face the \(N/S\) direction, while for the **\(E/W\) faced vertical bifacial** orientation, bifacial panels are installed vertically facing the \(E/W\) direction and are generally elevated at a height of 1 m (crop height). The spatial light distribution over crops is more homogeneous than for the \(N/S\) fixed tilt system, but the energy generation is also lower [18]. In the case of the **standard tracking (\(ST\))** scheme, \(PV\) modules track the sun, prioritizing energy generation over food production. **Anti-tracking (\(AT\))**, as the name suggests, is the opposite of \(ST\): in \(AT\), the module face is kept parallel to the direct beam throughout the day, prioritizing food production over energy generation. Fig. 1 shows the conceptual design of the \(N/S\), vertical bifacial \(E/W\), \(ST\) and \(AT\) orientations for \(AV\).
The \(ST\) and \(AT\) may not be the best techno-economic approaches for agrivoltaics: while \(ST\) maximizes the overall energy performance of \(AV\) systems, agricultural production is decreased and may not be acceptable. \(AT\), on the other hand, provides an agricultural production close to the full sun condition but significantly reduces the energy yield. The customized single axis solar tracking (\(CT\)) scheme is defined by multiplexing \(ST\), which maximizes the energy, with anti-tracking (\(AT\)), which maximizes the agricultural yield. \(CT\) incorporates both \(ST\) and \(AT\) such that \(ST\) is implemented for \(n\) hours, with \(n/2\) hours on each side of midday (noon), while \(AT\) is implemented for the rest of the day, as shown in Fig. 2.
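For readers who prefer a concrete view of the multiplexing logic, the following minimal sketch (not part of the original model) expresses the \(CT\) schedule for a horizontal single-axis tracker. It assumes solar noon at 12:00 and approximates the sun-following rotation by the hour angle (15° per hour); the function name and simplifications are ours, and a real controller would use full sun-position geometry.

```python
def ct_rotation(hour, st_hours, solar_noon=12.0):
    """Single-axis rotation angle (degrees) under customized tracking (CT).

    Within the ST window (st_hours centred on solar noon) the module follows
    the sun; outside it, anti-tracking offsets the rotation by 90 degrees so
    that the module plane stays parallel to the direct beam and the light
    passes to the crops. Hour-angle approximation only.
    """
    hour_angle = 15.0 * (hour - solar_noon)          # negative before noon
    if abs(hour - solar_noon) <= st_hours / 2.0:
        return hour_angle                             # standard tracking (ST)
    # anti-tracking (AT): rotate 90 deg away, staying within a physical range
    return hour_angle + 90.0 if hour_angle < 0 else hour_angle - 90.0

# Example: CT with 6 ST hours centred on noon (tracking between 9:00 and 15:00)
for h in range(7, 18):
    print(h, round(ct_rotation(h, st_hours=6), 1))
```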
### _Energy and shading Model_
In our previous publications [19, 20], we explained the model for simulating energy generation within photovoltaic modules and the photosynthetically active radiation (\(PAR\)) available to crops beneath these modules. Here, we provide a concise overview of our methodology. Assuming relatively large arrays of modules and neglecting edge effects, we address shading patterns in two spatial dimensions, namely, perpendicular to the arrays' length and the height above ground. We employ a validated view factor model, established through field experiments [19, 20, 32], to compute sunlight interception by the modules, thereby determining the temporal \(PV\) yield. This calculation encompasses contributions from direct sunlight, diffused light, and albedo (both direct and diffuse components). To ascertain the \(PAR\) reaching the crops, we compute shading for direct and diffused light within 2-D vertical planes beneath the modules. Our simulations utilize typical meteorological data for Khanewal, Punjab, Pakistan (30.2864 \({}^{\circ}\)N, 71.9320 \({}^{\circ}\)E) [32, 36]. The model is used to compute the energy yield (\(Y_{PV}\)), which is the ratio of the energy yield per unit module area of a given \(AV\) orientation to the energy yield per unit module area of \(GMPV\). The model also evaluates the shading ratio, which determines the light availability to the crops and is defined as the ratio of the light available on the ground with the modules installed to the light available on the ground without the modules.
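As an illustration of how these two ratios are formed, the sketch below assumes that hourly plane-of-array irradiance for the \(AV\) and \(GMPV\) modules, and hourly ground-level light with and without modules, are already available from a view-factor model; the synthetic arrays used in the example are placeholders, not the Khanewal data.

```python
import numpy as np

def yearly_ratios(poa_av, poa_gmpv, ground_with_modules, ground_open):
    """Return (Y_PV, shading ratio) from hourly irradiance series (W/m^2).

    Y_PV: energy per unit module area of the AV orientation relative to GMPV.
    Shading ratio: ground light with modules relative to the open field.
    """
    y_pv = np.sum(poa_av) / np.sum(poa_gmpv)
    shading_ratio = np.sum(ground_with_modules) / np.sum(ground_open)
    return y_pv, shading_ratio

# Placeholder usage with synthetic data (8760 hours of a year)
rng = np.random.default_rng(1)
open_field = rng.uniform(0.0, 800.0, 8760)
print(yearly_ratios(1.05 * open_field, open_field, 0.6 * open_field, open_field))
```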
### _Shade Sensitivities for Crop_
Crop yield reduction resulting from shading is quantified by assessing the decrease in the photosynthetically active radiation (\(PAR\)) received by the crop throughout the day. \(Y_{Crop}\) is defined as the ratio (in percent) of the biomass yield for a crop under shading to the biomass yield of the same crop under the no-shading condition. \(Y_{Crop}\) as a function of the \(PAR\) availability for the crops relative to the full sun condition has recently been analyzed in a meta-analysis with data from 58 studies [37]. Fig. 3 shows the response of \(Y_{crop}\) to \(PAR\) reproduced from the results of [37]. Crops having different shade sensitivities are classified as (i) shade sensitive (\(S\)), which are highly susceptible to shade, (ii) shade tolerant (\(T\)), which are moderately affected by shade, and (iii) shade loving (\(L\)), which are mildly affected by shade.
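To make the three classes concrete, the snippet below encodes purely illustrative response curves for \(Y_{crop}\) as a function of the fraction of \(PAR\) relative to full sun; the functional forms and coefficients are our own placeholders and do not reproduce the fitted curves of the meta-analysis in [37].

```python
import numpy as np

def y_crop(par_fraction, sensitivity):
    """Illustrative (hypothetical) shade-response curves for crop classes S, T, L."""
    par = np.clip(par_fraction, 0.0, 1.0)
    if sensitivity == "S":          # shade sensitive: yield roughly tracks light
        return par
    if sensitivity == "T":          # shade tolerant: saturating response
        return np.minimum(1.0, 1.25 * par)
    if sensitivity == "L":          # shade loving: only mildly affected by shade
        return np.minimum(1.0, 0.6 + 0.5 * par)
    raise ValueError("sensitivity must be 'S', 'T' or 'L'")

print(y_crop(0.7, "S"), y_crop(0.7, "T"), y_crop(0.7, "L"))
```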
Fig. 1: Typical \(N/S\) fixed tilt, \(E/W\) vertical bifacial, Standard tracking (\(ST\)) and Anti tracking (\(AT\)) \(AV\) systems with pitch (\(p\)) and height (\(h\)) labelled.
Fig. 2: Customized tracking scheme illustration which utilizes solar tracking at noon while anti tracking for rest of the day to meet food and energy constraints.
The land to module area ratio (\(A_{LM}\)) is a parameter which depicts the module spatial density for agrivoltaics. For a given total area of modules, a higher module density results in a lower \(A_{LM}\) and thus higher shading (lower \(PAR\)). Fig. 4 shows the impact of the various crop sensitivities, in the form of bars over the range of land to module area ratio, on four different \(AV\) orientations: a) \(N/S\) fixed tilt, b) \(E/W\) vertical, c) \(ST\) and d) \(AT\), for Khanewal, Pakistan. \(Y_{crop}\) for the crop types \(S\), \(T\), and \(L\) is shown. For crop type \(L\), \(Y_{crop}\) is only mildly affected by the land to module area ratio for all module configurations. For crop type \(S\), \(Y_{crop}\) is heavily dependent on the module configuration as well as on the land to module area ratio, as both factors contribute to the shading ratio. \(Y_{crop}\) increases for the shade sensitive crop with an increase in \(A_{LM}\) from full density (\(FD,A_{LM}=2\)) to one-third density (\(TD,A_{LM}=6\)). In terms of module configurations, \(AT\) is best suited for shade sensitive crops, followed by the \(E/W\) vertical and the fixed tilt \(N/S\) orientations.
Fig. 5 shows the annual values of \(Y_{crop}\) and \(Y_{PV}\) for the Rabi and Kharif seasons as a function of \(ST\) hours along the day, for three different crop shade sensitivities and three different land to module area ratios (\(A_{LM}=2\,(FD),4\,(HD)\) _and_ \(6\,(TD)\)). The threshold criteria considered here for both \(Y_{PV}\) and \(Y_{crop}\) are 80%. To meet the \(Y_{PV}\) criterion, \(ST\) in a day must be 5 hours or greater, as highlighted by the light green shaded region. When the land to module area ratio (\(A_{LM}\)) is 2, crop type \(S\) cannot be supported with any \(CT\) scheme, while crop \(T\) can be supported with \(ST\) of 5-8 hours in a day and crop \(L\) can be supported with \(ST\) of 5-12 hours or more.
Figure 4: Effect of different module orientations (a) \(N/S\) fixed tilt, b) \(E/W\) vertical bifacial, c) \(ST\) d) \(AT\)) on \(Y_{crop}\) for different crop sensitivities for range of land to module area(\(A_{LM}\)) ratio for Khanewal, Pakistan. Biomass yield (\(Y_{crop}\)) increases with increase in \(A_{LM}\) irrespective of crop sensitivity and orientations. \(AT\) is best performing for lower \(A_{LM}\) followed by vertical \(E/W\), then \(N/S\) fixed tilt. \(ST\) is not recommended for shade sensitive crops at \(FD\) and \(HD\) for \(AV\).
Figure 3: Shade response for the crops of various shade sensitivities adapted from [37]
When the land to module area ratio is 4, crop types \(L\) and \(T\) can both be supported with \(ST\) of 5-12 hours or more, while crop \(S\) cannot be supported. When the land to module area ratio is 6, crop types \(L\) and \(T\) can both be supported with \(ST\) of 5-12 hours or more, while crop \(S\) can be supported with \(ST\) of 5-8 hours in a day. It should be noted that beyond a critical value of \(ST\) hours in a day, which is around 10 hours, \(Y_{crop}\) and \(Y_{PV}\) tend to saturate. Below \(ST\) of 10 hours in a day, the variation in \(Y_{PV}\) is significantly larger than that in \(Y_{crop}\).
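The feasibility reasoning above amounts to intersecting two threshold conditions over the candidate \(ST\) durations. A minimal sketch, assuming the annual \(Y_{PV}\) and \(Y_{crop}\) values per candidate \(ST\) duration have already been produced by the shading/energy model, is given below (the variable names are ours, and the example numbers are made up).

```python
def feasible_st_hours(st_grid, y_pv_by_st, y_crop_by_st,
                      energy_thresh=0.80, crop_thresh=0.80):
    """Daily ST durations that satisfy both the energy and crop yield thresholds.

    st_grid, y_pv_by_st and y_crop_by_st are aligned sequences, e.g. the
    annual values plotted in Fig. 5 for one crop class and one A_LM.
    """
    return [st for st, ypv, ycrop in zip(st_grid, y_pv_by_st, y_crop_by_st)
            if ypv >= energy_thresh and ycrop >= crop_thresh]

# Placeholder example
print(feasible_st_hours([0, 2, 4, 6, 8, 10, 12],
                        [0.55, 0.65, 0.75, 0.82, 0.88, 0.92, 0.94],
                        [0.95, 0.92, 0.88, 0.84, 0.79, 0.76, 0.74]))
```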
### _Economic Model_
We use the price-performance ratio (\(ppr\)) as a benchmark to evaluate the economic performance of \(AV\). This model is based on our recent work [36] with extensions necessary for the \(ppr\) based analysis. The price and performance factors can widely vary according to the business scenario and the land/system ownership. Reference [12] describes five scenarios based on several cooperation models between land users, including the \(PV\) operator, the farmer, and the landowner. Although multiple business scenarios can exist in \(AV\) between the farmer, the \(PV\) investor, and the landowner, here we primarily focus on the case when the farming and \(PV\) investments are owned by a single entity, so that maximizing the overall profit is the main objective. For the other scenarios, where the \(PV\) and the farming investments are shared between multiple owners, the model can be extended and applied according to the specific details of the business contract.
Typically, the hardware customizations (i.e., the elevated mounting and stronger foundations) are the main contributors to the \(AV\) price, while the soft costs (\(EPC\), taxes, land lease, etc.) may show a relatively small difference as compared to the standard \(GMPV\). While the hardware costs (\(C_{M}\)) are usually modulated by global economics, the soft costs (\(C_{L}\)) depend more on the country specific policies and can further depend on the type of land and business models. With the bifurcation of the levelized cost of electricity (\(LCOE\)) into hardware and soft costs, the \(LCOE\) can be re-written as [36, 38]:
\[LCOE=\frac{C_{M}+C_{L}}{YY_{T}\cdot\chi}=\frac{c_{M}\,A_{M}+c_{L}\,A_{L}}{YY_{T}\cdot\chi}=\frac{M_{L}+A_{L}/A_{M}}{YY\cdot\chi/c_{L}} \tag{1}\]
where \(A_{M}\) and \(A_{L}\) are the total module and land areas for \(AV\), respectively, and \(YY_{T}\) and \(YY\) are the total energy production and the energy production per unit module area, respectively. \(\chi\equiv\sum_{k=1}^{Y}(1-d)^{k}(1+r)^{-k}\), where \(d\) and \(r\) are the depreciation and discount rates, respectively. \(M_{L}=c_{M}/c_{L}\) is the hardware to soft cost ratio, where \(c_{M}\) and \(c_{L}\) are the hardware cost per unit module area and the soft cost per unit land area, respectively. It is an important quantity which can influence the relative price for the \(AV\) system. \(M_{L}\) is typically close to 10 in the US and varies between 5 - 35 worldwide [39], as shown in the appendix (Fig. A3).
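A minimal numerical sketch of Eq. (1) is given below; the project lifetime, depreciation and discount rates used as defaults, as well as the cost and energy numbers in the example, are placeholder assumptions, not values taken from the paper.

```python
def chi(years=25, d=0.01, r=0.06):
    """chi = sum_{k=1..Y} (1-d)^k (1+r)^-k  (placeholder lifetime and rates)."""
    return sum(((1.0 - d) ** k) * ((1.0 + r) ** -k) for k in range(1, years + 1))

def lcoe(c_m, a_m, c_l, a_l, yy_t, chi_val=None):
    """LCOE per Eq. (1): (c_M*A_M + c_L*A_L) / (YY_T * chi)."""
    chi_val = chi() if chi_val is None else chi_val
    return (c_m * a_m + c_l * a_l) / (yy_t * chi_val)

# Illustrative numbers only: 1000 m^2 of modules on 4000 m^2 of land
print(lcoe(c_m=150.0, a_m=1000.0, c_l=10.0, a_l=4000.0, yy_t=200_000.0))
```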
Fig. 5: Annual variation in \(Y_{PV}\) and \(Y_{crop}\) for shade tolerant, shade sensitive and shade loving crops as function of daily standard tracking hours for full density (\(FD,A_{LM}=2\)), half density (\(HD,A_{LM}=4\)) and one-third density (\(TD,A_{LM}=6\)). By increasing the land to module area ratio from 2 to 6, the \(Y_{crop}\) constraint of 80% for shade sensitive (\(S\)) crop can be achieved for \(ST\) hours of 6-8 while the number of \(ST\) hours for shade tolerant (\(T\)) and shade loving (\(L\)) crops also increase with increase in \(A_{LM}\).
### _A.1 Technoeconomic modeling without policy intervention_
Here we assume that there are no subsidies from the government for \(AV\). The case with a feed-in-tariff (\(FIT\)) incentive will be discussed later. The business case for this scenario can be made if the dual food-energy profit from \(AV\) exceeds or equals the individual profits, had the land been utilized for a single use, i.e., either food or energy. Since the energy investment and the net energy profits are usually much larger than those for the agriculture on a given land area, the business case can be written in comparison to the standard \(GMPV\) system:
\[P_{e,PV}-P_{e,AV}\leq P_{c,AV} \tag{2}\]
where \(P_{e,AV}\) and \(P_{e,PV}\) are the annual energy profits from \(AV\) and \(GMPV\), respectively, and \(P_{c,AV}\) denotes the \(AV\) profit from crops in \$/year. The left- and right-hand sides of (2) represent the price and the performance benefit, respectively, for the case of a single entity owned food-energy \(AV\) business with respect to a standard \(GMPV\) system for a given capacity of energy generation. The price (\(p\)) can further be decomposed into hardware and soft cost components using (1):
\[p=\left[\left(\frac{M_{L}+A_{L}/A_{M}}{\tfrac{YY_{T}}{A_{M}}\,\chi/c_{L}}\right)_{AV}-\left(\frac{M_{L}+A_{L}/A_{M}}{\tfrac{YY_{T}}{A_{M}}\,\chi/c_{L}}\right)_{GMPV}\right]\cdot YY_{T} \tag{3}\]
where \(YY_{T}\) is the total annual energy production which is taken to be the same for AV and GMPV.
After some simplifications, (3) can be written as [36]:
\[p=\left[\left(\frac{c_{M_{AV}}}{c_{M_{GMPV}}}\right)+\left(\epsilon\,\frac{A_{LM_{AV}}}{M_{L}}\,\frac{c_{L_{AV}}}{c_{L_{GMPV}}}\right)-\left(\frac{A_{LM_{GMPV}}}{M_{L}}+1\right)Y_{PV}\right]\cdot\frac{c_{M_{GMPV}}\,A_{M_{AV}}}{\chi} \tag{4}\]
where \(Y_{PV}\), which is the ratio of the annual energy generated per unit module area for \(AV\) to that for the standard fixed tilt \(GMPV\), is also equal to the ratio of the total module area for \(GMPV\) to that for \(AV\), since both systems are assumed to generate the same total annual energy. \(Y_{PV}=1\) if the \(AV\) system has the same module tilt and orientation as the reference \(GMPV\). \(Y_{PV}\) can be greater than 1 if modules with tracking are used for \(AV\), and \(Y_{PV}<1\) for the vertically mounted bifacial modules facing East/West. The terms \(A_{LM_{GMPV}}\) and \(A_{LM_{AV}}\) are the land to module area ratios for \(GMPV\) and \(AV\), respectively. \(A_{LM_{GMPV}}\approx 2\) for conventional \(GMPV\), while \(A_{LM_{AV}}\) is usually greater than 2 so that excessive shading of the crops can be avoided. As noted previously, \(A_{LM_{AV}}\approx 2\) and 4 are sometimes referred to as full density and half density \(AV\) systems in the literature.
The 1st and 2nd terms in (4) represent the difference in hardware and soft cost for \(AV\) relative to the standard fixed tilt \(GMPV\). The practical value of the 1st term, i.e., \(\frac{c_{M_{AV}}}{c_{M_{GMPV}}}\equiv\kappa_{M}\), depends upon the specific economic details for a given \(AV\) system. For example, \(\kappa_{M}\) reported for \(\sim\)5 m elevated mounting is about 1.38 in one of the studies done in Germany [12]. Since trackers typically increase the module hardware premium cost by \(\sim\)20% [40], \(\kappa_{M}\) for elevated \(AV\) with tracking could be higher than that for the elevated fixed tilt \(AV\) systems. The 2nd term in (4) contains the soft cost ratio (\(\frac{c_{L_{AV}}}{c_{L_{GMPV}}}\)) for the \(AV\) module system to that for \(GMPV\), which incorporates the difference in their land lease costs, engineering, procurement, and construction (\(EPC\)) costs, and labor costs. It has an inverse dependency on \(M_{L}\), which implies that the relative economic impact of the soft costs reduces if the hardware to soft cost ratio of the system is higher. \(\epsilon\) in the 2nd term is a fraction that signifies how the soft costs scale when the land area for \(AV\) is increased. \(\epsilon\) is typically less than 1 and can be related to the increase in the electrical wiring, \(EPC\) costs, and labor when the land area is increased for a given total capacity of modules [41]. The 3rd term in (4) incorporates the effect of \(Y_{PV}\) and has an inverse proportionality with \(M_{L}\), which implies that the relative economic effect of varying the energy produced per unit area for \(AV\) vs. \(GMPV\) diminishes as \(M_{L}\) is increased.
Since the hardware costs often play a dominant role in the economic feasibility of \(AV\), it is insightful to normalize the price relative to the hardware cost of the standard \(GMPV\). The normalized price (\(p^{\prime}\)) is given by:
\[p^{\prime}=\frac{p}{A_{M_{AV}}\,c_{M_{GMPV}}/\chi}=\left[\left(\frac{c_{M_{AV}}}{c_{M_{GMPV}}}\right)+\left(\epsilon\,\frac{A_{LM_{AV}}}{M_{L}}\,\frac{c_{L_{AV}}}{c_{L_{GMPV}}}\right)-\left(\frac{A_{LM_{GMPV}}}{M_{L}}+1\right)Y_{PV}\right] \tag{5}\]
The 1\({}^{\rm{st}}\) and 2\({}^{\rm{nd}}\) terms represent the difference in hardware and soft cost for \(AV\) relative to \(GMPV\). The 3\({}^{\rm{rd}}\) term represents the impact of the relative energy generation per module area for \(AV\) as compared to that for \(GMPV\). The three terms on the right-hand side of (5) can be written in shorthand as:
\[p^{\prime}=(\kappa_{M}+\kappa_{L}-Y^{\prime}_{PV}) \tag{6}\]
In the ideal limit, when the energy generation per module area is the same and there is no difference in soft and hardware costs for \(AV\) vs. \(GMPV\), \(p^{\prime}\) approaches zero. For the practical cases considered in this study, \(p^{\prime}\) is typically between 0.4 - 0.8.
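The normalized price of Eqs. (5)-(6) is straightforward to evaluate once the cost ratios are known. The sketch below uses illustrative placeholder numbers (only \(\kappa_{M}\approx 1.38\) is quoted in the text); it is not a calibration of the paper's case study.

```python
def normalized_price(kappa_m, eps, a_lm_av, m_l, soft_cost_ratio, a_lm_gmpv, y_pv):
    """p' = kappa_M + kappa_L - Y'_PV  (Eqs. (5)-(6))."""
    kappa_l = eps * (a_lm_av / m_l) * soft_cost_ratio    # soft-cost term
    y_pv_prime = (a_lm_gmpv / m_l + 1.0) * y_pv          # relative energy term
    return kappa_m + kappa_l - y_pv_prime

# Illustrative placeholder inputs
print(normalized_price(kappa_m=1.38, eps=0.8, a_lm_av=4, m_l=10,
                       soft_cost_ratio=1.2, a_lm_gmpv=2, y_pv=1.0))
```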
The performance benefit can be written as:
\[pb=Y_{crop}\times P_{c,fullsun} \tag{7}\]
where \(P_{c,fullsun}\) is the annual crop profit under the full sun condition and \(Y_{crop}\) is the percentage biomass yield for \(AV\) relative to full sun.
To compute \(ppr\), we divide the performance benefit with the same normalization factor as we have used for the price. The normalized performance benefit (\(pb^{\prime}\)) can be written as:
\[pb^{\prime}=\frac{pb}{A_{M_{AV}}\,c_{M_{GMPV}}/\chi}=A_{LM_{AV}}\left(\frac{Y_{crop}\times P_{c,fullsun}/A_{L_{AV}}}{c_{M_{GMPV}}/\chi}\right) \tag{8}\]
where the ratio in the brackets represents the crop profit earned from a unit area of land divided by the hardware cost of installing the same unit area of \(GMPV\) modules. \(pb^{\prime}\) is typically smaller than \(p^{\prime}\) and can vary across a wide range, from the order of 0.1 for some of the high value crops, such as horticulture crops, to the order of 0.001 for the low value crops.
The price-performance ratio (\(ppr\)) is given as:
\[ppr=\frac{p}{pb}=\frac{p^{\prime}}{pb^{\prime}} \tag{9}\]
Since \(pb\) can be much smaller than \(p\) for many practical scenarios, it can be challenging to attain \(ppr\leq 1\). This then necessitates some policy interventions to facilitate the economic viability of \(AV\), as discussed in the next section.
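Combining Eqs. (8) and (9), a compact sketch of the feasibility check is shown below; all numerical inputs in the example are placeholders for a hypothetical high value crop, not figures from the Khanewal case study.

```python
def normalized_perf_benefit(y_crop, p_c_fullsun, a_lm_av, a_l_av, c_m_gmpv, chi_val):
    """pb' per Eq. (8): normalized crop profit of the AV system."""
    return a_lm_av * (y_crop * p_c_fullsun / a_l_av) / (c_m_gmpv / chi_val)

def price_performance_ratio(p_prime, pb_prime):
    """ppr per Eq. (9); ppr <= 1 indicates economic feasibility."""
    return p_prime / pb_prime

# Placeholder example: p' = 0.56, 4000 m^2 of land, A_LM = 4, chi ~ 11.6
pb_p = normalized_perf_benefit(y_crop=0.85, p_c_fullsun=4000.0, a_lm_av=4,
                               a_l_av=4000.0, c_m_gmpv=150.0, chi_val=11.6)
print(pb_p, price_performance_ratio(0.56, pb_p))
```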
### Technoeconomic modeling with policy intervention
When government incentives such as a feed-in tariff (\(FIT\)) are available, their economic impact can be included in the performance term:
\[pb^{\prime}=\frac{pb}{A_{M_{AV}}\,c_{M_{GMPV}}/\chi}+\Delta FIT\left(\frac{YY_{T}\,\chi}{A_{M_{AV}}\,c_{M_{GMPV}}}\right) \tag{10}\]
where \(\Delta FIT\) is the difference in \(FIT\) for \(AV\) and \(GMPV\) and is assumed to be a positive number. For a given \(AV\) system, a threshold \(\Delta FIT\) can be computed to enhance \(pb^{\prime}\) so that \(ppr\) becomes close to one.
\(\Delta FIT\) can be used as a tool by the policy makers to support agricultural land preservation through \(AV\). Moreover, \(\Delta FIT\) can be made crop-specific if cultivation of some selected crops needs to be promoted at a given location.
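Setting the \(FIT\)-augmented \(pb^{\prime}\) of Eq. (10) equal to \(p^{\prime}\) gives the threshold incentive directly. The sketch below expresses this in \$/kWh under the single-owner assumption of the model; the names and example numbers are ours.

```python
def delta_fit_threshold(p_prime, pb_prime, yy_t, a_m_av, c_m_gmpv, chi_val):
    """Smallest Delta-FIT ($/kWh) giving ppr = 1, per Eqs. (9)-(10).

    Solves p' = pb' + dFIT * (YY_T * chi) / (A_M_AV * c_M_GMPV); returns 0
    if the AV system is already feasible without any incentive.
    """
    gap = p_prime - pb_prime
    if gap <= 0.0:
        return 0.0
    return gap * a_m_av * c_m_gmpv / (yy_t * chi_val)

# Placeholder example continuing the numbers used above
print(delta_fit_threshold(p_prime=0.56, pb_prime=0.26, yy_t=200_000.0,
                          a_m_av=1000.0, c_m_gmpv=150.0, chi_val=11.6))
```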
## III Results and Discussions
The modeling framework is applied to a case study of two conceptual \(AV\) farms: a) a high value farm and b) a low value farm, representing crop rotations that yield high and low annual profit, respectively, for Khanewal (30.2864\({}^{\circ}\) N, 71.9320\({}^{\circ}\) E), Punjab, Pakistan. Each farm is studied under various \(CT\) schemes and compared to a reference fixed tilt south faced \(GMPV\). The cropping cycle and the reported crop yields/revenues for Khanewal are taken into consideration while simulating the low value and high value farms. The crop rotation for the high value farm comprises tomato, cauliflower, and garlic over the year, while for the low value farm it consists of wheat and cotton, as shown in Table I in the appendix. These crops can be classified as shade tolerant crops based on their biomass yield (\(Y_{crop}\)) shown in Fig. A2 in the appendix.
### _CT for various crop types and seasons_
_CT_ schemes can be optimized for a given crop type and season by adjusting the number of daily _ST_ hours centered around noon while doing _AT_ during the rest of the day. Fig. 6 shows the monthly values of \(Y_{Crop}\) and \(Y_{PV}\) for the Rabi and Kharif seasons as a function of _ST_ hours along the day for three different crop shade sensitivities at a land to module area ratio (\(A_{LM}\)) of 2. The Rabi season in Pakistan is from Nov-Apr while the Kharif season is from May-Oct. Rabi crops for the shade sensitive, shade tolerant and shade loving categories are shown in Fig. 6(a, c, and e), respectively, while Kharif crops for the same categories are shown in Fig. 6(b, d, and f), respectively. \(Y_{PV}\) tends to saturate as the _ST_ hours go beyond 10 hours, while the saturation of the \(Y_{Crop}\) curve depends on the crop's shade sensitivity. To illustrate the feasible design space for the _ST_ hours in a day, we assume a case where thresholds for \(Y_{Crop}\) and \(Y_{PV}\) of 80% need to be satisfied. The yellow and green shaded regions in Fig. 6 respectively represent the daily allowed _ST_ hours where the \(Y_{PV}\) and \(Y_{Crop}\) thresholds are met across all the months in the season. An overlap between the two shaded regions corresponds to a tracking design for the daily _ST_ hours that can meet both the energy and food constraints. It can be observed that the \(Y_{PV}\) threshold is met across all months with \(ST>7\) hours for both Rabi and Kharif. The \(Y_{Crop}\) threshold, however, has a strong dependence on the shade sensitivity of the crop. For crop type \(S\) (Fig. 6 a-b), the crop threshold is not met even with _AT_ (i.e., \(ST=0\)) for the whole day for both seasons. For crop type \(T\) (Fig. 6 c-d), the \(Y_{Crop}\) threshold is met for _ST_ \(\leq 7\) hours for both Rabi and Kharif. For these crops, the feasible tracking scheme is only when _ST_ in a day is around 7 hours, where the required food and energy thresholds are barely met simultaneously across all months of the season. Finally, for crop type \(L\) (Fig. 6 e-f), the food and energy thresholds are conveniently met irrespective of the _ST_ hours and there is a complete overlap for all values of \(Y_{PV}\) and \(Y_{Crop}\), since the crop yield remains above 80% even when the _ST_ hours are increased to 12. It should be noted that both \(Y_{PV}\) and \(Y_{Crop}\) show monthly variations across all types of crops and seasons. This is due to the natural variations in the sun's trajectory across months, which change the shading ratio for the crop and the solar energy generation. Although the shaded regions in Fig. 6 are drawn with the assumption that the daily _ST_ hours are not changed across the months in each season, this is not an essential requirement in practical situations and is assumed here for simplicity. A monthly adjustment of the _ST_ hours across the season can indeed be implemented to better facilitate the food-energy thresholds.
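The overlap test described above can be written as a per-month minimum over both yield curves. A minimal sketch, assuming the monthly \(Y_{PV}\) and \(Y_{crop}\) values per candidate \(ST\) duration are supplied as arrays from the shading/energy model, is given below.

```python
import numpy as np

def seasonal_st_window(st_grid, y_pv_monthly, y_crop_monthly, thresh=0.80):
    """Daily ST durations meeting both thresholds in *every* month of a season.

    y_pv_monthly and y_crop_monthly are assumed arrays of shape
    (n_months, len(st_grid)), e.g. the six monthly curves of Fig. 6.
    """
    y_pv_monthly = np.asarray(y_pv_monthly)
    y_crop_monthly = np.asarray(y_crop_monthly)
    ok = (y_pv_monthly.min(axis=0) >= thresh) & (y_crop_monthly.min(axis=0) >= thresh)
    return [st for st, good in zip(st_grid, ok) if good]
```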
### _Impact of land to module area ratio on CT_
The tracking scheme that can meet the thresholds for both crop and energy depends on the crop shade sensitivities, as described in the previous section. At the system design stage, the land to module area ratio can be optimized by varying the row-to-row spacing for the module arrays (assuming the land area for the system is adjustable) to allow for a lower shading ratio and a broader range of crops in the system. Fig. 7 shows how an increase of the land to module area ratio from 2 to 6 can make a _CT_ scheme viable for the crop type \(S\) in both Rabi and Kharif seasons. The thresholds for \(Y_{PV}\) and \(Y_{Crop}\) are taken as 80% and 70%, respectively. For land
Fig. 6: Monthly \(Y_{Crop}\) and \(Y_{P\nu}\) are shown for Rabi and Kharif seasons as a function of _ST_ hours along the day for three different crop shade sensitivities at land to module area ratio of 2. The yellow and green boxes show 80% constraints for \(Y_{Crop}\) and \(Y_{P\nu}\), respectively. For the most shade sensitive crops, the constraints for the yield cannot be met at any value for _ST_ hours although the shade tolerant crop comes closer to the constraints for both seasons. The constraints are conveniently met for the shade loving crop for _ST_ hours of 8 or above.
to module area ratio of 2 (full density \(FD\)) and 4 (half density \(HD\)), there is hardly any \(CT\) solution available to support the given food-energy thresholds except for the Rabi season where \(ST=7\) hours in a day can barely meet the thresholds with land to module area ratio of 4. On the other hand, when the land to module ratio is increased to 6 (one-third density \(TD\)), the food-energy thresholds are conveniently met for a broader range (\(>6\)) of daily \(ST\) hours across both seasons.
The above results highlight that the crop and energy yield thresholds can be met either by selecting crops with an appropriate shade sensitivity for a given module configuration or by increasing the land to module area ratio at the design stage to allow for a broader range of crop types. If an \(AV\) system is already installed, then the land to module area ratio is fixed and cannot be altered. In this case, we can only customize the tracking for selected crops to meet the food-energy thresholds.
### _Techno-economic modeling for the tracking AV_
So far, we have considered \(CT\) from the perspective of fulfilling the food-energy thresholds for crops with different shade sensitivities and for systems with varying land to module area ratio. In practice, however, the economic aspects can often play a decisive role in determining the \(CT\) scheme. In this section, we explore the economic performance of mobile \(AV\) systems with various \(CT\) schemes relative to the standard \(GMPV\) system. System parameters, including the land to module area ratio and the daily \(ST\) hours, are explored along with economic parameters, including the crop profit and \(FIT\), to quantify their effect on the economics. Only crops having moderate shade sensitivity are considered in this section to keep the focus on the economic analysis. The approach is, however, applicable to any shade sensitivity of the crops. In the following sub-sections, we first apply the economic model to the standard \(ST\) and \(AT\) schemes in comparison with the south faced fixed tilt \(AV\) system. We then explore \(CT\) schemes that can maximize the economic performance while ensuring the food-energy yield thresholds.
### _Effect of land to module area ratio_
Fig. 8 shows how the various economic parameters that define the price and performance benefit (eqs. (5) and (8)) depend on the land to module area ratio. Fig. 8a shows that the hardware cost ratio remains constant as a function of the land to module area ratio for both mobile and standard \(AV\) systems. The hardware cost is higher for the mobile modules as compared to the fixed tilt orientation, as expected [41]. Fig. 8b shows the effect of the land to module area ratio on the \(2^{\text{nd}}\) term (\(\kappa_{L}\)) in the price equation (eq. (5)) that contains the effect of the soft cost. \(\kappa_{L}\) increases linearly with an increase in the land to module area ratio, with a slope that depends on the scaling factor \(\epsilon\) and \(c_{L_{AV}}/c_{L_{GMPV}}\) (inset), as shown in Fig. 8b. Fig. 8c shows \(pb^{\prime}\) for low and high value crops, which both increase linearly with the land to module area ratio as more crops can be grown with increasing land. Moreover, the shading ratio for the crops reduces as the land to module area ratio is increased. The inset figure shows the zoomed plot of \(pb^{\prime}\) for the low value crops. Note that \(pb^{\prime}\) is much higher for high value crops in comparison with low value crops.
Fig. 7: Monthly values of \(Y_{Crop}\) and \(Y_{PP}\) are shown for Rabi and Kharif seasons as a function of \(ST\) hours along the day for the most shade sensitive crop at land to module area density of 2 (Fig. 7a, b), 4 (Fig. 7c, d), and 6 (Fig. 7e, f). The yellow and green boxes show 70% and 80% constraints for \(Y_{Crop}\) and \(Y_{PP}\), respectively. The energy constraints are met for \(ST\) hours–7 hours in a day as shown in Fig. 6. In a, b, and d, the energy and food thresholds are not simultaneously met at any value of \(ST\) hours in a day. For c, the food-energy threshold are barely met at \(ST=7\) hours in a day. For Fig. 7e, f, the food-energy thresholds are met for all values of \(ST\) hours greater than 7.
For the lowest land to module area ratio, anti tracking shows a higher \(pb^{\prime}\), while at higher land to module area ratios the \(pb^{\prime}\) for all module configurations converge. This is due to the fact that a significantly higher quantity of sunlight is available to the crops with \(AT\) as compared to \(ST\) and the fixed tilt system, which results in a higher \(Y_{crop}\). At higher land to module area ratios, the shading ratio becomes significantly lower for the \(ST\) and fixed tilt systems as well, and thus anti tracking is not as beneficial compared to the other module configurations. Fig. 8d shows the \(Y_{PV}\) for the different module schemes, which remains constant irrespective of the land to module area ratio. This is because the energy yield per unit module area does not change with varying the land area unless there is mutual shading between the modules. For the range of module to land areas we have considered, mutual shading between modules is not significant. Fig. 8d shows that the \(ST\) scheme generates the highest yield, followed by the \(N/S\) faced fixed tilt modules, while the \(AT\) scheme has the worst energy performance as most of the light is delivered to the crops.
Fig. 9 shows the price, performance and \(ppr\) for the \(ST\), \(AT\), and \(N/S\) faced modules as a function of the land to module area ratio, considering the high value crops. Price and performance both increase linearly for land to module area ratios of 3 and higher, although the relative increase in the performance exceeds that in the price. This results in a decrease in the \(ppr\), as shown in Fig. 9c. As \(ppr\leq 1\) is desired for economic feasibility, a higher land to module area ratio tends towards this because of the increasing trend in the performance. Around a land to module area ratio of 6, the decrease in \(ppr\) tends to saturate, while economic feasibility, i.e. \(ppr\leq 1\), is still not achieved. Compared to the \(ST\) and \(N/S\) fixed tilt systems, \(AT\) has a significantly higher \(ppr\) because of its lowest contribution to the energy yield and a high initial hardware cost.
Figure 8: Effect of land to module area ratio (\(A_{LM}\)) on various economic parameters: a) hardware cost ratio (\(\kappa_{M}\)), b) normalized soft cost ratio (\(\kappa_{L}\)), c) normalized crop profit ratio (\(P_{C}\)) and d) normalized energy yield ratio (\(Y_{PV}\)) for the \(N/S\), \(ST\) and \(AT\) orientations for \(AV\) at \(M_{L}=10\). \(\kappa_{M}\) and \(Y_{PV}\) remain constant irrespective of \(A_{LM}\) for each orientation, while \(\kappa_{L}\) and \(P_{C}\) show an increasing trend with increase in \(A_{LM}\) for all the orientations. The insets of Fig. 8b and c show the impact of \(A_{LM}\) on the land price ratio (\(c_{L_{AV}}/c_{L_{GMPV}}\)) and the zoomed in \(P_{C}\) for low value crops, respectively.
Figure 9: Effect of land module area ratio (\(A_{LM}\)) on a) price, b) performance, and c) price performance ratio (\(ppr\)) for \(N/S\), \(ST\) and \(AT\) orientations for \(AV\) for high value crop (HV) at \(M_{L}=10\). Price and performance increase linearly with increase in \(A_{LM}\) while \(PPR\) decrease with increase in \(A_{LM}\) for the three orientations explored. The economic feasibility (\(ppr\leq 1\)) is not achieved for any orientation.
### Effect of module soft to hardware cost ratio (\(M_{l}\))
The module hardware to soft cost ratio can have important implications for the economic feasibility of \(AV\). Fig. 10 shows the effect of \(M_{L}\) on the price, performance and \(ppr\) for the \(ST\) scheme and the high value crop rotation. A lower \(M_{L}\) implies a higher soft cost and vice versa. Fig. 10a highlights that increasing \(M_{L}\) lowers the slope of the price as a function of the land to module area ratio. A higher \(M_{L}\) results in a decrease in \(ppr\) and improves the economic viability of the standard tracking at higher land to module area ratios. These results highlight that when the hardware to soft cost ratio is higher, increasing the land area (which mostly affects the soft costs) has a relatively mild impact on the price. In contrast, when \(M_{L}\) is lower, increasing the land area (i.e., higher soft costs) has a stronger impact on the price. Fig. 10c shows that with a higher \(M_{L}\) of 30, \(ppr\) can almost reach its desired range of \(\leq\)1 at a land to module area ratio of 6.
### Effect of crop's market value
Fig. 11 shows the effect of the crop's value on the performance benefit and \(ppr\) as a function of the land to module area ratio, using high and low value crops. For low value crops, the performance benefit is significantly low (inset of Fig. 11b) and economic feasibility is not achieved for any land to module area ratio. It should be noted that the \(ppr\) curves for the \(N/S\) faced modules and \(ST\) tend to saturate at higher \(A_{LM}\) for both low and high value crops. For high value crops, economic feasibility is still not fully achieved for \(ST\) and \(N/S\) faced modules at a land to module area ratio (\(A_{LM}\)) of 6, although the \(ppr\) comes close to 1.
### Effect of FIT
Since economic feasibility is often not achieved even for high value crops, policy interventions in terms of subsidies, feed-in tariffs, or loans might be required to make \(AV\) economically attractive to investors and farmers. The effect of \(\Delta FIT\) on the performance and \(ppr\) is incorporated in (10). Fig. 12 shows the effect of \(\Delta FIT\) on \(ppr\) for the high value crop and \(M_{L}=10\), varying the land to module area ratio from 2 to 6. With an increase in \(\Delta FIT\), the performance curves shift upwards while the \(ppr\) curves shift downwards. The \(AV\) systems with \(N/S\) faced fixed tilt modules and \(ST\) become economically feasible for \(\Delta FIT=10\%\), when their \(ppr\) falls below 1. \(AT\), on the other hand, requires a high value of \(\Delta FIT\) (even greater than 30%) to become economically feasible. In the case of \(N/S\) faced fixed tilt modules and \(ST\), \(AV\) remains economically viable for all values of the land to module area ratio for \(\Delta FIT>10\%\).
Figure 11: Effect of crop’s market value (\(HV\) and \(LV\)) on: a) price, b) performance, and c) price performance ratio (\(ppr\)) for \(N/S\), \(ST\) and \(AT\) module schemes for a range of land module area ratio (\(A_{LM}\)=2-6) at \(M_{L}=10\). The crop’s profit impacts performance which increases linearly as \(A_{LM}\) is increased which correspondingly decrease the \(ppr\). Inset of Fig. 11b shows the zoom in performance for LV crop. A substantially smaller performance results in an extremely high \(ppr\) for LV crops. The economic feasibility (\(ppr\leq 1\)) is not achieved for all types of module schemes although \(ppr\) decreases significantly at higher \(A_{LM}\) for high value crops.
Figure 10: Effect of module hardware to soft cost ratio (\(M_{L}\)) on a) price, b) performance, and c) price performance ratio (\(ppr\)) for \(N/S\), \(ST\) and \(AT\) orientations of \(AV\) for the high value crop (\(HV\)) for land to module area ratio (\(A_{LM}\)=2-6). Higher \(M_{L}\) (lower land related costs) decreases the slope of the price curve and thus decreases \(ppr\), while the performance remains unaffected by the change in \(M_{L}\). The economic feasibility (\(ppr\leq 1\)) is not achieved for smaller \(M_{L}\), while \(M_{L}=30\) enables \(ppr\)\(\sim\)1 for \(A_{LM}=6\).
### Economic impact of customized tracking
As the \(CT\) uses a combination of both \(AT\) and \(ST\) along the day, it can be explored to find economic feasibility when either \(ST\) or \(AT\) alone fails to simultaneously meet the thresholds for both food and energy yield. Fig. 13 shows the variation in price, performance and \(ppr\) for \(CT\) schemes with respect to the \(ST\) hours in a day. The figure is drawn for a land to module area ratio of 3 and \(M_{L}=10\) for high value (\(HV\)) and low value (\(LV\)) crops. A comparison of the \(\Delta FIT=0\) and \(\Delta FIT>0\) cases is performed for both \(LV\) and \(HV\) crops, which depicts similar trends. \(LV\), however, requires a higher \(\Delta FIT\) than that for \(HV\) to become economically feasible. For the \(LV\) crop, \(\Delta FIT=30\%\) is required for economic viability for daily \(ST\) hours \(\geq 8\), while for the \(HV\) crop, \(\Delta FIT=10\%\) enables economic viability with a daily \(ST\) of 10 hours or higher. Since \(ppr\) is the ratio of price and performance, the intersection of the price and performance curves in Fig. 13c and 13d highlights the required \(ST\) hours to obtain \(ppr\leq 1\), hence making the \(AV\) economically viable.
Figure 12: Effect of \(\Delta FIT\) on ppr for N/S, ST and AT for HV crop. Except AT and p/h=2, all the orientations become economically viable for \(\Delta FIT\geq 10\%\). The insets in Fig. 12c and d shows zoomed in curves of N/S and ST which are already economically viable. The insets show an increasing trend with \(A_{LM}\)due to higher profits from energy. AT might need higher \(\Delta FIT>30\%\) to become economically feasible due to poor energy yield.
Figure 13: Variation of price, performance and ppr with standard tracking hours in a day for p/h=3 and \(M_{L}=10\). Price, performance and ppr show a decreasing trend for \(\Delta FIT=0\) for both LV and HV crops. The performance shows an increasing trend in the case of \(\Delta FIT=30\%\) and \(10\%\) for LV and HV crops, respectively. The \(ppr\) becomes economically viable when the price and performance curves intersect each other.
Fig. 14 shows the \(\Delta FIT_{TH}\) (defined as the bare minimum \(\Delta FIT\) that is required for \(ppr\leq 1\)) for the \(LV\) and \(HV\) crops as a function of the standard tracking hours in a day. Since the performance benefit for \(LV\) is significantly low (implying a high \(ppr\)), a greater \(\Delta FIT_{TH}\) is required for it in comparison to the \(HV\) crops. As the \(ST\) hours increase from 0 (which corresponds to \(AT\)) to 12 (which corresponds to standard \(ST\)), the \(\Delta FIT_{TH}\) requirement for both \(LV\) and \(HV\) decreases. This is mainly because the energy yield, and thus the energy profits, become higher with an increase in \(ST\) hours. As discussed in the previous section, however, increasing the \(ST\) hours for improving the economics must be limited by the constraints imposed by the food-energy thresholds for the given \(AV\) system. A higher energy profit might tempt the \(AV\) investor to maximize the \(ST\) hours, which may decrease the crop yield drastically. This can be regulated by policy to curtail the \(ST\) hours in a day to safeguard the food-energy thresholds. The \(ST\) hours in a day should therefore be a crop and threshold dependent parameter, and they should be selected carefully in such a manner that both the energy and crop thresholds are met in the most economically beneficial way.
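The trade-off shown in Fig. 14 can be sketched by sweeping the threshold formula over candidate \(ST\) durations, since the price, performance and energy production all depend on the daily \(ST\) hours. The helper below assumes those per-\(ST\) values are supplied as precomputed lists (the naming is ours, not the paper's code).

```python
def delta_fit_th_sweep(st_grid, p_prime_by_st, pb_prime_by_st, yy_t_by_st,
                       a_m_av, c_m_gmpv, chi_val):
    """Delta-FIT threshold ($/kWh) per candidate daily ST duration (cf. Fig. 14)."""
    thresholds = []
    for p, pb, yyt in zip(p_prime_by_st, pb_prime_by_st, yy_t_by_st):
        gap = max(p - pb, 0.0)                     # 0 means already feasible
        thresholds.append(gap * a_m_av * c_m_gmpv / (yyt * chi_val))
    return dict(zip(st_grid, thresholds))
```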
### _Limitations and future extensions_
While the criteria outlined in equations (9) and (10) provide accurate assessments, our model assumes the economic viability based on the premise that the same entity owns both the energy and food production. This assumption holds true in cases where a farmer is also the solar investor or vice versa, but it may not always be applicable. When the solar investor and the farmer are different entities, the profits generated from energy and crop yields, as well as the associated land costs, must be distributed according to their business arrangement. In such scenarios, government policy interventions become significantly more critical and can exert substantial influence on the technical and economic parameters. We are planning to extend our model to address these diverse scenarios as part of our future research.
## IV Conclusions
In this paper, we have explored customized tracking (\(CT\)) for \(AV\) through a techno-economic model. The \(CT\) multiplexes the standard sun tracking (\(ST\)) with its orthogonal, _i.e._, anti-tracking (\(AT\)), such that \(ST\) covers the noon hours and \(AT\) is done towards the morning/evening. Economic feasibility is modeled using the price and performance benefit framework, where the price corresponds to the module system customizations required for \(AV\) while the performance benefit is the crop income. The model computes the price separately for the soft and hardware components of \(AV\) and incorporates any difference in the energy produced per unit module area relative to the standard \(GMPV\) configuration. Using the model, we explore the effect of the crop's shade sensitivity, the module type and areal density, and economic factors including the crop income and the required feed-in-tariff. We show how the duration of \(ST\) hours in a day can be optimized to meet the threshold requirements for food and energy yield and to maximize the economic benefit. A case study for Khanewal, Pakistan, based on the model is presented with the following key conclusions:
* Combined food-energy yield requirements for the shade loving crops can be supported with the standard sun tracking across the whole day for land to module area ratio of 2 (full array density) or greater (reduced array density).
* Combined food-energy yield requirements for the crops with moderate shade sensitivity are barely met with standard tracking of 6 hours in a day with full array density. At reduced array density (land to module area ratio of 4 or more) standard tracking across the whole day can provide the required food-energy thresholds.
* Combined food-energy yield requirements for the crops with high shade sensitivity cannot be supported with full density arrays except with anti-tracking across the whole day. Half density arrays can barely support the food-energy requirements in some months for these crops with maximum standard tracking of 6 hours in a day. For further reduced density (land to module area ratio of 6) standard tracking across the whole day can support the food-energy thresholds for these crops.
Fig. 14: Variation in \(\Delta FIT_{TH}\) for HV and LV for \(p/h=3\) and \(M_{L}=10\) crops wrt standard tracking hours in a day. The \(\Delta FIT_{TH}\) requirement decreases with increase in \(ST\) hours.
* For high value crops, economic feasibility is not met without _FIT_ incentive even at reduced module array densities. A 10% increase in the reference _FIT_ for _GMPV_ can meet the economic threshold for high value crops with a slight reduction in the standard module array density.
* For low value crops, a \(\sim\)30% incentive in _FIT_ is required to meet the economic threshold with a slight reduction in the standard module array density.
* The requirement of higher _FIT_ increases when the standard tracking hours in a day are reduced. The standard tracking hours are however required to be reduced when shade sensitivity of the crop demands smaller shading ratio to meet the crop yield threshold. The reduced standard tracking hours should nevertheless be compensated by a higher _FIT_ incentive for _AV_'s economic feasibility. In our case study for high value crops, ST of 12 hours and 6 hours in a day requires _FIT_ incentive of 10% and 30%, respectively. The respective _FIT_ incentives for the case of low value crops are 20% and 40%.
In summary, we show that the techno-economic feasibility and design of module tracking for _AV_ can be customized using the presented model. Although the tracking infrastructure requires a high capital cost, it offers great flexibility to address the requirements of food-energy threshold yields. The economic performance is typically higher when standard tracking is done for most of the day due to relatively high energy profits. This should, however, not be an acceptable solution when the crops' sunlight requirement is not met due to high shading. A model-based optimization of the standard tracking hours for the desired crop need is therefore a valuable solution.
## V Appendix
### _Variation of CT across various global locations_
Since the daily and seasonal trajectory of the sun varies with the global coordinates, optimal _CT_ schemes can vary for different global locations. Fig. A1 illustrates this behavior for four different locations (Khanewal, Heggelbach, Arizona and Sydney), where the latter is in the southern hemisphere. For each location, the maximum allowed _ST_ hours are evaluated using the approach described above for the shade tolerant crop. The energy and crop yield thresholds are set to 80% and the land to module area ratio is 2. For the winter months (Nov-Feb) in the northern hemisphere, \(ST\geq 12\) hours is possible for Khanewal, Heggelbach and Arizona, while _ST_ for Sydney is limited to \(\sim\)7 hours since it is summer there, as shown in Fig. A1(a). During the winter months (April-Aug) for the southern hemisphere, \(ST\geq 12\) is possible. For the months of Mar-Oct and Sep-Mar, locations in the northern and southern hemisphere show a slight variation in the maximum allowed daily _ST_ hours, as shown in Fig. A1(a).
Fig. A1(b) shows a comparison of the monthly \(Y_{PV}\) for the four global locations when the _ST_ hours are customized as shown in Fig. A1(a). The energy yield is highest in the month of June for the locations in the northern hemisphere, while for Sydney it is highest in December. These trends highlight that when _CT_ schemes are optimized across different global locations, the resultant food-energy yield may be slightly different across these locations. This insight may be useful when comparing data from _AV_ systems spread across different locations in the world.
### _Crop sensitivities and revenue Inputs:_
In the high-value farm, the crop rotation includes tomato, cauliflower, and garlic throughout the year, while the low-value farm involves wheat and cotton cultivation, as indicated in Table I. The economic details of these crops for 2018 in Pakistan are listed in Table I and are used for the economic case study in this paper. Fig. A2 shows the shade response of these crops as a function of the land to module area ratio, computed based on the model described in our previous study [36]. These crops can be categorized as shade-tolerant crops based on their \(Y_{Crop}\) trend shown in Fig. A2 for the \(N/S\), \(ST\) and \(AT\) module systems.
Table I. Cropping cycle and net profit from Cotton and wheat for Low value farm, and Tomato, Cauliflower and Garlic for High Value Farm for Khanewal.
### _Global variation in module to land soft cost ratio (\(M_{l}\)):_
Fig. A3 shows the global variation in the module hardware to land soft cost ratio. \(M_{L}\) is highest for Japan and lowest for Saudi Arabia, and the average \(M_{L}\) across the globe is around 10. Low values of \(M_{L}\) correspond to higher land costs [38].
### _Effect of Feed-in tariff_
Globally, electricity tariffs are on a downward trend, primarily driven by ongoing advancements in photovoltaic (PV) technology and the decreasing costs associated with it [43]. This phenomenon is also evident in Pakistan, where PV feed-in tariffs have been steadily declining, making solar energy more affordable [44]. In recent years, \(PV\) tariffs in Pakistan have ranged from 5 to 7 cents per kilowatt-hour (kWh). To illustrate the impact of increasing the feed-in tariff (\(FIT\)) for agrivoltaics (\(AV\)) to cover the additional costs associated with \(AV\) systems, we present a table summarizing the \(\Delta FIT\) (in %) required for \(AV\) to achieve economic equivalence with respect to \(GMPV\) (\(HV\) and \(LV\) farms) for Khanewal, for different \(A_{LM}\) and \(M_{L}\) for the \(N/S\), \(AT\) and \(ST\) orientations.
## Acknowledgments
This material is based upon work supported by the Doctoral Fellowship at LUMS. The authors also acknowledge the support of Professor Ashraful Alam at Purdue University, USA for fruitful discussions and guidance throughout this work.
|
2307.02324 | Large deviation principle for the norm of the Laplacian matrix of
inhomogeneous Erdős-Rényi random graphs | We consider an inhomogeneous Erd\H{o}s-R\'enyi random graph $G_N$ with vertex
set $[N] = \{1,\dots,N\}$ for which the pair of vertices $i,j \in [N]$, $i\neq
j$, is connected by an edge with probability $r_N(\tfrac{i}{N},\tfrac{j}{N})$,
independently of other pairs of vertices. Here, $r_N\colon\,[0,1]^2 \to (0,1)$
is a symmetric function that plays the role of a reference graphon. Let
$\lambda_N$ be the maximal eigenvalue of the Laplacian matrix of $G_N$. We show
that if $\lim_{N\to\infty} \|r_N-r\|_\infty = 0$ for some limiting graphon
$r\colon\,[0,1]^2 \to (0,1)$, then $\lambda_N/N$ satisfies a downward LDP with
rate $\binom{N}{2}$ and an upward LDP with rate $N$. We identify the associated
rate functions $\psi_r$ and $\widehat{\psi}_r$, and derive their basic
properties. | Rajat Subhra Hazra, Frank den Hollander, Maarten Markering | 2023-07-05T14:28:20Z | http://arxiv.org/abs/2307.02324v1 | Large deviation principle for the norm of the Laplacian matrix of inhomogeneous Erdos-Renyi random graphs
###### Abstract.
We consider an inhomogeneous Erdos-Renyi random graph \(G_{N}\) with vertex set \([N]=\{1,\ldots,N\}\) for which the pair of vertices \(i,j\in[N]\), \(i\neq j\), is connected by an edge with probability \(r_{N}(\frac{i}{N},\frac{j}{N})\), independently of other pairs of vertices. Here, \(r_{N}\colon[0,1]^{2}\to(0,1)\) is a symmetric function that plays the role of a reference graphon. Let \(\lambda_{N}\) be the maximal eigenvalue of the Laplacian matrix of \(G_{N}\). We show that if \(\lim_{N\to\infty}\|r_{N}-r\|_{\infty}=0\) for some limiting graphon \(r\colon[0,1]^{2}\to(0,1)\), then \(\lambda_{N}/N\) satisfies a downward LDP with rate \(\binom{N}{2}\) and an upward LDP with rate \(N\). We identify the associated rate functions \(\psi_{r}\) and \(\widehat{\psi}_{r}\), and derive their basic properties.
Key words and phrases: Inhomogeneous Erdos-Renyi random graph, Laplacian matrix, largest eigenvalue, graphons, large deviation principle, rate function. 2000 Mathematics Subject Classification: 05C80, 60B20, 60C05, 60F10. The research in this paper was supported through NWO Gravitation Grant NETWORKS 024.002.003.
## 1. Introduction and main results
Section 1.1 provides background. Section 1.2 states the LDP for the empirical graphon associated with inhomogeneous Erdos-Renyi random graphs. Section 1.3 looks at graphon operators, in particular, the Laplacian operator that is the central object in the present paper. Sections 1.4-1.5 state the downward, respectively, upward LDP for the largest eigenvalue of the Laplacian matrix and present basic properties of the associated rate functions. Section 1.6 places the various theorems in their proper context.
### Background
Spectra of matrices associated with a graph play a crucial role in understanding the geometry of the graph. Given a finite graph on \(N\) vertices, two important matrices are the _adjacency matrix_\(A_{N}\) and the _Laplacian matrix_\(L_{N}=D_{N}-A_{N}\), where \(D_{N}\) is the diagonal matrix whose elements are the degrees of the vertices. In this paper we focus on the largest eigenvalue of \(L_{N}\) when the underlying graph is a _dense inhomogeneous_ Erdos-Renyi random graph. The largest eigenvalue of \(A_{N}\) satisfies a large deviation principle (LDP). This fact is an immediate consequence of the LDP for the empirical graphon derived in [16] and [25] in combination with the contraction principle, because the norm of the adjacency graphon operator is bounded and continuous on the space of graphons. The rate function is given in terms of a variational formula involving the rate function of the LDP for the empirical graphon. In [11] we analysed this variational formula, identified the basic properties of the rate function, and identified its scaling behaviour near its unique minimiser and its two boundary points.
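As a purely numerical illustration of the objects just introduced (not part of the analysis in this paper), the following sketch samples \(G_{N}\) with edge probabilities \(r(\tfrac{i}{N},\tfrac{j}{N})\) for a chosen graphon \(r\) and evaluates \(\lambda_{N}/N\) for the Laplacian matrix; the graphon used in the example is an arbitrary placeholder.

```python
import numpy as np

def max_laplacian_eig_over_n(N, r, seed=0):
    """Sample the inhomogeneous Erdos-Renyi graph G_N and return lambda_N / N."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, N + 1) / N
    P = r(x[:, None], x[None, :])                 # reference graphon on the grid
    U = rng.random((N, N))
    A = np.triu((U < P).astype(float), k=1)       # independent edges for i < j
    A = A + A.T                                   # symmetric adjacency matrix A_N
    L = np.diag(A.sum(axis=1)) - A                # Laplacian matrix L_N = D_N - A_N
    return np.linalg.eigvalsh(L).max() / N

# Placeholder graphon r(x, y) = 0.25 + 0.5*x*y
print(max_laplacian_eig_over_n(400, lambda u, v: 0.25 + 0.5 * u * v))
```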
The extension of the LDP to \(L_{N}\) poses new challenges, because \(L_{N}\) is a more delicate object than \(A_{N}\). For one, the upward and the downward large deviations for the largest eigenvalue of \(L_{N}\) live on _different scales_, and the norm of the Laplacian graphon operator lacks certain continuity properties that hold for the norm of the adjacency graphon operator. Like for \(A_{N}\), it is not possible to explicitly solve the variational formulas for the two associated rate functions. Nonetheless, we derive their basic properties and identify their scaling behaviour near their unique minimisers and their boundary points.
#### 1.2.2. LDP for the empirical graphon
Consider a sequence of symmetric functions \(r_{N}\colon[0,1]^{2}\to(0,1)\), \(N\in\mathbb{N}\), that are constant on the blocks \([\frac{i-1}{N},\frac{i}{N})\times[\frac{j-1}{N},\frac{j}{N})\), \(1\leq i,j\leq N\), such that
\[\lim_{N\to\infty}\|r_{N}-r\|_{\infty}=0 \tag{1.4}\]
for some symmetric function \(r\colon[0,1]^{2}\to(0,1)\) that plays the role of a _reference graphon_. Let \(G_{N}\) be the inhomogeneous Erdos-Renyi random graph with vertex set \([N]=\{1,\ldots,N\}\) for which the pair of vertices \(i,j\in[N]\), \(i\neq j\), is connected by an edge with probability \((r_{N})_{ij}\), independently of other pairs of vertices, where \((r_{N})_{ij}\) is the value of \(r_{N}\) on the block \([\frac{i-1}{N},\frac{i}{N})\times[\frac{j-1}{N},\frac{j}{N})\). The function \(r_{N}\) plays the role of a _block reference graphon_ converging to some _reference graphon r_ as \(N\to\infty\). We assume that the reference graphon \(r\in\mathcal{W}\) in (1.4) satisfies
\[\log r,\log(1-r)\in L^{1}([0,1]^{2}), \tag{1.5}\]
and that the analogue of (1.5) holds for \(\log r_{N}\) and \(\log(1-r_{N})\) as well.
Let \(A_{N}\) be the _adjacency matrix_ of \(G_{N}\) defined by
\[A_{N}(i,j)=\left\{\begin{array}{ll}1,&\mbox{if there is an edge between vertex $i$ and vertex $j$,}\\ 0,&\mbox{otherwise.}\end{array}\right. \tag{1.6}\]
Let \(D_{N}\) be the diagonal degree matrix defined by \(D_{N}(i,i)=\sum_{j\in[N]\setminus i}A_{N}(i,j)\) and \(D_{N}(i,j)=0\) for \(i\neq j\), and put \(L_{N}=D_{N}-A_{N}\), which is the _Laplacian matrix_. Write \(\mathbb{P}_{N}\) to denote the law of \(G_{N}\). Use the same symbol for the law on \(\mathcal{W}\) induced by the map that associates with the graph \(G_{N}\) its empirical graphon \(h^{G_{N}}\), defined by
\[h^{G_{N}}(x,y)=\left\{\begin{array}{ll}1,&\mbox{if there is an edge between vertex $\lceil Nx\rceil$ and vertex $\lceil Ny\rceil$,}\\ 0,&\mbox{otherwise.}\end{array}\right. \tag{1.7}\]
Write \(\widetilde{\mathbb{P}}_{N}\) to denote the law of \(\widetilde{h}^{G_{N}}\).
The following LDP, which is an extension of the celebrated LDP for homogeneous ERRG derived in [14], is proved in [16] and [25]. (For more background on large deviation theory, see, for instance, [15].)
**Theorem 1.1**.: **[LDP for inhomogeneous ERRG, [16], [25]]** _Subject to (1.5), the sequence \((\widetilde{\mathbb{P}}_{N})_{N\in\mathbb{N}}\) satisfies the large deviation principle on \((\widetilde{\mathcal{W}},\delta_{\Box})\) with rate \(\binom{N}{2}\), i.e.,_
\[\begin{split}&\limsup_{N\to\infty}\binom{N}{2}^{-1}\log \widetilde{\mathbb{P}}_{N}(\mathcal{C})\leq-\inf_{\widetilde{h}\in\mathcal{C}} J_{r}(\widetilde{h})\quad\forall\,\mathcal{C}\subset\widetilde{\mathcal{W}}\mbox{ closed},\\ &\liminf_{N\to\infty}\binom{N}{2}^{-1}\log\widetilde{\mathbb{P}}_ {N}(\mathcal{O})\geq-\inf_{\widetilde{h}\in\mathcal{O}}J_{r}(\widetilde{h}) \quad\forall\,\mathcal{O}\subset\widetilde{\mathcal{W}}\mbox{ open},\end{split} \tag{1.8}\]
_where the rate function \(J_{r}\colon\widetilde{\mathcal{W}}\to\mathbb{R}\) is given by_
\[J_{r}(\widetilde{h})=\inf_{\phi\in\mathcal{M}}I_{r}(h^{\phi}), \tag{1.9}\]
_where \(h\) is any representative of \(\widetilde{h}\) and_
\[I_{r}(h)=\int_{[0,1]^{2}}\mathrm{d}x\,\mathrm{d}y\ \mathcal{R}\big{(}h(x,y)\mid r(x,y) \big{)},\quad h\in\mathcal{W}, \tag{1.10}\]
_with_
\[\mathcal{R}\big{(}a\mid b\big{)}=a\log\tfrac{a}{b}+(1-a)\log\tfrac{1-a}{1-b} \tag{1.11}\]
_the relative entropy of two Bernoulli distributions with success probabilities \(a\in[0,1]\), \(b\in(0,1)\) (with the convention \(0\log 0=0\))._
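For block graphons the double integral in (1.10) reduces to an average over the blocks, so \(I_{r}\) is easy to evaluate numerically. A minimal sketch (in Python with numpy; the block values are illustrative):

```python
import numpy as np

def rel_entropy(a, b):
    """R(a | b) of (1.11), with the convention 0 log 0 = 0."""
    a, b = np.broadcast_arrays(np.asarray(a, float), np.asarray(b, float))
    out = np.zeros(a.shape)
    m = a > 0
    out[m] += a[m] * np.log(a[m] / b[m])
    m = a < 1
    out[m] += (1 - a[m]) * np.log((1 - a[m]) / (1 - b[m]))
    return out

def I_r(h_blocks, r_blocks):
    """I_r(h) of (1.10) for k x k block graphons (constant on blocks of size 1/k)."""
    return float(rel_entropy(h_blocks, r_blocks).mean())

r_blocks = np.full((4, 4), 0.3)
print(I_r(r_blocks, r_blocks),            # 0: no deviation from the reference graphon
      I_r(np.zeros((4, 4)), r_blocks),    # cost of the empty graph: log(1/0.7)
      np.log(1 / 0.7))
```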
**Remark 1.2**.: Theorem 1.1 was proved in [16] under the assumption that \(r\) is bounded away from \(0\) and \(1\). In [25] this assumption was relaxed to (1.5), and it was also shown that \(J_{r}\) is a good rate function, i.e., \(J_{r}\not\equiv\infty\) and \(J_{r}\) has compact level sets. Note that (1.9) differs from the expression in [16], where the rate function is the lower semi-continuous envelope of \(I_{r}(h)\). However, as shown in [25], under (1.5) the two rate functions are equivalent, since \(J_{r}(\widetilde{h})\) is lower semi-continuous on \(\widetilde{\mathcal{W}}\). \(\spadesuit\)
As an application of Theorem 1.1, it was shown in [11] that the largest eigenvalue of the adjacency matrix satisfies the LDP with rate \(\binom{N}{2}\). The rate function was analysed in detail for reference graphons that are rank-\(1\). In the present paper we focus on the LDP for the largest eigenvalue of the Laplacian matrix \(L_{N}\).
### Graphon operators
For \(h\in\mathcal{W}\), the graphon operator \(\mathcal{T}_{h}\) is the integral operator on \(L^{2}([0,1])\) defined by
\[(\mathcal{T}_{h}u)(x)=\int_{[0,1]}\mathrm{d}y\,h(x,y)u(y),\qquad x\in[0,1]. \tag{1.12}\]
Note that \(\mathcal{T}_{h}\) is a compact operator. Define the _degree function_ as
\[d_{h}(x)=\int_{[0,1]}\mathrm{d}y\,h(x,y),\qquad x\in[0,1]. \tag{1.13}\]
The _degree operator_\(\mathcal{D}_{h}\) is the multiplication operator on \(L^{2}([0,1])\) defined by
\[(\mathcal{D}_{h}u)(x)=d_{h}(x)u(x),\qquad x\in[0,1]. \tag{1.14}\]
The _Laplacian operator_\(\mathcal{L}_{h}\) is the linear integral operator on \(L^{2}([0,1])\) defined by
\[(\mathcal{L}_{h}u)(x)=\int_{[0,1]}\mathrm{d}y\,h(x,y)[u(x)-u(y)],\qquad x\in[ 0,1]. \tag{1.15}\]
Note that
\[\mathcal{L}_{h}=\mathcal{D}_{h}-\mathcal{T}_{h}. \tag{1.16}\]
Recall that, given an operator \(S\) on a Hilbert space, the _spectrum_ of \(S\) is defined as
\[\sigma(S)=\{\lambda\in\mathbb{C}\colon\,S-\lambda I\text{ is not invertible}\}. \tag{1.17}\]
Let \(\sigma_{d}(S)\) denote the _discrete spectrum_ of \(S\), which consists of all the isolated eigenvalues with finite algebraic multiplicity. The _essential spectrum_ of \(S\) is denoted by
\[\sigma_{\mathrm{ess}}(S)=\sigma(S)\setminus\sigma_{d}(S). \tag{1.18}\]
The essential spectrum is closed, and the discrete spectrum can only have accumulation points on the boundary of the essential spectrum. Since \(\mathcal{T}_{h}\) is compact and compact perturbations do not change the essential spectrum, we have
\[\sigma_{\mathrm{ess}}(\mathcal{L}_{h})=\sigma_{\mathrm{ess}}(\mathcal{D}_{h}). \tag{1.19}\]
The operator \(\mathcal{L}_{h}\) is not as well-behaved as \(\mathcal{T}_{h}\) with respect to the cut norm. In fact, even when a sequence of graphons \((h_{n})_{n\in\mathbb{N}}\) converges in cut-norm to a graphon \(h\), the eigenvalues and eigenvectors of \(\mathcal{L}_{h_{n}}\) may not converge to those of \(\mathcal{L}_{h}\), as was already observed in [17], [27]. Assuming as in [11] that the reference graphon is rank-\(1\) does not help. In fact, if we assume that \(r\) is rank-\(1\) and continuous, then \(0\) is the only eigenvalue of \(\mathcal{L}_{r}\) and \(\sigma_{\mathrm{ess}}(\mathcal{L}_{r})=d_{r}([0,1])\), as shown in [17, Proposition 5.11].
If \(h\) is the empirical graphon of a graph \(G\) with \(N\) vertices, then \(N\|\mathcal{T}_{h}\|\) equals the largest eigenvalue of the adjacency matrix of \(G\), and \(N\|\mathcal{D}_{h}\|\) equals the maximum degree of \(G\). In fact,
for any graphon \(h\) the spectrum of \(\mathcal{D}_{h}\) equals the range of \(d_{h}\) and the operator norm of \(\mathcal{D}_{h}\) equals the supremum norm of \(d_{h}\), i.e.,
\[\|\mathcal{D}_{h}\|=\|d_{h}\|_{\infty}, \tag{1.20}\]
where \(\|d_{h}\|_{\infty}\) is the \(L^{\infty}\)-norm of the function \(d_{h}\). This follows from the fact that \(\mathcal{D}_{h}\) is a multiplication operator. Let
\[\|\mathcal{L}_{h}\|=\sup_{\begin{subarray}{c}u\in L^{2}([0,1])\\ \|u\|_{2}=1\end{subarray}}\|\mathcal{L}_{h}u\|_{2} \tag{1.21}\]
be the operator norm of \(\mathcal{L}_{h}\), where \(\|\cdot\|_{2}\) denotes the \(L^{2}\)-norm. Since \(\mathcal{L}_{h}\) is a normal operator with a non-negative spectrum, \(\|\mathcal{L}_{h}\|\) also equals the supremum of the spectrum of \(\mathcal{L}_{h}\).
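For a concrete graphon, \(\|\mathcal{L}_{h}\|\) can be approximated by restricting \(\mathcal{L}_{h}\) to functions that are constant on blocks of size \(1/n\), which turns (1.15) into an \(n\times n\) symmetric matrix. A minimal numerical sketch (in Python with numpy; the rank-\(1\) continuous graphon is illustrative, and in view of [17, Proposition 5.11] its Laplacian spectrum is \(\{0\}\cup d_{r}([0,1])\), so the approximations should approach \(\|d_{r}\|_{\infty}=\tfrac{3}{8}\)):

```python
import numpy as np

r = lambda x, y: (1 + x) * (1 + y) / 8   # rank-1, continuous, values in [1/8, 1/2]

def laplacian_block_matrix(r, n):
    """Restriction of the Laplacian operator L_r to functions constant on blocks of
    size 1/n: (L_r u)_i ~ u_i * mean_j r(x_i, x_j) - (1/n) * sum_j r(x_i, x_j) u_j."""
    x = (np.arange(n) + 0.5) / n         # block midpoints
    R = r(x[:, None], x[None, :])
    return np.diag(R.mean(axis=1)) - R / n

for n in (50, 200, 800):
    print(n, np.linalg.eigvalsh(laplacian_block_matrix(r, n))[-1])
# here d_r(x) = 3(1+x)/16, so the values approach ||d_r||_inf = 3/8
```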
**Proposition 1.3**.: **[Properties of the Laplacian operator]**
_(i) Let \(h\) be a graphon and \(\mathcal{L}_{h}\) be the Laplacian operator on \(L^{2}([0,1])\). Then \(\mathcal{L}_{h}\) is a bounded operator, and \(h\mapsto\|\mathcal{L}_{h}\|\) is lower semi-continuous in the cut-metric._
_(ii) Let_ \(G\) _be a graph with_ \(N\) _vertices and_ \(h^{G}\) _be the empirical graphon associated with_ \(G\) _and let_ \(L_{N}\) _be the Laplacian matrix with spectral norm_ \(\|L_{N}\|\)_. Then it follows that_
\[\frac{\|L_{N}\|}{N}=\|\mathcal{L}_{h^{G}}\|\qquad\forall\,N. \tag{1.22}\]
**Remark 1.4**.: Note that \(h\mapsto\|\mathcal{L}_{h}\|\) is not continuous in the cut-metric. For example, consider the sequence of graphons \((h_{N})_{N\in\mathbb{N}}\) such that \(h_{N}\) is the empirical graphon of the \(N\)-star graph (i.e., \(1\) vertex connected by an edge to each of the \(N-1\) other vertices, and no further edges). Then \(h_{N}\downarrow 0\) as \(N\to\infty\) in the cut-metric, but \(\|\mathcal{L}_{h_{N}}\|=1\) for all \(N\in\mathbb{N}\). \(\spadesuit\)
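The example in Remark 1.4 is easy to check numerically via Proposition 1.3(ii): a short sketch (in Python with numpy) computing \(\|L_{N}\|/N\) for the \(N\)-star graph together with an upper bound on the cut norm of \(h_{N}\).

```python
import numpy as np

for N in (10, 100, 1000):
    A = np.zeros((N, N))
    A[0, 1:] = A[1:, 0] = 1.0            # N-star: one vertex joined to the N-1 others
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L)[-1]      # ||L_N||, equal to N * ||L_{h_N}|| by (1.22)
    l1_norm = A.sum() / N**2             # L^1-norm of h_N, an upper bound on its cut norm
    print(N, lam / N, l1_norm)
# lam / N stays equal to 1 while the cut norm of h_N tends to 0
```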
Proposition 1.3 is proven in Section 2.2.
### Main theorems: downward large deviations
Let
\[\lambda_{N}=\|L_{N}\|=\sup_{\begin{subarray}{c}u\in\mathbb{R}^{N}\\ \|u\|_{2}=1\end{subarray}}\|L_{N}u\|_{2} \tag{1.23}\]
be the maximal eigenvalue of \(L_{N}\), where \(\|\cdot\|_{2}\) denotes the Euclidean norm. Abbreviate
\[C_{r}=\|\mathcal{L}_{r}\|. \tag{1.24}\]
Our goal is to show that \(\lambda_{N}/N\) satisfies a downward LDP as \(N\to\infty\), with rate \(\binom{N}{2}\) and with a rate function that can be analysed in detail. We write \(\mathbb{P}_{N}^{*}\) to denote the law of \(\lambda_{N}\).
**Theorem 1.5**.: **[Downward LDP]** _Subject to (1.5),_
\[\lim_{N\to\infty}\binom{N}{2}^{-1}\log\mathbb{P}_{N}^{*}(\lambda_{N}/N\leq \beta)=-\psi_{r}(\beta),\qquad\beta\in[0,C_{r}], \tag{1.25}\]
_with_
\[\psi_{r}(\beta)=\inf_{\begin{subarray}{c}\widetilde{h}\in\widetilde{\mathcal{W}}\\ \|\mathcal{L}_{\widetilde{h}}\|\leq\beta\end{subarray}}J_{r}(\widetilde{h})=\inf_{\begin{subarray}{c}h\in\mathcal{W}\\ \|\mathcal{L}_{h}\|\leq\beta\end{subarray}}I_{r}(h). \tag{1.26}\]
The second equality in (1.26) uses that \(\|\mathcal{L}_{h}\|=\|\mathcal{L}_{h^{\phi}}\|\) for any \(h\in\mathcal{W}\) and \(\phi\in\mathcal{M}\), as is evident after replacing \(u\) in (1.21) by \(u^{\phi^{-1}}\) given by \(u^{\phi^{-1}}(x)=u(\phi^{-1}(x))\). Since the maximal eigenvalue is invariant under relabelling of the vertices, we need not worry about the equivalence classes.
Let
\[C_{r}^{0}=\int_{[0,1]^{2}}\mathrm{d}x\,\mathrm{d}y\,\log\tfrac{1}{1-r(x,y)}. \tag{1.27}\]
When \(\beta=C_{r}\), the optimal graphon is the reference graphon \(r\) almost everywhere, for which \(I_{r}(r)=0\), and no large deviation occurs. When \(\beta=0\), the optimal graphon is the zero graphon \(\underline{0}\equiv 0\), for which \(I_{r}(\underline{0})=C_{r}^{0}\) (see Fig. 1).
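For instance, for the constant reference graphon \(r\equiv p\) with \(p\in(0,1)\), both endpoints are explicit:
\[(\mathcal{L}_{r}u)(x)=p\Big(u(x)-\int_{[0,1]}\mathrm{d}y\,u(y)\Big),\]
so \(\mathcal{L}_{r}\) is \(p\) times the orthogonal projection onto the mean-zero functions, \(C_{r}=\|\mathcal{L}_{r}\|=p\) and \(C_{r}^{0}=\log\frac{1}{1-p}\), and \(\psi_{r}\) decreases from \(\log\frac{1}{1-p}\) at \(\beta=0\) to \(0\) at \(\beta=p\).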
**Theorem 1.6**.: **[Properties of the rate function]** _Subject to (1.5):_
_(i)_ \(\psi_{r}\) _is continuous and strictly decreasing on_ \([0,C_{r}]\)_, with_ \(\psi_{r}(0)=C_{r}^{0}>0\) _and_ \(\psi_{r}(C_{r})=0\)_._
_(ii) For every_ \(\beta\in[0,C_{r}]\)_, the set of minimisers of the variational formula for_ \(\psi_{r}(\beta)\) _in (_1.26_) is non-empty and compact in_ \(\widetilde{\mathcal{W}}\)_._
Under the _additional assumptions_ that
\[\lambda_{\max}(\mathcal{L}_{r})<\|d_{r}\|_{\infty}, \tag{1.28}\]
\[r\text{ is bounded away from }0\text{ and }1, \tag{1.29}\]
\[r\text{ is continuous}, \tag{1.30}\]
we are able to compute the behaviour of \(\psi_{r}\) around \(C_{r}\). Here \(\lambda_{\max}(\mathcal{L}_{r})\) denotes the largest eigenvalue of \(\mathcal{L}_{r}\). Note that (1.29) is stronger than (1.5). Recall that \(d_{r}(x)=\int_{[0,1]}\mathrm{d}y\,r(x,y)\), \(x\in[0,1]\). It is easy to check that, subject to (1.28),
\[C_{r}=\|d_{r}\|_{\infty}. \tag{1.31}\]
**Theorem 1.7**.: **[Scaling of the rate function]** _Subject to (1.28)-(1.29),_
\[\psi_{r}(\beta)\asymp\int_{S_{r}(\beta)}\mathrm{d}x\,\frac{1}{v_{r}(x)}\,(d_{r }(x)-\beta)^{2},\qquad\beta\uparrow C_{r}, \tag{1.32}\]
_where_
\[S_{r}(\beta)=\{x\in[0,1]\colon\,d_{r}(x)\geq\beta\} \tag{1.33}\]
_and_
\[v_{r}(x)=\int_{[0,1]}\mathrm{d}y\,r(x,y)[1-r(x,y)]. \tag{1.34}\]
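As an illustration of the quantities in (1.32)-(1.34), take the rank-\(1\) continuous reference graphon \(r(x,y)=\tfrac{1}{8}(1+x)(1+y)\), which takes values in \([\tfrac{1}{8},\tfrac{1}{2}]\), so that (1.29)-(1.30) hold, while (1.28) holds because \(0\) is the only eigenvalue of \(\mathcal{L}_{r}\) [17, Proposition 5.11]. Then
\[d_{r}(x)=\tfrac{3}{16}(1+x),\qquad C_{r}=\|d_{r}\|_{\infty}=\tfrac{3}{8},\qquad S_{r}(\beta)=[\tfrac{16\beta}{3}-1,1],\qquad v_{r}(x)=\tfrac{(1+x)(29-7x)}{192},\]
for \(\beta\in[\tfrac{3}{16},\tfrac{3}{8}]\). Since \(d_{r}\) is linear and attains its maximum only at \(x=1\), the integral in (1.32) is of order \((C_{r}-\beta)^{3}\) as \(\beta\uparrow C_{r}\), so the scaling of \(\psi_{r}\) near \(C_{r}\) is cubic rather than quadratic.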
The proofs of Theorems 1.5-1.7 are given in Sections 3.1-3.3.
### Main theorems: upward large deviations
Put
\[J_{r}(x,\beta)=\int_{[0,1]}\mathrm{d}y\,\mathcal{R}\big{(}\widehat{r}_{\beta}( x,y)\mid r(x,y)\big{)},\qquad x\in[0,1], \tag{1.35}\]
where
\[\widehat{r}_{\beta}(x,y)=\frac{\mathrm{e}^{\theta(x,\beta)}r(x,y)}{\mathrm{e}^{ \theta(x,\beta)}r(x,y)+[1-r(x,y)]} \tag{1.36}\]
is a _Cramer-type transform_ of the reference graphon \(r\), with a Lagrange multiplier function \(\theta(x,\beta)\), \(x\in[0,1]\), chosen such that \(\int_{[0,1]}\mathrm{d}y\,\widehat{r}_{\beta}(x,y)=\beta\), \(x\in[0,1]\). Note that \(\widehat{r}_{\beta}\) does not need to be symmetric and therefore is not necessarily a graphon. Under the _additional assumption_ that
\[r\text{ is non-negative definite (i.e., }\mathcal{T}_{r}\text{ is a non-negative definite operator)} \tag{1.37}\]
we are able to derive an upward LDP and identify the associated rate function in terms of \(J_{r}\).
**Theorem 1.8**.: **[Upward LDP]** _Subject to (1.29) and (1.37),_
\[\lim_{N\to\infty}N^{-1}\log\mathbb{P}_{N}^{*}(\lambda_{N}/N\geq\beta)=- \widehat{\psi}_{r}(\beta),\qquad\beta\in[C_{r},1], \tag{1.38}\]
_with_
\[\widehat{\psi}_{r}(\beta)=\inf_{x\in[0,1]}J_{r}(x,\beta). \tag{1.39}\]
Define
\[C_{r}^{1}=\widehat{\psi}_{r}(1)=\inf_{x\in[0,1]}\int_{[0,1]}\mathrm{d}y\,\log \frac{1}{r(x,y)}. \tag{1.40}\]
When \(\beta=C_{r}\), the Lagrange multiplier in (1.36) is \(\theta(x,C_{r})\equiv 0\), for which \(J_{r}(x,C_{r})\equiv 0\), and no large deviation occurs. When \(\beta=1\), the Lagrange multiplier is \(\theta(x,1)\equiv\infty\), for which \(\widehat{\psi}_{r}(1)=C_{r}^{1}\) (see Fig. 2).
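Except in special cases, \(\theta(x,\beta)\) is not available in closed form, but since \(\theta\mapsto\int_{[0,1]}\mathrm{d}y\,\frac{\mathrm{e}^{\theta}r(x,y)}{\mathrm{e}^{\theta}r(x,y)+[1-r(x,y)]}\) is strictly increasing, it can be computed by a one-dimensional root search. A minimal numerical sketch (in Python, assuming numpy and scipy; the reference graphon is illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

r = lambda x, y: (1 + x) * (1 + y) / 8   # illustrative reference graphon

def tilted(theta, x, y):
    """Cramer-type tilt (1.36) of r."""
    e = np.exp(theta)
    return e * r(x, y) / (e * r(x, y) + (1 - r(x, y)))

def theta_of(x, beta):
    """Lagrange multiplier theta(x, beta): int_0^1 tilted(theta, x, y) dy = beta."""
    return brentq(lambda th: quad(lambda y: tilted(th, x, y), 0, 1)[0] - beta, -50, 50)

def J(x, beta):
    """J_r(x, beta) of (1.35)."""
    th = theta_of(x, beta)
    def integrand(y):
        a, b = tilted(th, x, y), r(x, y)
        return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))
    return quad(integrand, 0, 1)[0]

x = 1.0
d_r_x = quad(lambda y: r(x, y), 0, 1)[0]  # d_r(1) = 3/8
print(J(x, d_r_x), J(x, 0.5))             # ~0 at beta = d_r(x), positive for beta > d_r(x)
```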
**Theorem 1.9**.: **[Properties of the rate function]** _Subject to (1.29), \(\widehat{\psi}_{r}\) is continuous and strictly increasing on \([C_{r},1]\), with \(\widehat{\psi}_{r}(C_{r})=0\) and \(\widehat{\psi}_{r}(1)=C_{r}^{1}>0\)._
**Theorem 1.10**.: **[Scaling of the rate function]** _Subject to (1.29)-(1.30) and (1.37),_
\[\widehat{\psi}_{r}(\beta)\sim\widehat{K}_{r}(\beta-C_{r})^{2},\qquad\beta \downarrow C_{r}, \tag{1.41}\]
_with_
\[\widehat{K}_{r}=\frac{1}{2}\inf_{x\in\mathcal{D}_{r}}\frac{1}{v_{r}(x)}, \tag{1.42}\]
_where \(\mathcal{D}_{r}=\{x\in[0,1]\colon\,d_{r}(x)=\|d_{r}\|_{\infty}\}\)._
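For instance, for the constant reference graphon \(r\equiv p\) with \(p\in(0,1)\), assumptions (1.29)-(1.30) and (1.37) hold, \(C_{r}=p\), and the tilted kernel in (1.36) is the constant \(\widehat{r}_{\beta}\equiv\beta\), so that
\[J_{r}(x,\beta)=\mathcal{R}(\beta\mid p)\quad\forall\,x\in[0,1],\qquad\widehat{\psi}_{r}(\beta)=\mathcal{R}(\beta\mid p),\qquad C_{r}^{1}=\log\tfrac{1}{p},\qquad\widehat{K}_{r}=\frac{1}{2p(1-p)},\]
where the last identity uses \(\mathcal{D}_{r}=[0,1]\) and \(v_{r}\equiv p(1-p)\). This is consistent with (1.41), since \(\mathcal{R}(\beta\mid p)=\frac{(\beta-p)^{2}}{2p(1-p)}+O((\beta-p)^{3})\) as \(\beta\downarrow p\).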
The proofs of Theorems 1.8-1.10 are given in Sections 4.1-4.3.
### Discussion
**1.** Theorems 1.5-1.7 establish the downward LDP and identify basic properties of the rate function \(\psi_{r}\). Note that \(\lim_{\beta\uparrow C_{r}}S_{r}(\beta)=\mathcal{D}_{r}\). Hence, if \(|\mathcal{D}_{r}|=0\), then the scaling of \(\psi_{r}\) near \(C_{r}\) is _faster than quadratic_, which suggests that \(\lambda_{N}/N\) does _not_ satisfy a standard central limit theorem. On the other hand, if \(|\mathcal{D}_{r}|>0\), then the scaling is quadratic and a standard central limit theorem is expected to hold. Both questions are open. Several scenarios for the precise scaling are possible depending on how \(d_{r}\) scales near its maxima.
**2.** Assumption (1.5) is basic because it underlies Theorem 1.1, which is the starting point for our downward LDP in Theorem 1.5 and the general properties of the downward rate function in Theorem 1.6. Assumption (1.28) is needed for the upper bound in the scaling of the downward rate function in Theorem 1.7. It is the most severe assumption in the present paper, although it is still satisfied for a large class of graphons, for instance, those \(r\) that are rank-\(1\) and continuous (see [17, Proposition 5.11]). Assumption (1.28) guarantees that, for all graphons \(h\) close enough to \(r\) in \(L^{2}\)-norm, \(\|\mathcal{L}_{h}\|=\|d_{h}\|_{\infty}\). This allows us to reduce the analysis of the rate function for the LDP of the Laplacian norm to the analysis of the rate function for the maximum degree, which is considerably easier. Assumption (1.29) implies that \(v_{r}(x)\geq\delta d_{r}(x)\) for all \(x\in[0,1]\) and some \(\delta>0\) (via (1.34)), and ensures that the integral in (1.32) is well-defined.
**3.** Theorems 1.8-1.10 establish the upward LDP and identify basic properties of the rate function \(\widehat{\psi}_{r}\). The decay of \(\widehat{\psi}_{r}\) towards zero is _quadratic_, which suggests that \(\lambda_{N}/N\) satisfies a standard central limit theorem. The interpretation of (1.42) is that the curvature of \(\widehat{\psi}_{r}\) at its unique zero \(C_{r}\) is the _inverse_ of the variance of the associated central limit theorem, in line with standard folklore of large deviation theory (see [7]). Since \(v_{r}(x)\) can be viewed as the variance of the empirical distribution of the degrees of the vertices with label \(\approx xN\) and the large deviations are controlled by \(x\in\mathcal{D}_{r}\), the relation in (1.42) is intuitively plausible.
**4.** Assumption (1.37) is very similar to Assumption (1.28), since it implies \(\lambda_{\max}(\mathcal{L}_{r})\leq\|d_{r}\|_{\infty}\). Again, it is needed to make sure that for our upward LDP in Theorem 1.8 _the largest degree is dominant_. In fact, we will see that \(J_{r}(x,\beta)\) in (1.35) is the upward rate function for the degrees of the vertices with label \(\approx xN\). Assumption (1.29) is needed to ensure that the general properties of the upward rate function in Theorem 1.9 hold, while Assumptions (1.29)-(1.30) are needed to get the sharp scaling of the upward rate function in Theorem 1.10 as stated in (1.41)-(1.42).
**5.** Whereas the scaling constant in Theorem 1.10 is sharp, it is not sharp in Theorem 1.7. The proof of Theorem 1.7 shows that it lies between \(1\) and \(2\).
**6.** The fine properties of \(\psi_{r}\) and \(\widehat{\psi}_{r}\) remain elusive. Neither convexity nor analyticity are obvious.
**7.** In the proofs we derive an LDP for the maximum degree of \(G_{N}\) and analyse its rate function. The results we obtain for the maximum degree are analogous to Theorems 1.5-1.10 and are of independent interest. See Remark 3.4.
## 2. Proof of Proposition 1.3
Section 2.1 shows that the norm of the degree operator is lower semi-continuous with respect to the cut-metric. Section 2.2 provides the proof of Proposition 1.3.
### Lower semi-continuity
**Lemma 2.1**.: _The map \(h\mapsto\|\mathcal{D}_{h}\|\) is lower semi-continuous in the cut-metric._
Proof.: Let \((\widetilde{f}_{n})_{n\in\mathbb{N}}\subset\widetilde{\mathcal{W}}\) be a sequence converging to some \(\widetilde{f}\in\widetilde{\mathcal{W}}\) in the cut-metric \(\delta_{\square}\). Without loss of generality we may assume that it converges in the cut-distance \(d_{\square}\) as well, i.e., \(\lim_{n\to\infty}d_{\square}(f_{n},f)=0\). Assume that
\[\liminf_{n\to\infty}\|\mathcal{D}_{f_{n}}\|<\|\mathcal{D}_{f}\|. \tag{2.1}\]
Then there exists a \(\delta>0\) (independent of \(n\)) such that \(\|\mathcal{D}_{f}\|>\|\mathcal{D}_{f_{n}}\|+\delta\) for infinitely many \(n\in\mathbb{N}\). By (1.20), there exists a set \(A\subset[0,1]\) of positive measure such that
\[d_{f}(x)>\|\mathcal{D}_{f}\|-\tfrac{\delta}{2}>\|\mathcal{D}_{f_{n}}\|+\tfrac {\delta}{2}\geq d_{f_{n}}(x)+\tfrac{\delta}{2}\qquad\forall\,x\in A. \tag{2.2}\]
Hence, by (1.13),
\[\int_{A\times[0,1]}\mathrm{d}x\,\mathrm{d}y\,f(x,y)>\int_{A\times[0,1]}\mathrm{ d}x\,\mathrm{d}y\,f_{n}(x,y)+\tfrac{\delta}{2}\lambda(A) \tag{2.3}\]
with \(\lambda\) the Lebesgue measure. By the definition of the cut-distance in (1.2),
\[d_{\square}(f_{n},f)\geq\int_{A\times[0,1]}\mathrm{d}x\,\mathrm{d}y\left[f(x,y )-f_{n}(x,y)\right]>\tfrac{\delta}{2}\lambda(A). \tag{2.4}\]
Since \(\lambda(A)\) and \(\delta\) are independent of \(n\), this inequality is a contradiction with the fact that \(\lim_{n\to\infty}d_{\square}(f_{n},f)=0\). Hence (2.1) is impossible.
### Proof of Proposition 1.3
Proof.: (i) The first part of the statement follows from the fact that \(\mathcal{L}_{h}\) is the sum of two bounded operators. For the second part of the statement, we need that \(h\mapsto\|\mathcal{T}_{h}\|\) is continuous and that \(h\mapsto\|\mathcal{D}_{h}\|\) is lower semi-continuous. The former was shown in [24, Lemma 3.6], while the latter was shown in Lemma 2.1. Let \((f_{n})_{n\in\mathbb{N}}\) be a sequence of graphons converging to a graphon \(f\) in the cut-distance \(d_{\square}\); we must show that \(\liminf_{n\to\infty}\|\mathcal{L}_{f_{n}}\|\geq\|\mathcal{L}_{f}\|\).
Let \(g\in L^{2}([0,1])\). Then, by (1.16),
\[\begin{split}&\liminf_{n\to\infty}\left[\|\mathcal{L}_{f_{n}}(g) \|_{2}^{2}-\|\mathcal{L}_{f}(g)\|_{2}^{2}\right]\\ &\geq\liminf_{n\to\infty}\left[\|\mathcal{D}_{f_{n}}(g)\|_{2}^{2 }-\|\mathcal{D}_{f}(g)\|_{2}^{2}\right]+\liminf_{n\to\infty}\left[\|T_{f_{n}} (g)\|_{2}^{2}-\|T_{f}(g)\|_{2}^{2}\right]\\ &\qquad-2\limsup_{n\to\infty}\left[\langle\mathcal{D}_{f_{n}}(g), T_{f_{n}}(g)\rangle-\langle\mathcal{D}_{f}(g),T_{f}(g)\rangle\right]\\ &\geq-2\limsup_{n\to\infty}\left[\langle\mathcal{D}_{f_{n}}(g), T_{f_{n}}(g)\rangle-\langle\mathcal{D}_{f}(g),T_{f}(g)\rangle\right].\end{split} \tag{2.5}\]
It remains to show that the last expression equals \(0\). Since the simple functions are dense in \(L^{2}([0,1])\), we may assume without loss of generality that \(g\) is simple, i.e., \(g=\sum_{i=1}^{k}\alpha_{i}\mathbf{1}_{A_{i}}\) with \(A_{1},\ldots,A_{k}\subset[0,1]\) measurable.
Estimate
\[\begin{split}&\left|\langle\mathcal{D}_{f_{n}}(g),T_{f_{n}}(g) \rangle-\langle\mathcal{D}_{f}(g),T_{f}(g)\rangle\right|\\ &=\left|\int_{[0,1]^{3}}\,\mathrm{d}y\,\mathrm{d}y^{\prime}\, \mathrm{d}x\,f_{n}(x,y)f_{n}(x,y^{\prime})g(x)g(y)-\int_{[0,1]^{3}}\,\mathrm{d }y\,\mathrm{d}y^{\prime}\,\mathrm{d}x\,f(x,y)f(x,y^{\prime})g(x)g(y)\right|\\ &=\left|\int_{[0,1]^{3}}\,\mathrm{d}y\,\mathrm{d}y^{\prime}\, \mathrm{d}x\left[f_{n}(x,y)f_{n}(x,y^{\prime})-f(x,y)f(x,y^{\prime})\right]g(x )g(y)\right|\\ &=\left|\sum_{1\leq i,j\leq k}\alpha_{i}\alpha_{j}\int_{[0,1]^{3} }\,\mathrm{d}y\,\mathrm{d}y^{\prime}\,\mathrm{d}x\left[f_{n}(x,y)f_{n}(x,y^{ \prime})-f(x,y)f(x,y^{\prime})\right]\mathbf{1}_{A_{i}}(x)\mathbf{1}_{A_{j}}(y )\right|\\ &\leq\sum_{1\leq i,j\leq k}\left|\alpha_{i}\alpha_{j}\right| \left(\left|\int_{[0,1]^{3}}\,\mathrm{d}y\,\mathrm{d}x\,\mathrm{d}y^{\prime} \,f_{n}(x,y^{\prime})[f_{n}(x,y)-f(x,y)]\,\mathbf{1}_{A_{i}}(x)\mathbf{1}_{A_{ j}}(y)\right|\\ &\qquad\qquad\qquad\qquad+\left.\left|\int_{[0,1]^{3}}\,\mathrm{d }y^{\prime}\,\mathrm{d}x\,\mathrm{d}y\,f(x,y)[f_{n}(x,y^{\prime})-f(x,y^{ \prime})]\,\mathbf{1}_{A_{i}}(x)\mathbf{1}_{A_{j}}(y)\right|\right).\end{split} \tag{2.6}\]
Note that, for every \(y^{\prime}\in[0,1]\), \(0\leq f_{n}(x,y^{\prime})\mathbf{1}_{A_{i}}(x)\mathbf{1}_{A_{j}}(y)\leq 1\) and, for every \(y\in[0,1]\), \(0\leq f(x,y)\mathbf{1}_{A_{i}}(x)\mathbf{1}_{A_{j}}(y)\leq 1\). Hence, by the definition of the cut-distance, the above expression is bounded from above by
\[\sum_{1\leq i,j\leq k}\left|\alpha_{i}\alpha_{j}\right|2d_{\square}(f_{n},f), \tag{2.7}\]
which tends to zero as \(n\to\infty\).
We have thus shown that \(\liminf_{n\to\infty}\left\|\mathcal{L}_{f_{n}}(g)\right\|_{2}\geq\left\|\mathcal{L}_{f}(g)\right\|_{2}\) for all simple functions \(g\in L^{2}([0,1])\). Now let \(\varepsilon>0\). Then there exists a simple \(g^{\prime}\in L^{2}([0,1])\) such that \(\left\|\mathcal{L}_{f}\right\|<\left\|\mathcal{L}_{f}(g^{\prime})\right\|_{2}+\varepsilon\). By the previous result,
\[\liminf_{n\to\infty}\left\|\mathcal{L}_{f_{n}}\right\|\geq\liminf_{n\to \infty}\left\|\mathcal{L}_{f_{n}}(g^{\prime})\right\|_{2}\geq\left\|\mathcal{L }_{f}(g^{\prime})\right\|>\left\|\mathcal{L}_{f}\right\|-\varepsilon. \tag{2.8}\]
Since this holds for arbitrary \(\varepsilon>0\), the proof is complete.
(ii) Let \(G\) be a graph with \(N\) vertices. We prove that \(\frac{1}{N}L_{N}\) and \(\mathcal{L}_{h^{G}}\) have the same eigenvalues. Let \(u\colon[0,1]\to\mathbb{R}\), and define \(\overline{u}\colon[0,1]\to\mathbb{R}\) to be the step function that on \([\frac{i-1}{N},\frac{i}{N})\), \(1\leq i\leq N\), equals the average \(\overline{u}_{i}\) of \(u\) over that interval. Then \(u\) is an eigenfunction of \(\mathcal{L}_{h^{G}}\) with eigenvalue \(\lambda\) if and only if, for all \(1\leq j\leq N\) and \(x\in[\frac{j-1}{N},\frac{j}{N})\),
\[(\mathcal{L}_{h^{G}}u)(x)=\frac{1}{N}\sum_{i\in[N]\setminus j}A_{N}(i,j)u(x)- \frac{1}{N}\sum_{i\in[N]\setminus j}A_{N}(i,j)\overline{u}_{i}=\lambda u(x), \tag{2.9}\]
i.e.,
\[u(x)=\left(\frac{1}{N}\sum_{i\in[N]\setminus j}A_{N}(i,j)-\lambda\right)^{-1} \frac{1}{N}\sum_{i\in[N]\setminus j}A_{N}(i,j)\overline{u}_{i}. \tag{2.10}\]
Similarly, \(v=(v_{1},\ldots,v_{N})\) is an eigenvector of \(\frac{1}{N}L_{N}\) with eigenvalue \(\lambda\) if and only if
\[v_{j}=\left(\frac{1}{N}\sum_{i\in[N]\setminus j}A_{N}(i,j)-\lambda\right)^{-1} \frac{1}{N}\sum_{i\in[N]\setminus j}A_{N}(i,j)v_{i}. \tag{2.11}\]
Since every eigenfunction of \(\mathcal{L}_{h^{G}}\) is constant on the blocks \([\frac{j-1}{N},\frac{j}{N})\), \(1\leq j\leq N\), we can view each eigenfunction of \(\mathcal{L}_{h^{G}}\) as an eigenvector of \(\frac{1}{N}L_{N}\), and vice versa.
Finally, note that \(\mathcal{L}_{h^{G}}\) is a normal operator and has a non-negative spectrum. So the norm of \(\mathcal{L}_{h^{G}}\) equals the supremum of its spectrum. Furthermore, the essential spectrum of \(\mathcal{L}_{h^{G}}\) equals the range of the degree function \(d_{h^{G}}\) [17, Proposition 5.11]. Since the largest eigenvalue of the Laplacian matrix is bounded from below by the maximum degree [19], it follows that the norm of \(\mathcal{L}_{h^{G}}\) equals the largest eigenvalue of \(\frac{1}{N}L_{N}\), which proves (1.22).
## 3. Proof of theorems for downward LDP
Sections 3.1-3.3 provide the proof of Theorems 1.5-1.7, respectively.
### Proof of Theorem 1.5
#### 3.1.1. Upper bound
By Proposition 1.3(i), \(h\mapsto\|\mathcal{L}_{h}\|\) is lower semi-continuous. Hence the set \(\{\widetilde{h}\in\widetilde{\mathcal{W}}\colon\,\|\mathcal{L}_{\widetilde{h}}\|\leq\beta\}\) is closed in \(\widetilde{\mathcal{W}}\). By Proposition 1.3(ii), \(\frac{\|L_{N}\|}{N}=\|\mathcal{L}_{h^{G_{N}}}\|\), and so we can use Theorem 1.1 in combination with the contraction principle [15] to get
\[\limsup_{N\to\infty}\binom{N}{2}^{-1}\log\mathbb{P}_{N}\left(\frac{\|L_{N}\|}{N}\leq\beta\right)\leq-\inf_{\begin{subarray}{c}\widetilde{h}\in\widetilde{\mathcal{W}}\\ \|\mathcal{L}_{\widetilde{h}}\|\leq\beta\end{subarray}}J_{r}(\widetilde{h})=-\inf_{\begin{subarray}{c}h\in\mathcal{W}\\ \|\mathcal{L}_{h}\|\leq\beta\end{subarray}}I_{r}(h). \tag{3.1}\]
#### 3.1.2. Lower bound
The proof proceeds via a change of measure. The key is the following lemma.
**Lemma 3.1**.: _Let \(h^{G_{N}}\) be the empirical graphon corresponding to the inhomogeneous Erdos-Renyi random graph \(G_{N}\). Then \(\lim_{N\to\infty}\|\mathcal{L}_{h^{G_{N}}}\|=\|\mathcal{L}_{r}\|\) in probability._
Proof.: Since \(\mathcal{L}_{h}\) is the sum of \(\mathcal{D}_{h}\) and \(-\mathcal{T}_{h}\) (recall (1.16)), it suffices to prove the claim for these two operators separately.
To show that \(\mathcal{D}_{h^{G_{N}}}\) converges to \(\mathcal{D}_{r}\) in operator norm in probability, let \(d_{r_{N}}\) be the degree function of the block graphon \(r_{N}\). Then
\[\|d_{h^{G_{N}}}-d_{r}\|_{\infty}\leq\|d_{h^{G_{N}}}-d_{r_{N}}\|_{\infty}+\|d_{ r_{N}}-d_{r}\|_{\infty}. \tag{3.2}\]
Because \(\|r_{N}-r\|_{\infty}\downarrow 0\) as \(N\to\infty\) by (1.4), the second term vanishes. As for the first term,
\[\begin{split}&\mathbb{P}_{N}(\|d_{h^{G_{N}}}-d_{r_{N}}\|_{\infty}\geq t)\\ &\leq\sum_{i\in[N]}\mathbb{P}_{N}\left(\left|\frac{1}{N}\sum_{j\in[N]\setminus i}A_{N}(i,j)-\frac{1}{N}\sum_{j\in[N]\setminus i}(r_{N})_{ij}\right|\geq t\right)\leq 2N\mathrm{e}^{-2Nt^{2}}\downarrow 0\end{split} \tag{3.3}\]
by a straightforward application of Hoeffding's inequality. We use that \(A_{N}(i,j)\) has Bernoulli distribution with mean \((r_{N})_{ij}\). Hence \(\|d_{h^{G_{N}}}-d_{r_{N}}\|_{\infty}\to 0\) as \(N\to\infty\) in probability, and so
\[\|\mathcal{D}_{h^{G_{N}}}-\mathcal{D}_{r}\|=\|d_{h^{G_{N}}}-d_{r}\|_{\infty} \to 0\text{ in probability}. \tag{3.4}\]
To show that \(T_{h^{G_{N}}}\) converges to \(T_{r}\) in operator norm in probability, it suffices to note that
\[\|T_{h^{G_{N}}}-T_{r}\|\leq\sqrt{2}\,d_{\square}(h^{G_{N}},r)\downarrow 0 \quad\text{ as $N\to\infty$ in probability}. \tag{3.5}\]
The first inequality was shown in [24, Lemma 3.6]. Convergence in the cut-metric was shown in [13, Lemma 5.11].
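The concentration of the degree function that drives Lemma 3.1 is already visible at moderate sizes; a minimal simulation sketch (in Python with numpy; the reference graphon is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
r = lambda x, y: (1 + x) * (1 + y) / 8

for N in (100, 400, 1600):
    x = np.arange(1, N + 1) / N
    P = r(x[:, None], x[None, :])
    A = np.triu((rng.random((N, N)) < P).astype(float), k=1)
    A = A + A.T
    emp = A.sum(axis=1) / N                  # degree function of the empirical graphon
    ref = (P.sum(axis=1) - np.diag(P)) / N   # degree function of the block graphon r_N
    print(N, np.max(np.abs(emp - ref)))      # the sup-distance shrinks as N grows
```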
We are now ready to prove the lower bound.
**Proposition 3.2**.: _Let \(\beta\in[0,C_{r}]\), and let \(h\in\mathcal{W}\) be such that \(\|\mathcal{L}_{h}\|<\beta\). Then_
\[\liminf_{N\to\infty}\frac{2}{N^{2}}\log\mathbb{P}_{N}(\|\mathcal{L}_{h^{G_{N}}} \|<\beta)\geq-I_{r}(h). \tag{3.6}\]
Proof.: The proof comes in three steps.
**1.** We begin by giving the proof for the case when \(h\) is a _continuous_ graphon. Denote the law of an inhomogeneous ERRG with reference graphon \(r_{N}\) by \(\mathbb{P}_{N,r_{N}}\) instead of \(\mathbb{P}_{N}\), and the law of an inhomogeneous ERRG with reference graphon \(h_{N}\) by \(\mathbb{P}_{N,h_{N}}\), where \(h_{N}\) is the block graphon that is obtained by averaging \(h\) over blocks of size \(1/N\). Let \(x=\|\mathcal{L}_{h}\|\), and define
\[U_{\varepsilon}^{x}=\{f\in\mathcal{W}\mid\|\mathcal{L}_{f}\|\in(x-\varepsilon,x+\varepsilon)\}. \tag{3.7}\]
Note that \(h\) satisfies (1.4) by uniform continuity. Hence, by Lemma 3.1,
\[\mathbb{P}_{N,h_{N}}(U_{\varepsilon}^{x})\uparrow 1,\qquad N\to\infty, \tag{3.8}\]
for all \(\varepsilon>0\). Write
\[\mathbb{P}_{N,r_{N}}(U_{\varepsilon}^{x})=\mathbb{P}_{N,h_{N}}(U_{\varepsilon }^{x})\,\frac{1}{\mathbb{P}_{N,h_{N}}(U_{\varepsilon}^{x})}\int_{U_{ \varepsilon}^{x}}\exp\left(-\log\frac{\mathrm{d}\mathbb{P}_{N,h_{N}}}{\mathrm{ d}\mathbb{P}_{N,r_{N}}}\right)\,\mathrm{d}\mathbb{P}_{N,h_{N}}. \tag{3.9}\]
By Jensen's inequality,
\[\log\mathbb{P}_{N,r_{N}}(U_{\varepsilon}^{x})\geq\log\mathbb{P}_{N,h_{N}}(U_{ \varepsilon}^{x})-\frac{1}{\mathbb{P}_{N,h_{N}}(U_{\varepsilon}^{x})}\int_{U_{ \varepsilon}^{x}}\left(\log\frac{\mathrm{d}\mathbb{P}_{N,h_{N}}}{\mathrm{d} \mathbb{P}_{N,r_{N}}}\right)\mathrm{d}\mathbb{P}_{N,h_{N}}. \tag{3.10}\]
Using (3.8) and [13, Lemma 5.7], we obtain
\[\liminf_{N\to\infty}\frac{2}{N^{2}}\log\mathbb{P}_{N,r_{N}}(U_{\varepsilon}^{x })\geq-\lim_{N\to\infty}\frac{2}{N^{2}}\int_{U_{\varepsilon}^{x}}\left(\log \frac{\mathrm{d}\mathbb{P}_{N,h_{N}}}{\mathrm{d}\mathbb{P}_{N,r_{N}}}\right) \mathrm{d}\mathbb{P}_{N,h_{N}}=-I_{r}(h). \tag{3.11}\]
Note that [13, Lemma 5.7] was stated for the homogeneous ERRG, but the proof may be extended to the inhomogeneous ERRG (see [26, Theorem 6.1]). Because \(U_{\varepsilon}^{x}\subseteq\{f\in\mathcal{W}\mid\|\mathcal{L}_{f}\|<\beta\}\) for \(\varepsilon>0\) small enough, we conclude that
\[\liminf_{N\to\infty}\frac{2}{N^{2}}\log\mathbb{P}_{N,r_{N}}(\|\mathcal{L}_{h^{G_{N}}}\|<\beta)\geq-I_{r}(h). \tag{3.12}\]
**2.** We next extend the proof to the case where \(h\) is a _block graphon_ such that \(\|\mathcal{L}_{h}\|<\beta\). Assume that \(h\) is constant on the blocks \([\frac{i-1}{N},\frac{i}{N})\times[\frac{j-1}{N},\frac{j}{N})\), \(1\leq i,j\leq N\). Then there exists a sequence of continuous graphons \((h_{k})_{k\in\mathbb{N}}\) that converges to \(h\) in \(L^{2}\) with \(h_{k}\leq h\). Note that, for \(f\in\mathcal{W}\),
\[\|\mathcal{L}_{f}\|=\sup_{\|u\|_{2}\leq 1}\langle u,\mathcal{L}_{f}(u)\rangle=\sup_{\|u\|_{2}\leq 1}\int_{[0,1]^{2}}\mathrm{d}y\,\mathrm{d}x\,u(x)f(x,y)[u(x)-u(y)]\] \[=\tfrac{1}{2}\sup_{\|u\|_{2}\leq 1}\left[\int_{[0,1]^{2}}\mathrm{d}y\,\mathrm{d}x\,u(x)f(x,y)[u(x)-u(y)]+\int_{[0,1]^{2}}\mathrm{d}y\,\mathrm{d}x\,u(y)f(x,y)[u(y)-u(x)]\right]\] \[=\tfrac{1}{2}\sup_{\|u\|_{2}\leq 1}\int_{[0,1]^{2}}\mathrm{d}x\,\mathrm{d}y\,f(x,y)[u(x)-u(y)]^{2}, \tag{3.13}\]
where we use that \(f\) is a symmetric function. From the above formula it is immediate that, since \(h_{k}\leq h\), we have \(\|\mathcal{L}_{h_{k}}\|\leq\|\mathcal{L}_{h}\|<\beta\). Then, by Step 1,
\[\liminf_{N\to\infty}\frac{2}{N^{2}}\log\mathbb{P}_{N}(\|\mathcal{L}_{h^{G_{N}}}\|<\beta)\geq-I_{r}(h_{k}),\qquad k\in\mathbb{N}. \tag{3.14}\]
We conclude by noting that \(I_{r}\) is continuous in \(L^{2}([0,1]^{2})\), implying that \(I_{r}(h_{k})\) converges to \(I_{r}(h)\) as \(k\to\infty\).
**3.** We finally extend the proof to the case when \(h\) is an _arbitrary_ graphon such that \(\|\mathcal{L}_{h}\|<\beta\). Let \(\overline{h}_{N}\) be the \(N\)-block approximant of \(h\) for some \(N\). Then \(\overline{h}_{N}\) converges to \(h\) in \(L^{2}\) as \(N\to\infty\)[13, Proposition 2.6]. Again, it suffices to prove
\[\|\mathcal{L}_{\overline{h}_{N}}\|\leq\|\mathcal{L}_{h}\|. \tag{3.15}\]
First, suppose that \(\|\mathcal{L}_{\overline{h}_{N}}\|=\|d_{\overline{h}_{N}}\|_{\infty}\). For \(x\in[\frac{i-1}{N},\frac{i}{N})\),
\[d_{\overline{h}_{N}}(x)=N\int_{[\frac{i-1}{N},\frac{i}{N})}\mathrm{d}x^{\prime}\,d_{h}(x^{\prime})\leq\|d_{h}\|_{\infty}. \tag{3.16}\]
It follows that \(\|\mathcal{L}_{\overline{h}_{N}}\|=\|d_{\overline{h}_{N}}\|_{\infty}\leq\|d_{h}\|_{\infty}\leq\|\mathcal{L}_{h}\|\). Next, suppose that \(\|\mathcal{L}_{\overline{h}_{N}}\|>\|d_{\overline{h}_{N}}\|_{\infty}\). Then \(\|\mathcal{L}_{\overline{h}_{N}}\|=\lambda_{\max}(\mathcal{L}_{\overline{h}_{N}})\) and there exists an eigenfunction \(u\) of \(\mathcal{L}_{\overline{h}_{N}}\) with \(\|u\|_{2}=1\) such that \(\langle\mathcal{L}_{\overline{h}_{N}}u,u\rangle=\|\mathcal{L}_{\overline{h}_{N}}\|\). In the proof of Proposition 1.3 it was shown that \(u\) is constant on each of the intervals \([\frac{i-1}{N},\frac{i}{N})\), \(1\leq i\leq N\). Let \(u_{i}\) be the value of \(u\) on \([\frac{i-1}{N},\frac{i}{N})\), \(1\leq i\leq N\). Then
\[\begin{split}\langle\mathcal{L}_{h}u,u\rangle&=\int_{[0,1]^{2}}\,\mathrm{d}x\,\,\mathrm{d}y\,h(x,y)[u(x)-u(y)]u(x)\\ &=\sum_{i,j=1}^{N}\int_{[\frac{i-1}{N},\frac{i}{N})\times[\frac{j-1}{N},\frac{j}{N})}\,\mathrm{d}x\,\,\mathrm{d}y\,h(x,y)[u_{i}-u_{j}]u_{i}\\ &=\sum_{i,j=1}^{N}\int_{[\frac{i-1}{N},\frac{i}{N})\times[\frac{j-1}{N},\frac{j}{N})}\,\mathrm{d}x\,\,\mathrm{d}y\,\overline{h}_{N}(x,y)[u_{i}-u_{j}]u_{i}=\langle\mathcal{L}_{\overline{h}_{N}}u,u\rangle.\end{split} \tag{3.17}\]
The desired result now follows from the fact that \(\|\mathcal{L}_{h}\|=\sup_{\|v\|_{2}=1}\langle\mathcal{L}_{h}v,v\rangle\geq \langle\mathcal{L}_{h}u,u\rangle=\langle\mathcal{L}_{\overline{h}_{N}}u,u \rangle=\|\mathcal{L}_{\overline{h}_{N}}\|\).
From Proposition 3.2, we obtain the lower bound
\[\liminf_{N\to\infty}\frac{2}{N^{2}}\log\mathbb{P}_{N}(\|\mathcal{L}_{h^{G_{N} }}\|<\beta)\geq-\inf_{\begin{subarray}{c}h\in\mathcal{W}\\ \|\mathcal{L}_{h}\|<\beta\end{subarray}}I_{r}(h). \tag{3.18}\]
So it just remains to show
\[\inf_{\begin{subarray}{c}h\in\mathcal{W}\\ \|\mathcal{L}_{h}\|<\beta\end{subarray}}I_{r}(h)=\inf_{\begin{subarray}{c}h\in \mathcal{W}\\ \|\mathcal{L}_{h}\|\leq\beta\end{subarray}}I_{r}(h). \tag{3.19}\]
Let \(h\in\mathcal{W}\) with \(\|\mathcal{L}_{h}\|=\beta\). Then \(\|\mathcal{L}_{(1-\varepsilon)h}\|=(1-\varepsilon)\beta<\beta\) for all \(\varepsilon>0\). Furthermore, \((1-\varepsilon)h\) converges to \(h\) in \(L^{2}\) as \(\varepsilon\downarrow 0\) and \(I_{r}\) is continuous in \(L^{2}\), so \(I_{r}((1-\varepsilon)h)\to I_{r}(h)\) as \(\varepsilon\downarrow 0\). Hence, the left-hand side of (3.19) is bounded from above by the right-hand side. The converse inequality is trivial, which settles the proof of the lower bound.
### Proof of Theorem 1.6
Proof.: (i) It was shown in the proof of Proposition 3.2 that if \(h_{1}\leq h_{2}\), then \(\|\mathcal{L}_{h_{1}}\|\leq\|\mathcal{L}_{h_{2}}\|\). Via the same proof as for [11, Theorem 1.5], it follows that \(\psi_{r}\) is strictly decreasing on \([0,C_{r}]\).
Let \(t>0\), and let \((t_{n})_{n\in\mathbb{N}}\) be a strictly increasing sequence converging to \(t\). By the compactness of \(\widetilde{\mathcal{W}}\), the lower semi-continuity of \(I_{r}\) and \(\widetilde{h}\mapsto\|\mathcal{L}_{\widetilde{h}}\|\), and the invariance of \(\widetilde{h}\mapsto\|\mathcal{L}_{h}\|\) under measure-preserving bijections, there exists a sequence of graphons \((\widetilde{f}_{n})_{n\in\mathbb{N}}\) such that \(\|\mathcal{L}_{f_{n}}\|\geq t_{n}\) and \(\psi_{r}(t_{n})=I_{r}(\widetilde{f}_{n})\) for all \(n\in\mathbb{N}\). Again, by the compactness of \(\widetilde{\mathcal{W}}\), we may assume that the sequence \((\widetilde{f}_{n})_{n\in\mathbb{N}}\) converges to some \(\widetilde{f}\in\widetilde{\mathcal{W}}\) in the cut-metric. Because \(\widetilde{h}\mapsto\|\mathcal{L}_{\widetilde{h}}\|\) is lower
semi-continuous, we have \(\|\mathcal{L}_{\widetilde{f}}\|\geq t\). Once again invoking the lower semi-continuity of \(I_{r}\), we conclude that
\[\liminf_{n\to\infty}\psi_{r}(t_{n})=\liminf_{n\to\infty}I_{r}(\widetilde{f}_{n}) \geq I_{r}(\widetilde{f})\geq\psi_{r}(t). \tag{3.20}\]
Right-continuity of \(\psi_{r}\) now follows by noting that \(\psi_{r}\) is strictly decreasing. The proof of the left-continuity of \(\psi_{r}\) is the same as the proof of (3.19).
(ii) This statement is immediate from the lower semi-continuity of \(h\mapsto\|\mathcal{L}_{h}\|\) and \(I_{r}\), combined with the compactness of \(\widetilde{\mathcal{W}}\).
### Proof of Theorem 1.7
Proof.: We first prove a large deviation principle for the degree of a single vertex \(\approx xN\) with rate \(N\) and with rate function \(J_{r}(x,\beta)\) defined in (1.35). We subsequently derive upper and lower bounds for \(\psi_{r}(\beta)\) in terms of this rate function, and use these to compute the scaling of \(\psi_{r}\) near its minimum.
#### 3.3.1. LDP for single degrees
Let \(d_{i}\) be the degree of vertex \(i\). For \(x\in[0,1]\), let \(i_{x}\in\{1,\ldots,N\}\) be the index such that \(x\in[\frac{i_{x}-1}{N},\frac{i_{x}}{N})\). A crucial ingredient is the LDP for the family \(N^{-1}d_{i_{x}}\), \(N\in\mathbb{N}\), for fixed \(x\in[0,1]\), which we prove with the help of the Gartner-Ellis theorem. To that end, note that \(d_{i_{x}}=\sum_{j\in[N]\setminus i_{x}}A_{i_{x}j}\), with \(A_{i_{x}j}=1\) if there is an edge between \(i_{x}\) and \(j\) and \(A_{i_{x}j}=0\) otherwise. Therefore the cumulant generating function of \(N^{-1}d_{i_{x}}\) is
\[\begin{split}\Lambda_{N}(x,\theta)&=\log\mathbb{E}\left[\exp\left(\theta\frac{1}{N}\sum_{j\in[N]\setminus i_{x}}A_{i_{x}j}\right)\right]=\sum_{j\in[N]\setminus i_{x}}\log\mathbb{E}\left[\exp\left(\theta\frac{1}{N}A_{i_{x}j}\right)\right]\\ &=\sum_{j\in[N]\setminus i_{x}}\log\left((r_{N})_{i_{x}j}\,\mathrm{e}^{\frac{1}{N}\theta}+[1-(r_{N})_{i_{x}j}]\right)\\ &=N\int_{[0,1]\setminus[\frac{i_{x}-1}{N},\frac{i_{x}}{N})}\mathrm{d}y\,\log\left(r_{N}(x,y)\mathrm{e}^{\frac{1}{N}\theta}+[1-r_{N}(x,y)]\right).\end{split} \tag{3.21}\]
Since \(\|r_{N}-r\|_{\infty}\to 0\), we have
\[\Lambda_{r}(x,\theta)=\lim_{N\to\infty}\frac{1}{N}\Lambda_{N}(x,N\theta)=\int_ {[0,1]}\mathrm{d}y\,\log\left(r(x,y)\mathrm{e}^{\theta}+[1-r(x,y)]\right). \tag{3.22}\]
Since \(\theta\mapsto\Lambda_{r}(x,\theta)\) is finite and differentiable on \(\mathbb{R}\), the Gartner-Ellis theorem tells us that the family \(N^{-1}d_{i_{x}}\), \(N\in\mathbb{N}\), satisfies the LDP on \([0,1]\) with rate \(N\) and with rate function \(\beta\mapsto J_{r}(x,\beta)\) given by
\[J_{r}(x,\beta)=\sup_{\theta\in\mathbb{R}}[\theta\beta-\Lambda_{r}(x,\theta)]. \tag{3.23}\]
The supremum is attained at the unique \(\theta(x,\beta)\) such that
\[\int_{[0,1]}\mathrm{d}y\,\widehat{r}_{\beta}(x,y)=\beta\qquad\forall\,x\in[0,1], \tag{3.24}\]
with
\[\widehat{r}_{\beta}(x,y)=\frac{\mathrm{e}^{\theta(x,\beta)}r(x,y)}{\mathrm{e}^ {\theta(x,\beta)}r(x,y)+[1-r(x,y)]}. \tag{3.25}\]
The Lagrange multiplier \(\theta(x,\beta)\) exists because \(r\in(0,1)\) almost everywhere by (1.5). A simple computation shows that
\[J_{r}(x,\beta)=\int_{[0,1]}\mathrm{d}y\,\mathcal{R}\big{(}\widehat{r}_{\beta}(x,y)\mid r(x,y)\big{)}=\inf_{u}\int_{[0,1]}\mathrm{d}y\,\mathcal{R}(u(y)\mid r(x,y)), \tag{3.26}\]
where the infimum is taken over all the measurable functions \(u\colon[0,1]\to[0,1]\) satisfying \(\int_{[0,1]}\mathrm{d}y\,u(y)\leq\beta\). The map
\[\theta\mapsto\int_{[0,1]}\mathrm{d}y\,\frac{\mathrm{e}^{\theta}r(x,y)}{ \mathrm{e}^{\theta}r(x,y)+1-r(x,y)}\]
is a continuous bijection from \(\mathbb{R}\) to \((0,1)\), again because \(r\in(0,1)\) almost everywhere, and so the inverse map \(\beta\mapsto\theta(x,\beta)\) is continuous.
#### 3.3.2. Properties of \(J_{r}\)
We need the following lemma for the derivatives of \(J_{r}\) with respect to \(\beta\). Henceforth we write \(J_{r}^{\prime}=\frac{\partial J_{r}}{\partial\beta}\) and \(\theta^{\prime}=\frac{\partial\theta}{\partial\beta}\), and use the upper index \((k)\) for the \(k\)-th derivative.
**Lemma 3.3**.: \(\beta\mapsto J_{r}(x,\beta)\) _is analytic for every \(x\in[0,1]\), with_
\[J_{r}^{(k)}(x,\beta)=\theta^{(k-1)}(x,\beta). \tag{3.27}\]
_Moreover, subject to (1.29),_
\[\sup_{\begin{subarray}{c}x\in[0,1]\\ \beta\in[\varepsilon,1-\varepsilon]\end{subarray}}|J_{r}^{(k)}(x,\beta)|< \infty\qquad\forall\,\varepsilon>0. \tag{3.28}\]
Proof.: Recall from Section 3.3.1 that
\[J_{r}(x,\beta)=\int_{[0,1]}\mathrm{d}y\,\mathcal{R}\big{(}\widehat{r}_{\beta} (x,y)\mid r(x,y)\big{)}=\theta(x,\beta)\beta-\int_{[0,1]}\mathrm{d}y\,\log \Big{(}1+\big{(}\mathrm{e}^{\theta(x,\beta)}-1\big{)}r(x,y)\Big{)}. \tag{3.29}\]
Differentiating with respect to \(\beta\) and using that \(d_{\widehat{r}_{\beta}}(x)=\beta\), we obtain
\[J_{r}^{\prime}(x,\beta)=\theta(x,\beta). \tag{3.30}\]
Implicit differentiation of the equation
\[\int_{[0,1]}\mathrm{d}y\,\frac{\mathrm{e}^{\theta}r(x,y)}{\mathrm{e}^{\theta}r (x,y)+(1-r(x,y))}=\beta \tag{3.31}\]
gives
\[\theta^{\prime}(x,\beta)=\left(\int_{[0,1]}\mathrm{d}y\,\frac{\mathrm{e}^{ \theta(x,\beta)}r(x,y)(1-r(x,y))}{\big{[}\mathrm{e}^{\theta(x,\beta)}r(x,y)+( 1-r(x,y))\big{]}^{2}}\right)^{-1}. \tag{3.32}\]
Fix \(\varepsilon>0\). Since \(r\) is bounded away from \(0\) and \(1\), we see that \(\theta^{\prime}(x,\beta)\) is bounded for \(x\in[0,1]\) and \(\beta\in[\varepsilon,1-\varepsilon]\) when \(\theta(x,\beta)\) is. Iteratively applying the chain, product and quotient rules of differentiation, we obtain that \(\theta^{(k)}\) is some polynomial of order \(k\) in the variables
\[\left(\int_{[0,1]}\mathrm{d}y\,\frac{\mathrm{e}^{\theta}r(x,y)(1-r(x,y))}{ \big{[}\mathrm{e}^{\theta}r(x,y)+(1-r(x,y))\big{]}^{2}}\right)^{-1},\qquad \int_{[0,1]}\mathrm{d}y\,\frac{f\left(\mathrm{e}^{\theta},r,\theta^{\prime}, \ldots,\theta^{(k-1)}\right)}{\big{[}\mathrm{e}^{\theta}r(x,y)+(1-r(x,y)) \big{]}^{j}}, \tag{3.33}\]
with \(f\) a polynomial and \(j\in\mathbb{N}\). By induction, we obtain that \(f\) is bounded when \(\theta\) is bounded, and hence that \(\theta^{(k)}\) is bounded.
Thus, it remains to show \(\sup_{x\in[0,1],\beta\in[\varepsilon,1-\varepsilon]}|\theta(x,\beta)|<\infty\). Let
\[r_{-}=\inf_{(x,y)\in[0,1]^{2}}r(x,y)>0,\qquad\widetilde{\theta}(\beta)=\log \frac{\beta(1-r_{-})}{(1-\beta)r_{-}}. \tag{3.34}\]
Since the map \(r\mapsto\frac{\mathrm{e}^{\theta}r}{\mathrm{e}^{\theta}r+(1-r)}\) is increasing, we have
\[\int_{[0,1]}\mathrm{d}y\frac{\mathrm{e}^{\widetilde{\theta}}r(x,y)}{\mathrm{e}^{ \widetilde{\theta}}r(x,y)+(1-r(x,y))}\geq\frac{\mathrm{e}^{\widetilde{\theta}} r_{-}}{\mathrm{e}^{\widetilde{\theta}}r_{-}+(1-r_{-})}=\beta. \tag{3.35}\]
Hence, \(\theta(x,\beta)\leq\widetilde{\theta}(\beta)<\infty\). A matching lower bound on \(\theta(x,\beta)\) follows in the same way, with \(r_{-}\) replaced by \(r_{+}=\sup_{(x,y)\in[0,1]^{2}}r(x,y)<1\). Since these bounds are independent of \(x\) and are bounded uniformly in \(\beta\in[\varepsilon,1-\varepsilon]\), all claims are settled.
#### 3.3.3. Lower bound
Let \(\bar{h}\) be a minimizer of (1.26). Then \(\int_{[0,1]}\mathrm{d}y\,\bar{h}(x,y)\leq\|\mathcal{L}_{\bar{h}}\|\leq\beta\) for each \(x\in S_{r}(\beta)\). Hence, by (3.26),
\[\int_{[0,1]}\mathrm{d}y\,\mathcal{R}\big{(}\bar{h}(x,y)\mid r(x,y)\big{)}\geq J _{r}(x,\beta), \tag{3.36}\]
which implies
\[\psi_{r}(\beta)=I_{r}(\bar{h})\geq\int_{S_{r}(\beta)}\mathrm{d}x\,J_{r}(x, \beta). \tag{3.37}\]
#### 3.3.4. Upper bound
Let
\[h_{\beta}(x,y)=\begin{cases}r(x,y),&x,y\not\in S_{r}(\beta),\\ \hat{r}_{\beta}(x,y),&x\in S_{r}(\beta),\,y\not\in S_{r}(\beta),\\ \hat{r}_{\beta}(y,x),&x\not\in S_{r}(\beta),\,y\in S_{r}(\beta),\\ \min\{\hat{r}_{\beta}(x,y),\hat{r}_{\beta}(y,x)\},&x,y\in S_{r}(\beta),\end{cases} \tag{3.38}\]
so that \(h_{\beta}\) is symmetric.
Then \(h_{\beta}\) converges to \(r\) in \(L^{\infty}\) as \(\beta\uparrow C_{r}\). We show that \(\|\mathcal{L}_{h_{\beta}}\|\leq\beta\) for \(\beta\) close enough to \(C_{r}\). Note that \(\|d_{h_{\beta}}\|_{\infty}\leq\beta\) by construction, so we only need to show \(\lambda_{\max}(\mathcal{L}_{h_{\beta}})\leq\beta\) for \(\beta\) close enough to \(C_{r}\). Let \(\lambda_{\beta}=\lambda_{\max}(\mathcal{L}_{h_{\beta}})\) and let \(\lambda^{\prime}\) be the limit of any convergent subsequence of \((\lambda_{\beta})\) as \(\beta\uparrow C_{r}\). We show that \(\lambda^{\prime}\) is an eigenvalue of \(\mathcal{L}_{r}\), so that \(\limsup_{\beta\uparrow C_{r}}\lambda_{\beta}\leq\lambda_{\max}(\mathcal{L}_{r})\leq\beta\) for \(\beta\) close enough to \(C_{r}\). Here we use Assumption (1.28).
Assume, by contradiction, that \(\lambda^{\prime}\) is not an eigenvalue of \(\mathcal{L}_{r}\). Without loss of generality, we may assume that \(\lambda_{\beta}\) converges to \(\lambda^{\prime}\). Consider \(F\colon\,L^{\infty}([0,1]^{2})\times\mathbb{R}\times L^{2}([0,1])\to L^{2}([0,1])\) given by \(F(g,\mu,u)=\mathcal{L}_{g}u-\mu u\). This map is bounded and affine in each coordinate, and hence is continuously Frechet differentiable. The Frechet derivative of \(F\) at a point \((g,\mu,u)\) is given by
\[((DF)(g,\mu,u))(f,\nu,w)=F(f,\nu,u)+F(g,\mu,w). \tag{3.39}\]
Indeed, let \((f_{k},\nu_{k},w_{k})\) such that \(\|(f_{k},\nu_{k},w_{k})\|=\|f_{k}\|_{\infty}+|\nu_{k}|+\|w_{k}\|_{2}\to 0\) as \(k\to\infty\). Then
\[\frac{\|F(g+f_{k},\mu+\nu_{k},u+w_{k})-F(g,\mu,u)-((DF)(g,\mu,u))( f_{k},\nu_{k},w_{k})\|}{\|(f_{k},\nu_{k},w_{k})\|}\] \[= \frac{\|F(f_{k},\nu_{k},w_{k})\|}{\|(f_{k},\nu_{k},w_{k})\|}\leq \frac{\|\mathcal{L}_{f_{k}}w_{k}\|_{2}+\|\nu_{k}w_{k}\|_{2}}{\|f_{k}\|_{\infty} +|\nu_{k}|+\|w_{k}\|_{2}}\leq\frac{2\|f_{k}\|_{\infty}\|w_{k}\|_{2}+|\nu_{k}| \|w_{k}\|_{2}}{\|f_{k}\|_{\infty}+|\nu_{k}|+\|w_{k}\|_{2}}\to 0,\qquad k\to\infty. \tag{3.40}\]
Note that \(F\) is not necessarily Frechet differentiable as a function \(L^{2}([0,1]^{2})\times\mathbb{R}\times L^{2}([0,1])\to L^{2}([0,1])\). Since \(\lambda^{\prime}\) is not an eigenvalue of \(\mathcal{L}_{r}\), the map
\[w\mapsto((DF)(r,\lambda^{\prime},0))(0,0,w)=F(r,\lambda^{\prime},w) \tag{3.41}\]
has a trivial kernel and so is an isomorphism on its image space. So, by the implicit function theorem for Banach spaces [22, Theorem I.5.9], there exists a neighbourhood \(U\) of \((r,\lambda^{\prime})\in L^{\infty}([0,1]^{2})\times\mathbb{R}\) and a neighbourhood \(V\) of \(\underline{0}\in L^{2}([0,1])\) such that \(F(g,\mu,u)=0\) if and only if \(u=0\) for all \((g,\mu,u)\in U\times V\). Since \(h_{\beta}\to r\) in \(L^{\infty}\) and \(\lambda_{\beta}\to\lambda^{\prime}\) as \(\beta\uparrow C_{r}\), we have that \((h_{\beta},\lambda_{\beta})\in U\) for \(\beta\geq\beta_{0}\). However, since \(\lambda_{\beta}\) is an eigenvalue of \(\mathcal{L}_{h_{\beta}}\), there exists an eigenfunction \(u_{\beta}\in L^{2}([0,1])\) with \(u_{\beta}\neq 0\) such that \(F(h_{\beta},\lambda_{\beta},u_{\beta})=0\). Since \(V\) is a neighbourhood of \(0\), we
can rescale \(u_{\beta}\) such that it lies in \(V\). This yields a contradiction, and so \(\lambda^{\prime}\) must be an eigenvalue of \(\mathcal{L}_{r}\).
Now since \(\|\mathcal{L}_{h_{\beta}}\|\leq\beta\) for \(\beta\) sufficiently close to \(C_{r}\), we have
\[\psi_{r}(\beta) \leq I_{r}(h_{\beta})\] \[=2\int_{S_{r}(\beta)\times([0,1]\setminus S_{r}(\beta))}\,\mathrm{ d}x\,\,\mathrm{d}y\,\mathcal{R}\big{(}\hat{r}_{\beta}(x,y)\mid r(x,y)\big{)}\] \[\qquad+\int_{S_{r}(\beta)^{2}}\,\mathrm{d}x\,\,\mathrm{d}y\, \mathcal{R}\big{(}\min\{\hat{r}_{\beta}(x,y),\hat{r}_{\beta}(y,x)\}\mid r(x,y) \big{)}\] \[\leq 2\int_{S_{r}(\beta)\times([0,1]\setminus S_{\beta})}\, \mathrm{d}x\,\,\mathrm{d}y\,\mathcal{R}\big{(}\hat{r}_{\beta}(x,y)\mid r(x,y) \big{)}+\int_{S_{r}(\beta)^{2}}\,\mathrm{d}x\,\,\mathrm{d}y\,\mathcal{R}\big{(} \hat{r}_{\beta}(x,y)\mid r(x,y)\big{)}\] \[\qquad+\int_{S_{r}(\beta)^{2}}\,\mathrm{d}x\,\,\mathrm{d}y\, \mathcal{R}\big{(}\hat{r}_{\beta}(y,x)\mid r(x,y)\big{)}\] \[=2\int_{S_{r}(\beta)}\mathrm{d}x\,J_{r}(x,\beta),\qquad\beta\uparrow C _{r}. \tag{3.42}\]
For the last equality, we use that \(r\) is symmetric and that \(S_{r}(\beta)^{2}\) is a symmetric domain.
**Remark 3.4**.: The above shows that, up to a constant, the last integral in (3.42) also equals the rate function in the LDP for the maximum degree. The upper bound can also be shown directly by the following computation.
The state space for the edges determining the adjacency matrix \((A_{ij})\) is \(\{0,1\}^{\binom{N}{2}}\), which we endow with the standard partial ordering. The probability distribution of the edges is the product measure \(\prod_{\{i,j\}}\mathrm{BER}(r_{N}(\frac{i}{N},\frac{j}{N}))\), which satisfies the FKG lattice condition. Note that the events \(\{N^{-1}\max_{1\leq i\leq N-1}d_{i}\leq\beta\}\) and \(\{N^{-1}d_{N}\leq\beta\}\) are non-increasing in the partial ordering. Hence we can use the FKG-inequality iteratively, to get
\[\mathbb{P}\left(N^{-1}\max_{i\in[N]}d_{i}\leq\beta\right)\geq\prod_{i\in[N]} \mathbb{P}(N^{-1}d_{i}\leq\beta). \tag{3.43}\]
Consequently,
\[\begin{split}&\frac{1}{\binom{N}{2}}\log\mathbb{P}\left(N^{-1} \max_{i\in[N]}d_{i}\leq\beta\right)\geq\frac{1}{\binom{N}{2}}\sum_{i=1}^{N} \log\mathbb{P}(N^{-1}d_{i}\leq\beta)\\ &=\frac{2N}{N-1}\frac{1}{N}\sum_{i\in[N]}\frac{1}{N}\log\mathbb{P }(N^{-1}d_{i}\leq\beta)=\frac{2N}{N-1}\int_{[0,1]}\mathrm{d}x\,\frac{1}{N} \log\mathbb{P}(N^{-1}d_{i_{x}}\leq\beta).\end{split} \tag{3.44}\]
By the law of large numbers, \(\lim_{N\to\infty}N^{-1}d_{i_{x}}=\int_{[0,1]}\mathrm{d}y\,r(x,y)=d_{r}(x)\ \mathbb{P}\)-a.s., and so in the limit as \(N\to\infty\) the last integral in (3.44) may be restricted to the set \(S_{r}(\beta)=\{x\in[0,1]\colon\ d_{r}(x)\geq\beta\}\), i.e.,
\[\liminf_{N\to\infty}\frac{1}{\binom{N}{2}}\log\mathbb{P}\left(N^{-1}\max_{i\in [N]}d_{i}\leq\beta\right)\geq-2\int_{S_{r}(\beta)}\mathrm{d}x\,J_{r}(x,\beta),\]
where we use that the family \(N^{-1}d_{i_{x}}\), \(N\in\mathbb{N}\), satisfies the LDP on \([0,1]\) with rate \(N\) and with rate function \(\beta\mapsto J_{r}(x,\beta)\), as shown in Section 3.3.1.
#### 3.3.5. Scaling of \(J_{r}(x,\beta)\)
Via Lemma 3.3 it is straightforward to show that \(J_{r}(x,d_{r}(x))=J_{r}^{\prime}(x,d_{r}(x))=0\) and \(J_{r}^{\prime\prime}(x,d_{r}(x))=\frac{1}{v_{r}(x)}\). So, by Taylor expansion,
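Indeed, at \(\beta=d_{r}(x)\) the constraint (3.24) is satisfied by \(\theta=0\), for which \(\widehat{r}_{\beta}(x,\cdot)=r(x,\cdot)\), so that \(\theta(x,d_{r}(x))=0\) and, by (3.30) and (3.32),
\[J_{r}(x,d_{r}(x))=0,\qquad J_{r}^{\prime}(x,d_{r}(x))=0,\qquad J_{r}^{\prime\prime}(x,d_{r}(x))=\theta^{\prime}(x,d_{r}(x))=\Big(\int_{[0,1]}\mathrm{d}y\,r(x,y)[1-r(x,y)]\Big)^{-1}=\frac{1}{v_{r}(x)}.\]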
\[J_{r}(x,\beta)=\frac{1}{2v_{r}(x)}(\beta-d_{r}(x))^{2}+\frac{1}{6}\theta^{ \prime\prime}(x,\beta_{*})(\beta-d_{r}(x))^{3} \tag{3.45}\]
for some \(\beta_{*}=\beta_{*}(x)\in[\beta,d_{r}(x)]\). Inserting this expansion into the lower and upper bounds derived above and using that \(\theta^{\prime\prime}(x,\beta)\) is bounded uniformly in \(x\) and \(\beta\), we obtain
\[\psi_{r}(\beta)\asymp\int_{S_{r}(\beta)}\mathrm{d}x\,J_{r}(x,\beta)\sim\int_{ S_{r}(\beta)}\mathrm{d}x\,\frac{1}{2v_{r}(x)}(\beta-d_{r}(x))^{2},\qquad\beta \uparrow C_{r}. \tag{3.46}\]
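The quadratic scaling in (3.46) is easy to probe numerically. The following is a minimal Python sketch (not part of the proof) for a toy graphon; it takes \(v_{r}(x)=\int_{[0,1]}\mathrm{d}y\,r(x,y)(1-r(x,y))\) as the curvature \(1/J_{r}^{\prime\prime}(x,d_{r}(x))\) of the Bernoulli kernel, and the toy kernel and grid resolution are illustrative assumptions only.

```python
import numpy as np

# Toy symmetric graphon with values in (0, 1); an illustrative choice only.
def r(x, y):
    return 0.25 + 0.5 * x * y

xs = np.linspace(0.0, 1.0, 2001)                       # grid on [0, 1]

d_vals = np.array([np.trapz(r(x, xs), xs) for x in xs])                  # d_r(x)
v_vals = np.array([np.trapz(r(x, xs) * (1 - r(x, xs)), xs) for x in xs]) # v_r(x)
C_r = d_vals.max()                                     # C_r = sup_x d_r(x)

def quadratic_psi(beta):
    """Leading-order approximation of psi_r(beta) in (3.46) for beta near C_r."""
    on_S = d_vals >= beta                              # S_r(beta) = {x : d_r(x) >= beta}
    integrand = np.where(on_S, (beta - d_vals) ** 2 / (2.0 * v_vals), 0.0)
    return np.trapz(integrand, xs)

for eps in (1e-2, 1e-3, 1e-4):
    print(f"beta = C_r - {eps:g}: psi_r(beta) ~ {quadratic_psi(C_r - eps):.3e}")
```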
## 4. Proof of theorems for upward LDP
Sections 4.1-4.3 provide the proof of Theorems 1.8-1.10, respectively.
### Proof of Theorem 1.8
Proof.: We first prove the LDP for the maximum degree. Afterwards we prove the LDP for the maximal eigenvalue.
#### 4.1.1. LDP for the maximum degree
For \(x\in[0,1]\), let \(i_{x}\in\{1,\ldots,N\}\) be the index such that \(x\in[\frac{i_{x}-1}{N},\frac{i_{x}}{N})\). Let \(d_{i}\) be the degree of vertex \(i\). First note that we can sandwich
\[\sup_{x\in[0,1]}\mathbb{P}(N^{-1}d_{i_{x}}\geq\beta)\leq\mathbb{P}\left(N^{-1 }\max_{i\in[N]}d_{i}\geq\beta\right)\leq\sum_{i\in[N]}\mathbb{P}(N^{-1}d_{i} \geq\beta)\leq N\sup_{x\in[0,1]}\mathbb{P}(N^{-1}d_{i_{x}}\geq\beta). \tag{4.1}\]
Using the LDP for the single degrees derived in Section 3.3.1, we obtain
\[\begin{split}&\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}\left(N^{ -1}\max_{i\in[N]}d_{i}\geq\beta\right)=\lim_{N\to\infty}\sup_{x\in[0,1]}\frac{ 1}{N}\log\mathbb{P}\left(N^{-1}d_{i_{x}}\geq\beta\right)\\ &=\sup_{x\in[0,1]}\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}\left( N^{-1}d_{i_{x}}\geq\beta\right)=-\inf_{x\in[0,1]}J_{r}(x,\beta)=-\widehat{ \psi}_{r}(\beta).\end{split} \tag{4.2}\]
The convergence of \(\Lambda(\theta)\) in (3.22) is uniform in \(x\in[0,1]\) because \(\|r_{N}-r\|_{\infty}\to 0\) as \(N\to\infty\). Hence the convergence in the LDP for \(N^{-1}d_{i_{x}}\) is uniform in \(x\in[0,1]\). This allows us to swap the supremum and the limit in the second equality.
#### 4.1.2. LDP for \(\lambda_{\max}(L_{N})\)
By Weyl's interlacing inequalities, we have \(\lambda_{\max}(L_{N})\leq\lambda_{\max}(D_{N})-\lambda_{\min}(A_{N})\). We also know that \(\lambda_{\max}(D_{N})\leq\lambda_{\max}(L_{N})\). Hence
\[\lambda_{\max}(D_{N})\leq\lambda_{\max}(L_{N})\leq\lambda_{\max}(D_{N})-\lambda_{\min}(A_{N}). \tag{4.3}\]
Since \(r\) is non-negative definite, the smallest eigenvalue of \(T_{r}\) is \(0\). Now let \((T_{n})_{n\in\mathbb{N}}\) be a sequence of self-adjoint operators acting on \(L^{2}([0,1])\) and converging in operator norm to some operator \(T\). Then
\[|\lambda_{\min}(T_{n})-\lambda_{\min}(T)|=\left|\inf_{\|f\|_{2}=1}\langle f,T _{n}f\rangle-\inf_{\|f\|_{2}=1}\langle f,Tf\rangle\right|\leq\sup_{\|f\|_{2}=1} |\langle f,T_{n}f\rangle-\langle f,Tf\rangle|\to 0 \tag{4.4}\]
as \(n\to\infty\). Thus, the map \(h\mapsto\lambda_{\min}(\mathcal{T}_{h})\) is continuous in the cut norm. Hence, by the contraction principle, \(N^{-1}\lambda_{\min}(A_{N})=\lambda_{\min}(T_{h^{\mathcal{O}_{N}}})\) converges to \(\lambda_{\min}(T_{r})=0\) at an exponential rate with coefficient \(N^{2}\). By (4.2) and (4.3) we can sandwich, for all \(\varepsilon>0\),
\[\begin{split}-\widehat{\psi}_{r}(\beta)&=-\inf_{x \in[0,1]}J_{r}(x,\beta)=\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}\left(N^{-1} \max_{i\in[N]}d_{i}\geq\beta\right)\\ &\leq\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}\big{(}N^{-1} \lambda_{\max}(L_{N})\geq\beta\big{)}\\ &\leq\lim_{N\to\infty}\frac{1}{N}\log\left[\mathbb{P}\left(N^{-1} \max_{i\in[N]}d_{i}\geq\beta-\varepsilon\right)+\mathbb{P}\big{(}N^{-1} \lambda_{\min}(A_{N})\leq-\varepsilon\big{)}\right]\\ &=\lim_{N\to\infty}\frac{1}{N}\log\mathbb{P}\left(N^{-1}\max_{i \in[N]}d_{i}\geq\beta-\varepsilon\right)=-\inf_{x\in[0,1]}J_{r}(x,\beta- \varepsilon)=-\widehat{\psi}_{r}(\beta-\varepsilon).\end{split} \tag{4.5}\]
The result now follows from continuity of \(\widehat{\psi}_{r}\), as stated in Theorem 1.9. Note that the proof of Theorem 1.9 does not use Theorem 1.8.
### Proof of Theorem 1.9
#### 4.2.1. Proof that \(\widehat{\psi}_{r}\) is strictly increasing
Assume that \(\inf_{x\in[0,1]}\theta(x,\beta)>0\) for all \(\beta>C_{r}\). Then, by Lemma 3.3, for \(\beta_{2}>\beta_{1}>C_{r}\),
\[\begin{split}\widehat{\psi}_{r}(\beta_{2})&=\inf_{ x\in[0,1]}J_{r}(x,\beta_{2})\\ &\geq\inf_{x\in[0,1]}J_{r}(x,\beta_{1})+\theta(x,\beta_{1})( \beta_{2}-\beta_{1})\\ &\geq\widehat{\psi}_{r}(\beta_{1})+\inf_{x\in[0,1]}\theta(x,\beta _{1})(\beta_{2}-\beta_{1})>\widehat{\psi}_{r}(\beta_{1}).\end{split} \tag{4.6}\]
For the first inequality we use that \(J_{r}^{\prime}(x,\beta)=\theta(x,\beta)\) is increasing in \(\beta\). Hence, it suffices to show \(\inf_{x\in[0,1]}\theta(x,\beta)>0\) for all \(\beta>C_{r}\).
Note that the map \(r\mapsto\frac{e^{\theta}r}{1+(e^{\theta}-1)r}\) is concave. Let \(\widetilde{\theta}(x,\beta)=\log\frac{\beta(1-d_{r}(x))}{(1-\beta)d_{r}(x)}\). Then, by Jensen's inequality, for all \(x\in[0,1]\),
\[\int_{[0,1]}\mathrm{d}y\,\frac{e^{\widetilde{\theta}(x,\beta)}r(x,y)}{1+(e^{ \widetilde{\theta}(x,\beta)}-1)r(x,y)}\leq\frac{e^{\widetilde{\theta}(x,\beta )}d_{r}(x)}{1+(e^{\widetilde{\theta}(x,\beta)}-1)d_{r}(x)}=\beta. \tag{4.7}\]
Recall that \(\theta(x,\beta)\) is chosen such that
\[\int_{[0,1]}\mathrm{d}y\,\frac{e^{\theta(x,\beta)}r(x,y)}{1+(e^{\theta(x,\beta )}-1)r(x,y)}=\beta. \tag{4.8}\]
This implies that \(\theta(x,\beta)\geq\widetilde{\theta}(x,\beta)\geq\log\frac{\beta(1-C_{r})}{( 1-\beta)C_{r}}\). We conclude the proof that \(\widehat{\psi}_{r}\) is strictly increasing by noting that this lower bound is strictly positive for \(\beta>C_{r}\) and is independent of \(x\).
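For concreteness, the defining equation (4.8) can be solved by one-dimensional root-finding, and the uniform lower bound derived above can be checked numerically. The sketch below (not part of the proof) does this for a toy graphon; the kernel, grid, and choice of \(\beta\) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def r(x, y):
    return 0.25 + 0.5 * x * y                      # toy symmetric graphon

ys = np.linspace(0.0, 1.0, 2001)

def tilted_degree(theta, x):
    """Left-hand side of (4.8): the exponentially tilted degree of vertex x."""
    rx = r(x, ys)
    return np.trapz(np.exp(theta) * rx / (1.0 + (np.exp(theta) - 1.0) * rx), ys)

def theta_of(x, beta):
    """Solve (4.8) for theta(x, beta) by bracketed root-finding."""
    return brentq(lambda t: tilted_degree(t, x) - beta, -50.0, 50.0)

xs = np.linspace(0.0, 1.0, 21)
C_r = max(np.trapz(r(x, ys), ys) for x in xs)      # sup of d_r over the grid
beta = 0.5 * (C_r + 1.0)                           # some beta in (C_r, 1)
lower = np.log(beta * (1.0 - C_r) / ((1.0 - beta) * C_r))
thetas = [theta_of(x, beta) for x in xs]
assert min(thetas) >= lower - 1e-6                 # uniform lower bound on theta(x, beta)
print("inf_x theta(x, beta) =", min(thetas), ">=", lower)
```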
#### 4.2.2. Continuity of \(\widehat{\psi}_{r}\)
Right-continuity of \(\widehat{\psi}_{r}\) follows from the fact that the infimum of continuous functions is right-continuous. By Lemma 3.3, \(\sup_{x\in[0,1]}\theta(x,\beta)<\infty\) for all \(\beta\geq C_{r}\).
Thus,
\[\begin{split}\widehat{\psi}_{r}(\beta-\varepsilon)&=\inf_{x\in[0,1]}J_{r}(x,\beta-\varepsilon)\\ &\geq\inf_{x\in[0,1]}[J_{r}(x,\beta)-\theta(x,\beta)\varepsilon]\\ &\geq\widehat{\psi}_{r}(\beta)-\sup_{x\in[0,1]}\theta(x,\beta)\varepsilon\to\widehat{\psi}_{r}(\beta),\qquad\varepsilon\downarrow 0.\end{split} \tag{4.9}\]
#### 4.2.3. Value of \(\widehat{\psi}_{r}\) at the boundary
We first show that \(\widehat{\psi}_{r}(C_{r})=0\). By Lemma 3.3,
\[\begin{split}\widehat{\psi}_{r}(C_{r})&=\inf_{x\in[0,1]}\left[J_{r}(x,d_{r}(x))+(C_{r}-d_{r}(x))\sup_{\beta\in[d_{r}(x),C_{r}]} \theta(x,\beta)\right]\\ &\leq\inf_{x\in[0,1]}(C_{r}-d_{r}(x))\sup_{\beta\in[d_{r}(x),C_{ r}]}\theta(x,\beta)=0,\end{split} \tag{4.10}\]
where we use that \(C_{r}=\sup_{x\in[0,1]}d_{r}(x)\) and that \(\theta(x,\beta)\) is bounded uniformly in \(x\) and \(\beta\). Since \(\hat{r}_{1}(x,y)\equiv 1\), we have
\[\widehat{\psi}_{r}(1)=\inf_{x\in[0,1]}J_{r}(x,1)=\inf_{x\in[0,1]}\int_{[0,1]} \mathrm{d}y\,\log\frac{1}{r(x,y)}. \tag{4.11}\]
### Proof of Theorem 1.10
Recall from Section 3.3.5 that
\[J_{r}(x,\beta)=\frac{1}{2v_{r}(x)}\,(\beta-d_{r}(x))^{2}+\frac{1}{6}\,\theta^{ \prime\prime}(x,\beta_{*})\,(\beta-d_{r}(x))^{3}, \tag{4.12}\]
for some \(\beta_{*}=\beta_{*}(x)\in[d_{r}(x),\beta]\). By Lemma 3.3, \(\theta^{\prime\prime}\) is bounded uniformly in \(x\) and also uniformly in \(\beta\) bounded away from \(1\), so it immediately follows that
\[\hat{\psi}_{r}(\beta)\leq\inf_{x\in\mathcal{D}_{r}}J_{r}(x,\beta)\leq\inf_{x \in\mathcal{D}_{r}}\frac{1}{2v_{r}(x)}(\beta-C_{r})^{2}[1+o(1)],\qquad\beta \downarrow C_{r}. \tag{4.13}\]
Let \(x_{*}=x_{*}(\beta)=\arg\min_{x\in[0,1]}J_{r}(x,\beta)\). It is clear from the above that \(\beta-C_{r}\leq\beta-d_{r}(x_{*})=O(\beta-C_{r})\). Hence
\[\hat{\psi}_{r}(\beta)=J_{r}(x_{*},\beta)\geq\frac{1}{2v_{r}(x_{*})}(\beta-C_{r})^{2}+O((\beta-C_{r})^{3}),\qquad\beta\downarrow C_{r}. \tag{4.14}\]
By continuity of \(r\), \(\inf_{x\in\mathcal{D}_{r}}|x-x_{*}|\downarrow 0\) as \(\beta\downarrow C_{r}\). Since also \(v_{r}\) is continuous, this implies that \(\frac{1}{2v_{r}(x_{*})}\geq\inf_{x\in\mathcal{D}_{r}}\frac{1}{2v_{r}(x)}[1+o(1)]\) as \(\beta\downarrow C_{r}\). We conclude that
\[\hat{\psi}_{r}(\beta)\geq\inf_{x\in\mathcal{D}_{r}}\frac{1}{2v_{r}(x)}(\beta- C_{r})^{2}[1+o(1)],\qquad\beta\downarrow C_{r}, \tag{4.15}\]
which settles the claim.
|
2308.03867 | From Sky to the Ground: A Large-scale Benchmark and Simple Baseline
Towards Real Rain Removal | Learning-based image deraining methods have made great progress. However, the
lack of large-scale high-quality paired training samples is the main bottleneck
to hamper the real image deraining (RID). To address this dilemma and advance
RID, we construct a Large-scale High-quality Paired real rain benchmark
(LHP-Rain), including 3000 video sequences with 1 million high-resolution
(1920*1080) frame pairs. The advantages of the proposed dataset over the
existing ones are three-fold: rain with higher-diversity and larger-scale,
image with higher-resolution and higher-quality ground-truth. Specifically, the
real rains in LHP-Rain not only contain the classical rain
streak/veiling/occlusion in the sky, but also the \textbf{splashing on the
ground} overlooked by deraining community. Moreover, we propose a novel robust
low-rank tensor recovery model to generate the GT with better separating the
static background from the dynamic rain. In addition, we design a simple
transformer-based single image deraining baseline, which simultaneously utilize
the self-attention and cross-layer attention within the image and rain layer
with discriminative feature representation. Extensive experiments verify the
superiority of the proposed dataset and deraining method over state-of-the-art. | Yun Guo, Xueyao Xiao, Yi Chang, Shumin Deng, Luxin Yan | 2023-08-07T18:39:14Z | http://arxiv.org/abs/2308.03867v2 | # From Sky to the Ground: A Large-scale Benchmark and Simple Baseline Towards Real Rain Removal
###### Abstract
Learning-based image deraining methods have made great progress. However, the lack of large-scale high-quality paired training samples is the main bottleneck hampering real image deraining (RID). To address this dilemma and advance RID, we construct a Large-scale High-quality Paired real rain benchmark (LHP-Rain), including 3000 video sequences with 1 million high-resolution (1920*1080) frame pairs. The advantages of the proposed dataset over the existing ones are three-fold: rain with higher diversity and larger scale, images with higher resolution, and higher-quality ground-truth. Specifically, the real rain in LHP-Rain not only contains the classical rain streak/veiling/occlusion in the sky, but also the **splashing on the ground** overlooked by the deraining community. Moreover, we propose a novel robust low-rank tensor recovery model to generate the GT by better separating the static background from the dynamic rain. In addition, we design a simple transformer-based single image deraining baseline, which simultaneously utilizes the self-attention and cross-layer attention within the image and rain layers for discriminative feature representation. Extensive experiments verify the superiority of the proposed dataset and deraining method over the state-of-the-art.
## 1 Introduction
Single image deraining aims to improve imaging quality by separating rain from the image background. In recent years, significant progress has been made in learning-based single image deraining by various sophisticated CNN architectures [14, 36, 53, 10] and powerful Transformer models [35, 44]. Although these state-of-the-art supervised methods have achieved impressive results on simulated datasets, a fact that cannot be ignored is that those competitive methods perform unsatisfactorily on diverse real rainy scenes. The core reason is the domain shift issue between the simplified synthetic rain and complex real rain [50, 32, 41, 51].
To solve this problem, an intuitive idea is to make the rain degradation model as realistic as possible [11]. Researchers formulate the rain imaging procedure into a comprehensive rain simulation model [9, 48, 12, 24, 18], in which the different visual appearances of rain streaks [9], accumulation veiling [48], haze [12, 18], and occlusion [24] are taken into consideration. Unfortunately, these linear simulation models still cannot accommodate the distribution of realistic rain well. For example, in Fig. 1, a realistic rain streak is usually not exactly a regular line-pattern streak but possesses irregular, non-uniform intensity and width. Apart from rain streaks, the existing rain simulation models cannot handle the complicated rain splashing on the ground, which presents as dense point-shaped texture, droplets or water waves, ruining the visibility of traffic signs such as lane lines, and also causing enormous negative effects for high-level vision.
Another research line obtains the 'clean' counterpart from realistic rainy videos [37, 20], which leverages the motion discrepancy between the static image background and dynamic rain. Unfortunately, these methods simply employ naive filtering strategies such as the percentile filter [37] and median filter [20], resulting in unsatisfactory GT with residual rain or over-smoothing. Moreover, the number and diversity of the existing real paired rain datasets are still limited. Few datasets have considered the rain splash on the ground, which is commonly observed in the real world but still rarely mentioned in the deraining community. In addition, the number of existing video sequences and image frames is not sufficient to cover diverse rain in terms of varied rain angle, intensity, density, length, width and so on. Last but not least, the existing realistic rainy images are mostly downloaded from the Internet with low quality: compression, watermarks, low resolution, missing annotations and so on. As such, constructing a large-scale high-quality paired realistic rain dataset is highly necessary.
In this work, we construct a new large-scale high-quality paired real rain benchmark. The strength of our benchmark is threefold. First, LHP-Rain contains diverse rain categories at a very large scale, including 3000 video sequences with over 1 million frame pairs. Second, apart from the conventional streak and veiling, our benchmark makes it possible to remove the representative and challenging ground splashing rain in the real world. Third, LHP-Rain is collected by smartphone at high resolution (1920*1080 pixels), and abundant objects under self-driving and surveillance scenes are captured for comprehensive evaluation. Moreover, we propose a novel robust low-rank tensor recovery method (RLRTR) for video deraining, which can generate higher-quality GT with better rain removal from the sky to the ground and better image structure preservation. We summarize the main contributions as follows:
* We construct a large-scale high-quality paired real rain benchmark for real single image deraining. To the best of our knowledge, LHP-Rain is the largest paired real rain dataset (3000 video sequences, 1 million frames) with high image resolution (1920*1080), and the first benchmark to claim and tackle the problem of ground splashing rain removal.
* We design a novel robust low-rank tensor recovery model for video deraining to better acquire paired GT. We provide detailed analysis to show that RLRTR can better separate the rain from the static background than previous datasets do.
* We propose a new transformer-based single image deraining baseline, which exploits both self-attention and cross-layer attention between the rain and image layer for better representation. Extensive experiments on different real datasets verify the superiority of proposed method.
## 2 Related Work
**Real rain datasets.** At present, researchers have mostly focused on network architecture design, while relatively less attention has been paid to real rain datasets. The insufficiency of realistic rain datasets is the main bottleneck hampering single image deraining. In Table 1, we provide a comprehensive summary of existing real rain datasets, which can be classified into two categories: rainy image only and paired rain-image. The former can be utilized via semi-supervised [40, 13] or unsupervised methods [7, 51]. The latter can be conveniently utilized by supervised training.
The key to a paired real dataset is how to acquire the pseudo-'clean' image from its rainy counterpart. There are two main ways to construct the pairs: video-based generation (SPA-data [37], RealRain-1K [20]) and time-interval acquisition (RainDS [32], GT-Rain [1]). All these datasets should ensure that the camera is strictly immobile during the acquisition process. SPA-data [37] was the first paired real dataset, which utilized human-supervised percentile video
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Datasets & Year & Source & Sequence & Frame & Resolution & Rain Categories & Annotation & Paired \\ \hline RID/RIS[19] & 2019 & Cam/Internet & None & 4.5K & 640*368 & streak, raindrop & Object detection & - \\ \hline NR-IQA[43] & 2020 & Internet & None & 0.2K & 1000*680 & streak, veiling & None & - \\ \hline Real3000[25] & 2021 & Internet & None & 3.0K & 942*654 & streak, veiling & None & - \\ \hline FCRealRain[51] & 2022 & Camera & None & 4.0K & 4240*2400 & streak, veiling & Object detection & - \\ \hline SPA-Data[37] & 2019 & Cam/Internet & 170 & 29.5K & 256*256 & streak & None & ✓ \\ \hline RainDS[32] & 2021 & Cam & None & 1.0K & 1296*728 & streak, raindrop & None & ✓ \\ \hline GT-Rain[1] & 2022 & Internet & 202 & 31.5K & 666*339 & streak, veiling & None & ✓ \\ \hline RealRain-1K[20] & 2022 & Cam/Internet & 1120 & 1.1K & 1512*973 & streak, veiling, occlusion & None & ✓ \\ \hline
**LHP-Rain** & 2023 & Camera & **3000** & **1.0M** & 1920*1080 & streak, veiling, occlusion, **splashing** & Object detection/**Lane** & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of existing real rain datasets.
filtering to obtain the GT. Instead of generation, GT-Rain [1] collected pairs of the same scene under rainy and good weather, respectively. A similar idea has been adopted in RainDS [32] by manually mimicking rainfall with a sprinkler.
Despite the rapid development promoted by these pioneer datasets, there remain some important issues to be solved: insufficient quantity and rain diversity (e.g., ground splashing is not considered). In this work, we contribute a large-scale high-quality paired real rain dataset (Section 3.1) with diverse rain and abundant objects. Moreover, a novel GT generation method is proposed to produce higher-quality pairs (Section 3.3).
**Single image deraining.** Single image deraining methods have made great progress in the last decade, including optimization models [26, 22, 4], deep convolutional networks [12, 36, 10] and transformers [35, 44]. Fu _et al._[9] first introduced the residual deep CNN for single image deraining. Later, researchers further increased the network depth by stacking similar modules, such as the well-known recurrent [21, 49] and multi-stage progressive [33, 53] strategies. Meanwhile, multi-scale processing has been widely explored to improve the representation, such as multi-scale fusion [14] and wavelet decomposition [46]. Further, side information about rain attributes, such as density [54], depth [12], directionality [38], location [47] and non-locality [17], has been extensively utilized to enhance the deraining performance.
Benefiting from the self-attention mechanism for long-range relationship modeling, transformer-based methods have achieved significant performance for single image deraining [39, 35, 44, 6]. Very recently, Xiao _et al_.[44] proposed an image deraining transformer (IDT) with relative-position-enhanced and spatial-based multi-head self-attention. Chen _et al_. [6] proposed a sparse Transformer architecture to solve the redundant feature issue. In this work, we propose a simple yet effective dual-branch transformer baseline which simultaneously utilizes the self-attention within the rain/image layers and the cross-layer attention between them, so as to jointly improve the discriminative disentanglement of the rain and image layers (Section 4).
## 3 Large-scale high-quality paired Benchmark
### Benchmark Collection and Statistics
Due to the difficulty and inconvenience of collecting real rain videos, the video sequences and frames of existing paired real rain datasets are still limited, as shown in Table 1. In this work, we collect the real rain sequences with smartphones with a 24mm focal length lens, sampled at 30 fps. The data collection process is illustrated in Fig. 3(a). Firstly, to keep the camera immobile, we employ a tripod to capture real rain videos with a static background (no moving objects except rain). For each sequence, we record approximately 15 seconds and extract the intermediate steady 10s into our dataset. Then, we manually pick out moving objects to remove unexpected disturbances. Finally, we employ the proposed RLRTR (Section 3.3) to obtain high-quality GT.
Overall, we collect 3000 video sequences with approximately 1 million frames across 8 cities in 4 countries around the world: China (2091 sequences), England (51 sequences), and the Philippines and Indonesia (858 sequences). We visualize the per-country image counts and location distribution of LHP-Rain in Fig. 3(b). The rainfall levels vary from light rain (10mm/day) to rainstorm (300mm/day) due to the diversity of local climates. Over 17 typical scenes are captured, including parking lots, streets, alleys, playgrounds, courtyards, forests, etc. Besides, 2490 sequences are captured in the daytime and 510 sequences at night. More Rainy/GT pairs from LHP-Rain are displayed in Fig. 3(c). As the location changes, the backgrounds vary from nature to city with diverse rain patterns, such as rain streaks, veiling effect, occlusion and splashing. Note that, although the background is the same within each video sequence, the rain in each frame is vastly different in appearance, including streak, veiling, occlusion and splashing. Here we separate 2100 sequences as the training set, 600 sequences as the validation set and the other 300 sequences as the test set. To further visualize the quantity distribution of rain and scene for each sequence, a sunburst chart is illustrated in Fig. 2(a).
Figure 2: Features of the proposed benchmark LHP-Rain. (a) Distribution of rain and scene of the proposed benchmark. (b) Our proposed LHP-Rain outperforms others in terms of rain diversity and sequence amount. (c) LHP-Rain collects high-resolution and annotated rainy images without copyright, compression and blur.
### Benchmark Features
**Rain with higher diversity and larger scale.** We are concerned not only with the number of sequences and total frames but also with the diversity of realistic rain, both of which have a great impact on generalization to real-world rain. In Fig. 2(b), we show the statistical distribution of rain diversity and sequences/frames in typical real paired rain datasets. All datasets cover the noticeable rain streak, especially SPA-data [37]. RainDS [32] additionally takes the raindrop into consideration with 1000 frames, while GT-Rain [1] and RealRain-1K [20] further capture the accumulation veiling artifact in heavy rain. LHP-Rain contains not only rain streaks and veiling in the sky, but also challenging highlight occlusion and splashing water on the ground. To the best of our knowledge, the proposed LHP-Rain is the first benchmark to collect and tackle ground splashing rain in the real world, which is commonly ignored by existing datasets.
**Image with higher resolution and abundant objects.** The existing datasets pay much attention to the rain, ignoring that high-quality images are also what we really need. Unfortunately, existing realistic rainy images are generally downloaded from the Internet with various image quality problems: compression artifacts, watermarks, low resolution, out-of-focus blur, and a lack of objects, to name a few, which may cause challenges for high-level vision applications. In Fig. 2(c), we show typical examples from each dataset. The image background of RealRain-1K [20] suffers from serious out-of-focus blur, since the camera was deliberately focused on the rain. GT-Rain [1] contains obvious compression artifacts, since it originates from compressed YouTube streams. There are numerous scenes with narrow views and watermarks in SPA-data [37], because only patches (256*256) cropped from the original frames are released.
To improve the image quality, we personally capture high-resolution (1920*1080) realistic rain videos with smartphones. Moreover, LHP-Rain is not only designed for rain restoration, but is also important for object detection and segmentation tasks under adverse weather, with abundant objects oriented toward self-driving and video surveillance scenes. Thus, we provide annotations for object detection and lane segmentation. Five typical objects, including person, car, bicycle, motorcycle and bus, are annotated with bounding boxes, with 326,961 instances in total. For lane segmentation, we annotate 24,464 lane masks to evaluate the effect of rain splashing removal. Note that the same object in different frames is regarded as different instances because the rain is inconstant and changes frame by frame.
**Higher-quality ground-truth.** The quality of GT is critical for a paired real rain dataset. It is difficult to determine what is good or bad GT in an absolutely fair way. In this paper, we assume that the better the rain removal, the better the image quality. Therefore, we employ several no-reference image quality assessments, DIIVINE [29], NIQE [28] and B-FEN [43], to evaluate the image quality of the rain-free images. The former two are hand-crafted general image quality indexes, and the last one, B-FEN, is a learning-based index especially designed for deraining quality assessment. We select all the video backgrounds in SPA-data [37], GT-Rain [1], RealRain-1K [20] and LHP-Rain for evaluation. In
Figure 4: The GT quality of LHP-Rain is superior to others on DIIVINE (lowest), NIQE (lowest) and B-FEN (highest).
Figure 3: Illustration of the proposed benchmark LHP-Rain. (a) Overall procedure of obtaining rainy/clean image pair. (b) Quantity and location distribution of LHP-Rain. (c) Rainy/GT samples of LHP-Rain from different locations. Scenes are varied from nature to city, over day and night with diverse rain patterns from sky to the ground.
Fig. 4, we can observe that the proposed LHP-Rain consistently obtains the best results in terms of the different evaluation indexes, which strongly supports the higher quality of its GT.
### Robust Low-rank Tensor Recovery Model
Given the rainy video \(\mathbf{\mathcal{O}}\in\mathbb{R}^{h\times w\times t}\), the key is how to properly obtain the paired GT. The existing methods simply employ naive filtering techniques benefiting from the temporal consistency of the static background. Due to the slight camera vibration caused by wind, we further leverage an affine transformation operator \(\tau\)[52] to achieve pixel-level alignment of each frame. Thus, a multi-frame rainy video can be described by the following formula:
\[\mathcal{O}\circ\tau=\mathcal{B}+\mathcal{R}+\mathcal{N}, \tag{1}\]
where \(\mathcal{B}\in\mathbb{R}^{h\times w\times t}\) is the rain-free video, \(\mathcal{R}\in\mathbb{R}^{h\times w\times t}\) represents the rain, \(\mathcal{N}\in\mathbb{R}^{h\times w\times t}\) denotes the random noise, and \(\tau\) denotes the affine transformation that ensures each frame of the rainy video is pixel-level aligned. In this work, we formulate video deraining as an inverse problem via the _maximum-a-posteriori_ framework as follows:
\[\min_{\mathcal{B},\mathcal{R},\tau}\frac{1}{2}||\mathcal{B}+\mathcal{R}- \mathcal{O}\circ\tau||_{F}^{2}+\omega P_{b}(\mathcal{B})+\mu P_{r}(\mathcal{ R}), \tag{2}\]
where \(P_{b}\) and \(P_{r}\) are the priors for the image and rain, respectively, and \(\omega\) and \(\mu\) are the corresponding hyper-parameters. For the aligned rainy video, when there are no moving objects except the rain, the rain-free background image is the same for all rainy frames. That is to say, the clean video \(\mathcal{B}\) has an extreme _global_ low-rank property along the temporal dimension; ideally, its rank is equal to one for each scene. On the other hand, the clean video \(\mathcal{B}\) also has a strong _non-local_ low-rank property along the spatial dimension, due to the self-similarity widely employed in image restoration [8]. Moreover, we further take the _local_ smoothness of the video \(\mathcal{B}\) into consideration via total variation (TV) regularization [4]. Thus, the joint global-nonlocal-local prior along both the spatial and temporal dimensions is fully exploited for better representation of the static video \(\mathcal{B}\):
\[P_{b}(\mathcal{B})=\omega\sum_{i}\Big{(}\frac{1}{\lambda_{i}^{2}}||\mathcal{S}_{i}\mathcal{B}\times_{3}Q_{i}-\mathcal{J}_{i}||_{F}^{2}+||\mathcal{J}_{i}||_{tnn}\Big{)}+\gamma||\nabla_{t}\mathcal{B}||_{1}, \tag{3}\]
where \(\mathcal{S}_{i}\mathcal{B}\in\mathbb{R}^{p^{2}\times k\times t}\) is the 3-D tensor constructed via non-local clustering of a sub-cubic \(u_{i}\in\mathbb{R}^{p\times p\times t}\)[3], \(p\) and \(k\) are the spatial size and number of the sub-cubics respectively, \(Q_{i}\in\mathbb{R}^{d\times t}(d\ll t)\) is an orthogonal subspace projection matrix used to capture the temporal low-rank property, \(\times_{3}\) is the tensor product along the temporal dimension [16], \(\mathcal{J}_{i}\) represents the low-rank approximation variable, \(||\bullet||_{tnn}\) denotes the tensor nuclear norm [3], \(\nabla_{t}\) is the temporal difference operator, and \(\gamma\) and \(\lambda_{i}\) are the regularization parameters. As for the rain \(\mathcal{R}\), we formulate it as a sparse error [42] via \(L_{1}\) sparsity. Thus, Eq. (2) can be expressed as:
\[\begin{split}&\Big{\{}\hat{\mathcal{B}},\hat{\mathcal{R}},\hat{\mathcal{J}}_{i},\hat{\tau},\hat{Q}_{i}\Big{\}}=\operatorname*{arg\,min}_{\mathcal{B},\mathcal{R},\mathcal{J}_{i},\tau,Q_{i}}\frac{1}{2}||\mathcal{B}+\mathcal{R}-\mathcal{O}\circ\tau||_{F}^{2}\\ &+\mu||\mathcal{R}||_{1}+\omega\sum_{i}\Big{(}\frac{1}{\lambda_{i}^{2}}||\mathcal{S}_{i}\mathcal{B}\times_{3}Q_{i}-\mathcal{J}_{i}||_{F}^{2}+||\mathcal{J}_{i}||_{tnn}\Big{)}+\gamma||\nabla_{t}\mathcal{B}||_{1}.\end{split} \tag{4}\]
**Optimization.** Due to the difficulty of estimating multiple variables directly, we adopt the alternating minimization
Figure 5: Analysis of different video deraining results on our dataset. From left to right, the first column is the original rainy frame, and the remaining five columns represent different methods, namely Median Filter, SPA, FastDeRain, J4R-Net and the proposed RLRTR. From top to bottom, the first row shows the deraining results, the second row is the rain layer of the deraining results, the third row denotes the section line of the deraining results and the last row represents the horizontal and vertical gradient distributions of the deraining results.
scheme to solve Eq. (4) with respect to each variable.
1) _Affine Transformation_\(\tau\): Since \(\mathcal{O}\circ\tau\) is a nonlinear geometric transform, it's difficult to directly optimize \(\tau\). A common technique is to linearize around the current estimate and iterate as follows: \(\mathcal{O}\circ\tau+\nabla\mathcal{O}\triangle\tau=\mathcal{B}+\mathcal{R}+ \mathcal{N}\)[31], where \(\nabla\mathcal{O}\) is the Jacobian of the image \(\mathcal{O}\) with respect to \(\tau\). This method iteratively approximates the original nonlinear transformation with a locally linear approximation [31].
2) _Rain Estimation_ \(\mathcal{R}\): By ignoring the variables independent of \(\mathcal{R}\), we obtain the following subproblem:
\[\hat{\mathcal{R}}=\arg\min_{\mathcal{R}}\frac{1}{2}||\mathcal{B}+\mathcal{R}- \mathcal{O}\circ\tau||_{F}^{2}+\mu||\mathcal{R}||_{1}. \tag{5}\]
Eq. (5) is an \(L_{1}\) minimization problem which can be easily solved by soft thresholding with a closed-form solution [23].
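As a concrete illustration, the closed-form solution of Eq. (5) is the entry-wise soft-thresholding of the residual \(\mathcal{O}\circ\tau-\mathcal{B}\). Below is a minimal NumPy sketch; the variable names are ours and not taken from any released code.

```python
import numpy as np

def soft_threshold(x, tau):
    """Entry-wise soft-thresholding: sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def update_rain(O_aligned, B, mu):
    """Closed-form rain update of Eq. (5): R = soft_threshold(O∘tau - B, mu)."""
    return soft_threshold(O_aligned - B, mu)

# Tiny usage example with random (h, w, t) tensors standing in for the video.
rng = np.random.default_rng(0)
O_aligned = rng.random((8, 8, 5))      # warped rainy video O∘tau
B = rng.random((8, 8, 5))              # current background estimate
R = update_rain(O_aligned, B, mu=0.1)
```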
3) _Subspace Projection_\(Q_{i}\): We enforce the orthogonal constraint on \(Q_{i}^{T}Q_{i}=I\) with the following subproblem:
\[\hat{Q}_{i}=\arg\min_{Q_{i}^{T}Q_{i}=I}\frac{1}{\lambda_{i}^{2}}||\mathcal{S} _{i}\mathcal{B}\times_{3}Q_{i}-\mathcal{J}_{i}||_{F}^{2}. \tag{6}\]
According to [45], Eq. (6) has the closed-form solution, which can be obtained by the _rank-d_ singular value decomposition of the folding matrix of \(\mathcal{S}_{i}\mathcal{B}\), where \(d\) is the measurement of the intrinsic subspace of the temporal dimension. In this work, we empirically set \(d\leq 3\).
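A minimal sketch of this closed-form update is given below. It assumes the convention that \(Q_{i}\) collects the top-\(d\) temporal singular vectors of the mode-3 unfolding of \(\mathcal{S}_{i}\mathcal{B}\); the unfolding orientation and variable names are our assumptions for illustration.

```python
import numpy as np

def update_subspace(S_i_B, d=3):
    """Rank-d subspace Q_i from the mode-3 (temporal) unfolding of S_i_B.

    S_i_B : array of shape (p*p, k, t), the non-local group tensor.
    Returns Q_i of shape (d, t) with orthonormal rows.
    """
    p2, k, t = S_i_B.shape
    unfold_3 = S_i_B.reshape(p2 * k, t).T            # t x (p^2 * k) mode-3 unfolding
    U, _, _ = np.linalg.svd(unfold_3, full_matrices=False)
    return U[:, :d].T                                # top-d temporal singular vectors

rng = np.random.default_rng(0)
Q_i = update_subspace(rng.random((16, 20, 30)), d=3)
assert np.allclose(Q_i @ Q_i.T, np.eye(3), atol=1e-8)   # orthonormal rows
```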
4) _Low-rank Approximation_\(\mathcal{J}_{i}\): Dropping the irrelevant variables, we can get following subproblem:
\[\hat{\mathcal{J}}_{i}=\arg\min_{\mathcal{J}_{i}}\frac{1}{\lambda_{i}^{2}}\parallel \mathcal{S}_{i}\mathcal{B}\times_{3}Q_{i}-\mathcal{J}_{i}\parallel_{F}^{2}+|| \mathcal{J}_{i}||_{tnn}. \tag{7}\]
This is a typical tensor nuclear norm minimization problem, which can be solved by the singular value thresholding algorithm [2, 3].
5) _Clean Video Estimation_\(\mathcal{B}\): We fix the other variables and optimize \(\mathcal{B}\) with the following subproblem:
\[\min_{\mathcal{B}}\frac{1}{2}||\mathcal{B}+\mathcal{R}-\mathcal{O}\circ\tau||_{F}^{2}+\omega\sum\limits_{i}\frac{1}{\lambda_{i}^{2}}||\mathcal{S}_{i}\mathcal{B}\times_{3}Q_{i}-\mathcal{J}_{i}||_{F}^{2}+\gamma||\nabla_{t}\mathcal{B}||_{1}. \tag{8}\]
Due to the non-differentiability of the \(L_{1}\) norm in Eq. (8), we apply the ADMM [23] to decouple this problem into several sub-problems with closed-form solutions. Please refer to the supplementary material for the whole algorithm details.
**Discussion.** Figure 5 illustrates the comparison results of representative video deraining methods: filter-based methods (median filter[20], SPA[37]), an optimization-based method (FastDeRain[15]) and a learning-based method (J4R-Net[24]). The first and second rows show the video deraining results for the image and rain layers, respectively. RLRTR removes almost all the rain from the sky to the ground, including rain streaks and splashing, and preserves the image details well, while the other methods more or less leave rain streak residuals and also cause noticeable damage to the image. In the third row, we randomly choose two 1D section lines of the deraining results. The section line of RLRTR is smoother with fewer burrs than those of the other methods. Moreover, SPA, FastDeRain and J4R-Net unexpectedly attenuate the spike signal of the sharp edge, while the proposed RLRTR preserves the spike signal well. It is well known that natural images are isotropic and their gradient distributions along different directions should be close to each other [34]. Compared with the other results, in the fourth row the gradient distributions along the vertical and horizontal directions of RLRTR are the most similar to each other, which further indirectly verifies the naturalness of the deraining result and yields better paired clean-rainy GT.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{A\(\rightarrow\)A} & \multicolumn{2}{c}{B\(\rightarrow\)A} & \multicolumn{2}{c}{C\(\rightarrow\)A} & \multicolumn{2}{c}{A\(\rightarrow\)B} & \multicolumn{2}{c}{B\(\rightarrow\)B} & \multicolumn{2}{c}{C\(\rightarrow\)B} & \multicolumn{2}{c}{A\(\rightarrow\)C} & \multicolumn{2}{c}{B\(\rightarrow\)C} & \multicolumn{2}{c}{C\(\rightarrow\)C} \\ \cline{2-19} & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline Rainy Image & \multicolumn{4}{c}{32.60 / 0.9173} & \multicolumn{4}{c}{19.48 / 0.5849} & \multicolumn{4}{c}{29.97 / 0.8497} \\ \hline SPANet & 38.53 & 0.9875 & 22.93 & 0.8207 & 31.46 & 0.9612 & 20.01 & 0.6148 & 21.51 & 0.7145 & 19.20 & 0.5706 & 28.00 & 0.8905 & 20.10 & 0.8061 & 31.19 & **0.9346** \\ \hline PRENet & 37.05 & 0.9696 & 22.44 & 0.7713 & **32.46** & 0.9387 & 20.29 & 0.5860 & 20.65 & 0.6005 & 19.34 & 0.5530 & 27.57 & 0.8595 & 20.91 & 0.7222 & 32.13 & 0.9177 \\ \hline RCDNet & 39.74 & 0.9661 & 22.51 & 0.8392 & 32.30 & 0.9378 & 20.09 & 0.5785 & 21.04 & 0.6106 & 19.09 & 0.5264 & 25.46 & 0.7959 & 21.38 & 0.8047 & 32.34 & 0.9152 \\ \hline IORDER-E & 40.63 & 0.9794 & 23.47 & 0.7426 & 31.23 & 0.9234 & 19.98 & 0.5799 & 21.24 & 0.6854 & 18.76 & 0.4861 & 27.13 & 0.8531 & 22.14 & 0.8433 & 31.24 & 0.8847 \\ \hline MPRNet & 46.06 & 0.9894 & 24.27 & 0.8428 & 32.37 & 0.9379 & 19.87 & 0.6286 & 22.00 & 0.6515 & 19.47 & 0.5889 & 28.41 & 0.8807 & **23.82** & 0.8052 & 33.34 & 0.9309 \\ \hline GT-Rain & 37.21 & 0.9827 & **25.30** & **0.9243** & 26.46 & 0.9145 & 20.07 & **0.6941** & **22.51** & **0.7300** & **21.14** & 0.5698 & 28.62 & 0.8675 & 23.19 & 0.8098 & 32.18 & 0.9132 \\ \hline Uformer-B & **46.42** & **0.9917** & 24.08 & 0.8979 & 23.21 & **0.9667** & 19.70 & 0.6875 & 21.60 & 0.7124 & 19.10 & **0.6622** & **28.74** & **0.9262** & 22.91 & **0.8734** & **33.56** & 0.9317 \\ \hline IDT & 45.74 & 0.9889 & 23.80 & 0.8334 & 32.38 & 0.9422 & **20.34** & 0.6306 & 21.98 & 0.6536 & 19.44 & 0.5977 & 26.90 & 0.8742 & 23.34 & 0.7897 & 33.02 & 0.9310 \\ \hline
**SCD-Former** & **46.89** & **0.9941** & **26.13** & **0.9122** & **34.38** & **0.9798** & **20.98** & **0.6985** & **22.79** & **0.7684** & **21.71** & **0.6893** & **29.41** & **0.9127** & **23.56** & **0.8626** & **34.33** & **0.9468** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative comparisons with SOTA supervised methods on paired real datasets SPA-data (A), GT-Rain (B) and proposed LHP-Rain (C) under 9 different task settings. X\(\rightarrow\)Y means training on the dataset X and testing on the dataset Y. The degraded results of the three datasets are also provided. Top \(1_{st}\) and \(2_{nd}\) results are marked in **red** and **blue** respectively.
Figure 6: Overall framework of the SCD-Former. It utilizes self-attention and cross-layer attention from the rain layer to the image layer, which serves as side information to recover the image layer.
## 4 SCD-Former: Image Deraining Baseline
The degradation procedure can be formulated as:
\[O=B+R. \tag{9}\]
It has been shown that rain cues such as rain location [47], serving as attention, are informative in CNN-based image restoration. In this work, we show that rain attention is also beneficial in transformers. In Fig. 6, we design a simple two-stream Self- and Cross-attention Deraining Transformer (SCD-Former), in which the two-stream network is designed to restore the rain and image layers, respectively. On one hand, we utilize self-attention in each rain/image stream independently; on the other hand, we further exploit cross-layer attention between the rain and image streams. Thus, the rain layer interacts collaboratively with the image layer to further improve the discriminative representation.
**Self-attention and cross-layer attention.** In this work, we exploit self-attention on rain and image layer as Rain layer Self-Attention (RSA) and Image layer Self-Attention (ISA). Given the input feature \(X\), it will be projected into query (_Q_), key (_K_) and value (_V_) by three learnable weight matrices \(W_{q}\), \(W_{k}\) and \(W_{v}\). Then dot-product, scaling and softmax among \(Q\), \(K\) and \(V\) will be conducted. The self-attention function is defined as:
\[Attention(\textit{Q, K, V})=softmax(\frac{Q{K^{T}}}{\sqrt{d_{k}}})V. \tag{10}\]
We further design a Cross-Layer Attention (CLA) module which bridges the attention relationship between the rain and image layers. The CLA conducts the attention operation among \(Q_{r}\) from the rain layer and \(K_{b}\), \(V_{b}\) from the image layer as follows:
\[\textit{CLA}(Q_{r},K_{b},V_{b})=softmax(\frac{Q_{r}{K_{b}}^{T}}{\sqrt{d_{k}}} )V_{b}. \tag{11}\]
The tokens of the rain layer serve as query tokens \(Q_{r}\) to interact with the patch tokens \(K_{b}\) and \(V_{b}\) from the image layer through the attention mechanism. By calculating the correlation degree between both layers, the highly attentive locations of rain residual can be acquired, which provides an extra prior for enhanced feature representation. Note that the CLA module is stacked over the whole network. Compared with previous work, SCD-Former exploits not only the self-attention but also the cross-layer attention between the rain and image layers for better restoration.
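To make the interaction concrete, a minimal single-head sketch of the CLA operation in Eq. (11) is given below. It is a runnable PyTorch illustration under our own naming; the actual SCD-Former block is multi-head and stacked over the network, as described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerAttention(nn.Module):
    """Single-head sketch of Eq. (11): queries from the rain layer,
    keys/values from the image layer (dimensions and names are ours)."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)   # Q_r from rain-layer tokens
        self.to_k = nn.Linear(dim, dim)   # K_b from image-layer tokens
        self.to_v = nn.Linear(dim, dim)   # V_b from image-layer tokens
        self.scale = dim ** -0.5

    def forward(self, rain_feat, image_feat):
        # rain_feat, image_feat: (batch, num_tokens, dim)
        q = self.to_q(rain_feat)
        k = self.to_k(image_feat)
        v = self.to_v(image_feat)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v                   # rain-guided image-layer features

# Usage with illustrative sizes: 64 tokens of dimension 48.
cla = CrossLayerAttention(dim=48)
out = cla(torch.randn(2, 64, 48), torch.randn(2, 64, 48))
```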
**Implementation details.** We train the network using the Charbonnier loss [5], supervised by the ground truth of the rain and image layers:
\[\mathcal{L}=||O-B-R||_{F}^{2}+\lambda_{r}||R-\hat{R}||_{1}+\lambda_{b}||B-\hat {B}||_{1}. \tag{12}\]
The framework is implemented on two RTX 3090 GPUs. We set the hyperparameters \(\lambda_{b}\) and \(\lambda_{r}\) to 1. The images are randomly cropped into 256 * 256 patches for training. The learning rate of the network is set to 0.0002. The Adam optimizer is adopted for optimization with a batch size of 32.
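For reference, the training objective in Eq. (12) can be sketched as follows; reading \(\hat{R}\) and \(\hat{B}\) as the ground-truth layers is our interpretation of the notation, and the reduction (sum vs. mean) is an illustrative choice.

```python
import torch

def scd_former_loss(O, B_pred, R_pred, B_gt, R_gt, lambda_b=1.0, lambda_r=1.0):
    """Sketch of Eq. (12) with the paper's setting lambda_b = lambda_r = 1."""
    consistency = torch.sum((O - B_pred - R_pred) ** 2)     # ||O - B - R||_F^2
    rain_term = torch.sum(torch.abs(R_pred - R_gt))         # ||R - R_hat||_1
    image_term = torch.sum(torch.abs(B_pred - B_gt))        # ||B - B_hat||_1
    return consistency + lambda_r * rain_term + lambda_b * image_term
```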
## 5 Experiments
**Datasets.** We conduct the experiments on paired datasets SPA-data[37], GT-Rain[1] and LHP-Rain. For the SPA-data, training set is cropped from 29,500 images and 1000 rainy images are used for testing. For the GT-Rain, 89 sequences are used for training and 7 sequences are used for testing. For the LHP-Rain, 2100 sequences are used for training and 300 sequences are used for testing. To evaluate the deraining performance on real scenes, we choose typical real rainy images from Internet-data. For single image deraining methods, we select the representative supervised deraining methods, including the CNN-based SPANet[37],
Figure 7: Visual comparisons on LHP-Rain. Comparing with state-of-the-arts, SCD-Former achieves more visual pleasing deraining results and it is capable of removing the highlight occlusion on the car and the splashing water on the ground.
PReNet[33], RCDNet[36], JORDER-E[47], MPRNet[27], GT-Rain[1], and the transformer-based Uformer[39] and IDT[44].
**Evaluation metrics.** We employ the full-reference PSNR and SSIM to evaluate the single image deraining results. Moreover, mean Average Precision (mAP) and Accuracy (Acc) are employed to evaluate object detection and lane segmentation after restoration by deraining methods.
### Quantitative Evaluation
**Deraining results on benchmarks.** We make comparisons with state-of-the-art deraining methods on three datasets SPA-data (A), GT-Rain (B) and LHP-Rain (C). The quantitative results are reported in Table 2 under the following columns: A\(\rightarrow\)A, B\(\rightarrow\)B and C\(\rightarrow\)C. It is observed that transformer-based methods perform better than most CNN-based methods except for MPRNet in terms of PSNR and SSIM because of the superior representation of self-attention. Note that SCD-Former outperforms the existing state-of-the-art methods on all benchmarks, which confirms the effectiveness of our method with both self and cross-layer attention.
### Qualitative Evaluation
**Evaluation on LHP-Rain.** To further validate the deraining performance, we compare with the qualitative results of typical methods on LHP-Rain. As shown in Fig. 7, SCD-Former achieves more visual pleasing results without rain residual and artifacts comparing with other methods, which cleans the rain streaks and veiling effect on the trees, highlight occlusion on the red car and the ground splashing water.
**Evaluation on real rainy images.** To evaluate the performance on real rainy images, we train SCD-Former on synthetic rain Rain100L[48], real rain SPA-data, GT-Rain and LHP-Rain respectively and test on real rainy images. As shown in Fig. 8, the model trained on Rain100L performs poorly due to the huge domain gap. SPA-data and GT-Rain could remove real rain in the sky partially but they cannot handle the splashing water on the ground. The model trained on LHP-Rain has the best deraining performance which simultaneously removes rain streaks, veiling and ground splashing water without destroying image details.
### Ablation Study
**Effectiveness of subspace projection.** The subspace projection is used to characterize the extreme global low-rank property along the temporal dimension. In Fig. 9, there is obvious rain residual without subspace projection, implying that relying only on the local prior of temporal smoothness is insufficient to characterize the temporal dimension. Since the temporal low-rank property is neglected, rain residual remains in the results.
**Effectiveness of affine transformation.** The affine transformation is applied to guarantee pixel-level alignment of the rainy video. As shown in Fig. 9, background residual and distortion obviously exist in the rain layer without affine transformation, because the extreme low-rank property along the temporal dimension is broken by the misalignment among the frames in the video.
**Effectiveness of cross-layer attention.** The design aims to find the correlations between the rain layer and the image layer. Over the iterations, the image-layer sub-network pays more attention to the rain features brought by the cross-layer attention. In Table 3, we show that the complementary guidance from the rain layer promotes the restoration of the image layer.
Figure 8: Evaluation of the diversity of the LHP-Rain. We train SCD-Former on different datasets: Rain100L, SPA-data, GT-Rain and LHP-Rain, and test on other datasets. The model trained on LHP-Rain has achieved better deraining results.
\begin{table}
\begin{tabular}{c c c} \hline \hline Cross-layer attention & PSNR & SSIM \\ \hline - & 33.92 & 0.9384 \\ ✓ & **34.33** & **0.9403** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study of cross-layer attention.
Figure 9: Ablation study of the robust low-rank tensor recovery model. The first row represents the deraining results, and the second row is the corresponding rain. The first column is the original rainy frame, and the remaining three columns represent the model without subspace projection, the model without affine transformation, and the full RLRTR.
### Discussion
**Rain diversity of different datasets.** The rain diversity of a dataset can be validated by training models on one dataset and testing on the other unseen datasets among SPA-data (A), GT-Rain (B) and LHP-Rain (C). As shown in Table 2, on one hand, the best C\(\rightarrow\)A results positively improve the performance on A, while all A\(\rightarrow\)C results perform worse than the degraded result of C. On the other hand, the best C\(\rightarrow\)B results show improvement while B\(\rightarrow\)C results drop severely. Therefore, the outcome indicates that our LHP-Rain contains more diverse rain categories than the others, because they cannot handle extremely challenging cases such as occlusion and ground splashing.
**Evaluation on downstream tasks.** We further evaluate the image deraining results on high-level tasks. For object detection, we apply the official YOLOv5 model on the deraining results and report the mean average precision (mAP) of different classes in Table 4, where SCD-Former reaches the best average mAP among typical objects. For lane segmentation, we choose LaneNet[30] to predict the lanes on LHP-Rain, and SCD-Former yields the largest improvement in segmentation accuracy. This is reasonable because SCD-Former performs well in removing ground splashing water and recovering the lane lines. The visualization results in Fig. 10 show that the lanes on the ground surface and the bicycles can be properly predicted after deraining by SCD-Former.
**User study on benchmark quality.** We recruited 126 volunteers to anonymously vote for the benchmark with the best quality. Among the existing benchmarks, we randomly select 100 samples and conduct a user study covering rain diversity, image quality (resolution, JPEG, blur) and GT quality. The results are listed in Table 5, where LHP-Rain consistently outperforms the other benchmarks with more than 50% of the votes.
## 6 Limitation
Our proposed video deraining method is limited in removing the haze in heavy rain scenes. RLRTR is adept at separating the static background from the dynamic rain. However, because the mist is steady over the short capture interval and thus almost motionless in the background, RLRTR cannot decompose the haze from the image layer well. Fig. 11 shows examples of rain and mist in our benchmark. Although the challenging rain patterns such as rain streaks are clearly removed, the result still contains haze. We look forward to handling the static haze in the future.
## 7 Conclusion
In this paper, we propose a large-scale and high-quality paired real rain benchmark. Our proposed LHP-Rain provides diverse rain categories, especially the ground splashing rain issue, which is first claimed in the deraining community. The model trained on LHP-Rain generalizes well to various real rainy scenes with great rain removal performance. Moreover, the proposed low-rank tensor recovery model can generate high-quality GT, and detailed analysis confirms better results than the alternatives. In addition, we propose a single image deraining baseline which performs well in removing rain from the sky to the ground. Extensive experiments verify the superiority of the proposed benchmark, and the segmentation task improves significantly after removing splashing.
**Acknowledgements.** This work was supported in part by the National Natural Science Foundation of China under Grant 61971460 and Grant 62101294, in part by JCJQ Program under Grant 2021-JCJQ-JJ-0060 and in part by the Fundamental Research Funds for the Central Universities, HUST: 2022JYCXJJ001.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Method & Det. (mAP) & Gain (Det.) & Seg. (Acc) & Gain (Seg.) \\ \hline Rainy & 0.543 & - & 0.237 & - \\ SPANet & 0.563 & +0.020 & 0.268 & +0.031 \\ PReNet & 0.560 & +0.022 & 0.255 & +0.018 \\ RCDNet & 0.556 & +0.018 & 0.361 & +0.124 \\ JORDER-E & 0.568 & +0.025 & 0.385 & +0.148 \\ MPRNet & 0.560 & +0.017 & 0.350 & +0.113 \\ Uformer-B & 0.568 & +0.025 & 0.306 & +0.069 \\ IDT & 0.570 & +0.027 & 0.365 & +0.128 \\
**SCD-Former** & **0.575** & **+0.031** & **0.449** & **+0.212** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Evaluation of high-level tasks on deraining results.
Figure 11: The limitation of RLRTR. The challenging occlusion effect could be removed by RLRTR from the background while the static haze is preserved.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & **LHP-Rain** & SPA & RealRain1K & GT-Rain \\ \hline Rain diversity & **63\%** & 8\% & 13\% & 16\% \\ Image quality & **51\%** & 14\% & 20\% & 15\% \\ GT quality & **55\%** & 15\% & 16\% & 14\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: User study on benchmarks quality.
Figure 10: Evaluation of high-level tasks on lane segmentation and object detection. The performance improves significantly after removing rain streak and ground splashing from lanes and bicycles by our proposed SCD-Former. |
2305.02549 | FormNetV2: Multimodal Graph Contrastive Learning for Form Document
Information Extraction | The recent advent of self-supervised pre-training techniques has led to a
surge in the use of multimodal learning in form document understanding.
However, existing approaches that extend the mask language modeling to other
modalities require careful multi-task tuning, complex reconstruction target
designs, or additional pre-training data. In FormNetV2, we introduce a
centralized multimodal graph contrastive learning strategy to unify
self-supervised pre-training for all modalities in one loss. The graph
contrastive objective maximizes the agreement of multimodal representations,
providing a natural interplay for all modalities without special customization.
In addition, we extract image features within the bounding box that joins a
pair of tokens connected by a graph edge, capturing more targeted visual cues
without loading a sophisticated and separately pre-trained image embedder.
FormNetV2 establishes new state-of-the-art performance on FUNSD, CORD, SROIE
and Payment benchmarks with a more compact model size. | Chen-Yu Lee, Chun-Liang Li, Hao Zhang, Timothy Dozat, Vincent Perot, Guolong Su, Xiang Zhang, Kihyuk Sohn, Nikolai Glushnev, Renshen Wang, Joshua Ainslie, Shangbang Long, Siyang Qin, Yasuhisa Fujii, Nan Hua, Tomas Pfister | 2023-05-04T05:02:04Z | http://arxiv.org/abs/2305.02549v2 | # FormNetV2: Multimodal Graph Contrastive Learning for Form Document Information Extraction
###### Abstract
The recent advent of self-supervised pre-training techniques has led to a surge in the use of multimodal learning in form document understanding. However, existing approaches that extend the mask language modeling to other modalities require careful multi-task tuning, complex reconstruction target designs, or additional pre-training data. In FormNetV2, we introduce a centralized multimodal graph contrastive learning strategy to unify self-supervised pre-training for all modalities in one loss. The graph contrastive objective maximizes the agreement of multimodal representations, providing a natural interplay for all modalities without special customization. In addition, we extract image features within the bounding box that joins a pair of tokens connected by a graph edge, capturing more targeted visual cues without loading a sophisticated and separately pre-trained image embedder. FormNetV2 establishes new state-of-the-art performance on FUNSD, CORD, SROIE and Payment benchmarks with a more compact model size.
## 1 Introduction
Automated information extraction is essential for many practical applications, with form-like documents posing unique challenges compared to article-like documents, which has led to an abundance of recent research in the area. In particular, form-like documents often have complex layouts that contain structured objects like tables, columns, and fillable regions. Layout-aware language modeling has been critical for many successes Xu et al. (2020); Majumder et al. (2020); Lee et al. (2022).
To further boost the performance, many recent approaches adopt multiple modalities Xu et al. (2021); Huang et al. (2022); Appalaraju et al. (2021). Specifically, the image modality adds more structural information and visual cues to the existing layout and text modalities. They therefore extend the masked language modeling (MLM) from text to masked image modeling (MIM) for image and text-image alignment (TIA) for cross-modal learning. The alignment objective may also help to prime the layout modality, though it does not directly involve text layouts or document structures.
In this work, we propose FormNetV2, a multimodal transformer model for form information extraction. Unlike existing works - which may use the whole image as one representation Appalaraju et al. (2021), or image patches Xu et al. (2021), or image features of token bounding boxes Xu et al. (2020) - we propose using image features extracted from the region bounded by a _pair_ of tokens connected in the constructed graph. This allows us to capture a richer and more targeted visual component of the intra- and inter-entity information. Furthermore, instead of using multiple self-supervised objectives for each individual modality, we introduce graph contrastive learning Li et al. (2019); You et al. (2020); Zhu et al. (2021) to learn multi-modal embeddings jointly. These two additions to FormNetV1 Lee et al. (2022) enable the graph convolutions to produce better super-tokens, resulting in both improved performance and a smaller model size.
In experiments, FormNetV2 outperforms its predecessor FormNetV1 as well as the existing multimodal approaches on four standard benchmarks. In particular, FormNetV2 outperforms FormNetV1 by a large margin on FUNSD (86.35 vs. 84.69) and Payment (94.90 vs. 92.19); compared with DocFormer Appalaraju et al. (2021), FormNetV2 outperforms it on FUNSD and CORD with nearly 2.5x fewer parameters.
## 2 Related Work
Early works on form document information extraction are based on rule-based models or learning-based models with handcrafted features (Lebourgeois et al., 1992; O'Gorman, 1993; Ha et al., 1995; Simon et al., 1997; Marinai et al., 2005; Chiticariu et al., 2013). Later on, various deep neural models have been proposed, including methods based on recurrent nets (Palm et al., 2017; Aggarwal et al., 2020), convolutional nets (Katti et al., 2018; Zhao et al., 2019; Denk and Reisswig, 2019), and transformers (Majumder et al., 2020; Garraczek et al., 2020; Wang et al., 2022).
Recently, in addition to the text, researchers have explored the layout attribute in form document modeling, such as the OCR word reading order (Lee et al., 2021; Gu et al., 2022), text coordinates (Majumder et al., 2020; Xu et al., 2020; Garraczek et al., 2020; Li et al., 2021; Lee et al., 2022), layout grids (Lin et al., 2021), and layout graphs (Lee et al., 2022). The image attribute also provides essential visual cues such as fonts, colors, and sizes. Other visual signals can be useful as well, including logos and separating lines from form tables. Xu et al. (2020) uses Faster R-CNN (Ren et al., 2015) to extract token image features; Appalaraju et al. (2021) uses ResNet50 (He et al., 2016) to extract full document image features; Li et al. (2022) use ViT (Dosovitskiy et al., 2020) with FPN (Lin et al., 2017) to extract non-overlapping patch image features. These sophisticated image embedders require a separate pre-training step using external image datasets (e.g. ImageNet (Russakovsky et al., 2015) or PubLayNet (Zhong et al., 2019)), and sometimes depend upon a visual codebook pre-trained by a discrete variational auto-encoder (dVAE).
When multiple modalities come into play, different supervised or self-supervised multimodal pre-training techniques have been proposed. They include mask prediction, reconstruction, and matching for one or more modalities (Xu et al., 2020, 2021; Appalaraju et al., 2021; Li et al., 2021; Gu et al., 2022; Huang et al., 2022; Li et al., 2022; Pramanik et al., 2020). Next-word prediction (Kim et al., 2022) or length prediction (Li et al., 2021) have been studied to bridge text and image modalities. Direct and relative position predictions (Cosma et al., 2020; Wei et al., 2020; Li et al., 2021; Wang et al., 2022; Li et al., 2021) have been proposed to explore the underlying layout semantics of documents. Nevertheless, these pre-training objectives require strong domain expertise, specialized designs, and multi-task tuning between involved modalities. In this work, our proposed graph contrastive learning performs multimodal pre-training in a centralized design, unifying the interplay between all involved modalities without the need for prior domain knowledge.
## 3 FormNetV2
We briefly review the backbone architecture FormNetV1 (Lee et al., 2022) in Sec 3.1, introduce the multimodal input design in Sec 3.2, and detail the multimodal graph contrastive learning in Sec 3.3.
### Preliminaries
ETC.FormNetV1 (Lee et al., 2022) uses Extended Transformer Construction (ETC; Ainslie et al., 2020) as the backbone to work around the quadratic memory cost of attention for long form documents. ETC permits only a few special tokens to attend to every token in the sequence (global attention); all other tokens may only attend to \(k\) local neighbors within a small window, in addition to these special tokens (local attention). This reduces the computational complexity from \(O(n^{2})\) query-key pairs that need scoring to \(O(kn)\). Eq. (1) formalizes the computation of the attention vector \(\mathbf{a}_{0}\) for a model with one global token at index 0, and Eq. (2) formalizes the computation of the attention vector \(\mathbf{a}_{i>0}\) for the rest of the tokens in the model.
\[\mathbf{a}_{0} =\texttt{attend}(\mathbf{h}_{0},[\mathbf{h}_{0},\mathbf{h}_{1}, \ldots,\mathbf{h}_{n}]) \tag{1}\] \[\mathbf{a}_{i>0} =\texttt{attend}(\mathbf{h}_{i},[\mathbf{h}_{0},\mathbf{h}_{i-k},\ldots,\mathbf{h}_{i+k}]) \tag{2}\]
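As an illustration of the sparse attention pattern implied by Eqs. (1-2), the sketch below builds a boolean attention mask for a single global token at index 0 and a local radius \(k\); it is a toy construction for clarity, not ETC's actual implementation, and the function name is ours.

```python
import torch

def etc_attention_mask(n: int, k: int) -> torch.Tensor:
    """Boolean mask (True = may attend) for 1 global token + local windows.

    Token 0 is global: it attends to everything and everything attends to it.
    Token i > 0 attends to tokens within a radius of k positions, plus token 0.
    This yields O(k * n) scored query-key pairs instead of O(n^2).
    """
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[0, :] = True          # the global token attends to all tokens
    mask[:, 0] = True          # all tokens attend to the global token
    idx = torch.arange(n)
    mask |= (idx[:, None] - idx[None, :]).abs() <= k   # local window of radius k
    return mask

# Example: 8 tokens, radius 2 -> each row has at most 2k + 2 allowed keys.
print(etc_attention_mask(8, 2).sum(dim=-1))
```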
Rich Attention.To address the distorted semantic relatedness of tokens created by imperfect OCR serialization, FormNetV1 adapts the attention mechanism to model spatial relationships between tokens by proposing Rich Attention, a mathematically sound way of conditioning attention on low-level spatial features without resorting to quantizing the document into regions associated with distinct embeddings in a lookup table. In Rich Attention, the model constructs the (pre-softmax) attention score (Eq. 10) from multiple components: the usual transformer attention score (Eq. 7); the order of tokens along the x-axis and the y-axis (Eq. 8); and the log distance (in number of pixels) between tokens, again along both axes (Eq. 9). The expression for a transformer head with Rich Attention on the x-axis is provided in Eqs. (3-10); we
refer the interested reader to Lee et al. (2022) for further details.
\[o_{ij} = \texttt{int}\left(x_{i}<x_{j}\right) \tag{3}\]
\[d_{ij} = \ln(1+|x_{i}-x_{j}|) \tag{4}\]
\[p_{ij} = \texttt{Sigmoid}(\texttt{affine}^{(p)}([\mathbf{q}_{i};\mathbf{k}_{j}])) \tag{5}\]
\[\mu_{ij} = \texttt{affine}^{(\mu)}([\mathbf{q}_{i};\mathbf{k}_{j}]) \tag{6}\]
\[s_{ij}^{(t)} = \mathbf{q}_{i}^{\top}\mathbf{k}_{j} \tag{7}\]
\[s_{ij}^{(o)} = o_{ij}\ln(p_{ij})+(1-o_{ij})\ln(1-p_{ij}) \tag{8}\]
\[s_{ij}^{(d)} = -\frac{\theta^{2}(d_{ij}-\mu_{ij})^{2}}{2} \tag{9}\]
\[s_{ij} = s_{ij}^{(t)}+s_{ij}^{(o)}+s_{ij}^{(d)} \tag{10}\]
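The following sketch assembles the x-axis Rich Attention score of Eqs. (3-10) for one head, assuming unbatched \((n, d)\) query/key matrices and treating \(\theta\) as a learnable scalar; the exact parameterisation and batching in FormNetV1 may differ.

```python
import torch
import torch.nn as nn

class RichAttentionScoreX(nn.Module):
    """Pre-softmax attention scores of Eqs. (3-10), x-axis only.

    q, k: (n, d) query/key vectors; x: (n,) token x-coordinates in pixels.
    """
    def __init__(self, d_model: int):
        super().__init__()
        self.affine_p = nn.Linear(2 * d_model, 1)   # Eq. (5)
        self.affine_mu = nn.Linear(2 * d_model, 1)  # Eq. (6)
        self.theta = nn.Parameter(torch.ones(1))    # assumed learnable scalar

    def forward(self, q, k, x):
        n = q.size(0)
        pair = torch.cat([q[:, None].expand(n, n, -1),
                          k[None, :].expand(n, n, -1)], dim=-1)
        o = (x[:, None] < x[None, :]).float()                   # Eq. (3)
        d = torch.log1p((x[:, None] - x[None, :]).abs())        # Eq. (4)
        p = torch.sigmoid(self.affine_p(pair)).squeeze(-1)      # Eq. (5)
        mu = self.affine_mu(pair).squeeze(-1)                   # Eq. (6)
        s_t = q @ k.T                                            # Eq. (7)
        s_o = o * torch.log(p) + (1 - o) * torch.log(1 - p)      # Eq. (8)
        s_d = -0.5 * self.theta ** 2 * (d - mu) ** 2             # Eq. (9)
        return s_t + s_o + s_d                                    # Eq. (10)
```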
GCN.Finally, FormNetV1 includes a graph convolutional network (GCN) contextualization step _before_ serializing the text to send to the ETC transformer component. The graph for the GCN locates up to \(K\) neighbors for each token - defined broadly by geographic "nearness" - before convolving their token embeddings to build up supertoken representations as shown in Figure 1. This allows the network to build a weaker but more complete picture of the layout modality than Rich Attention, which is constrained by local attention.
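As a rough illustration of such "nearness"-based graph construction, the snippet below links each token to its \(k\) nearest neighbours by bounding-box centre distance; the actual rule used by FormNetV1 may differ, and the function name and default \(k\) are placeholders.

```python
import numpy as np

def knn_token_graph(boxes: np.ndarray, k: int = 8):
    """Build an undirected token graph from OCR bounding boxes.

    boxes: (n, 4) array of (x_min, y_min, x_max, y_max) per token.
    Each token is linked to its k nearest neighbours by box-centre distance;
    this is only one possible notion of "nearness".
    """
    centres = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    dists = np.linalg.norm(centres[:, None] - centres[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # no self-edges
    edges = set()
    for i in range(len(boxes)):
        for j in np.argsort(dists[i])[:k]:
            edges.add((min(i, int(j)), max(i, int(j))))
    return sorted(edges)
```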
The final system was pretrained end-to-end with a standard masked language modeling (MLM) objective. See Sec A.3 in Appendix for more details.
### Multimodal Input
In FormNetV2, we propose adding the image modality to the model in addition to the text and layout modalities that are already used in FormNetV1 (Sec 3.3 in Lee et al. (2022)). We expect that image features from documents contain information absent from the text or the layout, such as fonts, colors, and sizes of OCR words.
To do this, we run a ConvNet to extract dense image features on the whole document image, and then use Region-of-Interest (RoI) pooling (He et al., 2017) to pool the features within the bounding box that joins a pair of tokens connected by a GCN edge. Finally, the RoI pooled features go through another small ConvNet for refinement. After the image features are extracted, they are injected into the network through concatenation with the existing layout features at edges of the GCN. Figure 2 illustrates how all three modalities are utilized in this work and Sec 4.2 details the architecture.
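A minimal sketch of this edge-level image branch is given below, assuming a whole-page feature map from a small ConvNet, torchvision's RoI-Align over the union box of each connected token pair, and a second small ConvNet for refinement; layer sizes, strides, and names are illustrative rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class EdgeImageFeatures(nn.Module):
    """Pool dense image features over the union box of each edge's token pair."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(                   # stride-8 feature map
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU())
        self.refine = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1),
                                    nn.ReLU(), nn.AdaptiveAvgPool2d(1))

    def forward(self, image, token_boxes, edges):
        # image: (1, 3, H, W); token_boxes: (n, 4) float pixels; edges: (m, 2) long
        fmap = self.backbone(image)
        a, b = token_boxes[edges[:, 0]], token_boxes[edges[:, 1]]
        union = torch.stack([torch.minimum(a[:, 0], b[:, 0]),
                             torch.minimum(a[:, 1], b[:, 1]),
                             torch.maximum(a[:, 2], b[:, 2]),
                             torch.maximum(a[:, 3], b[:, 3])], dim=1)
        rois = torch.cat([torch.zeros(len(union), 1), union], dim=1)  # batch idx 0
        pooled = roi_align(fmap, rois, output_size=7, spatial_scale=1 / 8)
        return self.refine(pooled).flatten(1)            # (m, dim) edge features
```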
Most of the recent approaches (Table 1) that incorporate image modality extract features from either (a) the whole image as one vector, (b) non-overlapping image patches as extra input tokens to transformers, or (c) token bounding boxes that are added to the text features for all tokens.
However, form document images often contain OCR words that are individually small and densely distributed in text blocks. They also contain a large portion of background region without any text. Therefore, the aforementioned method (a) only generates global visual representations with large noisy background regions but not
Figure 1: Graph of a sample region from a form. Token bounding boxes are identified, and from them the graph is constructed. Nodes are labeled and the graph structure is shown abstracted away from its content.
Figure 3: Image features are extracted from bounding boxes (red) that join pairs of tokens connected by edges to capture (a) similar patterns within an entity, or (b) dissimilar patterns or separating lines between entities.
Figure 2: Multimodal graph representations are composed from three modalities: text at node-level; concatenation of layout and image at edge-level.
targeted entity representations; method (b) tends to be sensitive to the patch size and often chops OCR words or long entities to different patches, while also increasing computational cost due to the increased token length; and method (c) only sees regions within each token's bounding box and lacks context between or outside of tokens.
On the other hand, the proposed edge-level image feature representation can precisely model the relationship between two nearby, potentially related "neighbor" tokens and the surrounding region, while ignoring all irrelevant or distracting regions. Figure 3 demonstrates that the targeted RoI image feature pooling through the union bounding box can capture any similar patterns (e.g. font, color, size) within an entity (left) or dissimilar patterns or separating lines between entities (right). See Sec 4.4 for detailed discussion.
### Multimodal Graph Contrastive Learning
Previous work in multimodal document understanding requires manipulating multiple supervised or self-supervised objectives to learn embeddings from one or multiple modalities during pre-training. By contrast, in FormNetV2, we propose utilizing the graph representation of a document to learn multimodal embeddings with a contrastive loss.
Specifically, we first perform stochastic graph corruption to sample two corrupted graphs from the original input graph of each training instance. This step generates node embeddings based on partial contexts. Then, we apply a contrastive objective by maximizing agreement between tokens at node-level. That is, the model is asked to identify which pairs of nodes across all pairs of nodes - within the same graph and across graphs - came from the same original node. We adopt the standard normalized temperature-scaled cross entropy (NT-Xent) loss formulation (Chen et al., 2020; Wu et al., 2018; Oord et al., 2018; Sohn, 2016) with temperature 0.1 in all experiments.
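A compact version of this node-level NT-Xent objective is sketched below for two corrupted views of the same graph; it follows the standard formulation with temperature 0.1 and is not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def node_nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1):
    """NT-Xent over node embeddings from two corrupted views of one graph.

    z1, z2: (n, d) embeddings of the same n nodes under two corruptions.
    For each node, the same node in the other view is the positive; all other
    2n - 2 embeddings (within and across views) are negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # (2n, d)
    sim = z @ z.T / tau                                    # cosine similarities
    sim.fill_diagonal_(float("-inf"))                      # exclude self-pairs
    n = z1.size(0)
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)
```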
To build a centralized contrastive loss that unifies the interactions between multiple input modalities, we corrupt the original graph at both graph topology level and graph feature level. Topology corruption includes edge dropping by randomly removing edges in the original graph. Feature corruption includes applying dropping to all three modalities: dropping layout and image features from edges and dropping text features from nodes. Note that we only corrupt the graph in the GCN encoder and keep the ETC decoder intact to leverage the semantically meaningful graph representation of the document during graph contrastive learning.
To further diversify the contexts in two corrupted graphs and reduce the risk of training the model to over-rely on certain modalities, we further design an inductive graph feature dropping mechanism by adopting imbalanced drop-rates of modalities between the two corrupted graphs. Precisely, for a given modality, we discard \(p\) percent of the features in the first corrupted graph and discard \(1-p\) percent of the features in the second corrupted graph. Experiments in Sec 4.4 show that \(p=0.8\) works best empirically and the inductive feature dropping mechanism provides further performance boost over the vanilla version. We stipulate that this boom-and-bust approach to regularization allows the model to learn rich, complex representations that take full advantage of the model's capacity without becoming overly dependent on specific feature interactions. Figure 4 illustrates the overall
Figure 4: Multimodal graph contrastive learning. Two corrupted graphs are sampled from an input graph by corruption of graph topology (edges) and attributes (multimodal features). The system is trained to identify which pair of nodes across all pairs of corrupted nodes (including within the same graph) came from the same node.
process.
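To make the corruption step concrete, the sketch below produces one corrupted view, assuming edge features stored per modality and node-level text features; the drop-rates in the usage comment mirror the values reported later (edge rate 0.3, feature rate 0.8 with complements 0.2), but the data layout and function are our own simplification.

```python
import torch

def corrupt_graph(edges, edge_feats, node_feats, edge_drop, feat_drop):
    """One corrupted view: drop edges, then mask edge/node features.

    edges: (E, 2) long tensor; edge_feats: dict of (E, d_m) tensors per modality
    (e.g. "layout", "image"); node_feats: (N, d) text features.
    edge_drop: probability of removing an edge (topology corruption).
    feat_drop: dict of per-modality feature drop rates; the complementary view
    is built with rates 1 - p for each modality, as described above.
    """
    keep = torch.rand(edges.size(0)) > edge_drop
    edges = edges[keep]
    edge_feats = {m: f[keep] for m, f in edge_feats.items()}
    for modality, feats in edge_feats.items():
        mask = (torch.rand(feats.size(0), 1) > feat_drop[modality]).float()
        edge_feats[modality] = feats * mask
    mask = (torch.rand(node_feats.size(0), 1) > feat_drop["text"]).float()
    return edges, edge_feats, node_feats * mask

# Two views with imbalanced feature drop-rates (p and 1 - p), shared edge rate:
# view_1 = corrupt_graph(E, EF, NF, 0.3, {"layout": 0.8, "image": 0.8, "text": 0.8})
# view_2 = corrupt_graph(E, EF, NF, 0.3, {"layout": 0.2, "image": 0.2, "text": 0.2})
```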
The proposed graph contrastive objective is also general enough in principle to adopt other corruption mechanisms Zhu et al. (2020); Hassani and Khasahmadi (2020); You et al. (2020); Velickovic et al. (2019). The multimodal feature dropping provides a natural playground to consume and allow interactions between multiple input modalities in one single loss design. It is straightforward to extend the framework to include more modalities without the need for hand crafting specialized loss by domain experts. To the best of our knowledge, we are the first to use graph contrastive learning during pre-training for form document understanding.
## 4 Evaluation
### Datasets
Funsd.FUNSD Jaume et al. (2019) contains a collection of research, marketing, and advertising forms that vary extensively in their structure and appearance. The dataset consists of 199 annotated forms with 9,707 entities and 31,485 word-level annotations for 4 entity types: header, question, answer, and other. We use the official 75-25 split for the training and test sets.
Cord.CORD Park et al. (2019) contains over 11,000 Indonesian receipts from shops and restaurants. The annotations are provided in 30 fine-grained semantic entities such as store name, quantity of menu, tax amount, discounted price, etc. We use the official 800-100-100 split for training, validation, and test sets.
Sroie.The ICDAR 2019 Challenge on Scanned Receipts OCR and key Information Extraction (SROIE) Huang et al. (2019) offers 1,000 whole scanned receipt images and annotations. 626 samples are for training and 347 samples are for testing. The task is to extract four predefined entities: company, date, address, or total.
Payment.We use the large-scale payment data Majumder et al. (2020) that consists of roughly 10,000 documents and 7 semantic entity labels from human annotators. We follow the same evaluation protocol and dataset splits used in Majumder et al. (2020).
### Experimental Setup
We follow the FormNetV1 Lee et al. (2022) architecture with a slight modification to incorporate multiple modalities used in the proposed method. Our backbone model consists of a 6-layer GCN encoder to generate structure-aware super-tokens, followed by a 12-layer ETC transformer decoder equipped with Rich Attention for document entity extraction. The number of hidden units is set to 768 for both GCN and ETC. The number of attention heads is set to 1 in GCN and 12 in ETC. The maximum sequence length is set to 1024. We follow Ainslie et al. (2020); Lee et al. (2022) for other hyper-parameter settings. For the image embedder architecture, see Sec A.1 in Appendix.
Pre-training.We pre-train FormNetV2 using two unsupervised objectives: Masked Language Modeling (MLM) Taylor (1953); Devlin et al. (2019) and the proposed multimodal Graph Contrastive Learning (GCL).
Different from BERT Devlin et al. (2019), here MLM has access to layout and image modalities during pre-training similar to Appalaraju et al. (2021); Xu et al. (2021, 2020). Nevertheless, the layout and image features are constructed at edge level instead of at node level, supplementing the text features for better underlying representation learning without directly leaking the trivial information.
GCL provides a natural playground for effective interactions between all three modalities from a document in a contrastive fashion. For each graph representation of a document, we generate two corrupted views by edge dropping, edge feature dropping, and node feature dropping with dropping rates {0.3, 0.8, 0.8}, respectively. The weight matrices in both GCN and ETC are shared across the two views.
We follow Appalaraju et al. (2021); Xu et al. (2021, 2020) and use the large-scale IIT-CDIP document collection Lewis et al. (2006) for pre-training, which contains 11 million document images. We train the models from scratch using Adam optimizer with batch size of 512. The learning rate is set to 0.0002 with a warm-up proportion of 0.01. We find that GCL generally converges faster than MLM, therefore we set the loss weightings to 1 and 0.5 for MLM and GCL, respectively.
Note that we do not separately pre-train or load a pre-trained checkpoint for the image embedder as done in other recent approaches shown in Table 1. In fact, in our implementation, we find that using sophisticated image embedders or pre-training with natural images, such as ImageNet Russakovsky et al. (2015), do not improve the final downstream
entity extraction F1 scores, and they sometimes even degrade the performance. This might be because the visual patterns presented in form documents are drastically different from natural images that have multiple real objects. The best practice for conventional vision tasks (classification, detection, segmentation) might not be optimal for form document understanding.
**Fine-tuning.** We fine-tune all models for the downstream entity extraction tasks in the experiments using Adam optimizer with batch size of 8. The learning rate is set to 0.0001 without warm-up. The fine-tuning is conducted on Tesla V100 GPUs for approximately 10 hours on the largest corpus. Other hyper-parameters follow the settings in Lee et al. (2022).
### Benchmark Results
Table 1 lists the results that are based on the same evaluation protocol1.
Footnote 1: Micro-F1 for FUNSD, CORD, and SROIE by following the implementation in Xu et al. (2021); macro-F1 for Payment Majumder et al. (2020).
As the field is actively growing, researchers have started to explore incorporating additional
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
**Dataset** & **Method** & **P** & **R** & **F1** & **F1\({}^{\dagger}\) & **Modality** & **Image Embedder** & **\#Params** \\ \hline FUNSD & SPADE (Huang et al., 2021) & - & - & 70.5 & - & T+L & - & 110M \\ UniLMv2 (Bao et al., 2020) & 67.80 & 73.91 & 70.72 & - & T & - & 355M \\ LayoutLMv1 (Xu et al., 2020) & 75.36 & 80.61 & 77.89 & - & T+L & - & 343M \\ DocFormer (Appalariari et al., 2021) & 81.33 & 85.44 & 83.33 & - & T+L+I & ResNet50 & 502M \\ FormNetV1 (Lee et al., 2022) & 85.21 & 84.18 & 84.69 & - & T+L & - & 217M \\ \hline LayoutLMv1 (Xu et al., 2020) & 76.77 & 81.95 & 79.27 & - & T+L+I & ResNet101 & 160M \\ LayoutLMv2 (Xu et al., 2021) & 83.24 & 85.19 & 84.20 & - & T+L+I & ResNeXt101-FPN & 426M \\ DocFormer (Appalariariari et al., 2021) & 82.29 & 86.94 & 84.55 & - & T+L+I & ResNet50 & 536M \\ StructuralLM (Li et al., 2021a) & - & - & 85.14 & T+L & - & 355M \\ LayoutLMv3 (Huang et al., 2022) & 81.35 & 83.75 & 82.53 & 92.08 & T+L+I & Tokenization & 368M \\ \hline FormNetV2 (ours) & 85.78 & 86.94 & **86.35** & 92.51 & T+L+I & 3-layer ConvNet & 204M \\ \hline CORD & SPADE (Huang et al., 2021) & - & - & 91.5 & - & T+L & - & 110M \\ UniLMv2 (Bao et al., 2020) & 91.23 & 92.89 & 92.05 & - & T & - & 355M \\ LayoutLMv1 (Xu et al., 2021) & 94.32 & 95.54 & 94.93 & - & T+L & - & 343M \\ DocFormer (Appalariari et al., 2021) & 96.46 & 96.14 & 96.30 & - & T+L+I & ResNet50 & 502M \\ FormNetV1 (Lee et al., 2022) & 98.02 & 96.55 & 97.28 & - & T+L & - & 345M \\ \hline LayoutLMv2 (Xu et al., 2021) & 95.65 & 96.37 & 96.01 & - & T+L+I & ResNeXt101-FPN & 426M \\ TILT (Powalski et al., 2021) & - & - & 96.33 & - & T+L+I & U-Net & 780M \\ DocFormer (Appalariari et al., 2021) & 97.25 & 96.74 & 96.99 & - & T+L+I & ResNet50 & 536M \\ LayoutLMv3 (Huang et al., 2022) & 95.82 & 96.03 & 95.92 & 97.46 & T+L+I & Tokenization & 368M \\ \hline FormNetV2 (ours) & 97.74 & 97.00 & **97.37** & 97.70 & T+L+I & 3-layer ConvNet & 204M \\ \hline SROIE & UniLMv2 (Bao et al., 2020) & - & - & 94.88 & - & T & - & 355M \\ LayoutLMv1 (Xu et al., 2021) & 95.24 & 95.24 & 95.24 & - & T+L & - & 343M \\ LayoutLMv2 (Xu et al., 2021) & 99.04 & 96.61 & 97.81 & - & T+L+I & ResNeXt101-FPN & 426M \\ \hline FormNetV2 (ours) & 98.56 & 98.05 & **98.31** & - & T+L+I & 3-layer ConvNet & 204M \\ \hline Payment & NeuralScoring Majumder et al. (2020) & - & - & 87.80 & - & T+L & - & - \\ FormNetV1 (Lee et al., 2022) & 92.70 & 91.69 & 92.19 & - & T+L & - & 217M \\ \hline FormNetV2 (ours) & 94.11 & 95.71 & **94.90** & - & T+L+I & 3-layer ConvNet & 204M \\ \hline \hline \end{tabular}
\end{table}
Table 1: Entity-level precision, recall, and F1 score comparisons on four standard benchmarks. “T/L/I” denotes “text/layout/image” modality. The proposed FormNetV2 establishes new state-of-the-art results on all four datasets. FormNetV2 significantly outperforms the most recent DocFormer Appalaraju et al. (2021) and LayoutLMv3 Huang et al. (2022) while using models only 38% and 55% of their size, respectively. Note that LayoutLMv3 Huang et al. (2022) and StructuralLM Li et al. (2021a) use segment-level layout positions that incorporate ground truth entity bounding boxes, which is less practical for real-world applications. We nevertheless report our results under the same protocol in column F1\({}^{\dagger}\). See Sec 4.3 and Sec A.2 in Appendix for details.
Figure 5: **Model Size vs. Entity Extraction F1 Score** on FUNSD benchmark. The FormNetV2 family significantly outperforms other recent approaches – FormNetV2 achieves highest F1 score (86.35%) while using a 2.6x smaller model than DocFormer (84.55%; Appalaraju et al., 2021). FormNetV2 also outperforms FormNetV1 Lee et al. (2022) by a large margin (1.66 F1) while using fewer parameters.
information into the system. For example, LayoutLMv3 Huang et al. (2022) and StructuralLM Li et al. (2021) use segment-level layout positions derived from ground truth entity bounding boxes - the {Begin, Inside, Outside, End, Single} schema information Ratinov and Roth (2009) that determine the spans of entities are given to the model, which is less practical for real-world applications. We nevertheless report our results under the same protocol in column \(\mathrm{F}\mathrm{I}^{\dagger}\) in Table 1. We also report LayoutLMv3 results without ground-truth entity segments for comparisons.
Furthermore, UDoc Gu et al. (2022) uses additional paragraph-level supervision returned by a third-party OCR engine EasyOCR2. Additional PubLayNet Zhong et al. (2019) dataset is used to pre-train the vision backbone. UDoc also uses different training/test splits (626/247) on CORD instead of the official one (800/100) adopted by other works. ERNIE-mmLayout Wang et al. (2022) utilizes a third-party library spaCy3 to provide external knowledge for the Common Sense Enhancement module in the system. The F1 scores on FUNSD and CORD are 85.74% and 96.31% without the external knowledge. We hope the above discussion can help clarify the standard evaluation protocol and decouple the performance improvement from modeling design vs. additional information.
Footnote 2: [https://github.com/JaidedAI/EasyOCR](https://github.com/JaidedAI/EasyOCR)
Footnote 3: spacy.io
Figure 5 shows model size vs. F1 score for the recent approaches that are directly comparable. The proposed method significantly outperforms other approaches in both F1 score and parameter efficiency: FormNetV2 achieves the highest F1 score (86.35%) while using a model only 38% the size of DocFormer (84.55%; Appalaraju et al. (2021)). FormNetV2 also outperforms FormNetV1 Lee et al. (2022) by a large margin (1.66 F1) while using fewer parameters. Table 1 shows that FormNetV2 outperforms LayoutLMv3 Huang et al. (2022) and StructuralLM Li et al. (2021) by a considerable margin while using models only 55% and 57% of their size, respectively. From Table 1 we also observe that using all three modalities (text+layout+image) generally outperforms using two modalities (text+layout), and using two modalities (text+layout) outperforms using one modality (text) only, across different approaches.
### Ablation Studies
We perform studies over the effect of image modality, graph contrastive learning, and decoupled graph corruption. The backbone for these studies is a 4-layer 1-attention-head GCN encoder followed by a 4-layer 8-attention-head ETC transformers decoder with 512 hidden units. The model is pre-trained on the 1M IIT-CDIP subset. All other hyperparameters follow Sec 4.2.
**Effect of Image Modality and Image Embedder.** Table 2 lists results of FormNetV1 (a) backbone only, (b) with additional tokens constructed from image patches4, and (c) with the proposed image feature extracted from edges of a graph. The networks are pre-trained with MLM only to showcase the impact of input with image modality.
Footnote 4: We experiment with 32x32 image patch size, resulting in additional 256 image tokens to the model.
We observe that while (b) provides slight F1 score improvement, it requires 32% additional parameters over baseline (a). The proposed (c) approach achieves a significant F1 boost with less than 1% additional parameters over baseline (a). Secondly, we find the performance of more advanced image embedders He et al. (2016) is inferior to the 3-layer ConvNet used here, which suggests that these methods may be ineffective in utilizing image modality. Nevertheless, the results demonstrate the importance of image modality as part of the multimodal input. Next we will validate the importance of an effective multimodal pre-training mechanism through graph contrastive learning.
Effect of Graph Contrastive Learning.The graph corruption step (Figure 4) in the proposed multimodal graph contrastive learning requires corruption of the original graph at both topology and feature levels. Considering the corruption happens in multiple places: edges, edge features, and node features, a naive graph corruption implementation would be to use the same drop-rate value everywhere. In Figure 6(a)(b), we show the downstream entity extraction F1 scores on FUNSD and CORD datasets by varying the dropping rate value during the graph contrastive pre-training. The selected
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Method** & **FUNSD** & **CORD** & **\#Params** \\ \hline FormNetV1 & 82.53 & 95.16 & 81.7M \\ FormNetV1+Image Patch & 82.65 & 95.43 & 107.0M \\ FormNetV1+Edge Image (ours) & 83.13 & 95.85 & 82.3M \\ \hline \hline \end{tabular}
\end{table}
Table 2: F1 with different image modality setups.
dropping rate is shared across all aforementioned places.
Results show that the proposed multimodal graph contrastive learning works out of the box across a wide range of dropping rates. It demonstrates the necessity of multimodal corruption at both topology level and feature level - it brings up to 0.66% and 0.64% F1 boost on FUNSD and CORD respectively, when the model is pre-trained on MLM plus the proposed graph contrastive learning over MLM only. Our method is also stable to perturbation of different drop-rates.
We observe less or no performance improvement when extreme drop-rates are used; for example, dropping 10% edges and features or dropping 90% edges and features. Intuitively, dropping too few or too much information provides either no node context changes or too few remaining node contexts in different corrupted graphs for effective contrastive learning.
Effect of Decoupled Graph Corruption.In this study, we investigate whether decoupling the drop-rate in different places of graph corruption can learn better representations during pre-training and bring further improvement to the downstream entity extraction tasks. Specifically, we select different dropping rates for all four different places: edge, layout and image features at edge level, and text features at node level. At feature level (layout, image, text), when one of the corrupted graphs selects dropping rate \(p\) for a certain feature, the other corrupted graph will use the complement of the selected dropping rate \(1-p\) for the same feature as introduced in Sec 3.3. This inductive multimodal contrastive design creates stochastically imbalanced information access to the features between two corrupted views. It provides more diverse contexts at node level in different views and makes the optimization of the contrastive objective harder, ideally generating more semantically meaningful representations between the three modalities.
Figure 6(c)(d) show the downstream entity extraction F1 scores on FUNSD and CORD datasets by pre-training with three different edge dropping rates and three different feature dropping rates. We observe that decoupling the dropping rate at various levels further boosts the performance on both datasets - it brings another 0.34% and 0.07% F1 boost on FUNSD and CORD respectively, when decoupled dropping rates are used over the non-decoupled ones.
We also observe nonlinear interactions between different dropping rates at edge level and feature level. The best performing feature dropping rate might be sub-optimal when a different edge dropping rate is applied. This is noteworthy but not surprising behavior, since different edge dropping rates would drastically change the graph topology (and therefore the node embeddings). We expect the amount of information needed for maximizing the agreement of node contexts between two corrupted graphs to be different when the graph topology is altered. Nevertheless, we find that low edge dropping rates (e.g. 0.3) generally perform better than high edge dropping rates, and therefore select a low edge dropping rate in our final design.
Visualization.We visualize (Vig, 2019) the local-to-local attention scores of a CORD example for model pre-trained with MLM only and MLM+GCL but before fine-tuning in Figure 7(a). We observe that with GCL, the model can identify more meaningful token clusterings, leveraging multimodal in
Figure 6: **Entity Extraction F1 Score vs. Graph Corruption Mechanism** on FUNSD and CORD benchmarks. (a)(b) show results using the same drop-rate across modalities. The proposed multimodal graph contrastive learning improves MLM pretraining at almost all drop-rates; (c)(d) show results using different drop-rates across modalities. The decoupled dropping mechanism permits further boosts to the F1 scores over non-decoupled counterparts. See Sec 4.4 for discussion.
put more effectively.
We also show sample model outputs that do not match the human-annotated ground truth in Figure 7(b). The model confuses between 'header' and 'other' on the top of the form and between 'question' and 'answer' for the multiple choice questions on the bottom half of the form. More visualization can be found in Figure 9 in Appendix.
## 5 Conclusion
FormNetV2 augments the strong FormNetV1 backbone with image features pooled from regions bounded by pairs of neighboring tokens, and with a graph contrastive objective that learns to differentiate between the multimodal token representations of two corrupted versions of an input graph. The centralized design sheds new light on multimodal form document understanding.
## 6 Limitations
Our work follows the general assumption that the training and test set contain the same list of pre-defined entities. Without additional or necessary modifications, the few-shot or zero-shot capability of the model is expected to be limited. Future work includes exploring prompt-based architectures to unify pre-training and fine-tuning into the same query-based procedure.
## 7 Ethics Consideration
We have read and complied with the ACL Code of Ethics. The proposed FormNetV2 follows the prevailing large-scale pre-training then fine-tuning framework. Although we use the standard IIT-CDIP dataset for pre-training in all experiments, the proposed method is not limited to using specific datasets for pre-training. Therefore, it shares the same potential concerns as existing large language models, such as biases from the pre-training data and privacy considerations. We suggest following a rigorous and careful protocol when preparing the pre-training data for public-facing applications.
|
2306.13114 | A Reference-less Quality Metric for Automatic Speech Recognition via
Contrastive-Learning of a Multi-Language Model with Self-Supervision | The common standard for quality evaluation of automatic speech recognition
(ASR) systems is reference-based metrics such as the Word Error Rate (WER),
computed using manual ground-truth transcriptions that are time-consuming and
expensive to obtain. This work proposes a multi-language referenceless quality
metric, which allows comparing the performance of different ASR models on a
speech dataset without ground truth transcriptions. To estimate the quality of
ASR hypotheses, a pre-trained language model (LM) is fine-tuned with
contrastive learning in a self-supervised learning manner. In experiments
conducted on several unseen test datasets consisting of outputs from top
commercial ASR engines in various languages, the proposed referenceless metric
obtains a much higher correlation with WER scores and their ranks than the
perplexity metric from the state-of-art multi-lingual LM in all experiments,
and also reduces WER by more than $7\%$ when used for ensembling hypotheses.
The fine-tuned model and experiments are made available for the
reproducibility: https://github.com/aixplain/NoRefER | Kamer Ali Yuksel, Thiago Ferreira, Ahmet Gunduz, Mohamed Al-Badrashiny, Golara Javadi | 2023-06-21T21:33:39Z | http://arxiv.org/abs/2306.13114v1 | A Reference-Less Quality Metric for Automatic Speech Recognition via Contrastive-Learning of a Multi-Language Model with Self-Supervision
###### Abstract
The common standard for quality evaluation of automatic speech recognition (ASR) systems is reference-based metrics such as the Word Error Rate (WER), computed using manual ground-truth transcriptions that are time-consuming and expensive to obtain. This work proposes a multi-language referenceless quality metric, which allows comparing the performance of different ASR models on a speech dataset without ground truth transcriptions. To estimate the quality of ASR hypotheses, a pre-trained language model (LM) is fine-tuned with contrastive learning in a self-supervised learning manner. In experiments conducted on several unseen test datasets consisting of outputs from top commercial ASR engines in various languages, the proposed referenceless metric obtains a much higher correlation with WER scores and their ranks than the perplexity metric from the state-of-art multi-lingual LM in all experiments, and also reduces WER by more than \(7\%\) when used for ensembling hypotheses. The fine-tuned model and experiments are made available for the reproducibility: [https://github.com/aixplain/NoRefER](https://github.com/aixplain/NoRefER)
Kamer Ali Yuksel, Thiago Ferreira, Ahmet Gunduz, Mohamed Al-Badrashiny, Golara Javadi aiXplain Inc., Los Gatos, CA, USA
Referenceless Quality Estimation, Speech Recognition, Self-Supervised Learning, Contrastive Learning
## 1 Introduction
Automatic speech recognition (ASR) is a rapidly evolving field that has been actively researched for over six decades. ASR systems have numerous practical applications, including voice assistants, dictation software, and call centers. ASR has become an essential technology for many businesses and individuals, allowing for hands-free interaction and translating spoken language into text. Traditionally, the evaluation of ASR systems has been based on comparing the system's outputs with ground-truth transcripts, also known as references. Reference-based metrics, such as the Word-Error-Rate (WER), are calculated by comparing the outputs of the ASR system with the ground-truth transcripts and determining the number of errors made. The most significant limitation of these metrics is that they require ground-truth transcripts, which may not always be available, and the quality of the reference transcript can affect the accuracy of the evaluation. Instead, referenceless evaluation metrics for ASR might use the audio and output features to estimate the resulting quality.
Recently, many efforts have been made to train regression or ordinal classification models for ASR quality estimation based on supervised learning of speech and language features [2, 3, 4, 5]. Fan _et al._[6] proposed using a bidirectional transformer language model conditional on speech features for ASR quality estimation. They designed a neural zero-inflated Beta regression layer, which closely models the empirical distribution of WER, and reported results in WER prediction using the metrics of Pearson correlation and mean absolute error (MAE). Ali and Renals [7] used a multistream end-to-end architecture with acoustic, lexical, and phonotactic features for estimating WER without having access to the ASR system. In another study, Sheshadri _et al._[8] proposed a BERT-based architecture with speech features for estimating WER through balanced ordinal classification. Neither of these referenceless ASR quality estimators was based only on language features nor trained without having references. Finally, Namazifar _et al._[9] took advantage of the robustness of warped language models against transcription noise for correcting transcriptions of spoken language. They achieved up to 10% reduction in WER of automatic and manual transcriptions. However, they did not use their method for ref
Figure 1: NoRefER fine-tunes a pre-trained language model in a self-supervised learning manner with contrastive learning.
erenceless ASR quality estimation, while the distance with improved transcription could be used as a quality estimator.
The WMT Quality Estimation Shared Task [10] is a well-known evaluation framework for quality estimation (QE) metrics in machine translation (MT). The task recently also includes ranking the quality of machine-generated translations without access to reference translations. This is done by training quality estimation models on parallel sentences with human-annotated quality scores. As an outcome of the WMT Shared Task, various referenceless QE metrics have emerged in the MT domain, including COMET-QE [11]. COMET-QE is a contrastive learning method that fine-tunes a pre-trained language model for MT quality estimation to distinguish between high and low quality parallel MT hypotheses. However, the fine-tuning of COMET-QE relies on and is limited by the existence of a human-evaluation dataset or ground-truth references. In our work, self-supervision in contrastive learning is achieved via known quality relationships instead of costly human annotations for a training dataset.
This work introduces NoRefER (Fig. 1), a novel multi-language referenceless quality metric for ASR systems that can be applied to ASR hypotheses without ground-truth transcriptions. The main objective of this research is to provide an evaluation metric that overcomes the limitations of traditional reference-based metrics and can be applied to speech datasets that lack ground truth. NoRefER metric is obtained by fine-tuning a multi-language language model (LM) with self-supervised contrastive learning using a Siamese architecture [12]. For fine-tuning the LM with self-supervision, a training dataset of ASR hypothesis pairs is formed from the pairwise combinations of unique outputs from OpenAI's Whisper ASR model [1] in six compression levels where the higher the compression level, the lower quality is expected (Fig. 2). The intra-sample and inter-sample pair-wise quality ranking decisions of the referenceless metric are validated on several blind test datasets in various languages in comparison with the perplexity metric from XLM-RoBERTa-Large [13].
## 2 Methodology
The proposed method fine-tunes a pre-trained language model with contrastive learning using a Siamese network architecture for pair-wise ranking decisions. This is done over unique pair combinations from the outputs of ASR models in multiple compression levels for training and validation. The self-supervised part of NoRefER exploits known quality relationships between multiple compression levels. For the training and validation of the proposed referenceless quality metric with self-supervision (without having ground-truth transcriptions), unique outputs of an ASR model [1] in multiple compression levels are used to form pairwise combinations that can be utilized for contrastive learning. The compression level is considered a proxy for quality, with higher compression levels resulting in lower-quality transcriptions. Fig. 2 shows the creation of the dataset of pairs, which is later fed into the Siamese network for contrastive learning. The process of extracting unique pair combinations involves selecting two ASR hypotheses (one with higher quality and one with lower quality) for the same speech and combining them into a single pair. The extracted pairs are shuffled and placed into mini-batches to create the training and validation sets for fine-tuning the proposed Siamese network, after dropping inconsistent pairs for which the exact reverse pair also exists. The WER between the paired hypotheses is used to weight the training and validation loss for each pair, so that the model is penalized more for incorrect pair-wise ranking decisions when the distance between two hypotheses is high (as it is more acceptable to make a mistake when they are close).
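A possible implementation of this pairing scheme for a single audio sample is sketched below; it assumes the hypotheses are ordered from the least to the most compressed ASR model, uses the jiwer package to compute the pair-wise WER weight, and omits the corpus-level removal of reversed (inconsistent) pairs described above.

```python
from itertools import combinations
import jiwer  # pip install jiwer -- used here only for the pair-wise WER weight

def make_pairs(hypotheses_by_level):
    """Build (better, worse, weight) training pairs for one audio sample.

    hypotheses_by_level: transcripts ordered from the least to the most
    compressed ASR model, so earlier entries are assumed higher quality.
    Duplicate transcripts are collapsed first; the WER between the two texts
    is kept as a per-pair loss weight.
    """
    unique = list(dict.fromkeys(hypotheses_by_level))  # keep order, drop duplicates
    pairs = []
    for i, j in combinations(range(len(unique)), 2):
        better, worse = unique[i], unique[j]
        weight = jiwer.wer(better, worse)               # distance between hypotheses
        if weight > 0.0:
            pairs.append((better, worse, weight))
    return pairs
```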
Figure 2: Training and validation dataset generation process. To generate a referenceless set of negative and positive pairs for self-supervised contrastive learning, six compression levels of OpenAI’s Whisper [1] is used as ASR models (\(V_{1-6}\)) to compute different quality outcomes (\(O_{1-6}\)). All pair-wise combinations of these outputs formed the training and validation dataset.
The proposed method consists of a pre-trained cross-lingual LM with the Siamese network architecture, followed by a simple dense encoder to reduce the embeddings produced by the LM to a single scalar logit, which is used to compare both outputs of the Siamese network (Fig. 1). The pre-trained LM in this architecture is MiniLMv2, a smaller and (2.7x) faster language understanding model (with only 117M parameters), which is distilled from XLM-RoBERTa-Large [13] with 560M parameters. The dense encoder has two linear layers with dropout ratios of 10% and a non-linear activation in-between. This pre-trained LM is fine-tuned on a pair-wise ranking task with contrastive learning, a self-supervised learning method that trains a model to distinguish between positive and negative examples in a given task. For NoRefER, this task compares the pairs generated from ASR outputs as previously explained and predicts the one with higher quality. The contrastive learning process uses the shared network to take a pair as input and output a logit for each. The produced logits are subtracted from each other, and a Sigmoid activation is applied to their difference to produce a probability for the binary classification of their qualities. At test time, the trained model requires only a single forward pass per hypothesis, followed by a Sigmoid applied to the output logit. The Adafactor optimizer [15] is utilized with its default parameters and a learning rate of 1e-5 for fine-tuning the LM on this pair-wise ranking task using Binary Cross-Entropy (Log-Loss) weighted by the WER between the paired hypotheses. This contrastive learning process helps the LM learn a high-level representation of pairs that is discriminative of their quality.
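The following sketch captures the Siamese scoring and the WER-weighted pair-wise loss; the checkpoint name is a stand-in (the paper uses a multilingual MiniLMv2 distilled from XLM-RoBERTa-Large), and the pooling choice, hidden sizes, and activation are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class NoRefERRanker(nn.Module):
    """Shared LM + small dense head producing one quality logit per hypothesis."""
    def __init__(self, name: str = "xlm-roberta-base"):
        # "xlm-roberta-base" is only a stand-in checkpoint; the paper uses a
        # multilingual MiniLMv2 model distilled from XLM-RoBERTa-Large.
        super().__init__()
        self.lm = AutoModel.from_pretrained(name)
        h = self.lm.config.hidden_size
        self.head = nn.Sequential(nn.Dropout(0.1), nn.Linear(h, h), nn.GELU(),
                                  nn.Dropout(0.1), nn.Linear(h, 1))

    def score(self, **enc):
        hidden = self.lm(**enc).last_hidden_state[:, 0]   # first-token pooling
        return self.head(hidden).squeeze(-1)              # one logit per text

def pairwise_loss(model, enc_better, enc_worse, wer_weight):
    """WER-weighted BCE on sigmoid(logit_better - logit_worse)."""
    diff = model.score(**enc_better) - model.score(**enc_worse)
    return nn.functional.binary_cross_entropy_with_logits(
        diff, torch.ones_like(diff), weight=wer_weight)
```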
## 3 Experiments
The referenceless metric was trained and validated on a large corpus that combined unique outputs from all publicly available compression levels of OpenAI's Whisper ASR model [1] for each audio sample available at CMU MOSEI and MOSEAS datasets [16, 17] containing a total of 134 hours of speech from Youtube videos of 2,645 speakers, where almost half of that was in English, and the remaining duration was uniformly consisting of French, Spanish, Portuguese, and German speeches. The self-supervised training was composed of unique pairs of speech transcripts, where the quality of one transcript in each pair was known to be higher than the other based on the compression level. There were 800340 self-supervised parallel ASR hypothesis pairs after removing inconsistent pairs. When tested on a validation set comprising 20% of this corpus, which contains randomly selected speakers and is stratified for languages, the proposed referenceless metric achieves \(77\%\) validation accuracy in pair-wise ranking. This accuracy demonstrates that the referenceless metric can provide reliable quality comparisons between different outputs from the same ASR model without ground truth. The trained referenceless metric was then blind-tested on multiple speech datasets: Common Voice (English, French, Spanish) [18] and Libri-Speech (English) [19]. Transcription hypotheses are obtained from top commercial ASR engines (AWS, AppTek, Azure, Deepgram, Google, and OpenAI's Whisper-Large) for each speech segment in those ASR datasets.
As a baseline, the referenceless metric was compared with the perplexity metric from the-state-of-art multi-lingual LM, XLM-RoBERTa Large [14]. Given a model and an input text sequence, perplexity measures how likely the model is to generate the input text sequence [20]. The lower the perplexity, the more confident the language model is in its predictions,
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \multirow{2}{*}{**Test Dataset - Language**} & \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**Correlation with WER ranking**} & \multicolumn{3}{c}{**Correlations with WER score itself**} \\ \cline{3-8} & & **Pearson** & **Spearman** & **Kendall** & **Pearson** & **Spearman** & **Kendall** \\ \hline \hline \multirow{2}{*}{Common Voice - English} & NoRefER & 0.56 & 0.48 & 0.55 & 0.42 & 0.33 & 0.24 \\ & XLMR-Large & 0.26 & 0.22 & 0.26 & 0.02 & 0.21 & 0.15 \\ \hline \multirow{2}{*}{Common Voice - French} & NoRefER & 0.48 & 0.40 & 0.48 & 0.38 & 0.33 & 0.24 \\ & XLMR-Large & 0.20 & 0.17 & 0.20 & 0.02 & 0.20 & 0.14 \\ \hline \multirow{2}{*}{Common Voice - Spanish} & NoRefER & 0.58 & 0.52 & 0.58 & 0.49 & 0.40 & 0.30 \\ & XLMR-Large & 0.25 & 0.22 & 0.25 & -0.01 & 0.20 & 0.14 \\ \hline \multirow{2}{*}{Libri-Speech - English} & NoRefER & 0.42 & 0.35 & 0.42 & 0.30 & 0.13 & 0.09 \\ & XLMR-Large & 0.22 & 0.17 & 0.21 & -0.06 & 0.13 & 0.09 \\ \hline \end{tabular}
\end{table}
Table 1: The proposed referenceless metric’s performance on Common Voice and Libri-Speech datasets in different languages, against the perplexity obtained from XLM-RoBERTa [14], regarding correlation coefficients with WER score and rankings. |
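For reference, one common way to obtain a perplexity-like score from a masked LM such as XLM-RoBERTa is the pseudo-perplexity sketched below (mask each position in turn and average the negative log-probability of the true token); the paper does not spell out its exact perplexity recipe, so this is only an assumed baseline implementation.

```python
import math
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-large")
mlm = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large").eval()

def pseudo_perplexity(text: str) -> float:
    """Mask one position at a time and exponentiate the mean negative log-prob."""
    ids = tok(text, return_tensors="pt")["input_ids"][0]
    nlls = []
    for pos in range(1, len(ids) - 1):            # skip special tokens
        masked = ids.clone()
        masked[pos] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, pos]
        nlls.append(-torch.log_softmax(logits, dim=-1)[ids[pos]].item())
    return math.exp(sum(nlls) / max(len(nlls), 1))
```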
2306.04468 | Refined parameters of the HD 22946 planetary system and the true orbital
period of planet d | Multi-planet systems are important sources of information regarding the
evolution of planets. However, the long-period planets in these systems often
escape detection. HD 22946 is a bright star around which 3 transiting planets
were identified via TESS photometry, but the true orbital period of the
outermost planet d was unknown until now. We aim to use CHEOPS to uncover the
true orbital period of HD 22946d and to refine the orbital and planetary
properties of the system, especially the radii of the planets. We used the
available TESS photometry of HD 22946 and observed several transits of the
planets b, c, and d using CHEOPS. We identified 2 transits of planet d in the
TESS photometry, calculated the most probable period aliases based on these
data, and then scheduled CHEOPS observations. The photometric data were
supplemented with ESPRESSO radial velocity data. Finally, a combined model was
fitted to the entire dataset. We successfully determined the true orbital
period of the planet d to be 47.42489 $\pm$ 0.00011 d, and derived precise
radii of the planets in the system, namely 1.362 $\pm$ 0.040 R$_\oplus$, 2.328
$\pm$ 0.039 R$_\oplus$, and 2.607 $\pm$ 0.060 R$_\oplus$ for planets b, c, and
d, respectively. Due to the low number of radial velocities, we were only able
to determine 3$\sigma$ upper limits for these respective planet masses, which
are 13.71 M$_\oplus$, 9.72 M$_\oplus$, and 26.57 M$_\oplus$. We estimated that
another 48 ESPRESSO radial velocities are needed to measure the predicted
masses of all planets in HD 22946. Planet c appears to be a promising target
for future atmospheric characterisation. We can also conclude that planet d, as
a warm sub-Neptune, is very interesting because there are only a few similar
confirmed exoplanets to date. Such objects are worth investigating in the near
future, for example in terms of their composition and internal structure. | Z. Garai, H. P. Osborn, D. Gandolfi, A. Brandeker, S. G. Sousa, M. Lendl, A. Bekkelien, C. Broeg, A. Collier Cameron, J. A. Egger, M. J. Hooton, Y. Alibert, L. Delrez, L. Fossati, S. Salmon, T. G. Wilson, A. Bonfanti, A. Tuson, S. Ulmer-Moll, L. M. Serrano, L. Borsato, R. Alonso, G. Anglada, J. Asquier, D. Barrado y Navascues, S. C. C. Barros, T. Bárczy, W. Baumjohann, M. Beck, T. Beck, W. Benz, N. Billot, F. Biondi, X. Bonfils, M. Buder, J. Cabrera, V. Cessa, S. Charnoz, Sz. Csizmadia, P. E. Cubillos, M. B. Davies, M. Deleuil, O. D. S. Demangeon, B. -O. Demory, D. Ehrenreich, A. Erikson, V. Van Eylen, A. Fortier, M. Fridlund, M. Gillon, V. Van Grootel, M. Güdel, M. N. Günther, S. Hoyer, K. G. Isaak, L. L. Kiss, M. H. Kristiansen, J. Laskar, A. Lecavelier des Etangs, C. Lovis, A. Luntzer, D. Magrin, P. F. L. Maxted, C. Mordasini, V. Nascimbeni, G. Olofsson, R. Ottensamer, I. Pagano, E. Pallé, G. Peter, G. Piotto, D. Pollacco, D. Queloz, R. Ragazzoni, N. Rando, H. Rauer, I. Ribas, N. C. Santos, G. Scandariato, D. Ségransan, A. E. Simon, A. M. S. Smith, M. Steller, Gy. M. Szabó, N. Thomas, S. Udry, J. Venturini, N. Walton | 2023-06-07T14:40:05Z | http://arxiv.org/abs/2306.04468v1 | # Refined parameters of the HD 22946 planetary system
###### Abstract
Context:Multi-planet systems are important sources of information regarding the evolution of planets. However, the long-period planets in these systems often escape detection. These objects in particular may retain more of their primordial characteristics compared to close-in counterparts because of their increased distance from the host star. HD 22946 is a bright (\(G=8.13\) mag) late F-type star around which three transiting planets were identified via Transiting Exoplanet Survey Satellite (TESS) photometry, but the true orbital period of the outermost planet d was unknown until now.
Aims:We aim to use the Characterising Exoplanet Satellite (CHEOPS) space telescope to uncover the true orbital period of HD 22946d and to refine the orbital and planetary properties of the system, especially the radii of the planets.
Methods:We used the available TESS photometry of HD 22946 and observed several transits of the planets b, c, and d using CHEOPS. We identified two transits of planet d in the TESS photometry, calculated the most probable period aliases based on these data, and then scheduled CHEOPS observations. The photometric data were supplemented with ESPRESSO (Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations) radial velocity data. Finally, a combined model was fitted to the entire dataset in order to obtain final planetary and system parameters.
Results:Based on the combined TESS and CHEOPS observations, we successfully determined the true orbital period of the planet d to be \(47.42489\pm 0.00011\) d, and derived precise radii of the planets in the system, namely \(1.362\pm 0.040\) R\({}_{\oplus}\), \(2.328\pm 0.039\) R\({}_{\oplus}\), and \(2.607\pm 0.060\) R\({}_{\oplus}\) for planets b, c, and d, respectively. Due to the low number of radial velocities, we were only able to determine 3\(\sigma\) upper limits for these respective planet masses, which are \(13.71\) M\({}_{\oplus}\), \(9.72\) M\({}_{\oplus}\), and \(26.57\) M\({}_{\oplus}\). We estimated that another 48 ESPRESSO radial velocities are needed to measure the predicted masses of all planets in HD 22946. We also derived stellar parameters for the host star.
Conclusions:Planet c around HD 22946 appears to be a promising target for future atmospheric characterisation via transmission spectroscopy. We can also conclude that planet d, as a warm sub-Neptune, is very interesting because there are only a few similar confirmed exoplanets to date. Such objects are worth investigating in the near future, for example in terms of their composition and internal structure.
## 1 Introduction
Multi-planet systems are important from many viewpoints. Not only are they susceptible of relatively straightforward confirmation as bona fide planets (Lissauer et al. 2012), they also allow intra-planetary comparisons to be made for planets which formed under the same conditions; see for example Weiss et al. (2018). The majority of the known multi-planet systems were found by space-based exoplanet transit surveys. This is because, while giant hot-Jupiters are relatively easy to observe with ground-based photometry, the detection of smaller planets, for example, Earths, super-Earths, and sub-Neptunes, which are typically found in multi-planet systems, requires the precise photometry of space-based observatories such as TESS (Ricker 2014).
Mutual gravitational interactions in some multi-planet systems can provide constraints on the planet masses through transit time variations (TTVs); see for example Nesvorny & Morbidelli (2008). Alternatively, radial velocity (RV) observations are needed to put constraints on the masses of planets (Mayor & Queloz, 1995). Even where masses cannot be determined, mass upper limits can provide proof that the studied objects are of planetary origin; see for example Hord et al. (2022), Wilson et al. (2022), or Stefansson et al. (2020). Mass determination can then help constrain the internal structure of the planet bodies, and break degeneracies in atmospheric characterisation follow-up studies. If precise planet radii are also determined from transit photometry, this allows the planet internal density to be calculated and the planetary composition to be estimated; see for example Delrez et al. (2021) and Lacedelli et al. (2021, 2022). Precise planetary parameters also allow the planets to be put in the context of population trends, such as the radius (Fulton et al., 2017; Van Eylen et al., 2018; Martinez et al., 2019; Ho & Van Eylen, 2023) and density (Luque & Palle, 2022) valleys.
Long-period planets in multiple-planet systems often escape detection, especially when their orbital periods are longer than the typical observing duration of photometric surveys (e.g. \(\sim\) 27 d for TESS). However, detecting such planets is also important. For example, the increased distance from their host stars means that, when compared with close-in planets, they may retain more of their primordial characteristics, such as unevaporated atmospheres (Owen, 2019) or circumplanetary material (Dobos et al., 2021). Due to the limited observing duration of the TESS primary mission, which observed the majority of the near-ecliptic sectors for only 27 days, planets on long periods produce only single transits. However, thanks to its extended mission, TESS re-observed the same fields two years later, and in many cases was able to re-detect a second transit; see for example Osborn et al. (2022). These 'duotransit' cases require follow-up in order to uncover the true orbital period due to the gap, which causes a set of aliases, \(P\in(t_{\rm tr,2}-t_{\rm tr,1})/(1,2,3,\ldots,N_{\rm max})\), where \(t_{\rm tr,1}\) and \(t_{\rm tr,2}\) are the first and the second observed mid-transit times, respectively. The longest possible period is the temporal distance between the two mid-transit times, \(P_{\rm max}=(t_{\rm tr,2}-t_{\rm tr,1})\), and the shortest possible period is bounded by the non-detection of subsequent transits.
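As a concrete illustration of this alias set, the short script below enumerates the candidate periods for HD 22946d from the two mid-transit times reported later in Sect. 2.1; the minimum-period cut-off is an arbitrary placeholder standing in for the constraint from the non-detection of further transits.

```python
import numpy as np

# Mid-transit times of HD 22946d from TESS sectors 4 and 30 (see Sect. 2.1).
t_tr1, t_tr2 = 2458425.1657, 2459136.5357   # BJD_TDB
p_min = 20.0                                 # days; illustrative lower bound

gap = t_tr2 - t_tr1                          # longest possible period, P_max
n_max = int(np.floor(gap / p_min))
aliases = gap / np.arange(1, n_max + 1)      # P = (t_tr2 - t_tr1) / N
print(f"P_max = {gap:.4f} d, {n_max} aliases down to {aliases[-1]:.4f} d")
```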
In addition to ground-based telescopes, the CHEOPS space observatory (Benz et al., 2021) can be used to follow-up duotransit targets and to determine their true orbital periods and other characteristics. For example, the periods of two young sub-Neptunes orbiting BD+40 2790 (TOI-2076, TIC-27491137) were found using a combination of CHEOPS and ground-based photometric follow-up observations (Osborn et al., 2022). Furthermore, these combined observations uncovered the TTVs of two planets, and also improved the radius precision of all planets in the system. CHEOPS observations also recovered orbital periods of duotransits in HIP 9618 (Osborn et al., 2023), TOI-5678 (Ulmer-Moll et al., 2023), and HD 15906 (Tuson et al., 2023) systems. In the present study, we investigated the HD 22946 system with a similar aim. HD 22946 (TOI-411, TIC-100990000) is a bright (\(G=8.13\) mag) late F-type star with three transiting planets. The planetary system was discovered and validated only recently by Cacciapuoti et al. (2022); hereafter C22. The authors presented several parameters of the system, including the radii and mass limits of the planets. They found that planet b is a super-Earth with a radius of \(1.72\pm 0.10\) R\({}_{\oplus}\), while planets c and d are sub-Neptunes with radii of \(2.74\pm 0.14\) R\({}_{\oplus}\) and \(3.23\pm 0.19\) R\({}_{\oplus}\), respectively. The 3\(\sigma\) upper mass limits of planets b, c, and d were determined --based on ESPRESSO spectroscopic observations (see Sect. 2.3)-- to be 11 M\({}_{\oplus}\), 14.5 M\({}_{\oplus}\), and 24.5 M\({}_{\oplus}\), respectively. As TESS recorded several transits during observations in sector numbers 3, 4, 30, and 31, the discoverers easily derived the orbital periods of the two inner planets, b and c, which are about 4.040 d and 9.573 d, respectively. The orbital period of planet d was not found by C22. The authors determined its presence through a single transit found in sector number 4 and obtained its parameters from this single transit event. Its depth and the host brightness make planet d easily detectable with CHEOPS, and therefore HD 22946 was observed several times with this instrument within the Guaranteed Time Observations (GTO) programmes CH_PR110048 and CH_PR100031, with the main scientific goals being to uncover the true orbital period of planet d and to refine the parameters of the HD 22946 system based on CHEOPS and TESS observations via joint analysis of the photometric data, supplemented with ESPRESSO spectroscopic observations of HD 22946.
The present paper is organised as follows. In Sect. 2, we provide a brief description of observations and data reduction. In Sect. 3, we present the details of our data analysis and our first results, including stellar parameters, period aliases of HD 22946d from the TESS data, and a search for TTVs. Our final results based on the combined TESS, CHEOPS, and RV model are described and discussed in Sect. 4. We summarise our findings in Sect. 5.
## 2 Observations and data reduction
### TESS data
HD 22946 was observed during four TESS sectors: numbers 3, 4, 30, and 31 (see Table 1). The time gap between the two observing seasons is almost two years. These TESS data were downloaded from the Mikulski Archive for Space Telescopes1 in the form of Pre-search Data Conditioning Simple Aperture Photometry (PDCSAP) flux. These data, containing 61 987 data points, were obtained from two-minute integrations and were initially smoothed by the PDCSAP pipeline. This light curve is subjected to more treatment than the simple aperture photometry (SAP) light curve, and is specifically intended for detecting planets. The pipeline attempts to remove systematic artifacts while keeping planetary transits intact. The average uncertainty of the PDCSAP data points is 310 ppm.
Footnote 1: See [https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html).
During these TESS observing runs, 23 transits of planet b were recorded, and the transit of planet c was observed eight
\begin{table}
\begin{tabular}{c c c} \hline \hline Time interval of observation & Sector No. & Transits \\ \hline \hline \multicolumn{3}{c}{HD 22946b} \\
2018-09-20 – 2018-10-18 & 03 & 5 \\
2018-10-18 – 2018-11-15 & 04 & 6 \\
2020-09-22 – 2020-10-21 & 30 & 6 \\
2020-10-21 – 2020-11-19 & 31 & 6 \\ \hline \multicolumn{3}{c}{HD 22946c} \\
2018-09-20 – 2018-10-18 & 03 & 2 \\
2018-10-18 – 2018-11-15 & 04 & 2 \\
2020-09-22 – 2020-10-21 & 30 & 2 \\
2020-10-21 – 2020-11-19 & 31 & 2 \\ \hline \multicolumn{3}{c}{HD 22946d} \\
2018-10-18 – 2018-11-15 & 04 & 1 \\
2020-09-22 – 2020-10-21 & 30 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Log of TESS photometric observations of HD 22946.
times in total (see more details in Table 1). As in C22, we also initially recognised a transit-like feature in the sector number 4 data at \(t_{\rm tr,1}=2\ 458\ 425.1657\ {\rm BJD_{TDB}}\) through visual inspection of the light curve. Given that 65%-80% of single transits from the TESS primary mission will re-transit in the extended mission sectors (see Cooke et al., 2019, 2021), we subsequently visually inspected the light curve once the TESS year 3 data were available and found a second dip at \(t_{\rm tr,2}=2\ 459\ 136.5357\ {\rm BJD_{TDB}}\) in the sector number 30 data with near-identical depth and duration. Given the high prior probability of finding a second transit, the close match in transit shape between events, and the high quality of the data (i.e. minimal systematic noise elsewhere in the light curve), we concluded that this signal is a bona fide transit event and that the transits in sector numbers 4 and 30 are very likely caused by the same object, that is, by planet d.
Outliers were cleaned using a 3\(\sigma\) clipping, where \(\sigma\) is the standard deviation of the light curve. With this clipping procedure, we discarded 300 data points out of 61 987, which is \(\sim 0.5\%\) of the TESS data. Subsequently, we visually inspected the dataset in order to check the effect of the outlier removal, which we found to be reasonable. As TESS uses as time stamps Barycentric TESS Julian Date (i.e. \(\rm{BJD_{TDB}}-2\) 457 000.0), during the next step we converted all TESS time stamps to \(\rm{BJD_{TDB}}\).
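For illustration, the cleaning and time-conversion steps described above can be summarised by a short sketch (this is not the actual pipeline code; the synthetic arrays below stand in for the real PDCSAP time stamps and fluxes):

```python
import numpy as np

# Illustrative 3-sigma clipping of the PDCSAP fluxes and conversion of the
# TESS time stamps from BTJD to BJD_TDB (synthetic data, for demonstration).
rng = np.random.default_rng(42)
btjd = np.linspace(1385.0, 1410.0, 1000)               # example BTJD time stamps
flux = 1.0 + 310e-6 * rng.standard_normal(btjd.size)   # example normalised fluxes

sigma = np.std(flux)                                   # standard deviation of the light curve
keep = np.abs(flux - np.median(flux)) < 3.0 * sigma    # 3-sigma clipping mask
bjd_tdb = btjd[keep] + 2457000.0                       # BTJD -> BJD_TDB
flux_clean = flux[keep]
```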
### CHEOPS data
HD 22946 was observed five times with the CHEOPS space telescope. This is the first European space mission dedicated primarily to the study of known exoplanets. It consists of a telescope with a mirror of 32 cm in diameter based on a Ritchey-Chretien design. The photometric detector is a single-CCD camera covering the wavelength range from 330 to 1100 nm with a field of view of 0.32 deg\({}^{2}\). The payload design and operation have been optimised to achieve ultra-high photometric stability, achieving a photometric precision of 20 ppm on observations of a G5-type star in 6 hours, and 85 ppm on observations of a K5-type star in 3 hours (Benz et al., 2021). The CHEOPS observations were scheduled based on the existing TESS observations of planets b and c, and mainly based on the observed transit times of planet d (see Sect. 2.1). The marginal probability for each period alias of planet d was calculated using the MonoTools package (see Sect. 3.2). We were not able to observe all the highest-probability aliases, because some were not visible during the two-week period of visibility. Within the program number CH_PR110048, we therefore planned to observe the three highest-probability aliases of planet d with CHEOPS, but due to observability constraints and conflicts with other observations, only two visits2 of planet d aliases were scheduled. Its true orbital period was confirmed during the second observation. The remaining three visits were scheduled in the framework of the program number CH_PR100031. Based on these CHEOPS observations, three transits of planet b were recorded during visits 1, 3, and 5, the transit of planet c was observed twice during visits 2 and 4, and a single transit of planet d (in a multiple transit feature with planet c) was detected during the CHEOPS visit 4. Further details about these observations can be found in Table 2.
Footnote 2: A visit is a sequence of successive CHEOPS orbits devoted to observing a given target.
From the CHEOPS detector, which has \(1024\times 1024\) pixels, a \(200\times 200\) pixels subarray is extracted around the target point-spread function (PSF), which is used to compute the photometry. This type of photometry product was processed by the CHEOPS Data Reduction Pipeline (DRP) version 13.1.0 (Hoyer et al., 2020). It performs several image corrections, including bias-, dark-, and flat-corrections, contamination estimation, and background-star correction. The DRP pipeline produces four different light-curve types for each visit, but we initially analysed only the decontaminated 'OPTIMAL' type, where the aperture radius is automatically set based on the signal-to-noise ratio (\(S/N\)). In addition to the subarrays, there are imagettes available for each exposure. The imagettes are frames of 30 pixels in radius centred on the target, which do not need to be co-added before download owing to their smaller size. We used a tool specifically developed for photometric extraction of imagettes using point-spread function photometry, called PIPE3; see for example Szabo et al. (2021, 2022). The PIPE photometry has a \(S/N\) comparable to that of DRP photometry, but has the advantage of shorter cadence, and therefore we decided to use this CHEOPS product in this work. The average uncertainty of the PIPE data points is 160 ppm.
Footnote 3: See [https://github.com/alphapsa/PIPE](https://github.com/alphapsa/PIPE).
The PIPE CHEOPS observations were processed using the dedicated data decorrelation and transit analysis software called pycheops4(Maxted et al., 2022). This package includes downloading, visualising, and decorrelating CHEOPS data, fitting transits and eclipses of exoplanets, and calculating light-curve noise. We first cleaned the light curves from outlier data points using the pycheops built-in function clip_outliers, which removes outliers from a dataset by calculating the mean absolute deviation (\(MAD\)) from the light curve following median smoothing, and rejects data greater than the smoothed dataset plus the \(MAD\) multiplied by a clipping factor. The clipping factor equal to five was reasonable in our cases, which we checked visually. With this clipping procedure, we discarded 30 data points out of 3195, which is \(\sim 0.9\%\) of the CHEOPS data. The next step was the extraction of the detrending parameters. During this procedure, the software gives a list of the parameters necessary for the detrending. The most important decorrelation is subtraction of the roll-angle effect. In order to keep the cold plate radiators facing away from the Earth, the spacecraft rolls during its orbit. This means that the field of view rotates around the pointing direction. The target star remains stationary within typically 1 pixel, but the rotation of the field of view produces a variation of its flux from the nearby sources in phase with the roll angle of the spacecraft (Bonfanti et al., 2021). The extracted detrending parameters were co-fitted with the transit model (see Sect. 3.3).
Footnote 4: See [https://github.com/pmaxted/pycheops](https://github.com/pmaxted/pycheops).
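The clipping logic described above can be sketched as follows; this is a simplified re-implementation for illustration only, not the pycheops code, and the smoothing window length is an arbitrary choice:

```python
import numpy as np
from scipy.signal import medfilt

# Simplified sketch of the clip_outliers logic: median-smooth the light curve,
# compute the mean absolute deviation (MAD) of the residuals, and keep only
# the points deviating by less than clip * MAD from the smoothed curve.
def clip_outliers(flux, window=11, clip=5.0):
    smooth = medfilt(flux, kernel_size=window)
    resid = flux - smooth
    mad = np.mean(np.abs(resid - np.mean(resid)))
    return np.abs(resid) < clip * mad  # boolean mask of points to keep

rng = np.random.default_rng(0)
flux = 1.0 + 160e-6 * rng.standard_normal(500)
flux[100] += 0.01                      # an artificial outlier
mask = clip_outliers(flux)
print(f"discarded {np.sum(~mask)} of {flux.size} points")
```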
### ESPRESSO/VLT data
We acquired 14 high-resolution spectra of the host star HD 22946 using the ESPRESSO spectrograph (Pepe et al., 2014) mounted at the 8.2 m Very Large Telescope (VLT) at Paranal Observatory (Chile). The observations were carried out between 10 February 2019 and 17 March 2019 under the observing program number 0102.C-0456 (PI: V. Van Eylen) and within the KESPRINT5 project. We used the high-resolution (HR) mode of the spectrograph, which provides a resolving power of \(R\approx 134\) 000. We set the exposure time to 600 s, leading to a \(S/N\) per pixel at 650 nm ranging between 120 and 243. Daytime ThAr spectra and simultaneous Fabry-Perot exposures were taken to determine the wavelength solution and correct for possible nightly instrumental drifts, respectively. We reduced the ESPRESSO spectra using the dedicated data-reduction software and extracted the RVs by cross-correlating the echelle spectra
with a G2 numerical mask. We list the ESPRESSO RV measurements in Table 3. The average uncertainty of the RV data points is \(\sim 0.00015\) km s\({}^{-1}\).
We co-added the individual ESPRESSO spectra prior to carrying out the spectroscopic analysis presented in Sect. 3.1. To this aim, we Doppler-shifted the data to a common reference wavelength by cross-correlating the ESPRESSO spectra with the spectrum with the highest \(S/N\). We finally performed a \(S/N\)-weighted co-addition of the Doppler-shifted spectra, while applying a sigma-clipping algorithm to remove possible cosmic-ray hits and outliers. The co-added spectrum has a \(S/N\) of \(\sim 900\) per pixel at 650 nm.
## 3 Data analysis and first results
### Stellar parameters
The spectroscopic stellar parameters (the effective temperature \(T_{\rm eff}\), the surface gravity \(\log g\), the microturbulent velocity \(v_{\rm mic}\), and the metallicity [Fe/H]; see Table 4) were derived using the ARES and MOOG codes, following the same methodology as described in Sousa et al. (2021), Sousa (2014), and Santos et al. (2013). We used the latest version of the ARES code6(Sousa et al., 2007, 2015) to measure the equivalent widths of iron lines on the combined ESPRESSO spectrum. We used a minimisation procedure to find ionisation and excitation equilibrium and converge to the best set of spectroscopic parameters. This procedure makes use of a grid of Kurucz model atmospheres (Kurucz, 1993a) and the radiative transfer code MOOG (Sneden, 1973).
Footnote 6: The last version, ARES v2, can be downloaded at [https://github.com/sousasag/ARES](https://github.com/sousasag/ARES).
To derive the radius of the host star HD 22946, we used a Markov-Chain Monte Carlo (MCMC) modified infrared flux method. This enables us to calculate the bolometric flux using stellar atmospheric models defined by our spectral analysis to build spectral energy distributions (SEDs) that are compared with broadband fluxes and uncertainties from the most recent data releases for the following bandpasses: _Gaia_ \(G\), \(G_{\rm BP}\), and \(G_{\rm RP}\), 2MASS \(J\), \(H\), and \(K\), and _WISE_ \(W1\) and \(W2\) (Skrutskie et al., 2006; Wright et al., 2010; Gaia Collaboration et al., 2021). From the bolometric flux, we then determine the stellar effective temperature and angular diameter; the latter is converted to a radius using the offset-corrected _Gaia_ parallax (Lindegren et al., 2021). We used Bayesian model averaging of the atlas (Kurucz, 1993b; Castelli & Kurucz, 2003) and phoenix (Allard, 2014) catalogues to produce a weighted average posterior distribution of the stellar radius in order to account for uncertainties in stellar atmospheric modelling. We find a value of \(R_{\rm s}=1.117\pm 0.009\) R\({}_{\odot}\), which is in 3\(\sigma\) agreement with the value of \(1.157\pm 0.025\) R\({}_{\odot}\) presented by the discoverers.
We finally determined the stellar mass \(M_{\rm s}\) and stellar age \(t_{\rm s}\) using two different sets of stellar evolutionary models, namely PARSEC7 v1.2S(Marigo et al., 2017) and CLES(Code Liegeois d'Evolution Stellaire), see Scuflaire et al. (2008). More specifically, we employed the isochrone-placement algorithm developed by Bonfanti et al. (2015, 2016) to interpolate the input parameters (\(T_{\rm eff}\), [Fe/H], \(R_{\rm s}\)) within pre-computed grids of PARSEC v1.2S isochrones and tracks to derive a first pair of mass and age. A second pair of mass and age values, instead, was retrieved by inputting \(T_{\rm eff}\), [Fe/H], and \(R_{\rm s}\) directly in the CLES code, which generates the best-fit stellar evolutionary track following the Levenberg-Marquardt minimisation scheme, as described in Salmon et al. (2021). After carefully checking the mutual consistency of the two respective pairs of outcomes through the \(\chi^{2}\)-based methodology presented in Bonfanti et al. (2021), we finally merged (i.e. summed) the two \(M_{\rm s}\) and \(t_{\rm s}\) results and obtained \(M_{\rm s}=1.098\pm 0.040\) M\({}_{\odot}\) and \(t_{\rm s}=2.5\pm 1.0\) Gyr. The
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Visit & Start date & End date & File & CHEOPS & Integration & Number \\ No. & [UTC] & [UTC] & key & product & time [s] & of frames \\ \hline \hline
1 & 2021-10-17 03:22 & 2021-10-17 14:40 & CH\_PR100031\_TG021201 & Subarray & \(2\times 20.0\) & 629 \\
1 & 2021-10-17 03:22 & 2021-10-17 14:40 & CH\_PR100031\_TG021201 & Imagettes & 20.0 & 1258 \\
2 & 2021-10-18 08:14 & 2021-10-18 19:04 & CH\_PR100031\_TG021101 & Subarray & \(2\times 20.0\) & 637 \\
2 & 2021-10-18 08:14 & 2021-10-18 19:04 & CH\_PR100031\_TG021101 & Imagettes & 20.0 & 1274 \\
3 & 2021-10-25 07:08 & 2021-10-25 19:49 & CH\_PR110048\_TG010001 & Subarray & \(2\times 20.4\) & 708 \\
3 & 2021-10-25 07:08 & 2021-10-25 19:49 & CH\_PR110048\_TG010001 & Imagettes & 20.4 & 1416 \\
4 & 2021-10-28 02:12 & 2021-10-28 13:50 & CH\_PR110048\_TG010101 & Subarray & \(2\times 20.4\) & 666 \\
4 & 2021-10-28 02:12 & 2021-10-28 13:50 & CH\_PR110048\_TG010101 & Imagettes & 20.4 & 1332 \\
5 & 2021-10-29 08:48 & 2021-10-29 18:14 & CH\_PR100031\_TG021202 & Subarray & \(2\times 20.0\) & 555 \\
5 & 2021-10-29 08:48 & 2021-10-29 18:14 & CH\_PR100031\_TG021202 & Imagettes & 20.0 & 1110 \\ \hline \hline \end{tabular} 1
mass parameter value of the host star agrees within the uncertainty with the value provided in the discovery paper, which is \(1.104\pm 0.012\) M\({}_{\odot}\). However, the planet host seems to be younger than previously presented by C22. The discoverers obtained a value of \(5.0\pm 1.0\) Gyr. More parameter values, including from this work, are compared with the discovery-paper parameter values in Table 4.
### Period aliases of HD 22946d from the TESS data
In order to determine each possible period alias and to schedule CHEOPS observations of planet d, we first performed a period analysis of the available TESS data. For this purpose, we used the MonoTools package8(Osborn et al., 2022), which is able to model transit light curves in the case of multiple transits, duotransits, and monotransits, as well as multiple systems with combinations of such candidates, with both radial velocities and transit photometry. The package calculates a marginalised probability distribution across all allowed aliases for a given transit model by combining priors for each alias. The probabilities are estimated based on two major assumptions, namely that short-period orbits are highly favoured over long-period ones due to a combination of geometric probability and window function, and that planets in multi-planet systems have low eccentricities (Kipping et al., 2013; Kipping, 2018; Van Eylen & Albrecht, 2015). More details about this software can be found in Osborn et al. (2022).
Footnote 8: See [https://github.com/hposborn/MonTools](https://github.com/hposborn/MonTools).
The TESS data described in Sect. 2.1 were used during the fitting procedure using MonoTools. In the case of planet b, we set as input parameters the reference mid-transit time of \(T_{\rm c}=2\ 458\ 385.7318\ {\rm BJD_{TDB}}\), the orbital period of \(P_{\rm orb}=4.040330\pm 0.000010\) d, the transit duration (transit width) of \(W=3.4\) hr, and the transit depth of \(D=134\) ppm. In the case of planet c, the inputs were \(T_{\rm c}=2\ 458\ 386.1878\ {\rm BJD_{TDB}}\), \(P_{\rm orb}=9.573117\pm 0.000020\) d, \(W=3.8\) hr, and \(D=389\) ppm. For planet d, we set as input parameters the two mid-transit times detected by TESS, namely \(t_{\rm tr,1}=2\ 458\ 425.1657\ {\rm BJD_{TDB}}\) and \(t_{\rm tr,2}=2\ 459\ 136.5357\ {\rm BJD_{TDB}}\), the transit duration of \(W=6.5\) hr and the transit depth of \(D=478\) ppm. These parameters were calculated from the TESS data alone.
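For illustration, the allowed period aliases can be reproduced directly from the two mid-transit times with a few lines of code. This is only a sketch: the geometric and eccentricity-based probability weighting applied by MonoTools is not reproduced, and the adopted lower period bound of about 39 d is a rough stand-in for the limit set by the non-detection of further transits.

```python
# Period aliases of HD 22946d allowed by the two TESS mid-transit times.
t_tr1 = 2458425.1657   # BJD_TDB, sector 4
t_tr2 = 2459136.5357   # BJD_TDB, sector 30
p_max = t_tr2 - t_tr1  # longest possible period, ~711.37 d

# Allowed aliases are P = p_max / n; the printout is restricted to the
# period range covered by Table 5 below.
for n in range(1, 30):
    p = p_max / n
    if 39.0 < p < 70.0:
        print(f"n = {n:2d}: P = {p:.4f} d")
# n = 11..18 reproduce the aliases listed in Table 5; smaller n (longer
# periods) are allowed too, but carry low probability.  The CHEOPS visits
# singled out n = 15, i.e. P of about 47.42 d, as the true orbital period.
```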
The orbital period aliases of planet d with a probability of \(p>1\%\) are listed in Table 5. The software MonoTools forecasted that a transit of planet d with the orbital period alias number 2 would take place on 25 October 2021, with a mid-transit time of \(T_{\rm c}=2\ 459\ 513.1441\ {\rm BJD_{TDB}}\). This forecasted event was observed during the third CHEOPS visit (see Table 2), but the expected transit of planet d did not happen; only the transit of planet b was recorded that time. After this observation, we were able to exclude the period alias of \(P=41.8454\) d from the list of possible aliases. The next forecast predicted a transit of planet d on 28 October 2021, with a mid-transit time of \(T_{\rm c}=2\ 459\ 515.9338\ {\rm BJD_{TDB}}\), which means that, in this case, the alias number 4 (see Table 5) was preferred as its true orbital period. This forecasted event was observed with CHEOPS during its fourth visit. This time, the transit of planet d was successfully detected together with a transit of planet c, confirming that the period alias of \(P=47.4248\) d is the true orbital period of planet d. This result also confirms that the second transit-like feature of planet d, observed by TESS in sector number 30, was a real transit event and not an instrumental artifact as considered by C22. Alternatively, the dip observed at 2 459 136.5357 \({\rm BJD_{TDB}}\) was a mixture of instrumental effects and the transit of planet d. With this gathered knowledge about the true orbital period of planet d, we were able to combine CHEOPS and TESS photometric observations and RV measurements in order to improve the orbital and planetary parameters of the HD 22946 system, which were previously obtained only from the TESS and RV data by the discoverers.
### CHEOPS, TESS, and RV combined model
In order to produce accurate planetary parameters for all three planets, we built a combined model using all available data, that is, TESS photometry (described in Sect. 2.1), CHEOPS photometry (described in Sect. 2.2), and ESPRESSO RVs (described
\begin{table}
\begin{tabular}{l l l} \hline \hline Parameter [unit] & Value & Source \\ \hline \hline Name & HD 22946 & – \\ TOI ID & 411 & G2021 \\ TIC ID & 10099000 & S2018 \\ _Gaia_ DR3 ID & 4848767461548943104 & G2022 \\ RA (J2016) [deg] & 54.819528 & G2022 \\ Dec (J2016) [deg] & \(-42.76304\) & G2022 \\ \(T\) (TESS) [mag] & \(7.757\pm 0.006\) & S2018 \\ \(G\) (_Gaia_) [mag] & \(8.13\pm 0.69\) & G2022 \\ \(J\) [mag] & \(7.250\pm 0.027\) & C2003 \\ \(H\) [mag] & \(7.040\pm 0.044\) & C2003 \\ \(K\) [mag] & \(6.981\pm 0.029\) & C2003 \\ _T_eff [K] & \(6040\pm 48\) & C2022 \\ _T_eff [K] & \(6169\pm 64\) & This work \\ _R\({}_{\rm s}\)_[\(R_{\odot}\)] & \(1.157\pm 0.025\) & C2022 \\ _R\({}_{\rm s}\)_[\(R_{\odot}\)] & \(1.117\pm 0.009\) & This work \\ _M\({}_{\rm s}\)_[\(M_{\odot}\)] & \(1.104\pm 0.012\) & C2022 \\ _M\({}_{\rm s}\)_[\(M_{\odot}\)] & \(1.098^{+0.040}_{-0.039}\) & This work \\ log \(g\) [cgs] & \(4.26\pm 0.15\) & C2022 \\ log \(g\) [cgs] & \(4.47\pm 0.11\) & This work \\ [Fe/H] [dex] & \(-0.14\pm 0.07\) & C2022 \\ [Fe/H] [dex] & \(-0.08\pm 0.04\) & This work \\ _t\({}_{\rm s}\)_[Gyr] & \(5.0\pm 1.0\) & C2022 \\ _t\({}_{\rm s}\)_[Gyr] & \(2.5\pm 1.0\) & This work \\ _v\({}_{\rm mic}\)_ [km s\({}^{-1}\)] & \(1.25\pm 0.03\) & This work \\ \hline \hline \end{tabular} 1
\end{table}
Table 4: Fundamental parameters of the exoplanet host HD 22946.
\begin{table}
\begin{tabular}{c c c} \hline \hline Alias & Period alias (_P_) & Probability (_p_) \\ No. & [d] & [\%] \\ \hline \hline
1 & 39.5206 & 17.420 \\
2 & 41.8454 & 20.078 \\
3 & 44.4607 & 20.341 \\
4 & 47.4248 & 18.113 \\
5 & 50.8122 & 13.445 \\
6 & 54.7209 & 7.061 \\
7 & 59.2809 & 2.756 \\
8 & 64.6701 & \(\sim 1.0\) \\ \hline \hline \end{tabular} 1
\end{table}
Table 5: Orbital period aliases of the planet HD 22946d.
in Sect. 2.3). The combined model was built using the PyMC3 package9(Salvatier et al., 2016), which performs Hamiltonian Monte Carlo (HMC) sampling, with Keplerian orbits modeled with exoplanet package10(Foreman-Mackey et al., 2021). We used Gaussian processes (GPs) to model the stellar variability present in the TESS light curve, opting for a simple harmonic oscillator (SHO) kernel implemented in the celerite package (Foreman-Mackey et al., 2017) and a quality factor \(Q=1/\sqrt{2}\), as is common for quasi-periodic stellar variability. In order to speed up sampling, we binned the TESS data to 30 minute bins far from transits, keeping 2 minute data near transit. As we have reasonable prior knowledge from theoretical analyses for the expected stellar limb-darkening (LD) parameters for HD 22946, we used these as priors in the analysis. We used the quadratic LD law and interpolated tables of coefficients calculated for the TESS (Claret, 2018) and CHEOPS (Claret, 2021) passbands using the derived stellar parameters of \(T_{\rm eff}=6169\) K and \(\log g=4.47\) (cgs). In order to guard against systematic errors, we inflated the \(\sigma\) for each parameter prior to 0.1.
Footnote 9: See [https://pypi.org/project/pymc3/](https://pypi.org/project/pymc3/).
Footnote 10: See [https://pypi.org/project/exoplanet/](https://pypi.org/project/exoplanet/).
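The two main ingredients described above, the SHO stellar-variability kernel and the inflated limb-darkening priors, can be sketched schematically as follows. This is not the actual combined model: the arrays are synthetic placeholders, the limb-darkening coefficients are invented round numbers rather than the interpolated Claret values, the transit and RV components are omitted, and the exact module paths may differ between celerite2/exoplanet versions.

```python
import numpy as np
import pymc3 as pm
from celerite2.theano import terms, GaussianProcess

# Placeholder data standing in for the binned TESS light curve.
t = np.linspace(0.0, 27.0, 2000)
yerr = np.full_like(t, 310e-6)
y = 1.0 + yerr * np.random.default_rng(1).standard_normal(t.size)

u_tess_theory = np.array([0.32, 0.23])  # placeholder theoretical LD coefficients

with pm.Model() as model:
    # Quadratic limb-darkening coefficients with the prior width inflated to 0.1.
    u_tess = pm.Normal("u_tess", mu=u_tess_theory, sigma=0.1, shape=2)

    # SHO kernel with Q = 1/sqrt(2) for quasi-periodic stellar variability.
    sigma_gp = pm.Lognormal("sigma_gp", mu=np.log(np.std(y)), sigma=2.0)
    rho_gp = pm.Lognormal("rho_gp", mu=np.log(5.0), sigma=2.0)
    kernel = terms.SHOTerm(sigma=sigma_gp, rho=rho_gp, Q=1.0 / np.sqrt(2.0))

    gp = GaussianProcess(kernel, mean=1.0)
    gp.compute(t, diag=yerr**2)
    gp.marginal("obs", observed=y)  # transit and RV models omitted for brevity
```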
Even though the PIPE light curves for HD 22946 have fewer systematic features than the DRP light curves, they can still include flux variations due to the influence of various external factors. Therefore, we can improve the light curve by decorrelating the flux data against metadata generated for the instrument and target. To decipher which decorrelation vectors provide improvement, we ran an initial PyMC3 model for each CHEOPS visit using all available ancillary data: sin and cos of the roll angle, background flux, \(x\) and \(y\) centroid positions, onboard temperature and time (which also fits short-timescale stellar variability). These parameters are normalised to have \(\mu=0.0\) and \(\sigma=1.0\), and decorrelation parameters are given normal priors with \(\mu=0.0\) and \(\sigma\) set by the root-mean-square (RMS) noise for each CHEOPS visit. For each visit model, we also included parameters for any planetary transits present in order to ensure the transits would not bias the model. After HMC sampling, we assessed each decorrelation parameter using the average and standard deviations, keeping only those parameters with a Bayes Factor of BF \(>1\). Despite this detrending, short-timescale variation can also be present as a function of roll angle (\(\varphi\)). Pure detrending against sin and cos of roll angle removes the largest amplitude systematic trends at low frequencies. These are those closest in timescale to the transit feature, and so a simpler detrending technique for such timescales guards against over-fitting of the transit. However, the CHEOPS light curve typically also contains systematic noise correlated with roll angle that is at a lower amplitude and higher frequency. This is not therefore adequately removed by simple sin and cos decorrelation. It is this noise that a more flexible GP is better able to model. We therefore also included a GP to model the variation of flux with roll-angle effects. To do this, we first found any potential large jumps in \(\varphi\) and made sure the time series was continuous between these jumps (i.e. by moving the zero point and 'wrapping around'). We then transformed the input data such that it is continuous in \(x\), by sorting by \(\varphi\) rather than time. Once again, we used a SHO kernel from celerite with quality factor \(Q\) set at \(1/\sqrt{2}\). As we expected the morphology of the variations to be preserved for all CHEOPS visits, we used a single shared kernel. We found that the linear decorrelation is the most important, improving the log likelihood by about 1400, while the GP is responsible for a further improvement of about 450, which means that the use of a GP to model roll-angle flux behaviour is well justified.
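The two-tier roll-angle treatment can be illustrated with a short sketch; the arrays below are synthetic, the GP itself is omitted, and only the construction of the linear sin/cos basis and of the roll-angle-sorted input is shown:

```python
import numpy as np

# Synthetic stand-ins for one CHEOPS visit: roll angle (deg) and flux.
rng = np.random.default_rng(2)
roll = rng.uniform(0.0, 360.0, 700)
flux = 1.0 + 160e-6 * rng.standard_normal(roll.size)

# (1) Low-frequency roll-angle systematics: linear decorrelation basis from
#     sin(phi) and cos(phi), normalised to mu = 0 and sigma = 1.
phi = np.deg2rad(roll)
basis = np.column_stack([np.sin(phi), np.cos(phi)])
basis = (basis - basis.mean(axis=0)) / basis.std(axis=0)

# (2) Higher-frequency roll-angle noise: re-order the data so that it is
#     continuous in phi rather than in time, ready for a 1-D SHO-kernel GP.
order = np.argsort(phi)
phi_sorted, flux_sorted = phi[order], flux[order]
```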
As multi-planet systems typically have low eccentricities \(e\)(Van Eylen et al., 2019), and we lack the high number of RVs capable of resolving any differences in \(e\), we chose to fit only circular orbits. In order to guard against unphysical negative values, we used broad log-normal priors for the key transit and RV amplitude parameters, that is, for \(R_{\rm p}/R_{\rm s}\) (planet-to-star radius ratio) and \(K\) (RV semi-amplitude). The quantities derived in Sect. 3.1 are used as priors on the stellar parameters in the model. For all datasets --CHEOPS, TESS, and ESPRESSO--, we in
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Parameter [unit] & Description & HD 22946b & HD 22946c & HD 22946d \\ \hline \hline \(T_{\rm c}\) [BJD\({}_{\rm TDB}\)] & reference mid-transit time & 2 458 385.7321\({}^{+0.0002}_{-0.0021}\) & 2 459 16.6086\({}^{+0.00009}_{-0.00072}\) & 2 459 136.53720\({}^{+0.0008}_{-0.0003}\) \\ \(P_{\rm orb}\) [d] & orbital period & 4.040295.000015 & 9.573083.0 \(\pm\) 0.000014 & 47.42489\({}^{+0.0000}_{-0.00011}\) \\ \(b\) & impact parameter & \(0.21^{+0.11}_{-0.13}\) & 0.504\({}^{+0.024}_{-0.026}\) & 0.456\({}^{+0.025}_{-0.028}\) \\ \(a/R_{\rm s}\) & scaled semi-major axis & 1.03 \(\pm\) 0.12 & 19.61\({}^{+0.22}_{-0.02}\) & 57.00 \(\pm\) 0.66 \\ \(a\) [au] & semi-major axis & 0.05727\({}^{+0.0008}_{-0.0008}\) & 0.1017\({}^{+0.011}_{-0.0014}\) & 0.2958\({}^{+0.004}_{-0.0025}\) \\ \(R_{\rm p}/R_{\rm s}\) & planet-to-star radius ratio & 0.01119\({}^{+0.0003}_{-0.0002}\) & 0.01912\({}^{+0.00006}_{-0.00027}\) & 0.0214\({}^{+0.0004}_{-0.00045}\) \\ \(t_{\rm far}\) [d] & transit duration & 0.1281\({}^{+0.0003}_{-0.0003}\) & 0.1535\({}^{+0.0014}_{-0.00027}\) & 0.2701\({}^{+0.0003}_{-0.0003}\) \\ \(R_{\rm p}\) [R\({}_{\rm R}\)] & planet radius & 1.362 \(\pm\) 0.040 & 2.328\({}^{+0.028}_{-0.039}\) & 2.607 \(\pm\) 0.060 \\ \(M_{\rm p}\) [M\({}_{\rm a}\)] & planet mass\({}^{\star}\) & 13.71 & 9.72 & 26.57 \\ \(M_{\rm peak}\) [M\({}_{\rm b}\)] & estimated planet mass\({}^{\star}\) & 2.42 \(\pm\) 0.12 & 6.04 \(\pm\) 0.17 & 7.32 \(\pm\) 0.28 \\ \(M_{\rm peak}\) [M\({}_{\rm b}\)] & estimated planet mass\({}^{\circ}\) & 2.61 \(\pm\) 0.27\({}^{\dagger}\) & 6.61 \(\pm\) 0.17\({}^{\ddagger}\) & 7.90 \(\pm\) 0.28\({}^{\ddagger}\) \\ \(p_{\rm p}\) [g cm\({}^{-3}\)] & planet density\({}^{\star}\) & 18.96 & 3.15 & 10.80 \\ \(K\) [m s\({}^{-1}\)] & RV semi-amplitude\({}^{\star}\) & 5.05 & 2.70 & 4.31 \\ \(I_{\rm p}\) [W m\({}^{-2}\)] & insolation flux & 673 884\({}^{+32244}_{-3140}\) & 213 337\({}^{+10 270}_{-10 006}\) & 25 261\({}^{+1126}_{-1184}\) \\ \(T_{\rm surf}\) [K] & surface temperature\({}^{\circ}\) & 1241 \(\pm\) 14 & 931 \(\pm\) 11 & 546 \(\pm\) 6 \\ TSM & transmission spectroscopy metric1 & \multicolumn{1}{c}{} \\ \hline \hline \end{tabular}
\end{table}
Table 6: Best-fitting and derived system and planetary parameters of the HD 22946 planetary system.
cluded a jitter term using a wide log-normal prior. We then sampled the combined model using the PyMC_ext 'sample' function, which is specifically written for astrophysical applications, and allows us to group independent dataset parameters (e.g. the CHEOPS visit-specific decorrelation parameters) together, thereby speeding up sampling greatly. We used ten chains, tuning each for 1300 steps before sampling for a further 1800, resulting in 18 000 unique samples. The samples have effective sample sizes in the thousands, and the Gelman-Rubin statistics are below 1.01 for all parameters, suggesting they are sufficiently uncorrelated and unbiased. The full list of fitted GP hyperparameters and detrending parameters with the corresponding best-fitting values can be found in Appendix A.1. The best-fitting and derived parameters of the system are described and discussed in Sect. 4.
### Search for transit-timing variations
In order to look for potential TTVs, we also ran a combined model using unconstrained timing for each planetary transit thanks to the TTVorbit function of exoplanet, and an independent analysis using the Allesfitter software11(Gunther & Daylan, 2019, 2021), applying a nested sampling fit. Although C22 already performed such an analysis and found no obvious sign of TTVs in the system, we repeated this procedure, but in this case using the CHEOPS data as well. This means mainly that we included three transits of planet d in the analysis and used a longer time baseline. We used the same dataset as in Sect. 3.3, which was co-fitted with a GP using the celerite SHO kernel in both cases. All planetary and system parameters were fixed as derived previously; only the GP hyperparameters, the detrending parameters, and the observed-minus-calculated (O-C) parameters for individual mid-transit times were fitted. Both solutions are consistent with a linear ephemeris, which means we did not find any indication of a quadratic trend in the data, in agreement with the conclusion made by the discoverers. As an illustration, the obtained O-C diagram of the mid-transit times for planets b, c, and d from the Allesfitter package is depicted in Fig. 1. We can see that the O-C values are scattered around O-C = 0.0 d, which means that no significant TTVs are present in the system.
Footnote 11: See [https://www.allesfitter.com/home](https://www.allesfitter.com/home).
## 4 Final results and discussion
The best-fitting and derived parameters from the combined model are listed in Table 6, and the model posteriors of the host star are summarised in Appendix A.2. The fitted TESS light curves from sector numbers 3, 4, 30, and 31 are depicted in the panels of Figs. 2 and 3. The CHEOPS individual observations overplotted with the best-fitting models are shown in the panels of Fig. 4. The RV observations fitted with a spectroscopic orbit are depicted in Fig. 5.
Here, we present new ephemerides of the planetary orbits, which we calculated based on the combined model. Thanks to the combined TESS and CHEOPS observations, we were able to improve the reference mid-transit times and the orbital periods of the planets compared to the discovery values. C22 derived the orbital period parameter values of \(P_{\rm orb,b}=4.040301^{+0.000023}_{-0.000042}\) d and \(P_{\rm orb,c}=9.573096^{+0.000026}_{-0.000023}\) d, and expected an orbital period of \(P_{\rm orb}=46\pm 4\) d for planet d, which was estimated based on the transit duration and depth along with stellar mass and radius through Kepler's third law, assuming circular orbits. We confirmed this prediction, finding an orbital period for planet d of \(P_{\rm orb}=47.42489\pm 0.00011\) d. The improved ratios of the orbital periods are \(P_{\rm orb,c}/P_{\rm orb,b}=2.37\) and \(P_{\rm orb,d}/P_{\rm orb,c}=4.95\). Based on the _Kepler_ database, the adjacent planet pairs in multiple systems show a broad overall peak between period ratios of 1.5 and 2, followed by a declining tail to larger period ratios. In addition, there appears to be a sizeable peak just interior to the period ratio 5 (Steffen & Hwang, 2015); therefore, we can say that the period ratios in HD 22946 are consistent with these statistics and the seemingly large orbital gap between planets c and d is not anomalous.
In the combined model, we determined the impact parameter \(b\), which is the projected relative distance of the planet from the stellar disk centre during the transit midpoint in units of R\({}_{\rm s}\). Converting these parameter values to orbital inclination angles, we obtain \(i=88.90^{+0.16}_{-0.05}\) deg, \(i=88.52^{+0.08}_{-0.07}\) deg, and \(i=89.54^{+0.02}_{-0.03}\) deg for planets b, c, and d, respectively. For comparison, we note that the corresponding discovery values are \(i_{\rm b}=88.3^{+1.2}_{-1.2}\) deg and \(i_{\rm c}=88.57^{+0.86}_{-0.53}\) deg. The inclination angle of planet d was not determined by C22. According to the improved parameter values, it seems that only the orbits of planets b and c are well aligned. Planet d is probably not in the same plane as planets b and c.
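For reference, this conversion assumes a circular orbit and follows directly from the definition of the impact parameter; for example, the best-fitting values of planet d give

\[\cos i=\frac{b}{a/R_{\rm s}},\qquad i_{\rm d}=\arccos\!\left(\frac{0.456}{57.00}\right)\approx 89.54\ {\rm deg}.\]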
Based on the combined TESS and CHEOPS photometry observations, we redetermined the radii of the planets, which are \(1.362\pm 0.040\) R\({}_{\rm\oplus}\), \(2.328\pm 0.039\) R\({}_{\rm\oplus}\), and \(2.607\pm 0.060\) R\({}_{\rm\oplus}\) for planets b, c, and d, respectively. The CHEOPS observations are an added value, because compared to the corresponding parameter values presented in C22 (\(R_{\rm p,b}=1.72\pm 0.10\) R\({}_{\rm\oplus}\), \(R_{\rm p,c}=2.74\pm 0.14\) R\({}_{\rm\oplus}\), and \(R_{\rm p,d}=3.23\pm 0.19\) R\({}_{\rm\oplus}\)), there is a noticeable improvement in radius precision. Using TESS and CHEOPS photometry observations, the uncertainties on the planet radius parameter values were decreased by \(\sim 50\%\), \(68\%\), and \(61\%\) for planets b, c, and d, respectively. We also note that the parameter values from this work are in stark contrast to those derived by C22; these authors found significantly larger radii, that is, larger by \(\sim 21\%\), \(15\%\), and \(19\%\) for planets b, c, and d, respectively. We believe this may be due to a misunderstanding of the LimbDarkLightCurve function in exoplanet. The function requires the planetary radius \(R_{\rm p}\) in solar radii rather than the planet-to-star radius ratio \(R_{\rm p}/R_{\rm s}\). When misused, the result is an inflation of the \(R_{\rm p}/R_{\rm s}\) and \(R_{\rm p}\) values by a factor of \(R_{\rm s}/R_{\odot}\), which in this case corresponds to an inflation of about \(15\%\)-\(21\%\). This mistake can be
Figure 1: Observed-minus-calculated (O-C) diagram of mid-transit times of the planets HD 22946b, HD 22946c, and HD 22946d obtained using the Allesfitter package. The O-C values are consistent with a linear ephemeris, which means no significant TTVs are present in the system.
seen most clearly in C22, when comparing the models shown in Figure 5 with the implied depths in Table 4 (likely derived from the radius ratio), which are inflated by this factor. Such a mistake was also evident during the reanalysis of the BD+40 2790 (TOI-2076) system (Osborn et al., 2022).
According to the radius valley at \(\sim 1.5-2.0\) R\({}_{\oplus}\), which separates super-Earths and sub-Neptunes (Fulton et al., 2017; Van Eylen et al., 2018; Martinez et al., 2019; Ho & Van Eylen, 2023), and based on the refined planet radii, we find that planet b is a super-Earth, and planets c and d are similar in size and are sub-Neptunes, in agreement with C22. It is well known that
Figure 2: TESS observations of the transiting planets HD 22946b, HD 22946c, and HD 22946d from sector numbers 3, 4, and 30 overplotted with the best-fitting model. This model was derived based on the entire CHEOPS and TESS photometric dataset and the RV observations from ESPRESSO via joint analysis of the data. The left-hand panels show the non-detrended data overplotted with the full model, while the right-hand panels show the detrended data overplotted with the transit model. We averaged the TESS data for better visualisation of the transit events using a running average with steps and width of 0.009 and 0.09 d, respectively. We note that an interruption in communications between the instrument and spacecraft occurred at 2 458 418.54 \(\rm{BJD_{TDB}}\), resulting in an instrument turn-off until 2 458 421.21 \(\rm{BJD_{TDB}}\). No data were collected during this period.
small exoplanets have bimodal radius distribution separated by the radius valley. Potential explanations focus on atmospheric-escape-driven mechanisms, such as photo-evaporation; see for example Owen (2019). The models showed that those planets that have radius below 1.5 R\({}_{\oplus}\) were planets that initially had hydrogen/helium atmospheres, but ultimately lost them due to atmospheric escape, while those just above 2.0 R\({}_{\oplus}\) had hydrogen/helium atmosphere masses of \(\sim\) 1% of the core mass. Having HD 22946 planets on either side of the valley means that planet b could be a photo-evaporated version of planets c and d. Recently, Luque & Palle (2022) presented a brand new approach, arguing that the density of planets might provide more information than planet radii alone and proposing that a density gap separates rocky from water-rich planets. For M dwarf systems, these authors found that rocky planets form within the ice line while water worlds formed beyond the ice line and migrated inwards. Given that theoretical models predict similar results for stars of other types, this scenario could also be possible in the case of the planets orbiting HD 22946.
Due to the low number of RVs, here we present only the 3\(\sigma\) upper limits for the planet masses in agreement with the discoverers. C22 obtained the 3\(\sigma\) upper mass limits of about 11 M\({}_{\oplus}\), 14.5 M\({}_{\oplus}\), and 24.5 M\({}_{\oplus}\) for planets b, c, and d, respectively, from the same spectroscopic observations. The 3\(\sigma\) upper limits for the planet masses from this work are \(M_{\rm p,b}=13.71\) M\({}_{\oplus}\), \(M_{\rm p,c}=9.72\) M\({}_{\oplus}\), and \(M_{\rm p,d}=26.57\) M\({}_{\oplus}\). Similarly to the discoverers, we obtained very different upper mass limits for planets c and d, although they have similar planet radii, which could be due to a somewhat different internal structure of these planets. Applying the relations of Chen & Kipping (2017) and Otegi et al. (2020), we also re-estimated the planet masses, which were previously forecasted by the discoverers as \(6.29\pm 1.30\) M\({}_{\oplus}\), \(7.96\pm 0.69\) M\({}_{\oplus}\), and \(10.53\pm 1.05\) M\({}_{\oplus}\) for planets b, c, and d, respectively. The improved parameter values are presented in Table 6. Furthermore, taking into account the estimated planet masses calculated based on the relations of Otegi et al. (2020), we predicted the number of additional RV measurements required to achieve a 3\(\sigma\) detection on each mass using the Radial Velocity Follow-up Calculator12 (RVFC; see Cloutier et al. 2018), and the RV simulator (Wilson et al., in preparation). Based on these simulations, we found that another 27, 24, and 48 ESPRESSO RVs are needed to measure the predicted masses of planets b, c, and d, respectively. The expected RV semi-amplitudes assuming the estimated planet masses are \(K_{\rm b}=1.10\pm 0.12\) m s\({}^{-1}\), \(K_{\rm c}=2.08\pm 0.10\) m s\({}^{-1}\), and \(K_{\rm d}=1.46\pm 0.08\) m s\({}^{-1}\).
Footnote 12: See [http://maestria.astro.umontreal.ca/rvfc/](http://maestria.astro.umontreal.ca/rvfc/).
C22 also probed the planets from the viewpoint of future atmospheric characterisation using the transmission spectroscopy metric (TSM); see Eq. 1 in Kempton et al. (2018). The authors obtained the TSM values of \(65\pm 10\), \(89\pm 16\), and \(67\pm 14\) for planets b, c, and d, respectively. We revised these values based on the results from the present work. The improved TSM values (see Table 6) do not satisfy the recommended value of TSM \(>90\) for planets with a radius of \(1.5<R_{\rm p}<10\) R\({}_{\oplus}\). On the other hand, given that this threshold is set very rigorously, in agreement with the discoverers, we can note that planet c could be a feasible target for transmission spectroscopy observations with future atmospheric characterisation missions, such as the planned _Ariel_ space observatory (Tinetti et al., 2021).
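As an illustration of how this metric scales with the system parameters, a minimal sketch of Eq. 1 of Kempton et al. (2018) applied to planet c is given below. The scale factors are those tabulated by Kempton et al.; the inputs are taken from Table 6, with the tabulated temperature used as the equilibrium temperature, so the resulting number is indicative only.

```python
# Minimal sketch of the transmission spectroscopy metric (TSM),
# Eq. 1 of Kempton et al. (2018).
def tsm(r_p, m_p, t_eq, r_s, j_mag):
    """r_p [R_Earth], m_p [M_Earth], t_eq [K], r_s [R_Sun], j_mag [mag]."""
    if r_p < 1.5:
        scale = 0.190
    elif r_p < 2.75:
        scale = 1.26
    elif r_p < 4.0:
        scale = 1.28
    else:
        scale = 1.15
    return scale * r_p**3 * t_eq / (m_p * r_s**2) * 10.0**(-j_mag / 5.0)

# HD 22946c with the estimated planet mass from Table 6 (indicative only):
print(round(tsm(r_p=2.328, m_p=6.04, t_eq=931.0, r_s=1.117, j_mag=7.25)))
```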
Finally, we discuss the relevance of planet d among the known population of similar exoplanets. HD 22946d represents a warm sub-Neptune. Based on the NASA Exoplanet Archive13(Akeson et al., 2013), there are 5272 confirmed exoplanets up to 22 February 2023, but only 63 planets out of 5272 are sub-Neptune sized (\(1.75<R_{\rm p}<3.5\) R\({}_{\oplus}\)) and transiting bright stars (\(G\leq 10\) mag). Only 7 planets out of 63 have orbital periods longer than 30 days and only 4 planets out of 7 have an equilibrium temperature of below 550 K. Three planets have a lower insolation flux than planet d, namely TOI-2076d (Osborn et al., 2022), HD 28109d (Dransfield et al., 2022), and HD 191939 (Badenas-Agusti et al., 2020). HD 22946d is therefore an interesting target for future follow-up observations. One of the questions to be answered in the near future is the composition and internal structure of sub-Neptune-type planets. Using CHEOPS observations, we determined the radius of planet d with high accuracy. Its true mass could be determined with another 48 ESPRESSO RV measurements according to the estimate we present above. A combination of mass and radius gives the overall density, which will be an important step forward towards understanding sub-Neptunes.
Footnote 13: See [https://exoplanetarchive.ipac.caltech.edu/index.html](https://exoplanetarchive.ipac.caltech.edu/index.html).
## 5 Conclusions
Based on the combined TESS and CHEOPS observations, we refined several parameters of the HD 22946 planetary system.
Figure 3: As in Fig. 2, but for the TESS sector number 31.
First of all, we improved the ephemerides of the planetary orbits in comparison with the discovery values. We can confirm that planets b and c have short orbital periods below 10 days, namely \(4.040295\pm 0.000015\) d and \(9.573083\pm 0.000014\) d, respectively. The third planet, HD 22946d, has an orbital period of \(47.42489\pm 0.00011\) d, which we were able to derive based on additional CHEOPS observations. Furthermore, based on the combined TESS and CHEOPS observations, we derived precise radii for the planets, which are \(1.362\pm 0.040\) R\({}_{\oplus}\), \(2.328\pm 0.039\) R\({}_{\oplus}\), and \(2.607\pm 0.060\) R\({}_{\oplus}\) for planets b, c, and d, respectively. On the one hand, we can confirm the conclusion of the discoverers that the planetary system consists of a super-Earth, and planets c and d are sub-Neptunes. On the other hand, we find the planet radii values to be in tension with the values presented in the discovery paper, which is very probably due to misuse of the software by the discoverers. The low number of ESPRESSO RV measure
Figure 4: Individual CHEOPS observations of the transiting planets HD 22946b, HD 22946c, and HD 22946d. The observed light curves are overplotted with the best-fitting model. This model was derived based on the entire CHEOPS and TESS photometric dataset and the RV observations from ESPRESSO via joint analysis of the data. The left-hand panels show the non-detrended data overplotted with the full model, while the right-hand panels show the detrended data overplotted with the transit model. In the case of the fourth CHEOPS visit, which contains a multiple transit feature, the individual transit models of planets c and d are also shown in addition to the summed model.
ments allowed us to derive only the 3\(\sigma\) upper limits for the planet masses, which are 13.71 M\({}_{\oplus}\), 9.72 M\({}_{\oplus}\), and 26.57 M\({}_{\oplus}\) for planets b, c, and d, respectively.
We also investigated the planets from the viewpoint of possible future follow-up observations. First of all, we can conclude that more RV observations are needed to improve the planet masses in this system. The applied spectroscopic observations allowed us to derive precise stellar parameters of the host star and to fit an initial spectroscopic orbit to the RV data, but there is ample room for improvement in this way. We estimated that another 48 ESPRESSO RVs are needed to measure the predicted masses of all planets in HD 22946. Planet c could be a suitable target for future atmospheric characterisation via transmission spectroscopy. We can also conclude that planet d as a warm sub-Neptune is very interesting, because there are only a few similar confirmed exoplanets to date. Thanks to the synergy of TESS and CHEOPS missions, there is a growing sample of planets, such as HD 22946d. Such objects are worth investigating in the near future, for example in order to investigate their composition and internal structure. Finally, we can mention that future photometric and/or spectroscopic observations could also be oriented to searching for further possible planets in this system.
###### Acknowledgements.
We thank the anonymous reviewer for the helpful comments and suggestions. CHEOPS is an ESA mission in partnership with Switzerland with important contributions to the payload and the ground segment from Austria, Belgium, France, Germany, Hungary, Italy, Portugal, Spain, Sweden, and the United Kingdom. The CHEOPS Consortium would like to gratefully acknowledge the support received by all the agencies, offices, universities, and industries involved. Their flexibility and willingness to explore new approaches were essential to the success of this mission. This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This research has made use of the Exoplanet Follow-up Observation Program (ESoForP); DOI: 10.261-3624/ESo(F05) website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. ZG acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFHH) grant K-125015, the PRODEX Experiment Agreement No. 4000137122 between the ELTE Eovos L'enard University and the European Space Agency (ESA-D/SCI-LE-2021-0025), the VEGA grant of the Slovak Academy of Sciences No. 2003/12/5. The Slovak Research and Development Agency contract No. Avry-20-0148, and the support of the city of the Szombatiehy. GyMSz acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFHH) grant K-125015, a PRODEX Institute Agreement between the ELTE Eovos L'enard University and the European Space Agency (ESA-D/SCI-LE-2021-0025), the Lendulet LP2018-7/2021 grant of the Hungarian Academy of Science and the support of the city of Sromblaumbsky. ABr was supported by the SNSA. ACC acknowledges support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1. B-O. D. acknowledges support from the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00046. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (project Fox Aces; grant agreement No 724427). It has also been carried out in the frame of the National Centre for Competence in Research Planets supported by the Swiss National Science Foundation (SNSF). DE acknowledges financial support from the Swiss National Science Foundation for project 200021_200726. DG gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 "Gascoson vocok?" Unveiling the nature of small worlds". This work was also partially supported by a grant from the Simons Foundation (PI Queloz, grant number 327127). 
This work has been carried out within the framework of the NCCP Planets supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. IRI acknowledges support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grant PGC2018-098153-B-C33, as well as the support of the Generalitat de Catalunya/CERCA programme. This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investisements d'Avenir supervised by the Agence Nationale pour la Recherche. KGI and MNG are the ESA CHEOPS Project Scientists and are responsible for the ESA CHEOPS Guest Observers Programme. They do not participate, in or contribute to, the definition of the Guaranteed Time Programme of the CHEOPS mission through which observations described in this paper have been taken, nor to any aspect of target selection for the programme. The Belgian participation to CHEOPS has been supported by the Belgian Federal Science Policy Office (BELSO) in the framework of the PRODEX Program, and by the University of l'ige through an ARC grant for Concerted Research Actions financed by the Wallonia-Brussels Federation: L.D. is an FR.S-FNRS Postdoctoral Researcher. LMS gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 "Gasocus or rocky". Unveiling the nature of small worlds'. This project was supported by the CNES, MF and CMP gratefully acknowledge the support of the Swedish National Space Agency (DNRG/519, 174/18), M.G. is an FR.S-FNRS Senior Research Associate. ML acknowledges support of the Swiss National Science Foundation under grant number PCEFP. 2194576 grant number ST/R000438/1. This work was supported by FCT - Fundacao para a Ciencia e a Tecnologia through national funds and by FEDER through COMPETE2002- Programa Operacional Comorbidities e Tecnologia (PID/04434/2019, UIDB/04434/2020, UIDP/04434/2020, PTDC/FIS-AST/32113/2017 & POCI-01-0145-FEDER-032113, PTDC/FIS-AST/28953/107 & POCI-01415-FEDER-028953, PTDC/FIS-AST/28798/971 & POCI-01-0145-FEDER-028987, O.D.S. is supported in the form of work contract (DL 57/2016/CP1364/CT0004) funded by national funds through FCT. PM acknowledges support from STFC research grant number ST/R0010401/1. We acknowledge support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grants ESP2016-80435-T-26, FSE2018-0435-C-2-R, PGC2018-098153-B-C3, PGC2018-098153-B-C3, SF2017-8876-C5-1-R, MDM-2017-0737 Unidad de Excelencia Maria de Maeztu-Centro de Astrobiologia (IATA-CSIC), as well as the support of the Generalitat de Catalunya/CERCA programme. The 30C activities have been supported by the ESA contract No. 4000124370. SH gratefully acknowledges CNES funding through FCT contracts nr. IF/01321/204/CP1215/CT0004. S.G.S. acknowledges support from FCT support from FCT through FCT contract nr. CEECIND/00826/2018 and POP/FEE (ECEC). ACC and INV acknowledge support from STFC consolidated ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1. V.G. is an FR.S-FNRS Research Associate. XB, SC, DG, MF and JL acknowledge their role as ESA-appointed CHEOPS science team members. YA and MH acknowledge the support of the Swiss National Fund under grant 200020_172746. LBo, VNa, IPA, GP! RRA and G5 acknowledge support from CHEOPS ASI-INAF agreement in 2019-29H-0.0. 
NCS acknowledges support from the European Research Council through the grant agreement 101052347 (FIERCE). This work was supported by FCT - Fundacao para a Ciencia e a Tecnologia through national funds and by FEDER through
Figure 5: RV observations taken at ESPRESSO fitted with a spectroscopic orbit (red line). The 1\(\sigma\) and 2\(\sigma\) uncertainties of the model are plotted as coloured areas. The uncertainties of the individual RV data points correspond to a 3\(\sigma\) interval.
COMPETE2020 - Programa Operacional Competitividade Internacionalizacao by these grants: UIDB04434/2020; UIDP/04434/2020. AT thanks the Science and Technology Facilities Council (STFC) for a PhD studentship. P.E.C. is funded by the Austrian Science Fund (FWF) Erwin Schroedinger Fellowship, program J4595-N.
# On the usefulness of linear types for correct nonce use enforcement during compile time
###### Abstract
Cryptographic algorithms and protocols often need unique random numbers as parameters (e.g. nonces). Failure to satisfy this requirement leads to vulnerable implementations and can result in a security breach. We show how linear types and static type checking can be used to enforce the correct generation of a new unique random number for each function invocation.
Keywords: Secure coding, Nonce, Linear types, Rust.
## 1 Motivation
The security of various cryptographic constructions relies on unique or even unpredictable values. Examples are nonces in cryptographic protocols, initialization vectors in modes of symmetric encryption, salts in password-based key derivation functions, etc. These values are often generated as random numbers of a prescribed length.
Programmers who are not experts in cryptography may believe that it is not strictly necessary to generate a new random number every time. Programmers can be lazy and provide some numeric constant instead of a new random number for each use. After all, the cryptographic construction will "correctly"1 work even with this fixed numeric constant. However, if the no-reuse principle is not followed, it can lead to a serious security vulnerability in the resulting application (which is not visible at first glance). A well-known example is the forbidden attack on AES-GCM [1], but see also e.g. [2].
Footnote 1: Depending on the construction it can for example still correctly encrypt and decrypt messages.
We show how to implement a cryptographic library that would allow the compiler to detect incorrect (i.e. repeated) use of such one-time random numbers at compile time. We will divide this task into two parts:
1. In the first part, we ensure that the function expecting a random number gets as an argument a random number generated by an "approved" method, e.g. a true random number generated by a special hardware device and not just a software-generated pseudorandom number; alternatively, we can enforce the usage of any chosen specific software implementation.
For this first part, we will utilise abstract data types with a hidden data constructor.
2. In the second part, we will ensure that once the generated random number has been used, it cannot be reused a second time. For this part, we will use linear types. We illustrate the approach in the Rust programming language, but the idea can be used in any programming language with linear types.
## 2 Abstract data types
An abstract data type (ADT) is defined by its behaviour (e.g. operations like insert or delete). However, the implementation details are hidden from its users. The implementers have the flexibility to use arbitrary data structures internally or even to change their approach in the future. As long as the external behaviour (interface) remains unchanged, all existing code that uses this ADT can function without requiring any updates to adapt to modifications made to the internal implementation.
ADTs are commonly used and supported in many standard programming languages, for example C++, Java, Pascal. ADTs are usually realised as modules or objects concealing the internal implementation and exposing only the public interface. For example, if we want to realise the stack (LIFO data structure) as an ADT, we will provide public functions like push, pop, ... and a type Stack for variables holding values of this ADT. But the important aspect is that we do not provide the client with any information on how the stack is internally implemented. It may be a linked list or an array or something totally different. We also do not provide any external means for creating a new stack (because external users do not know the internal details of the Stack type). The only possibility to create a new stack is to call some function from the module, which returns a new Stack value (or to create a new instance if objects are used instead of modules).
ADTs are useful for constraining access and preventing invalid states. By creating the stack as an ADT, the implementer of the module can maintain strict control over its representation. A client has no way to accidentally (or maliciously) alter any of the stack representation invariants.
We can use this technique to create a nonce module in Rust with a Nonce abstract data type (see Listing 1). We have created a public struct type Nonce with a private random value of type u128. A client cannot directly create structs that have private fields; in this case, for example, it is invalid to write let nonce = Nonce { val: 42 }. The only way for the client to create a nonce is to call a public constructor method, let mut nonce = nonce::Nonce::new(). Because the client needs to call the new method, we can guarantee that on line 10 we, as implementers, choose the right system function to generate a new random number (e.g. we may use a hardware RNG).
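The Listing 1 referenced above is not shown here; the following is only a minimal sketch of a module matching the description. The use of the `rand` crate via `rand::random::<u128>()` is an assumption made purely for illustration, since the text merely requires an "approved" generator (e.g. a hardware RNG) inside `new`.

```
// Sketch only (not the paper's Listing 1): a nonce module as described in the text.
pub mod nonce {
    use std::ops::Deref;

    // Public type with a *private* field: outside this module it is invalid
    // to write `Nonce { val: 42 }`.
    pub struct Nonce {
        val: u128,
    }

    // Note: Copy/Clone are deliberately not implemented, so a Nonce value is
    // moved (consumed) when passed by value.
    impl Nonce {
        // The only way to obtain a Nonce; the implementer controls which
        // generator is used here (assumed: the `rand` crate, but a hardware
        // RNG could be substituted).
        pub fn new() -> Nonce {
            Nonce { val: rand::random::<u128>() }
        }

        pub fn get(&self) -> u128 {
            self.val
        }
    }

    // Deref lets callers write `*nonce` instead of `nonce.get()` (used in Listing 3).
    impl Deref for Nonce {
        type Target = u128;
        fn deref(&self) -> &u128 {
            &self.val
        }
    }
}
```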
While abstract types are a powerful means of controlling the structure and creation of data, they are not sufficient to limit the ordering and number of uses
of values and functions. As another example, consider files: there is no (static) way to prevent a file from being read after it has been closed. Also, we cannot stop a client from closing a file twice or forgetting to close a file at all. In our case, there is no static way to stop the client from using one nonce value multiple times with ADTs alone. But this can be enforced in programming languages with linear types.
### Linear types
Before presenting our proposed solution (using linear types), we want to quickly recapitulate what linear types are [5] and how they are implemented in the well-known Rust programming language [3].
Linear types are a special case of substructural type systems, which are particularly useful for constraining interfaces that provide access to system resources such as files and locks and, as we will show, for constraining random number reuse. Substructural type systems augment standard type abstraction mechanisms with the ability to control the number and order of uses of a data structure or operation, which is exactly what we need.
### Structural Properties
Let us discuss three basic _structural_ properties. The first property, _exchange_, indicates that the order in which we write down variables in the context is irrelevant. A corollary of exchange is that if we can type check a term with the context \(\Gamma\), then we can type check that term with any permutation of the variables in \(\Gamma\).
\[\frac{\Gamma_{1},x\!:\!\tau_{x},y\!:\!\tau_{y},\Gamma_{2}\vdash e\!:\!\tau}{\Gamma_{1},y\!:\!\tau_{y},x\!:\!\tau_{x},\Gamma_{2}\vdash e\!:\!\tau}\] (Exchange)
The second property, _weakening_, indicates that adding extra, unneeded assumptions to the context does not prevent a term from type checking.
\[\frac{\Gamma\vdash e\!:\!\tau}{\Gamma,x\!:\!\tau_{x}\vdash e\!:\!\tau}\] (Weakening)
Finally, the third property, _contraction_, states that if we can type check a term using two identical assumptions (\(x_{2}\!:\!\tau_{x}\) and \(x_{3}\!:\!\tau_{x}\)), then we can check the same term using a single assumption.

\[\frac{\Gamma,x_{2}\!:\!\tau_{x},x_{3}\!:\!\tau_{x}\vdash e\!:\!\tau}{\Gamma,x_{1}\!:\!\tau_{x}\vdash[x_{2}\mapsto x_{1},x_{3}\mapsto x_{1}]e\!:\!\tau}\] (Contraction)
### Substructural Type Systems
A _substructural type system_ is any type system that is designed so that one or more of the structural properties do not hold [5]. Different substructural type systems arise when different properties are withheld.
#### 2.3.1 Linear type systems
ensure that every variable is used exactly once by allowing exchange but not weakening or contraction.
#### 2.3.2 Affine type systems
ensure that every variable is used at most once by allowing exchange and weakening, but not contraction.
#### 2.3.3 Relevant type systems
ensure that every variable is used at least once by allowing exchange and contraction, but not weakening.
#### 2.3.4 Ordered type systems
ensure that every variable is used exactly once and in the order in which it is introduced. They do not allow any of the structural properties.
Figure 1 can serve as a mnemonic for the relationship between these systems. The system at the bottom of the diagram (the ordered type system) admits no structural properties. As we proceed upwards in the diagram, we add structural properties: E stands for exchange; W stands for weakening; and C stands for contraction. It might be possible to define type systems containing other combinations of structural properties, such as contraction only or weakening only, but so far researchers have not found applications for such combinations [5]. Consequently, they are excluded from the diagram.
### Rust
Ownership is Rust's most unique feature. It enables Rust to make memory safety guarantees without needing a garbage collector. The feature is straightforward to explain. In Rust, memory is managed through a system of ownership with a set of rules that the compiler checks at compile time. None of the ownership features slow down the program while it is running (unlike garbage collection).
#### 2.4.1 Ownership rules
* Each value in Rust has a variable that is called its _owner_.
* There can be only one owner at a time.
* When the owner goes out of scope, the value will be dropped (memory will be deallocated).
We demonstrate these rules in Listing 2. On line 2 we create a string and assign its value to variable s1. This variable is now the only owner of the string. Then on line 4 we move the value from variable s1 to a new owner, variable s2. Now s2 is the only owner of the string value. That is why we cannot use variable s1 on line 5 to lend the string value to the println! macro, but we could use s2 for this. When s2 goes out of scope, the string value can be deallocated from memory.
```
1 fn borrowing() {
2     let s1 = String::from("Hello");
3     // move occurs because `String` does not implement the `Copy` trait
4     let s2 = s1; // value moved here
5     println!("{}, world!", s1); // value borrowed here after move
6 }
```
Listing 2: Example of ownership rules
Figure 1: Relationship between linear and other substructural type systems.
This is illustrated in Fig. 2. After assigning the value from s1 to s2, variable s2 points to the same memory on the heap, but s1 cannot be used for dereferencing anymore. This mechanism is used primarily for memory management without the need for a garbage collector or explicit deallocation. We will use these ownership rules to constrain nonce usage.
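As a small aside (a sketch, not part of the paper), the same move semantics already rule out the file misuses mentioned earlier (reading after close, closing twice): if a handle's close method takes self by value, both mistakes become compile-time errors. The names used below (Handle, open, read, close) are illustrative only.

```
// Sketch: a file-like handle whose close() consumes the handle,
// so use-after-close and double-close are rejected at compile time.
struct Handle {
    name: String,
}

impl Handle {
    fn open(name: &str) -> Handle {
        Handle { name: name.to_string() }
    }

    fn read(&self) -> String {
        format!("data from {}", self.name)
    }

    // Takes `self` by value: the handle is moved into close() and dropped there.
    fn close(self) {
        println!("closing {}", self.name);
    }
}

fn main() {
    let h = Handle::open("example.txt");
    println!("{}", h.read());
    h.close();
    // h.read();  // would not compile: value used here after move
    // h.close(); // would not compile: cannot close the handle twice
}
```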
## 3 The solution
The solution is syntactically very simple in Rust because it aligns well with idiomatic Rust. Usually, when functions in Rust take arguments, they are passed as references (with & before the variable name). This way the value is not moved into the parameter from the local variable (it is only borrowed). We prevent this for the Nonce type: the function takes the nonce by value, and because Nonce does not implement the Copy trait, the value is moved rather than copied. Traits are similar to interfaces in other languages. To read more about traits see, for example, [4].
```
1 fn need_new_random_u128_every_time(nonce: nonce::Nonce) {
2     let _tmp = nonce.get();
3     println!("Nonce param value: {}", nonce.get());
4     println!("Nonce param value: {}", *nonce);
5 }
```
Listing 3: Example of function with nonce as argument
Figure 2: Memory representation of variables s1 and s2
In Listing 3 we implement the function need_new_random_u128_every_time to demonstrate the signature of a function that requires a new random value for every call. The body of the function is not significant, but it shows that the nonce value can be used repeatedly inside the library implementation, which is often needed. We also implement the Deref trait, so * can be used on line 4 instead of the longer nonce.get() from the line above.
When the function need_new_random_u128_every_time is called, ownership of the value is moved from the local variable to the argument, and thus the local variable cannot be used anymore. As an example, if in Listing 4 we comment out line 7, we will get the compile-time error "value used here after move" on the next line.
```
1  fn main() {
2      // structs with private fields
3      // can be created only using public constructors
4      let mut nonce = nonce::Nonce::new();
5      need_new_random_u128_every_time(nonce);
6
7      nonce = nonce::Nonce::new();
8      need_new_random_u128_every_time(nonce);
9
10     need_new_random_u128_every_time(nonce::Nonce::new());
11 }
```
Listing 4: Example of nonce usage
## 4 Conclusion
We have demonstrated how to use ADTs and linear types in Rust to enforce the freshness of nonces for library function calls. In Rust, the syntax is very straightforward. This solution can also be implemented in other languages with linear types, such as Haskell, which has experimental support for linear types since version 9.0.1, although the syntax in that case is not as clear as in Rust.
#### Acknowledgements
This publication is the result of support under the Operational Program Integrated Infrastructure for the project: Advancing University Capacity and Competence in Research, Development and Innovation (AC-CORD), co-financed by the European Regional Development Fund.
|
2307.12123 | A New Bayesian Huberised Regularisation and Beyond | Robust regression has attracted a great amount of attention in the literature
recently, particularly for taking asymmetricity into account simultaneously and
for high-dimensional analysis. However, the majority of research on the topics
falls in frequentist approaches, which are not capable of full probabilistic
uncertainty quantification. This paper first proposes a new Huberised-type of
asymmetric loss function and its corresponding probability distribution which
is shown to have the scale-mixture of normals. Then we introduce a new Bayesian
Huberised regularisation for robust regression. A by-product of the research is
that a new Bayesian Huberised regularised quantile regression is also derived.
We further present their theoretical posterior properties. The robustness and
effectiveness of the proposed models are demonstrated in the simulation studies
and the real data analysis. | Sanna Soomro, Keming Yu, Yan Yu | 2023-07-22T16:43:34Z | http://arxiv.org/abs/2307.12123v1 | # A New Bayesian Huberised Regularisation and Beyond
###### Abstract
Robust regression has attracted a great amount of attention in the literature recently, particularly for taking asymmetricity into account simultaneously and for high-dimensional analysis. However, the majority of research on the topics falls in frequentist approaches, which are not capable of full probabilistic uncertainty quantification. This paper first proposes a new Huberised-type of asymmetric loss function and its corresponding probability distribution which is shown to have the scale-mixture of normals. Then we introduce a new Bayesian Huberised regularisation for robust regression. A by-product of the research is that a new Bayesian Huberised regularised quantile regression is also derived. We further present their theoretical posterior properties. The robustness and effectiveness of the proposed models are demonstrated in the simulation studies and the real data analysis.
_Keywords:_ Asymmetric Huber loss function, Bayesian elastic net, Bayesian lasso, Quantile regression, Robustness
## 1 Introduction
Robust regression methods have a wide range of applications and have attracted a great amount of attention in the literature recently, particularly for taking asymmetricity into account simultaneously and for high-dimensional analysis, such as the adaptive Huber regression (Sun et al. (2020)) and the asymmetric Huber loss and asymmetric Tukey's biweight loss functions for robust regression (Fu and Wang (2021)). The Lasso (Tibshirani (1996)) and the Elastic Net (Zou and Hastie (2005)) are some popular choices for regularising regression coefficients. The former has the ability to automatically set irrelevant coefficients to zero. The latter retains this property and the effectiveness of the ridge penalty, and it deals with highly correlated variables more effectively. Robust regularisation methods for quantile regression provide a promising technique for variable selection and model estimation in the presence of outliers or heavy-tailed errors (Li and Zhu (2008); Wu and Liu (2009); Belloni and Chernozhukov (2011); Su and Wang (2021)). However, the majority of research on these topics falls within frequentist approaches, which are not capable of full probabilistic uncertainty quantification. Quantile regression, particularly Bayesian quantile regression, enjoys some robustness (for example, the median is more robust than the mean), but it has different modelling aims from robust regression.
Exploring unconditional Bayesian regularisation prior, such as the Bayesian lasso (Park and Casella (2008)) and the Bayesian elastic net (Li and Lin (2010)), for robust regression is not straightforward. Several issues may arise. The joint posterior may be multimodal, which slows down the convergence of the Gibbs sampler and the point estimates may be computed through multiple modes, which lead to the inaccurate estimators (Kyung et al. (2010); Park and Casella (2008)). The choices of the hyperparameters in gamma priors of regularisation parameters may also have strong influences on the posterior estimates. For the former, it was firstly observed by Park and Casella (2008) in the Bayesian lasso. For the latter, it is common to employ invariant prior on scale parameter (Berger (1985)). Cai and Sun (2021) address these two issues by introducing the scale parameter to the Bayesian lasso and its generalisation for quantile regression. Moreover, Kawakami and Hashimoto (2023) use the scale parameter of the hyperbolic loss function (Park and Casella (2008)) to propose the Bayesian Huberised lasso, which is the robust version of Bayesian lasso. Along this line, we will propose Bayesian Huberised regularisation in this paper.
Quantile regression introduced by Koenker and Bassett (1978) is a useful supplement to ordinary mean regression in statistical analysis, owing to its robustness property and its ability to offer unique insights into the relation between the response variables and the predictors that are not available in doing mean regression. Recently, the Bayesian approaches for variable selection in quantile regression have also attracted much attention in research area (Li et al. (2010); Alhamzawi et al. (2012); Alhamzawi and Yu (2012); Alhamzawi and Yu (2013); Chen et al. (2013); Reich and Smith (2013); Alhamzawi (2016); Alshaybaee et al. (2017); Adlouni et al. (2018); Alhamzawi et al. (2019)). In Bayesian quantile regression, the error distribution would usually be assumed to follow asymmetric Laplace distribution proposed by Yu and Moyeed (2001) that guaranteed posterior consistency of Bayesian estimators (Sriram et al. (2013)) and robustness (Yu and Moyeed (2001)). Furthermore, Alhamzawi et al. (2012) adopt the inverse gamma prior density to the penalty parameters and treated its hyperparameters as unknown and estimated them along with other parameters. This allows the different regression coefficients to have different penalisation parameters, which improves the predictive accuracy. Quantile regression, particularly Bayesian quantile regression enjoys some of robustness such as median more robust than mean, but has different modelling aims from robust regression.
Therefore, this paper first proposes a new Huberised-type asymmetric loss function and its corresponding probability distribution, which is shown to have a scale-mixture-of-normals representation. Then we introduce a new Bayesian Huberised regularisation for robust regression. Furthermore, by taking advantage of the good quantile property of this probability distribution, we develop Bayesian Huberised lasso quantile regression and Bayesian Huberised elastic net quantile regression. This results in the proposed models covering both Bayesian robust regularisation and Bayesian quantile regularisation. Besides, Cai and Sun (2021) emphasise that posterior impropriety does exist in Bayesian lasso quantile regression and its generalisation when the prior on regression coefficients is independent of the scale parameter. Thus, we will discuss some properties of the Bayesian Huberised regularised quantile regression, including posterior propriety and posterior unimodality. The approximate Gibbs sampler of Kawakami and Hashimoto (2023) is adopted to enable the data-dependent estimation of the tuning robustness parameter in the fully Bayesian hierarchical model. The advantage of this sampling step is that it does not
require cross validation evaluation of tuning parameters (see Alhamzawi (2016) for example) nor the rejection steps, such as the inversion method and adaptive rejection sampling algorithm (see Alhamzawi et al. (2019) for example). We demonstrate the effectiveness and robustness of the Bayesian Huberised regularised quantile regression model through simulation studies following by real data analysis.
The remainder of this paper is as follows. In Section 2, we define a Huberised asymmetric loss function with its corresponding probability density function and derive a scale mixture of normal representation for Bayesian inference. Section 3 presents the Bayesian Huberised regularisation including the Bayesian Huberised lasso (Kawakami and Hashimoto (2023)) and the Bayesian Huberised elastic net. This results in a new robust Bayesian regularised quantile regression. In Section 4 and 5, a wide range of simulation studies and three real data examples were conducted. In Section 6, we draw the conclusions.
## 2 Huberised Asymmetric Loss Function
The lasso and elastic net estimates are all regularised estimates, and the differences among them lie only in their penalty terms. Specifically, they are all solutions to the following form of minimization problem for regularised quantile regression
\[\min_{\mathbf{\beta}}\sum_{i=1}^{n}\rho_{\tau}(y_{i}-\mathbf{x}_{i}\mathbf{\beta})+ \lambda_{1}g_{1}(\mathbf{\beta})+\lambda_{2}g_{2}(\mathbf{\beta})\,, \tag{1}\]
for some \(\lambda_{1},\lambda_{2}\geq 0\), penalty functions \(g_{1}(\cdot)\) and \(g_{2}(\cdot)\), \(\rho_{\tau}(x)=x(\tau-I(x<0))\) is the check loss function and \(I(\cdot)\) is the indicator function. The lasso corresponds to \(\lambda_{1}>0\), \(\lambda_{2}=0\), \(g_{1}(\mathbf{\beta})=\|\mathbf{\beta}\|\) and \(g_{2}(\mathbf{\beta})=0\). The elastic net corresponds to \(\lambda_{1}=\lambda_{3},\lambda_{2}=\lambda_{4}>0\), \(g_{1}(\mathbf{\beta})=\|\mathbf{\beta}\|\) and \(g_{2}(\mathbf{\beta})=\|\mathbf{\beta}\|_{2}^{2}\).
Letting \(\tau=0.5\), the first term of Equation (1) reduces to \(\sum_{i=1}^{n}|y_{i}-\mathbf{x}_{i}\mathbf{\beta}|\) and the corresponding method is called the least absolute deviation (LAD) regression, which is known to be robust against outliers in response variables. However, the LAD regression might underestimate regression coefficients for non-outlying observations. To remedy this problem, the Huber loss function is used and it is defined as
\[L_{\delta}^{Huber}(x)=\left\{\begin{array}{ll}\frac{1}{2}x^{2},&|x|\leq \delta\,,\\ \delta(|x|-\delta/2),&|x|>\delta\,,\end{array}\right. \tag{2}\]
where \(\delta>0\) is a robustness parameter and it is practically set as \(\delta=1.345\) (Huber (1964)). The behaviour of this loss function is such that it is quadratic for small values of \(x\) and becomes linear when \(|x|\) exceeds \(\delta\).
Clearly, the Huber loss function has non-differentiable points and it has limited scope in applications. Li et al. (2020) propose two generalised Huber loss functions, which are Soft Huber and Nonconvex Huber. They are attractive alternatives to the Huber loss function because they are analogous to the pseudo Huber loss function and they have a normal scale mixture property resulting in a broader range of
Bayesian applications. The Soft Huber loss function can be defined as
\[L^{SH}_{\zeta_{1},\zeta_{2}}(x)=\sqrt{\zeta_{1}\zeta_{2}}\left(\sqrt{1+\frac{x^{ 2}}{\zeta_{2}}}-1\right)\,, \tag{3}\]
and the Nonconvex Huber loss function as
\[L^{NH}_{\zeta_{1},\zeta_{2}}(x)=\sqrt{\zeta_{1}\zeta_{2}}\left(\sqrt{1+\frac{ |x|}{\zeta_{2}}}-1\right)\,, \tag{4}\]
where \(\zeta_{1},\zeta_{2}>0\) are positive hyperparameters. Here, the Soft Huber loss bridges the \(\ell_{1}\) (absolute) loss and the \(\ell_{2}\) (squared) loss. On the other hand, the Nonconvex Huber loss bridges the \(\ell_{1/2}\) loss and the \(\ell_{1}\) loss. By letting \(\eta=\sqrt{\zeta_{1}\zeta_{2}}\) and \(\rho^{2}=\sqrt{\frac{\zeta_{2}}{\zeta_{1}}}\), the Soft Huber loss function becomes the hyperbolic loss function, that is,
\[L^{Hyp}_{\eta,\rho^{2}}(x)=\sqrt{\eta\left(\eta+\frac{x^{2}}{\rho^{2}}\right) }-\eta\,, \tag{5}\]
where \(\eta>0\) is the robustness parameter and \(\rho^{2}>0\) is a scale parameter. Kawakami and Hashimoto (2023) used this hyperbolic loss function to formulate the Bayesian Huberised lasso, which has proven to be robust to outliers.
When the error distribution is asymmetric or contaminated by asymmetric outliers, the estimators obtained from Equations (2), (3), (4) and (5) may result in inconsistency of predictions of a conditional mean given the regressors (Fu and Wang (2021)).
Therefore, we propose the Huberised-type asymmetric loss function by letting \(\eta=\sqrt{\zeta_{1}\zeta_{2}}\) and \(\rho^{2}=\sqrt{\frac{\zeta_{2}}{\zeta_{1}}}\) in Equation (4) and it is given by
\[L^{Asy}_{\eta,\rho^{2},\tau}(x)=\sqrt{\eta\left(\eta+\frac{x}{\rho^{2}}\left( \tau-I(x<0)\right)\right)}-\eta\,.\]
The corresponding density function is
\[f(x|\mu,\eta,\rho^{2},\tau)=\frac{\eta\tau(1-\tau)e^{\eta}}{2\rho^{2}(\eta+1)}\exp\left\{-\sqrt{\eta\left(\eta+\frac{x-\mu}{\rho^{2}}\left(\tau-I(x-\mu<0)\right)\right)}\right\}\,, \tag{6}\]
where \(\mu\in\mathbb{R}\) is a location parameter. Here, \(\rho^{2}\) acts as a scale parameter and \(\eta\) acts as a shape parameter of this density function.
The following proposition states that the parameters \(\mu\) and \(\tau\) in (6) satisfy: \(\mu\) is the \(\tau\)th quantile of the distribution.
**Proposition 2.1**: _If a random variable \(X\) follows the density function in (6) then we have \(P(X\leq\mu)=\tau\) and \(P(X>\mu)=1-\tau\)._
**Proof:** The proof can be found in Appendix A.1. \(\Box\)
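For intuition, here is a brief sketch of the computation behind Proposition 2.1 (the full argument is in Appendix A.1 and is not reproduced here). For \(x\leq\mu\) the exponent only involves \((\mu-x)(1-\tau)/\rho^{2}\), and substituting \(s=\sqrt{\eta\bigl(\eta+(\mu-x)(1-\tau)/\rho^{2}\bigr)}\) gives

\[P(X\leq\mu)=\frac{\eta\tau(1-\tau)e^{\eta}}{2\rho^{2}(\eta+1)}\cdot\frac{2\rho^{2}}{\eta(1-\tau)}\int_{\eta}^{\infty}s\,e^{-s}\,ds=\frac{\tau\,e^{\eta}}{\eta+1}\,(\eta+1)e^{-\eta}=\tau\,,\]

since \(\int_{\eta}^{\infty}s\,e^{-s}\,ds=(\eta+1)e^{-\eta}\); the computation of \(P(X>\mu)=1-\tau\) is analogous with \(\tau\) in place of \(1-\tau\).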
To observe the behaviour of the proposed loss function, we set
\(\eta=\sqrt{\zeta_{2}}\left(\sqrt{\zeta_{2}}+\sqrt{\zeta_{2}+1}\right)\) and \(\rho^{2}=\frac{\sqrt{\zeta_{2}}}{\sqrt{\zeta_{2}}+\sqrt{\zeta_{2}+1}}\), then we have the following limits,
\[\lim_{\zeta_{2}\to 0}\,L^{Asy}_{\eta,\rho^{2},\tau}(x)=\sqrt{x\left(\tau-I(x<0) \right)}\quad\text{and}\quad\lim_{\zeta_{2}\to\infty}\,L^{Asy}_{\eta,\rho^{2}, \tau}(x)=x\left(\tau-I(x<0)\right)\,,\]
which suggests that the proposed loss bridges the quantile loss function. Daouia et al. (2018) use the quantile loss function for tail expectiles to estimate alternative measures to the value at risk and marginal expected shortfall, which are two instruments of risk protection of utmost importance in actuarial science and statistical finance. Ehm et al. (2016) show that any scoring function that is consistent for a quantile or an expectile functional can be represented as a mixture of elementary or extremal scoring functions that form a linearly parameterised family. However, in this paper, we show a totally new way to achieve it, and our proposed loss is a novel representative of asymmetric least squares (Daouia et al. (2019)). Figure 1 illustrates the asymmetric shape behaviour for five different values of \(\tau\) (\(0.1,0.25,0.5,0.75,0.9\)). From the figure, \(L^{Asy}_{\eta,\rho^{2},\tau}(x)\) approaches the square root of the quantile loss function, as \(\eta\to 0\), and \(L^{Asy}_{\eta,\rho^{2},\tau}(x)\) approaches the quantile loss function, as \(\eta\to\infty\).
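As a quick check of these limits (a sketch based on the reparameterisation above, not part of the paper's derivation), note that \(\eta\rho^{2}=\zeta_{2}\) and \(\eta/\rho^{2}=(\sqrt{\zeta_{2}}+\sqrt{\zeta_{2}+1})^{2}\), so that

\[L^{Asy}_{\eta,\rho^{2},\tau}(x)=\sqrt{\eta^{2}+\frac{\eta}{\rho^{2}}\,x\bigl(\tau-I(x<0)\bigr)}-\eta\,.\]

As \(\zeta_{2}\to 0\) we have \(\eta\to 0\) and \(\eta/\rho^{2}\to 1\), which leaves \(\sqrt{x(\tau-I(x<0))}\); as \(\zeta_{2}\to\infty\), a first-order expansion gives \(L^{Asy}_{\eta,\rho^{2},\tau}(x)\approx\eta\,x(\tau-I(x<0))/(2\zeta_{2})=x(\tau-I(x<0))/(2\rho^{2})\) with \(\rho^{2}\to 1/2\), recovering the quantile loss.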
Kawakami and Hashimoto (2023) discussed that it is essential to choose the right value of hyperparameters of \(\eta\) and \(\rho^{2}\) where \(\rho^{2}\) can easily be estimated by a Gibbs sampler in a Bayesian model whereas the estimation of \(\eta\) is difficult. They proposed the approximate Gibbs sampler to enable the data-dependent estimation of \(\eta\). This paper will also adopt their approximate Gibbs sampler.
To fully enable the Gibbs sampling algorithm for Bayesian modelling, the density function in (6) has a scale mixture of normal representation with exponential and generalised inverse Gaussian densities. Suppose that a random variable \(X\) has a probability density function \(f(x|\theta)\) and unknown parameter \(\theta\) that satisfies
\[f(x|\theta)=\int\phi(x|\mu,\sigma)\,\pi(\sigma|\theta)\,d\sigma\,, \tag{7}\]
where \(\phi(\cdot|\mu,\sigma)\) is the normal density and \(\pi(\cdot)\) is the mixing density defined on \((0,\infty)\), then \(X\), or its density \(f(x|\theta)\), is a scale mixture of a normal distribution. It has many applications in statistics, finance and, particularly, in Bayesian inference. Probability distributions with a scale mixture of normal representation can be grouped into two groups: symmetric probability distributions (Andrews and Mallows (1974); West (1987)) and asymmetric probability distributions (Reed and Yu (2009); da Silva Ferreira et al. (2011); Kozumi and Kobayashi (2011)). Therefore, the following proposition provides an alternative stochastic representation, which is a normal scale mixture.
**Theorem 2.1**: _If the model error \(\epsilon_{i}=y_{i}-\mathbf{x}_{i}\boldsymbol{\beta}\) follows the density function (6), then we can represent \(\epsilon_{i}\) as scale mixture of normals given by_
\[f( \epsilon_{i};\tau,\eta,\rho^{2})\] \[\propto\iint N\left(\epsilon_{i};(1-2\tau)v_{i},4v_{i}\sigma_{i} \right)E\left(v_{i};\frac{\tau(1-\tau)}{2\sigma_{i}}\right)GIG\left(\sigma_{i };1,\frac{\eta}{\rho^{2}},\eta\rho^{2}\right)dv_{i}d\sigma_{i}\,,\] \[i=1,\ldots,n\,, \tag{8}\]
_where \(GIG(x|\nu,c,d)\) denotes the GIG distribution and its density is specified by_
\[f_{GIG}(x)=\frac{(c/d)^{\nu}}{2K_{\nu}(cd)}x^{\nu-1}\exp\left(-\frac{1}{2}(c^{2}x+d^{2}x^{-1})\right)\,,\quad x>0\,, \tag{9}\]
Figure 1: The asymmetrical behaviour of the proposed loss function for \(\tau\)=0.1 (short dashed), 0.25 (normal dashed), 0.5 (solid), 0.75 (short-normal dashed), and 0.9 (long dashed) for different values of \(\eta\) and \(\rho^{2}\).
and \(K_{\nu}(\cdot)\) is the modified Bessel function of the second kind at index \(\nu\) (Barndorff-Nielsen and Shephard (2001))._
**Proof:** The proof can be found in Appendix A.2. \(\Box\)
## 3 Bayesian Huberised Regularised Quantile Regression Model
### Bayesian Huberised Lasso Quantile Regression
In this paper, we consider a Bayesian analogous of Huberised regularised quantile regression model. Kawakami and Hashimoto (2023) showed that the unconditional Laplace prior of \(\mathbf{\beta}\) (Park and Casella (2008)) would lead to multimodality of a posterior density and resolved this issue by introducing \(\rho^{2}\) as a scale parameter to formulate the Bayesian Huberised lasso, that is,
\[\pi(\mathbf{\beta}|\rho^{2},\lambda_{1})=\prod_{j=1}^{k}\frac{\lambda_{1}}{2\sqrt {\rho^{2}}}\exp\left\{-\frac{\lambda_{1}|\mathbf{\beta}_{j}|}{\sqrt{\rho^{2}}} \right\}\,. \tag{10}\]
By using the scale mixture of normal representation of the Laplace distribution (Andrews and Mallows (1974)), the Bayesian Huberised lasso can be expressed as
\[\mathbf{\beta}|\mathbf{s},\rho^{2}\sim N(\mathbf{0},\rho^{2}\mathbf{\Lambda}),\quad s_{j} |\lambda_{1}\sim Exp\left(\frac{\lambda_{1}^{2}}{2}\right)\,,\quad j=1,\dots, k\,,\]
where \(\mathbf{s}=\left(s_{1},\dots,s_{k}\right)^{T}\) and \(\mathbf{\Lambda}=\text{diag}\left(s_{1},\dots,s_{k}\right)\).
Therefore, with the Bayesian Huberised lasso, we present the following hierarchical model using the scale mixture of normal representation in Theorem 2.1:
\[\mathbf{y}|\mathbf{X},\mathbf{\beta},\mathbf{\sigma},\mathbf{v} \sim N(\mathbf{X}\mathbf{\beta}+(1-2\tau)\mathbf{v},\mathbf{V}),\] \[\sigma_{i}|\rho^{2},\eta \sim GIG\left(1,\frac{\eta}{\rho^{2}},\eta\rho^{2}\right)\,,\quad i =1,\dots,n\,,\] \[v_{i}|\sigma_{i} \sim Exp\left(\frac{\tau(1-\tau)}{2\sigma_{i}}\right)\,,\quad i =1,\dots,n\,,\] \[\beta_{j}|s_{j},\rho^{2} \sim N(0,\rho^{2}s_{j})\,,\quad j=1,\dots,k\,,\] \[s_{j}|\lambda_{1}^{2} \sim Exp\left(\frac{\lambda_{1}^{2}}{2}\right)\,,\quad j=1,\dots, k\,,\] \[\rho^{2} \sim\pi(\rho^{2})\propto\frac{1}{\rho^{2}}\,,\] \[\eta,\lambda_{1}^{2} \sim\text{Gamma}(\lambda_{1}^{2};a,b)\text{Gamma}(\eta;c,d)\,,\]
where \(\mathbf{V}=\text{diag}(4\sigma_{1}v_{1},\dots,4\sigma_{n}v_{n})\). As a prior of \(\rho^{2}\), we assume the improper scale invariant prior, that is proportional to \(\frac{1}{\rho^{2}}\), but a proper inverse gamma prior can also be employed, for example. Similar to Kawakami and Hashimoto (2023) and Cai and Sun (2021), Proposition 3.1 shows that using the improper prior on \(\rho^{2}\) will lead to a proper posterior density. Based on this proposition, Subsection 4.1 will show that the unconditional prior on \(\mathbf{\beta}\) can result in multimodality of the joint posterior. We further impose
a gamma prior on \(\lambda_{1}^{2}\) and \(\eta\). We set hyperparameters \(a=b=c=d=1\) for simulation studies and real data analysis. The sensitivity analysis of hyperparameters is detailed in Subsection 4.2.
As for the Gibbs sampler, the full conditional distribution of \(\boldsymbol{\beta}\) is a multivariate normal distribution and those of \(\boldsymbol{\sigma}\), \(\mathbf{v}\), \(\mathbf{s}\) and \(\rho^{2}\) are generalised inverse Gaussian distributions. The full conditional distribution of \(\lambda_{1}^{2}\) is a Gamma distribution. The approximate Gibbs sampler is used for \(\eta\). Appendix B.1 gives the details of the full conditional posterior distributions for the Gibbs sampling algorithm.
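For orientation, the first of these can be sketched by a standard conjugate-normal calculation under the hierarchy above (the authors' exact expressions are in Appendix B.1, which is not reproduced here): combining \(\mathbf{y}\sim N(\mathbf{X}\mathbf{\beta}+(1-2\tau)\mathbf{v},\mathbf{V})\) with the prior \(\mathbf{\beta}|\mathbf{s},\rho^{2}\sim N(\mathbf{0},\rho^{2}\mathbf{\Lambda})\) gives

\[\mathbf{\beta}\mid\mathbf{y},\mathbf{\sigma},\mathbf{v},\mathbf{s},\rho^{2}\sim N\left(\mathbf{A}^{-1}\mathbf{X}^{T}\mathbf{V}^{-1}\bigl(\mathbf{y}-(1-2\tau)\mathbf{v}\bigr),\ \mathbf{A}^{-1}\right)\,,\qquad\mathbf{A}=\mathbf{X}^{T}\mathbf{V}^{-1}\mathbf{X}+\frac{1}{\rho^{2}}\mathbf{\Lambda}^{-1}\,,\]

with \(\mathbf{V}=\text{diag}(4\sigma_{1}v_{1},\dots,4\sigma_{n}v_{n})\) and \(\mathbf{\Lambda}=\text{diag}(s_{1},\dots,s_{k})\) as above.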
**Proposition 3.1**: _Let \(\rho^{2}\sim\pi(\rho^{2})\propto\frac{1}{\rho^{2}}\) (improper scale invariant prior). For fixed \(\lambda_{1}>0\) and \(\eta>0\), the posterior distribution is proper for all \(n\)._
**Proof:** The proof can be found in Appendix A.3. \(\Box\)
**Proposition 3.2**: _Under the conditional prior for \(\boldsymbol{\beta}\) given \(\rho^{2}\) and fixed \(\lambda_{1}>0\) and \(\eta>0\), the joint posterior \((\boldsymbol{\beta},\rho^{2}|\mathbf{y})\) is unimodal with respect to \((\boldsymbol{\beta},\rho^{2})\)._
**Proof:** The proof can be found in Appendix A.4. \(\Box\)
### Bayesian Huberised Elastic Net Quantile Regression
We also present the Bayesian Huberised elastic net, that is,
\[\pi(\boldsymbol{\beta}|\rho^{2},\lambda_{3},\lambda_{4})=\prod_{j=1}^{k}C \left(\tilde{\lambda}_{3},\lambda_{4}\right)\frac{\lambda_{3}}{2\sqrt{\rho^{2 }}}\exp\left\{-\frac{\lambda_{3}|\beta_{j}|}{\sqrt{\rho^{2}}}-\frac{\lambda_{ 4}\beta_{j}^{2}}{\rho^{2}}\right\}, \tag{11}\]
where \(C\left(\tilde{\lambda}_{3},\lambda_{4}\right)=\Gamma^{-1}\left(\frac{1}{2},\tilde{\lambda}_{3}\right)\left(\tilde{\lambda}_{3}\right)^{-1/2}\exp\left\{-\tilde{\lambda}_{3}\right\}\) is the normalising constant and \(\tilde{\lambda}_{3}=\frac{\lambda_{3}^{2}}{4\lambda_{4}}\). The computation of the normalising constant is detailed in Appendix B of Li et al. (2010). Note that by letting \(\rho^{2}=1\), Equation (11) reduces to the original Bayesian elastic net (Li and Lin (2010)).
By using the scale mixture property (Andrews and Mallows (1974)), the Bayesian Huberised elastic net can be expressed as a scale mixture of normal with truncated gamma density:
\[\pi(\boldsymbol{\beta}|\rho^{2},\lambda_{3},\lambda_{4})= \prod_{j=1}^{k}\int_{0}^{\infty}\Gamma^{-1}\left(\frac{1}{2}, \tilde{\lambda}_{3}\right)\sqrt{\frac{2\lambda_{4}t_{j}}{2\pi\rho^{2}(t_{j}-1 )}}\sqrt{\frac{\tilde{\lambda}_{3}}{t_{j}}}\] \[\times N\left(\beta_{j};0,\frac{\rho^{2}(t_{j}-1)}{2\lambda_{4}t_ {j}}\right)\exp\left\{-\tilde{\lambda}_{3}t_{j}\right\}I(t_{j}>1)d\boldsymbol{ t}\,.\]
With the Bayesian Huberised elastic net, we have the following hierarchical model:
\[\mathbf{y}|\mathbf{X},\boldsymbol{\beta},\boldsymbol{\sigma}, \mathbf{v} \sim N(\mathbf{X}\boldsymbol{\beta}+(1-2\tau)\mathbf{v},\mathbf{V})\,,\] \[\sigma_{i}|\rho^{2},\eta \sim GIG\left(1,\frac{\eta}{\rho^{2}},\eta\rho^{2}\right)\,,\quad i =1,\ldots,n\,,\] \[v_{i}|\sigma_{i} \sim Exp\left(\frac{\tau(1-\tau)}{2\sigma_{i}}\right)\,,\quad i=1, \ldots,n\,,\] \[\beta_{j}|t_{j},\lambda_{4},\rho^{2} \sim N\left(0,\frac{2\rho^{2}(t_{j}-1)}{\lambda_{4}t_{j}}\right) \,,\quad j=1,\ldots,k\,,\] \[t_{j}|\tilde{\lambda}_{3} \sim\Gamma^{-1}\left(\frac{1}{2},\tilde{\lambda}_{3}\right)\sqrt {\frac{\tilde{\lambda}_{3}}{t_{j}}}\exp\left\{-\tilde{\lambda}_{3}t_{j}\right\} I(t_{j}>1)\,,\quad j=1,\ldots,k\,,\] \[\rho^{2} \sim\pi(\rho^{2})\propto\frac{1}{\rho^{2}}\,,\] \[\tilde{\lambda}_{3},\lambda_{4},\eta \sim\text{Gamma}(\tilde{\lambda}_{3};a_{1},b_{1})\text{Gamma}( \lambda_{4};a_{2},b_{2})\text{Gamma}(\eta;a_{3},b_{3})\,,\]
where \(a_{1},a_{2},a_{3},b_{1},b_{2},b_{3}\geq 0\) are hyperparameters, they are set to \(1\) for simulation studies and real data analysis and \(\Gamma(\cdot,\cdot)\) is the upper incomplete gamma function.
Appendix B.2 gives the details of the full conditional posterior distributions for the Gibbs sampling algorithm. The full conditional distributions are all well-known distributions except those of \(\tilde{\lambda}_{3}\) and \(\eta\), and the Metropolis-Hastings algorithm is employed for \(\tilde{\lambda}_{3}\). We also present Proposition 3.3 for the use of an improper prior on \(\rho^{2}\) and provide a demonstration of the unconditional prior on \(\boldsymbol{\beta}\) in Subsection 4.1.
**Proposition 3.3**: _Let \(\rho^{2}\sim\pi(\rho^{2})\propto\frac{1}{\rho^{2}}\) (improper scale invariant prior). For fixed \(\lambda_{3}>0\), \(\lambda_{4}>0\) and \(\eta>0\), the posterior distribution is proper for all \(n\)._
**Proof:** The proof can be found in Appendix A.5. \(\Box\)
**Proposition 3.4**: _Under the conditional prior for \(\boldsymbol{\beta}\) given \(\rho^{2}\) and fixed \(\lambda_{3}>0\), \(\lambda_{4}>0\) and \(\eta>0\), the joint posterior \((\boldsymbol{\beta},\rho^{2}|\mathbf{y})\) is unimodal with respect to \((\boldsymbol{\beta},\rho^{2})\)._
**Proof:** The proof can be found in Appendix A.6. \(\Box\)
### Approximate Gibbs Sampler for Estimation of \(\eta\)
In this subsection, we will briefly discuss the approximate Gibbs sampler for the data-dependent estimation of \(\eta\) that is proposed by Kawakami and Hashimoto (2023). Notice that in a Bayesian Huberised regularised quantile regression model, the full conditional distribution of \(\eta\) is
\[\pi(\eta|\boldsymbol{\sigma},\rho^{2})\propto\frac{1}{K_{1}(\eta) }\eta^{a-1}\exp\left\{-\eta\left(\frac{1}{2}\sum_{i=1}^{n}\left(\frac{\sigma_{i }}{\rho^{2}}+\frac{\rho^{2}}{\sigma_{i}}\right)+b\right)\right\}, \tag{12}\]
where \(a=c\) and \(b=d\) in case of Bayesian Huberised lasso quantile regression and \(a=a_{3}\) and \(b=b_{3}\) in case of Bayesian Huberised elastic net quantile regression. Since the right side of Equation (12) contains
the modified Bessel function of the second kind, the full conditional distribution of \(\eta\) does not have a conjugacy property. However, it is possible to approximate (12) by a common probability distribution.
For the selection of an initial value of the approximate Gibbs sampling algorithm, we need to approximate the modified Bessel function of the second kind. According to Abramowitz and Stegun (1965), we have \(K_{\nu}(x)\sim\frac{1}{2}\Gamma(\nu)\left(\frac{x}{2}\right)^{-\nu}\) as \(x\to 0\) for \(\nu>0\) and \(K_{\nu}(x)\sim\sqrt{\frac{\pi}{2x}}e^{-x}\) as \(x\to\infty\). Kawakami and Hashimoto (2023) stated that in either case, it would not make much difference in estimating \(\eta\). So, we will focus on the latter case only for this paper. As \(\eta\to\infty\), we have
\[\pi(\eta|\boldsymbol{\sigma},\rho^{2})\approx\eta^{a+n/2-1}\exp\left\{-\eta \left(\frac{1}{2}\sum_{i=1}^{n}\left(\frac{\sigma_{i}}{\rho^{2}}+\frac{\rho^{ 2}}{\sigma_{i}}\right)+b-n\right)\right\}\,,\]
which holds the approximation \(\pi(\eta|\boldsymbol{\sigma},\rho^{2})\approx\text{Gamma}\left(\eta;a+\frac{n }{2},\frac{1}{2}\sum_{i=1}^{n}\left(\frac{\sigma_{i}}{\rho^{2}}+\frac{\rho^{ 2}}{\sigma_{i}}\right)+b-n\right)\) for large \(\eta\).
The algorithm of the approximate Gibbs sampler is as follows.
Given the current Markov chain states \((\boldsymbol{\sigma},\rho^{2})\), we set the initial value as \(A=a+n/2\) and \(B=\frac{1}{2}\sum_{i=1}^{n}\left(\frac{\sigma_{i}}{\rho^{2}}+\frac{\rho^{2}}{ \sigma_{i}}\right)+b-n\). For \(m=1,\ldots,M\), do the following steps
* \(\eta\leftarrow\frac{A}{B}\);
* \(A\gets a+n\eta^{2}\frac{\partial^{2}}{\partial\eta^{2}}\log K_{1}(\eta)\);
* \(B\gets b+\frac{A-a}{\eta}+n\frac{\partial}{\partial\eta}\log K_{1}(\eta)+ \frac{1}{2}\sum_{i=1}^{n}\left(\frac{\sigma_{i}}{\rho^{2}}+\frac{\rho^{2}}{ \sigma_{i}}\right).\)
until \(|\eta/(A/B)-1|<\varepsilon\) or, in other words, until the convergence of \(\eta\) is met. The full derivation of the algorithm is detailed in Kawakami and Hashimoto (2023), and they also illustrated that in their simulation results, the approximation is close to the true full conditional distribution and the approximation accuracy increases as the sample size increases. For simulation studies and real data analysis, we set \(M=10\) and a tolerance \(\varepsilon=10^{-8}\).
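To make the iteration concrete, the following is a minimal, self-contained sketch of the fixed-point update described above; it is not the authors' implementation, and the quadrature-based evaluation of \(K_{\nu}\), the helper names (bessel_k, log_k1_derivs, approx_eta_gamma) and the toy inputs in main are assumptions made purely for illustration.

```
fn bessel_k(nu: f64, x: f64) -> f64 {
    // K_nu(x) = \int_0^infty exp(-x*cosh(t)) * cosh(nu*t) dt (integral representation),
    // approximated with Simpson's rule on a truncated range; adequate for the
    // moderate positive arguments met in this iteration.
    let t_max = 30.0;
    let m = 2000; // even number of subintervals
    let h = t_max / m as f64;
    let f = |t: f64| (-x * t.cosh()).exp() * (nu * t).cosh();
    let mut s = f(0.0) + f(t_max);
    for i in 1..m {
        let w = if i % 2 == 1 { 4.0 } else { 2.0 };
        s += w * f(i as f64 * h);
    }
    s * h / 3.0
}

fn log_k1_derivs(x: f64) -> (f64, f64) {
    // d/dx log K_1(x) and d^2/dx^2 log K_1(x), using K_1'(x) = -(K_0(x)+K_2(x))/2
    // and the modified Bessel equation K_1''(x) = (1 + 1/x^2) K_1(x) - K_1'(x)/x.
    let k0 = bessel_k(0.0, x);
    let k1 = bessel_k(1.0, x);
    let k2 = bessel_k(2.0, x);
    let k1p = -0.5 * (k0 + k2);
    let k1pp = (1.0 + 1.0 / (x * x)) * k1 - k1p / x;
    let d1 = k1p / k1;
    (d1, k1pp / k1 - d1 * d1)
}

fn approx_eta_gamma(sigma: &[f64], rho2: f64, a: f64, b: f64, m: usize, tol: f64) -> (f64, f64) {
    // Fixed-point iteration of Subsection 3.3: returns the (shape, rate) of the
    // approximating Gamma distribution for eta given the current Gibbs state.
    let n = sigma.len() as f64;
    let half_sum: f64 = 0.5 * sigma.iter().map(|&s| s / rho2 + rho2 / s).sum::<f64>();
    let mut big_a = a + n / 2.0;      // initial A
    let mut big_b = half_sum + b - n; // initial B
    for _ in 0..m {
        let eta = big_a / big_b;
        let (d1, d2) = log_k1_derivs(eta);
        let new_a = a + n * eta * eta * d2;
        let new_b = b + (new_a - a) / eta + n * d1 + half_sum;
        let done = (eta / (new_a / new_b) - 1.0).abs() < tol;
        big_a = new_a;
        big_b = new_b;
        if done {
            break;
        }
    }
    (big_a, big_b)
}

fn main() {
    // Toy values standing in for one Gibbs iteration's sigma_i and rho^2.
    let sigma = vec![0.8, 1.2, 0.9, 1.1, 1.0];
    let (shape, rate) = approx_eta_gamma(&sigma, 1.0, 1.0, 1.0, 10, 1e-8);
    println!("eta approx Gamma(shape = {:.4}, rate = {:.4}), mean = {:.4}",
             shape, rate, shape / rate);
}
```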
## 4 Simulations
### Multimodality of Joint Posteriors
As related to Propositions 3.2 and 3.4, we present a simple simulation to demonstrate that the unconditional prior for \(\boldsymbol{\beta}\) can result in multimodality of the joint posterior. Instead of Equations (10) and (11), we specify the unconditional lasso prior
\[\pi(\boldsymbol{\beta}|\lambda_{1})=\prod_{j=1}^{k}\frac{\lambda_{1}}{2}\exp \left\{-\lambda_{1}|\boldsymbol{\beta}_{j}|\right\}\,,\]
and the unconditional elastic net prior
\[\pi(\boldsymbol{\beta}|\lambda_{3},\lambda_{4})=\prod_{j=1}^{k}C\left(\lambda _{3},\lambda_{4}\right)\frac{\lambda_{3}}{2}\exp\left\{-\lambda_{3}|\beta_{j} |-\lambda_{4}\beta_{j}^{2}\right\}\,,\]
with the same improper prior \(\pi(\rho^{2})\propto\frac{1}{\rho^{2}}\). Then the joint posterior distribution of \(\mathbf{\beta}\) and \(\rho^{2}\) for Bayesian Huberised lasso quantile regression is proportional to
\[\pi(\mathbf{\beta},\rho^{2}|\mathbf{y}) \propto\left(\rho^{2}\right)^{-n-1}\exp\left\{-\lambda_{1}\sum_{j= 1}^{k}|\beta_{j}|\right\}\] \[\qquad\times\prod_{i=1}^{n}K_{0}\left(\sqrt{\frac{\eta}{\rho^{2}} \left(\frac{|y_{i}-\mathbf{x}_{i}\mathbf{\beta}|+(1-2\tau)(y_{i}-\mathbf{x}_{i}\bm {\beta})}{2}\right)}\right)\,, \tag{13}\]
and that for Bayesian Huberised elastic net quantile regression is proportional to
\[\pi(\mathbf{\beta},\rho^{2}|\mathbf{y}) \propto\left(\rho^{2}\right)^{-n-1}\exp\left\{-\lambda_{3}\sum_{ j=1}^{k}|\beta_{j}|-\lambda_{4}\sum_{j=1}^{k}\beta_{j}^{2}\right\}\] \[\qquad\times\prod_{i=1}^{n}K_{0}\left(\sqrt{\frac{\eta}{\rho^{2} }\left(\frac{|y_{i}-\mathbf{x}_{i}\mathbf{\beta}|+(1-2\tau)(y_{i}-\mathbf{x}_{i} \mathbf{\beta})}{2}\right)}\right)\,, \tag{14}\]
In Appendices A.4 and A.6, it is shown that using the conditional priors (10) and (11), respectively, leads to a unimodal posterior for any choice of \(\lambda_{1},\lambda_{3},\lambda_{4}\geq 0\) and \(\eta>0\) with an improper prior \(\pi(\rho^{2})\). On the other hand, the joint posteriors (13) and (14) can have more than one mode. For example, Figure 2 shows the contour plots of a multimodal joint density of \(\log(\beta)\) and \(\log(\rho^{2})\). This particular example results from considering the following data generating model,
\[y_{i}=x_{i}\beta+\epsilon_{i}\,,\quad\epsilon_{i}\sim ALD(0,\sigma=0.03,\tau=0.5)\,,\]
where \(\beta=1\) and \(x_{i}\sim N(0,1)\) for \(i=1,\ldots,10\), which is similar to Cai and Sun (2021). Due to multimodality in the joint posterior with unconditional prior for \(\mathbf{\beta}\), we use the prior for \(\mathbf{\beta}\) conditioning
Figure 2: Contour plot of an artificially generated posterior density of \((\log(\beta),\log(\rho^{2}))\) of the joint posterior density (13) and (14) for Bayesian Huberised lasso quantile regression and Bayesian Huberised elastic net quantile regression, respectively. The logarithm of \(\beta\) and \(\rho^{2}\) is used for a better visibility.
on the scale parameter \(\rho^{2}\).
### Sensitivity analysis of hyper-parameters
In this subsection, we test the sensitivity of hyperparameters of Gamma prior of \(\eta\), \(\lambda_{1}\), \(\lambda_{3}\) and \(\lambda_{4}\) on the posterior estimates for the proposed methods. We equally divide \(x\in[-2,2]\) into 50 pieces and the data are generated from
\[y_{i}=\mathbf{x}_{i}\boldsymbol{\beta}+\epsilon_{i}\,,\quad\epsilon_{i}\sim ALD (0,\sigma=0.03,\tau=0.5)\,,\quad i=1,\ldots,50\,,\]
with \(\mathbf{x}_{i}=\left(\left(1+e^{-4(x_{i}-0.3)}\right)^{-1},\left(1+e^{3(x_{i} -0.2)}\right)^{-1},\left(1+e^{-4(x_{i}-0.7)}\right)^{-1},\left(1+e^{5(x_{i}-0. 8)}\right)^{-1}\right)^{T}\) and \(\boldsymbol{\beta}=(1,1,1,1)^{T}\). It indicates that the true curve is
\[f(x)=\left(1+e^{-4(x-0.3)}\right)^{-1}+\left(1+e^{3(x-0.2)}\right)^{-1}+\left( 1+e^{-4(x-0.7)}\right)^{-1}+\left(1+e^{5(x-0.8)}\right)^{-1}\,.\]
In fact, this function was utilised in Jullion and Lambert (2007) to test the sensitivity of hyperparameters of the Gamma prior on the scale component in Bayesian P-spline.
We consider the proposed models to estimate \(\boldsymbol{\beta}\). Note that there are four prior hyperparameters a, b, c and d in the Bayesian Huberised lasso quantile regression and six prior hyperparameters \(a_{1}\), \(b_{1}\), \(a_{2}\), \(b_{2}\), \(a_{3}\) and \(b_{3}\) in the Bayesian Huberised elastic net quantile regression. We mainly set \(a=b=c=d=a_{1}=a_{2}=a_{3}=b_{1}=b_{2}=b_{3}=1\) in both simulation studies and data analysis. We generate 3000 posterior samples after discarding the first 1000 posterior samples as burn-in. Then we plot \(y_{i}=\mathbf{x}_{i}\boldsymbol{\beta}\) for \(i=1,\ldots,50\) in Figures 3 and 4 for both proposed Bayesian models, where \(\boldsymbol{\beta}\) is the posterior mean for the corresponding proposed model. In Figure 3, we fixed \(a=1\) with \(b\) varied for the top-left plot and \(b=1\) with \(a\) varied for the top-right plot. In both cases, we keep \(c=d=1\) fixed. Both bottom plots of Figure 3 follow in a similar manner. As for Figure 4, we also fixed \(a_{1}=1\) with \(b_{1}\) varied for the top-left plot while keeping \(a_{2}=b_{2}=a_{3}=b_{3}=1\). The rest of Figure 4 also follows in a similar manner. From the figures, we observe that the estimation results do not change very much for a variety of hyperparameter choices.
Figure 4: Sensitivity analysis of hyper-parameters for the Bayesian Huberised elastic net quantile regression.
Figure 3: Sensitivity analysis of hyper-parameters for the Bayesian Huberised lasso quantile regression.
### Simulation Studies
In the simulation studies, we illustrate the performance of the proposed methods. We compare the point and interval estimation performance of the proposed methods with those of some existing methods. To this end, we consider the following regression model with \(n\in\{100,200\}\), \(k=20\) and \(\tau\in\{0.25,0.5,0.75\}\):
\[y_{i}=\beta_{0}+\beta_{1}x_{i1}+\ldots+\beta_{k}x_{ik}+\sigma\epsilon_{i}\,, \quad i=1,\ldots,n\,,\]
where \(\beta_{0}=1\), \(\beta_{1}=3\), \(\beta_{2}=0.5\),\(\beta_{4}=\beta_{11}=1\), \(\beta_{7}=1.5\) and the other \(\beta_{j}\)'s were set to \(0\). We assume \(\mathbf{y}=(y_{1},\ldots,y_{n})^{T}\) is the response vector. The predictors \(\mathbf{x}_{i}=(x_{i1},\ldots,x_{ik})^{T}\) were generated from a multivariate normal distribution \(N_{k}(0,\Sigma)\) with \(\Sigma=(r^{|i-j|})_{1\leq i,j\leq k}\) for \(|r|<1\). Similar to Kawakami and Hashimoto (2023) and Lambert-Lacroix and Zwald (2011), we consider the six scenarios.
* Simulation 1: Low correlation and Gaussian noise. \(\boldsymbol{\epsilon}\sim N_{n}(0,I_{n})\), \(\sigma=2\) and \(r=0.5\).
* Simulation 2: Low correlation and large outliers. \(\epsilon=W/\sqrt{var(W)}\), \(\sigma=9.67\) and \(r=0.5\). \(W\) is a random variable distributed according to the contaminated density defined by \(0.9\times N(0,1)+0.1\times N(0,15^{2})\), where \(\sqrt{var(W)}=4.83\).
* Simulation 3: High correlation and large outliers. \(\epsilon=W/\sqrt{var(W)}\), \(\sigma=9.67\) and \(r=0.95\).
* Simulation 4: Large outliers and skew Student-t noise. \(\epsilon_{i}\sim 0.9\times\text{Skew-}t_{3}(\gamma=3)+0.1\times N(0,20^{2})\), \(\sigma=1\) and \(r=0.5\).
* Simulation 5: Heavy-tailed noise. \(\epsilon_{i}\sim\text{Cauchy}(0,1)\), \(\sigma=2\) and \(r=0.5\).
* Simulation 6: Multiple outliers. \(\epsilon_{i}\sim 0.8\times\text{Skew-}t_{3}(\gamma=3)+0.1\times N(0,10^{2})+0.1 \times\text{Cauchy}(0,1)\), \(\sigma=1\) and \(r=0.5\).
For the simulated dataset, we applied the proposed robust methods, denoted by HBQR-BL and HBQR-EN, which employ the Bayesian Huberised lasso and the Bayesian Huberised elastic net, respectively. We also applied the existing robust methods, including Bayesian linear regression with the Bayesian Huberised lasso (Kawakami and Hashimoto (2023)), and Bayesian quantile regression with the original Bayesian lasso and Bayesian elastic net (Li et al. (2010)), denoted by HBL, BQR-BL and BQR-EN, respectively. For HBL and BQR-BL, we assume \(\lambda_{1}\sim\text{Gamma}(a=1,b=1)\) and for BQR-EN, we assume \(\lambda_{1}\sim\text{Gamma}(a_{1}=1,b_{1}=1)\) and \(\lambda_{2}\sim\text{Gamma}(a_{2}=1,b_{2}=1)\). For HBQR-BL and HBQR-EN, we implement the Gibbs and Metropolis-within-Gibbs algorithms, respectively, and set all the hyperparameters to 1.
When applying the above methods, we generated 2000 posterior samples after discarding the first 500 samples as burn-in. We computed posterior median of each element of \(\beta_{j}\)'s for point estimates of \(\beta_{j}\)'s, and the performance is evaluated via root of mean squared error (RMSE) defined as
\(\left[(k+1)^{-1}\sum_{j=0}^{k}(\hat{\beta}_{j}-\beta_{j}^{\text{true}})^{2} \right]^{1/2}\), and median of mean absolute error (MMAD) defined as
\(\text{median}\left[(k+1)^{-1}\sum_{j=0}^{k}\left|\hat{\beta}_{j}-\beta_{j}^{\text{true}}\right|\right]\). We also computed 95% credible intervals of \(\beta_{j}\)'s, and calculated average lengths (AL) defined as \((k+1)^{-1}\sum_{j=0}^{k}|CI_{j}|\) and
\(\tau=0.25\) and \(\tau=0.75\). In the absence of outliers (Simulation 1), the posterior median of \(\eta\) has large values. On the other hand, in the present of large outliers (Simulations 2-4,6) and in a model following a heavy-tailed noise (Simulation 5), small \(\eta\) is chosen. The results for \(\tau=0.25\) and \(\tau=0.75\) are similar (see Appendix C). Therefore, like the HBL method (Kawakami and Hashimoto (2023)), it is evident that \(\eta\) is adaptively chosen for each simulated dataset.
Figure 5: Boxplots of RMSE based on 300 replications in six simulation scenarios for HBQR-BL, HBQR-EN, HBL, BQR-BL and BQR-EN in this order (\(\tau=0.5\)).
Figure 6: Boxplots of MMAD based on 300 replications in six simulation scenarios for HBQR-BL, HBQR-EN, HBL, BQR-BL and BQR-EN in this order (\(\tau=0.5\)).
Figure 7: Boxplots of AL based on 300 replications in six simulation scenarios for HBQR-BL, HBQR-EN, HBL, BQR-BL and BQR-EN in this order (\(\tau=0.5\)).
Figure 8: Boxplots of posterior median of \(\eta\) based on 300 replications in six simulation scenarios for HBQR-BL (top) and HBQR-EN (bottom) (\(\tau=0.5\)).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Methods & RMSE & MMAD & AL & CP \\ \hline & HBQR-BL100 & **0.3675** & **0.2548** & 0.9238 & 0.8711 \\ & HBQR-EN100 & **0.3795** & **0.2585** & 0.9673 & 0.8725 \\ & BQR-BL100 & 0.3956 & 0.2627 & 1.3406 & 0.9411 \\ \(\tau=0.25\) & BQR-EN100 & 0.3859 & 0.2628 & 0.9624 & 0.8821 \\ & HBQR-BL200 & **0.3328** & **0.2108** & 0.6624 & 0.8483 \\ & HBQR-EN200 & **0.3380** & **0.2104** & 0.6822 & 0.8576 \\ & BQR-BL200 & 0.3534 & 0.2118 & 0.9076 & 0.9311 \\ & BQR-EN200 & 0.3476 & 0.2123 & 0.6819 & 0.8732 \\ \hline & HBQR-BL100 & 0.2659 & 0.2059 & 0.9465 & 0.9211 \\ & HBQR-EN100 & 0.2678 & 0.2100 & 0.9834 & 0.9289 \\ & HBL100 & **0.2426** & **0.1891** & 0.9553 & 0.9432 \\ & BQR-BL100 & **0.2468** & **0.1946** & 1.2976 & 0.9848 \\ \(\tau=0.5\) & BQR-EN100 & 0.2502 & 0.1968 & 0.9838 & 0.9413 \\ & HBQR-BL200 & 0.1962 & 0.1550 & 0.6810 & 0.9143 \\ & HBQR-EN200 & 0.1952 & 0.1549 & 0.6927 & 0.9192 \\ & HBL200 & **0.1777** & **0.1404** & 0.6763 & 0.9384 \\ & BQR-BL200 & **0.1825** & **0.1452** & 0.8756 & 0.9778 \\ & BQR-EN200 & 0.1841 & 0.1460 & 0.6900 & 0.9329 \\ \hline & HBQR-BL100 & **0.3635** & **0.2521** & 0.9395 & 0.8756 \\ & HBQR-EN100 & **0.3662** & **0.2503** & 0.9891 & 0.8722 \\ & BQR-BL100 & 0.3943 & 0.2587 & 1.3484 & 0.9440 \\ \(\tau=0.75\) & BQR-EN100 & 0.3853 & 0.2664 & 0.9920 & 0.8859 \\ & HBQR-BL200 & **0.3386** & **0.2141** & 0.6703 & 0.8573 \\ & HBQR-EN200 & **0.3315** & **0.2105** & 0.6895 & 0.8562 \\ & BQR-BL200 & 0.3571 & 0.2146 & 0.9053 & 0.9340 \\ & BQR-EN200 & 0.3512 & 0.2174 & 0.6890 & 0.8659 \\ \hline \end{tabular}
\end{table}
Table 1: Numerical results in Simulation 1.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Methods & RMSE & MMAD & AL & CP \\ \hline & HBQR-BL100 & **0.3822** & **0.2579** & 1.1290 & 0.9046 \\ & HBQR-EN100 & **0.4051** & **0.2767** & 1.2135 & 0.9002 \\ & BQR-BL100 & 0.5643 & 0.3385 & 2.6992 & 0.9506 \\ \(\tau=0.25\) & BQR-EN100 & 0.4950 & 0.3100 & 1.6428 & 0.9311 \\ & HBQR-BL200 & **0.3418** & **0.2233** & 0.8011 & 0.8762 \\ & HBQR-EN200 & **0.3528** & **0.2368** & 0.8484 & 0.8765 \\ & BQR-BL200 & 0.4588 & 0.2540 & 1.7800 & 0.9521 \\ & BQR-EN200 & 0.4221 & 0.2436 & 1.2130 & 0.9394 \\ \hline & HBQR-BL100 & **0.2886** & **0.2203** & 1.175 & 0.9533 \\ & HBQR-EN100 & 0.2945 & 0.2336 & 1.2251 & 0.9522 \\ & HBL100 & **0.2683** & **0.2013** & 1.604 & 0.9946 \\ & BQR-BL100 & 0.2954 & 0.2262 & 2.333 & 0.9992 \\ \(\tau=0.5\) & BQR-EN100 & 0.3273 & 0.2340 & 1.5357 & 0.9733 \\ & HBQR-BL200 & 0.2060 & 0.1591 & 0.7990 & 0.9279 \\ & HBQR-EN200 & 0.2130 & 0.1679 & 0.8332 & 0.9297 \\ & HBL200 & **0.1793** & **0.1386** & 1.0921 & 0.9943 \\ & BQR-BL200 & **0.1926** & **0.1511** & 1.4813 & 0.9992 \\ & BQR-EN200 & 0.1941 & 0.1522 & 1.1176 & 0.9929 \\ \hline & HBQR-BL100 & **0.3791** & **0.2624** & 1.1734 & 0.9066 \\ & HBQR-EN100 & **0.3889** & **0.2822** & 1.2615 & 0.9000 \\ & BQR-BL100 & 0.5761 & 0.3468 & 2.7564 & 0.9525 \\ \(\tau=0.75\) & BQR-EN100 & 0.4697 & 0.3086 & 1.8026 & 0.9433 \\ & HBQR-BL200 & **0.3324** & **0.2188** & 0.8013 & 0.8792 \\ & HBQR-EN200 & **0.3352** & **0.2198** & 0.8442 & 0.8705 \\ & BQR-BL200 & 0.4530 & 0.2498 & 1.7595 & 0.9521 \\ & BQR-EN200 & 0.4087 & 0.2416 & 1.2458 & 0.9437 \\ \hline \end{tabular}
\end{table}
Table 2: Numerical results in Simulation 2.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Methods & RMSE & MMAD & AL & CP \\ \hline & HBQR-BL100 & **0.5676** & **0.4038** & 2.3005 & 0.9259 \\ & HBQR-EN100 & **0.5805** & **0.4285** & 2.5853 & 0.9268 \\ & BQR-BL100 & 0.6854 & 0.4754 & 5.0697 & 0.9524 \\ \(\tau=0.25\) & BQR-EN100 & 0.6147 & 0.4293 & 2.9063 & 0.9430 \\ & HBQR-BL200 & **0.5173** & **0.3732** & 1.8027 & 0.9094 \\ & HBQR-EN200 & **0.5319** & **0.3825** & 2.0427 & 0.9060 \\ & BQR-BL200 & 0.5836 & 0.4033 & 3.7568 & 0.9519 \\ & BQR-EN200 & 0.5542 & 0.3872 & 2.4114 & 0.9422 \\ \hline & HBQR-BL100 & 0.4949 & 0.3576 & 2.370 & 0.9719 \\ & HBQR-EN100 & 0.4958 & 0.3856 & 2.583 & 0.9700 \\ & HBL100 & **0.4525** & **0.3248** & 3.199 & 0.9980 \\ & BQR-BL100 & 0.5023 & 0.3671 & 4.4369 & 0.9992 \\ \(\tau=0.5\) & BQR-EN100 & **0.4910** & 0.3566 & 2.7858 & 0.9871 \\ & HBQR-BL200 & 0.4317 & 0.3197 & 1.8692 & 0.9611 \\ & HBQR-EN200 & 0.4317 & 0.3178 & 2.0474 & 0.9624 \\ & HBL200 & **0.3707** & **0.2719** & 2.4534 & 0.9965 \\ & BQR-BL200 & 0.4053 & 0.3041 & 3.3309 & 0.9992 \\ & BQR-EN200 & **0.3975** & **0.2995** & 2.3221 & 0.9921 \\ \hline & HBQR-BL100 & **0.5563** & **0.3993** & 2.3546 & 0.9300 \\ & HBQR-EN100 & **0.5911** & **0.4240** & 2.7935 & 0.9321 \\ & BQR-BL100 & 0.6650 & 0.4591 & 4.9652 & 0.9531 \\ & BQR-EN100 & 0.5984 & 0.4318 & 3.3417 & 0.9482 \\ & HBQR-BL200 & **0.5241** & **0.3856** & 1.8819 & 0.9114 \\ & HBQR-EN200 & **0.5390** & **0.3926** & 2.1538 & 0.9033 \\ & BQR-BL200 & 0.5786 & 0.4008 & 3.8405 & 0.9522 \\ & BQR-EN200 & 0.5455 & 0.3940 & 2.6987 & 0.9479 \\ \hline \end{tabular}
\end{table}
Table 3: Numerical results in Simulation 3.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Methods & RMSE & MMAD & AL & CP \\ \hline & HBQR-BL100 & **0.2705** & **0.1957** & 1.0598 & 0.9416 \\ & HBQR-EN100 & **0.2803** & **0.2093** & 1.1574 & 0.9408 \\ & BQR-BL100 & 0.3335 & 0.2542 & 2.5406 & 0.9990 \\ \(\tau=0.25\) & BQR-EN100 & 0.3300 & 0.2362 & 1.5557 & 0.9769 \\ & HBQR-BL200 & 0.2265 & **0.1572** & 0.6799 & 0.9038 \\ & HBQR-EN200 & 0.2287 & **0.1584** & 0.7199 & 0.9004 \\ & BQR-BL200 & **0.2168** & 0.1679 & 1.5867 & 0.9979 \\ & BQR-EN200 & **0.2143** & 0.1636 & 1.0815 & 0.9721 \\ \hline & HBQR-BL100 & **0.4965** & **0.2950** & 1.2265 & 0.9193 \\ & HBQR-EN100 & **0.5048** & **0.3049** & 1.2383 & 0.9213 \\ & HBL100 & 0.5434 & 0.3111 & 1.6510 & 0.9450 \\ & BQR-BL100 & 0.5609 & 0.3312 & 2.2801 & 0.9503 \\ \(\tau=0.5\) & BQR-EN100 & 0.5184 & 0.3134 & 1.6965 & 0.9430 \\ & HBQR-BL200 & **0.4407** & **0.2385** & 0.8315 & 0.9029 \\ & HBQR-EN200 & **0.4466** & **0.2314** & 0.8457 & 0.9006 \\ & HBL200 & 0.4927 & 0.2461 & 1.1366 & 0.9446 \\ & BQR-BL200 & 0.5061 & 0.2571 & 1.4968 & 0.9506 \\ & BQR-EN200 & 0.4835 & 0.2515 & 1.1630 & 0.9444 \\ \hline & HBQR-BL100 & **0.8388** & **0.4236** & 1.4484 & 0.9070 \\ & HBQR-EN100 & **0.8417** & **0.4460** & 1.5382 & 0.9005 \\ & BQR-BL100 & 1.1330 & 0.5333 & 2.7729 & 0.9492 \\ & BQR-EN100 & 1.0732 & 0.5514 & 2.1302 & 0.9281 \\ & HBQR-BL200 & **0.7868** & **0.3716** & 1.0323 & 0.8829 \\ & HBQR-EN200 & **0.7940** & **0.3874** & 1.0884 & 0.8676 \\ & BQR-BL200 & 1.0651 & 0.4624 & 1.9991 & 0.9462 \\ & BQR-EN200 & 1.0206 & 0.4729 & 1.5216 & 0.9149 \\ \hline \end{tabular}
\end{table}
Table 4: Numerical results in Simulation 4.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Methods & RMSE & MMAD & AL & CP \\ \hline & HBQR-BL100 & **0.4465** & **0.3012** & 1.4890 & 0.9169 \\ & HBQR-EN100 & **0.4460** & **0.3193** & 1.5992 & 0.9189 \\ & BQR-BL100 & 0.8163 & 0.4688 & 4.3100 & 0.9522 \\ & BQR-EN100 & 0.7197 & 0.3866 & 1.2336 & 0.8401 \\ & HBQR-BL200 & **0.3741** & **0.2456** & 1.0247 & 0.9108 \\ & HBQR-EN200 & **0.3926** & **0.2647** & 1.1039 & 0.9035 \\ & BQR-BL200 & 0.7137 & 0.3841 & 3.1585 & 0.9521 \\ & BQR-EN200 & 0.6376 & 0.3508 & 1.5062 & 0.8983 \\ \hline & HBQR-BL100 & **0.3292** & **0.2257** & 1.4712 & 0.9668 \\ & HBQR-EN100 & **0.2469** & **0.2002** & 1.0604 & 0.9616 \\ & HBL100 & 0.4151 & 0.2771 & 2.3541 & 0.9909 \\ & BQR-BL100 & 0.4577 & 0.3172 & 3.6592 & 0.9963 \\ \(\tau=0.5\) & BQR-EN100 & 0.6727 & 0.3435 & 0.7418 & 0.8184 \\ & HBQR-BL200 & **0.2320** & **0.1780** & 0.9866 & 0.9594 \\ & HBQR-EN200 & **0.2495** & **0.1861** & 1.0505 & 0.9570 \\ & HBL200 & 0.2861 & 0.1996 & 1.7512 & 0.9951 \\ & BQR-BL200 & 0.3155 & 0.2263 & 2.6611 & 0.9990 \\ & BQR-EN200 & 0.4242 & 0.2551 & 1.2599 & 0.9279 \\ \hline & HBQR-BL100 & **0.4315** & **0.2998** & 1.5445 & 0.9321 \\ & HBQR-EN100 & **0.4356** & **0.3011** & 1.7046 & 0.9303 \\ & BQR-BL100 & 0.7818 & 0.4538 & 4.2976 & 0.9649 \\ \(\tau=0.75\) & BQR-EN100 & 0.6703 & 0.3859 & 1.7851 & 0.8868 \\ & HBQR-BL200 & **0.3732** & **0.2442** & 1.0471 & 0.9089 \\ & HBQR-EN200 & **0.3913** & **0.2607** & 1.1202 & 0.9008 \\ & BQR-BL200 & 0.7062 & 0.3789 & 3.2171 & 0.9554 \\ & BQR-EN200 & 0.5920 & 0.3386 & 1.8842 & 0.9287 \\ \hline \end{tabular}
\end{table}
Table 5: Numerical results in Simulation 5.
## 5 Real Data Analysis
The robustness and efficiency of the Bayesian Huberised regularised quantile regression models are demonstrated via the analysis of two benchmarking datasets: Crime data and Top Gear data. They have large outliers. For a better interpretation of the parameters and to put the regressors on the common scale, we standardised all the numerical predictors and response variables to have mean 0 and
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Methods & RMSE & MMAD & AL & CP \\ \hline & HBQR-BL100 & **0.2669** & **0.2034** & 1.0971 & 0.9503 \\ & HBQR-EN100 & **0.2734** & **0.2141** & 1.1773 & 0.9546 \\ & BQR-BL100 & 0.3397 & 0.2591 & 2.2468 & 0.9960 \\ \(\tau=0.25\) & BQR-EN100 & 0.3285 & 0.2438 & 1.4439 & 0.9639 \\ & HBQR-BL200 & **0.1931** & **0.1454** & 0.6920 & 0.9203 \\ & HBQR-EN200 & **0.1978** & **0.1466** & 0.7353 & 0.9235 \\ & BQR-BL200 & 0.2025 & 0.1595 & 1.4260 & 0.9984 \\ & BQR-EN200 & 0.2002 & 0.1557 & 0.9916 & 0.9814 \\ \hline & HBQR-BL100 & **0.2550** & **0.1937** & 1.0816 & 0.9516 \\ & HBQR-EN100 & **0.2634** & **0.2050** & 1.1773 & 0.9546 \\ & HBL100 & 0.4862 & 0.2947 & 1.5152 & 0.9384 \\ & BQR-BL100 & 0.3228 & 0.2480 & 2.1960 & 0.9970 \\ \(\tau=0.5\) & BQR-EN100 & 0.3200 & 0.2370 & 1.4295 & 0.9698 \\ & HBQR-BL200 & **0.4094** & **0.2328** & 0.8321 & 0.8971 \\ & HBQR-EN200 & **0.4147** & **0.2355** & 0.8701 & 0.8959 \\ & HBL200 & 0.4450 & 0.2384 & 1.0562 & 0.9379 \\ & BQR-BL200 & 0.4584 & 0.2502 & 1.3950 & 0.9498 \\ & BQR-EN200 & 0.4392 & 0.2452 & 1.0774 & 0.9363 \\ \hline & HBQR-BL100 & **0.8115** & **0.4248** & 1.4426 & 0.8998 \\ & HBQR-EN100 & **0.8229** & **0.4399** & 1.5531 & 0.8946 \\ & BQR-BL100 & 1.0427 & 0.5067 & 2.5333 & 0.94682 \\ \(\tau=0.75\) & BQR-EN100 & 0.9903 & 0.5243 & 1.8811 & 0.9092 \\ & HBQR-BL200 & **0.7661** & **0.3735** & 1.0210 & 0.8689 \\ & HBQR-EN200 & **0.7734** & **0.3836** & 1.0743 & 0.8630 \\ & BQR-BL200 & 0.9763 & 0.4381 & 1.8219 & 0.9437 \\ & BQR-EN200 & 0.9407 & 0.4492 & 1.3898 & 0.9019 \\ \hline \end{tabular}
\end{table}
Table 6: Numerical results in Simulation 6.
variance 1. As in the simulation studies, we consider all five methods, for each of which we generated 10,000 posterior samples after discarding the first 5,000 samples as burn-in. We then report the posterior medians of the regression coefficients and their 95% credible intervals. For brevity, we drop the names of the predictors and use the corresponding numbers to indicate each predictor. For BQR-BL, BQR-EN, HBQR-BL and HBQR-EN, we set the quantile levels as \(\tau\in\{0.1,0.5,0.9\}\) for the Crime and Top Gear datasets.
Since datasets may contain outliers, we adopt the following four criteria as measures of predictive accuracy; mean squared prediction error (MSPE), mean absolute prediction error (MAPE), mean Huber prediction error (MHPE) for \(\delta=1.345\) and median of squared prediction error (MedSPE) via 10-fold cross validation. They are defined by \(\text{MSPE}=10^{-1}\sum_{j=1}^{10}(\mathbf{y}_{j}-\mathbf{X}_{j}^{T}\hat{ \boldsymbol{\beta}}^{(-j)})^{2}\), \(\text{MAPE}=10^{-1}\sum_{j=1}^{10}|\mathbf{y}_{j}-\mathbf{X}_{j}^{T}\hat{ \boldsymbol{\beta}}^{(-j)}|\), \(\text{MHPE}=10^{-1}\sum_{j=1}^{10}L_{\delta}^{Huber}(\mathbf{y}_{j}-\mathbf{ X}_{j}^{T}\hat{\boldsymbol{\beta}}^{(-j)})\) and \(\text{MedSPE}=\text{median}_{1\leq j\leq 10}(\mathbf{y}_{j}-\mathbf{X}_{j}^{T} \hat{\boldsymbol{\beta}}^{(-j)})^{2}\), where \(L_{\delta}^{Huber}(\cdot)\) is defined by (2), \(\hat{\boldsymbol{\beta}}^{(-j)}\) is the posterior median based on dataset except for \(j\)th validation set, and \(\mathbf{y}_{j}\) and \(\mathbf{X}_{j}\) are the response variables and covariate matrix based on the \(j\)th validation set, respectively.
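To make these cross-validation measures concrete, a minimal NumPy sketch is given below. The fitting routine `fit_fn`, which should return the posterior median \(\hat{\boldsymbol{\beta}}^{(-j)}\) for the data excluding fold \(j\), is a placeholder, the standard form of the Huber loss is assumed for \(L_{\delta}^{Huber}\), and aggregating the errors within each fold by their mean is an assumption, since the notation above leaves the within-fold aggregation implicit.

```python
import numpy as np

def huber_loss(r, delta=1.345):
    """Elementwise Huber loss (standard form assumed)."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta))

def cv_prediction_errors(y, X, fold_ids, fit_fn, delta=1.345):
    """10-fold cross-validated MSPE, MAPE, MHPE and MedSPE."""
    sq, ab, hub = [], [], []
    for j in np.unique(fold_ids):
        test = fold_ids == j
        beta_hat = fit_fn(X[~test], y[~test])       # posterior median beta^(-j), placeholder fit
        resid = y[test] - X[test] @ beta_hat        # prediction errors on fold j
        sq.append(np.mean(resid**2))                # within-fold mean (assumed aggregation)
        ab.append(np.mean(np.abs(resid)))
        hub.append(np.mean(huber_loss(resid, delta)))
    return {"MSPE": np.mean(sq), "MAPE": np.mean(ab),
            "MHPE": np.mean(hub), "MedSPE": np.median(sq)}  # median taken over the 10 folds
```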
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Methods & MSPE & MAPE & MedSPE & MHPE \\ \hline \multirow{4}{*}{\(\tau=0.1\)} & HBQR-BL & 0.0226 & 0.1223 & **0.0044** & 0.0113 \\ & HBQR-EN & **0.0156** & **0.1034** & **0.0071** & **0.0078** \\ & BQR-BL & **0.0157** & **0.1054** & 0.0137 & **0.0078** \\ & BQR-EN & 0.0169 & 0.1285 & 0.0150 & 0.0085 \\ \hline \multirow{4}{*}{\(\tau=0.5\)} & HBQR-BL & **0.0123** & **0.0946** & **0.0067** & **0.0061** \\ & HBQR-EN & **0.0121** & **0.0938** & **0.0073** & **0.0061** \\ & HBL & 0.0304 & 0.1534 & 0.0173 & 0.0152 \\ & BQR-BL & 0.0192 & 0.1008 & 0.0081 & 0.0096 \\ & BQR-EN & 0.0250 & 0.1474 & 0.0185 & 0.0125 \\ \hline \multirow{4}{*}{\(\tau=0.9\)} & HBQR-BL & **0.0400** & **0.1439** & **0.0077** & **0.0200** \\ & HBQR-EN & **0.0278** & **0.1346** & **0.0079** & **0.0139** \\ \cline{1-1} & BQR-BL & 0.0453 & 0.1582 & 0.0106 & 0.0226 \\ \cline{1-1} & BQR-EN & 0.0401 & 0.1629 & 0.0146 & 0.0200 \\ \hline \end{tabular}
\end{table}
Table 7: Mean squared prediction error (MSPE), mean absolute prediction error (MAPE), mean Huber prediction error (MHPE) for \(\delta=1.345\) and median of squared prediction error (MedSPE) for Crime data, computed from 10-fold cross-validation.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & Methods & MSPE & MAPE & MedSPE & MHPE \\ \hline & HBQR-BL & **0.0288** & **0.1500** & **0.0196** & **0.0144** \\ & HBQR-EN & **0.0296** & **0.1505** & **0.0229** & **0.0148** \\ & BQR-BL & 0.0360 & 0.1736 & 0.0331 & 0.0180 \\ & BQR-EN & 0.0331 & 0.1605 & 0.0267 & 0.0166 \\ \hline & HBQR-BL & 0.0127 & 0.0942 & 0.0064 & 0.0064 \\ & HBQR-EN & 0.0120 & 0.0905 & 0.0070 & 0.0060 \\ \(\tau=0.5\) & HBL & **0.0110** & **0.0863** & **0.0055** & **0.0055** \\ & BQR-BL & 0.0183 & 0.1102 & 0.0101 & 0.0092 \\ & BQR-EN & **0.0110** & **0.0864** & **0.0063** & **0.0055** \\ \hline & HBQR-BL & **0.0643** & **0.2309** & **0.0410** & **0.0322** \\ & HBQR-EN & **0.0843** & **0.2662** & **0.0628** & **0.0421** \\ & BQR-BL & 0.6942 & 0.7652 & 0.7337 & 0.3471 \\ & BQR-EN & 0.2290 & 0.4461 & 0.1790 & 0.1145 \\ \hline \end{tabular}
\end{table}
Table 8: Mean squared prediction error (MSPE), mean absolute prediction error (MAPE), mean Huber prediction error (MHPE) for \(\delta=1.345\) and median of squared prediction error (MedSPE) for Top Gear data, computed from 10-fold cross-validation.
Figure 9: Posterior medians and 95% credible intervals of the regression coefficients at \(\tau=0.5\) in the Bayesian quantile regression with Bayesian lasso (BQR-BL), Bayesian quantile regression with elastic net (BQR-EN), the Huberized Bayesian lasso (HBL) and the proposed Bayesian quantile regression with Bayesian lasso (HBQR-BL) and elastic net (HBQR-EN), applied to the Crime data.
### Crime Dataset
The data are collected from Statistical Abstract of the United States for the 50 states and the District of Columbia (U.S. Census Bureau (2006)). This data were analysed in the book of Statistical Methods for the Social Sciences (Agresti and Finlay (1997)). The predictors are the number of murders per 100,000 people in the population, the percentage of the population living in metropolitan areas, the percentage of the population who are white, the percentage of the population who are high school graduates or higher, the percentage of families living below the poverty level, and the percentage of families headed by a single parent (male householders with no wife present and with own children, or female householders with no husband present and with own children). The response of interest is the number of murders, forcible rapes, robberies, and aggravated assaults per 100,000 people in the population. In total, we have 51 observations and included squared variables, which results in 12 predictors in our models.
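As a rough illustration of the preprocessing described above, the sketch below standardises the six raw predictors and the response and appends squared terms to obtain the 12 predictors used in the models; the column names are hypothetical placeholders, and computing the squares from the standardised predictors (and re-standardising them) is an assumption, since the text does not specify the order of these steps.

```python
import pandas as pd

# hypothetical column names for the six raw predictors and the response
PREDICTORS = ["murder_rate", "metro_pct", "white_pct",
              "hs_grad_pct", "poverty_pct", "single_parent_pct"]
RESPONSE = "violent_crime_rate"

def standardise(col):
    return (col - col.mean()) / col.std(ddof=0)     # mean 0, variance 1

def build_crime_design(df: pd.DataFrame):
    """Return the 51 x 12 design matrix and standardised response."""
    X = df[PREDICTORS].apply(standardise)
    X_sq = X.pow(2).add_suffix("_sq").apply(standardise)   # squared terms (assumed order)
    y = standardise(df[RESPONSE])
    return pd.concat([X, X_sq], axis=1), y                 # 12 predictors in total
```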
The posterior medians and 95% credible intervals of the regression coefficients based on the five methods are reported in Figure 9. From the figure, all the methods behave similarly and the estimates are very close. The BQR-BL method produces the widest credible intervals, which suggests that it may be unstable in producing estimates. Similar performance can also be found at \(\tau=0.1\) and \(\tau=0.9\) (see Appendix C). Table 7 also presents the predictive performance of the five methods for \(\tau=0.5\) and of the four Bayesian quantile regression based methods for \(\tau\in\{0.1,0.9\}\). The proposed methods perform better than the existing robust methods at both the median and the upper quantile level. The HBL method produces relatively large error measures in the median case among the rest
Figure 10: Posterior medians and 95% credible intervals of the regression coefficients at \(\tau=0.5\) in the Bayesian quantile regression with Bayesian lasso (BQR-BL), Bayesian quantile regression with elastic net (BQR-EN), the Huberized Bayesian lasso (HBL) and the proposed Bayesian quantile regression with Bayesian lasso (HBQR-BL) and elastic net (HBQR-EN), applied to the Top Gear data.
of methods. Looking at the lower quantile level (\(\tau=0.1\)), MSPE, MAPE and MHPE suggest that HBQR-EN and BQR-BL perform better while MedSPE suggests that both proposed methods perform better. In this case, they are very comparable.
### Top Gear Dataset
The dataset contains information on cars featured on the website of the popular BBC television show Top Gear. It is available in the R package 'robustHD' (Alfons (2021)) and comprises 242 observations on 29 numerical and categorical variables after removing the missing values. A description of the variables is provided in Table 3 of the paper (Alfons et al. (2016)). The response of interest is MPG (fuel consumption) and the remaining variables are predictors. Among the categorical variables, there are 4 binary variables and 12 variables with three levels; each of these 12 variables is encoded by two dummy variables. The resulting design matrix consists of 12 numerical variables, 4 individual dummy variables, and 12 groups of two dummy variables each, giving a total of 40 predictors.
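A minimal sketch of how such a design matrix can be assembled with pandas is shown below; `df` is assumed to be the cleaned 242 x 29 data frame, and selecting categorical columns by dtype is an assumption about how the data are stored.

```python
import pandas as pd

def build_topgear_design(df: pd.DataFrame):
    """Build the 40-column design matrix from the cleaned Top Gear data."""
    y = df["MPG"]
    features = df.drop(columns="MPG")
    numeric = features.select_dtypes(include="number")        # 12 numerical predictors
    categorical = features.select_dtypes(exclude="number")    # 4 binary + 12 three-level factors
    # drop_first=True encodes a k-level factor with k-1 dummies, so binary
    # variables give one dummy each and three-level variables give two each.
    dummies = pd.get_dummies(categorical, drop_first=True, dtype=float)
    X = pd.concat([numeric, dummies], axis=1)                 # expected shape (242, 40)
    return X, y
```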
The posterior medians and 95% credible intervals of the regression coefficients based on the five methods are reported in Figure 10. From the figure, all the methods are comparable. As for the Crime dataset, the BQR-BL method produces the widest credible intervals. Similar performance can also be found at \(\tau=0.1\) and \(\tau=0.9\) (see Appendix C). Table 8 also presents the predictive performance of the five methods for \(\tau=0.5\) and of the four Bayesian quantile regression based methods for \(\tau\in\{0.1,0.9\}\). All the methods are comparable in the median case (\(\tau=0.5\)), where both HBL and BQR-EN have the lowest error measures. At the extreme quantile levels (\(\tau=0.1,0.9\)), the proposed methods outperform the BQR-BL and BQR-EN methods, most markedly at the upper quantile level. Furthermore, BQR-BL has the highest error measures in all cases.
Acknowledgments
This work is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant 2295266 awarded to Brunel University London for Doctoral Training.
|
2302.06950 | Concept of Inverted Refractive-Index-Contrast Grating Mirror and
Exemplary Fabrication by 3D Microprinting | Highly reflective mirrors are indispensable components in a variety of
state-of-the-art photonic devices. Typically used, bulky, multi-layered
distributed Bragg (DBR) reflectors are limited to lattice-matched
semiconductors or nonconductive dielectrics. Here, we introduce an inverted
refractive-index-contrast grating (ICG), as compact, single layer alternative
to DBR. In the ICG, a subwavelength one-dimensional grating made of a low
refractive index material is implemented on a high refractive index cladding.
Our numerical simulations show that the ICG provides nearly total optical power
reflectance for the light incident from the side of the cladding whenever the
refractive index of the grating exceeds 1.75, irrespective of the refractive
index of the cladding. Additionally, the ICG enables polarization
discrimination and phase tuning of the reflected and transmitted light, the
property not achievable with the DBR. We experimentally demonstrate a
proof-of-concept ICG fabricated according to the proposed design, using the
technique of 3D microprinting in which thin stripes of IP-Dip photoresist are
deposited on a Si cladding. This one-step method avoids laborious and often
destructive etching-based procedures for grating structuration, making it
possible to implement the grating on any arbitrary cladding material. | Emilia Pruszyńska-Karbownik, Daniel Jandura, Maciej Dems, Łukasz Zinkiewicz, Artur Broda, Marcin Gȩbski, Jan Muszalski, Dusan Pudis, Jan Suffczyński, Tomasz Czyszanowski | 2023-02-14T10:19:13Z | http://arxiv.org/abs/2302.06950v1 | Concept of Inverted Refractive-Index-Contrast Grating Mirror and Exemplary Fabrication by 3D Microprinting
###### Abstract
Highly reflective mirrors are indispensable components in a variety of state-of-the-art photonic devices. Typically used, bulky, multi-layered distributed Bragg (DBR) reflectors are limited to lattice-matched semiconductors or nonconductive dielectrics. Here, we introduce an inverted refractive-index-contrast grating (ICG), as compact, single layer alternative to DBR. In the ICG, a subwavelength one-dimensional grating made of a low refractive index material is implemented on a high refractive index cladding. Our numerical simulations show that the ICG provides nearly total optical power reflectance for the light incident from the side of the cladding whenever the refractive index of the grating exceeds 1.75, irrespective of the refractive index of the cladding. Additionally, the ICG enables polarization discrimination and phase tuning of the reflected and transmitted light, the property not achievable with the DBR. We experimentally demonstrate a proof-of-concept ICG fabricated according to the proposed design, using the technique of 3D microprinting in which thin stripes of IP-Dip photoresist are deposited on a Si cladding. This one-step method avoids laborious and often destructive etching-based procedures for grating structuration, making it possible to implement the grating on any arbitrary cladding material.
subwavelength gratings, polymer photonics, 3D microprinting
## I Introduction
Reflective elements are key components in photonic and optoelectronic devices, enhancing light-matter coupling effects [1; 2; 3; 4; 5; 6; 7; 8; 9]. Conventional high optical power reflectance mirrors employ distributed Bragg reflectors (DBRs) composed of numerous pairs of layers with quarter-wavelength optical thicknesses and contrasting refractive indices [10]. The epitaxial growth of most semiconductor-based DBRs is technically challenging, due to the low refractive index contrast between materials with similar lattice constants. In turn, distributed Bragg reflectors made of dielectric materials [11] suffer from high thermal resistivity, absence of electrical conductivity, and narrow bands of high transmission making optical pumping difficult. Optical subwavelength structures offer an attractive alternative to multilayer, several-micrometer thick DBRs. A notable example is the high refractive index contrast grating (HCG) [12; 13]. An HCG consists of parallel, thin high-refractive-index stripes that are embedded in a low refractive index surrounding. The stripes can be placed on the top of a thick layer (Fig. 1) made of a low refractive index material, which we call cladding [12; 14], or suspended in air (i.e., a striped membrane) [15]. Its high optical reflection results from the destructive interference of the grating modes, which are confined due to the low refractive index of the surrounding [16]. The advantages of HCGs include their extremely high power reflectance of up to 100% (\(R=1.0\)), a broadband reflection spectrum up to two times wider spectrally than that of semiconductor DBRs [17], and properties that are impossible to achieve using DBRs, such as strong polarization discrimination and phase tuning of reflected light. On the other hand, the fabrication of HCGs involves a multistep procedure, as it typically relies on electron-beam lithography, photolithography, or nanoimprinting. Using these methods, a stripe-like pattern is defined in a photoresist deposited on the surface of the cladding, followed by metal deposition and lift-off to form a protective mask, before the final step of"wet" or "dry" etching [18].
In this paper, we introduce a new design for a highly reflecting subwavelength grating: an inverted refractive index contrast grating (ICG). The design consists of low refractive index grating stripes (\(n_{g}\)) deposited on a high refractive
index cladding layer (\(n_{g}\), Fig. 1). Thus, the low and high refractive indices are \(inverted\) (\(n_{g}<n_{c}\)) with respect to a conventional HCG (\(n_{g}>n_{c}\)).
We start with a theoretical analysis indicating the possibility of high reflectance, even though the low refractive index of the grating precludes the waveguiding that is crucial in the case of HCGs. Subsequently, we verify the theoretical findings by numerical analysis (see Section "Numerical Methods"). For this purpose, we choose a refractive index of the cladding equal to 3.5 and consider two values for the grating refractive index: 1.5 and 2. The value of 1.5 is very low in comparison to the refractive index of materials typically implemented in reflecting metastructures. Nevertheless, we demonstrate a significant level of reflection and point out that the low refractive index grating is suitable for use as a mirror in resonant cavities designed for sensing, enhancement of the spontaneous emission rate, or producing nonlinear effects. The choice of a grating refractive index of 2 is motivated by the demonstration of reflection into the zeroth diffraction order reaching nearly 100%, which enables the realization of mirrors for a very broad range of applications in photonics and optoelectronics. We verify our theoretical and numerical analysis by comparison with experimental reflection spectra of an ICG with a very low refractive index grating, which was 3D microprinted using a photoresist polymer (IP-Dip) on silicon cladding. Microprinting is a versatile alternative to subtractive, multistep, etching-based techniques for producing 3D nanostructures and microstructures [19]. It has been used effectively for the fabrication of various light-harnessing structures, such as 3D photonic crystals, micro-waveguides, and micro-optical elements, including micro-lenses and miniaturized multi-lens objectives [20; 21; 22; 23; 24]. However, there are almost no previous reports of using 3D microprinting on semiconductors to produce subwavelength optical elements. This is due mainly to the common belief that the low refractive indices of polymers used for printing (from 1.5 to 1.58 [25]), as well as the limited spatial resolution of 3D polymer microprinting, preclude the application of microprinting for the fabrication of reflecting subwavelength gratings. In fact, our work shows that 3D microprinting is very well suited for the deposition of subwavelength-scale periodic reflecting structures.
## II Regimes of reflection in inverted- and high refractive index contrast gratings
In general, the high reflectivity of HCGs is a result of the two-mode interference phenomenon [26; 27]. In the subwavelength regime, such gratings support two modes propagating vertically (in the direction perpendicular to the grating plane) that can couple to each other only at the top and the bottom surface of the grating. Total 100% reflection occurs when there is destructive interference of these modes on the output (top) side of the grating. On the input (bottom) side, their superposition can be arbitrary. In most conventional HCGs, high reflectivity can be obtained only when the refractive index of the cladding (\(n_{c}\)) is low enough that only a single diffraction order exists in the reflection [26; 27]. We name the maxima in the reflection spectrum, which are induced by this mechanism Type 1. Examples of these maxima are shown in plots in Fig. 2 showing reflectivity maps of two different ICGs, calculated as a function of the relative wavelength (\(\lambda/L\)) and the cladding refractive index \(n_{c}\) (see Section "Numerical Methods" for details of the method). Their spectral positions depend on \(n_{c}\) and they disappear with the appearance of higher diffraction orders in the cladding. This happens because the two grating modes interfere on the input side, such that the resulting wave couples to higher diffraction orders of the reflected wave, while the low refractive
Figure 1: Configuration of the grating structure, composed of stripes with refractive index \(n_{g}\) implemented on a cladding with refractive index \(n_{c}\) and covered by air from the top. In the case of \(n_{g}<n_{c}\) the grating is the inverted refractive-index-contrast grating (ICG). When \(n_{g}>n_{c}\) the grating represents a conventional high refractive-index-contrast grating (HCG) and when \(n_{g}=n_{c}\) the grating is a monolithic high refractive-index-contrast grating (MHCG). The geometrical parameters of the grating and the coordinate system are indicated.
index cladding prohibits their propagation. Figure 3a illustrates interference on the grating input side of a two-mode solution based on the analytical model proposed in [26], for the Type-1 reflection maximum marked in Fig. 2a at \(n_{c}=1\) and \(\lambda/L=1.079\). The sine-like shape of the superposed field profile indicates strong coupling to the first and higher orders of the reflected wave.
High reflection mechanisms of a different nature are also possible, such as the maximum shown in Fig. 2b for the wavelength \(\lambda/L=1.046\). Based on the same analytical formalism, a single reflection channel related to the zeroth diffraction order can be identified below the first-order diffraction cut-off (Fig. 2b) as the only reflection channel existing in this configuration. The presence of such 100% reflectivity spectral region is independent of the cladding refractive index. The mechanism standing behind this phenomenon is explained in the Supplementary Materials (Section S1). With increasing \(n_{c}\), higher diffraction orders of the reflected wave become possible in the cladding, but the fact that the original zero-order reflection remains unaffected by the change of \(n_{c}\) together with the conservation of energy implies that no light is scattered into the higher-order cladding modes. An important property of this reflection mechanism, which we call Type 2, is the nearly flat superposition of the grating modes in comparison to reflection Type 1, as Figs. 3a and 3b illustrate. In this mechanism, the zero-order component of the grating mode Fourier expansion dominates significantly over other components, enabling 100% reflectivity into the zeroth diffraction order.
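To make the diffraction-order cut-offs marked by the grey lines in Fig. 2 concrete, the short sketch below evaluates the standard grating equation at normal incidence: the \(m\)-th diffraction order propagates in a medium of refractive index \(n\) whenever \(m\lambda/L<n\). This is only a sketch of the textbook relation used to read the maps, not part of the plane-wave solver itself; the example wavelength and indices are taken from Fig. 2.

```python
import numpy as np

def propagating_orders(wl_over_L, n_medium):
    """Diffraction orders m >= 1 that propagate in a medium of index n_medium
    for normal incidence on a grating of period L.

    Order m propagates when m * (lambda / L) < n_medium, so its cut-off
    lies at lambda / L = n_medium / m.
    """
    m_max = int(np.floor(n_medium / wl_over_L))
    return list(range(1, m_max + 1))

# Example: relative wavelength of the Type-2 maximum in Fig. 2b
for n_c in (1.0, 1.5, 2.5, 3.5):
    print(n_c, propagating_orders(1.046, n_c))
# Only the zeroth order exists in air (n = 1), while higher orders open
# successively in the cladding as n_c grows past m * lambda / L.
```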
The last type of high reflectivity mechanism, named Type 3, is responsible for a reflection peak in Fig. 2a for the wavelength \(\lambda/L=1.022\). As will be demonstrated in Section III, Type 3 may appear when the refractive index of the
Figure 3: Interference of the grating modes on the input side for high reflectivity peaks Type 1, 2, and 3 in a), b), and c), respectively. The blue curves illustrate calculated profiles of the individual modes of the grating, while the red ones show their superposition. Electric field intensity is normalized relative to the one of the incident wave. For the reflection maxima of Type 2 and 3, this superposition is nearly flat, indicating a near-perfect elimination of higher diffraction orders, which is not the case for the reflection of Type 1. Gray lines indicate refractive index profile in the grating.
Figure 2: Reflectivity from ICGs with refractive index \(n_{g}=1.5\) a) and 2.0 b) calculated as a function of the relative wavelength (\(\lambda/L\)) and the cladding refractive index \(n_{c}\). The other grating parameters are as follows: \(H/L=0.468\), \(F=0.395\) and \(H/L=0.373\), \(F=0.414\) for a) and b), respectively. \(H\) denotes a height, \(L\) - period and \(F\) - fill factor of the grating. The light incidents from the cladding (bottom) side. There are three qualitatively different mechanisms responsible for high-reflectivity peaks marked in the plots as Type-1, 2, and 3 (see text). Grey lines indicate the cut-offs of the higher-diffraction orders in the cladding.
grating (\(n_{g}\)) is less than 1.75, whereas for such low \(n_{g}\) Type 2 is absent. Type 3 provides less than 100% reflectivity into the zeroth diffraction order, as the interference of the modes is not fully destructive. However, the reflected light propagates almost solely in the zeroth diffraction orders, due to the accidental flattening of the superposition of the grating modes at the input side, as shown in Fig. 3c.
The theoretical analysis presented in this section demonstrates the physical mechanism responsible for the emergence of very high reflectivity in a subwavelength grating whose refractive index is lower than that of the cladding. In the following section we characterize the impact of the ICG geometry on its optical properties using a numerical approach.
## III Numerical verification of inverted refractive index contrast gratings properties
In this section we numerically calculate the power reflectance of a grating with the refractive index \(n_{g}\) deposited on the surface of the semi-infinitely thick monolithic cladding with the refractive index \(n_{c}\) larger than \(n_{g}\) (\(n_{g}<n_{c}\)). A semi-infinite air superstrate is assumed above the grating (see Fig. 1). In the calculations, we consider reflection in the zeroth-diffraction order only and a single period of the grating with periodic boundary conditions, which elongates the grating to infinity in the lateral direction. The normal incidence of the light from the cladding side is assumed. As a reference, we also consider the case of an HCG where \(n_{g}>n_{c}\), as well as the border case of a monolithic HCG (MHCG), where \(n_{g}=n_{c}\)[27; 28].
Figures 4a, d show maps of power reflectance for two exemplary ICGs. Calculations are conducted in the domain of the height of the stripes (\(H\)) and the light wavelength (\(\lambda\)). We consider grating material with a refractive index \(n_{g}=1.5\) (see Fig. 4a) or \(n_{g}=2.0\) (see Fig. 4d) and the same cladding layer (\(n_{c}=3.5\)) in both cases.
The ICG modes are leaky (see Section II), whereas HCG grating modes are not; nevertheless, the reflection pattern visible in the maps resembles to some extent the "checkerboard" pattern also observed for HCGs [27]. As shown in the Supplementary Materials S3, this region is limited by the cutoffs of the TE\({}_{2n}\) modes (from the short-wavelength side) and by the long-wavelength limit, according to waveguide theory [29]. Above the long-wavelength limit, only TE\({}_{0n}\) modes exist and the grating behaves as a quasi-uniform (unstructured) layer. The reflection then resembles a Fabry-Perot interference pattern produced by a uniform layer, without any regions of high power reflectance.
Several regions of high power reflectance are visible in both reflection maps (Fig. 4a, d), confirming the predictions of the theoretical model presented in Sections II. In what follows, we focus on the two power reflectance maxima (PRM) that we name \(B\) and \(A\) (Figs. 4 and Table 1). They feature the smallest height and the broadest width of reflection stopband (WRS), which we define as a reflection stopband above 60%. Therefore they appear to be the most attractive configurations for real-world applications.
In the case of the ICG with \(n_{g}=1.5\), \(A\) and \(B\) PRMs reach more than 80% (Fig. 4a). Both PRMs are located near the mode TE\({}_{20}\) (see Supplementary Fig. S2), which influences the optical field distributions in the grating,
Figure 4: Calculated reflectance map for the inverted contrast grating (ICG) in the domain of wavelength \(\lambda\) and grating height \(H\), both relative to the grating period \(L\). The refractive index of the cladding \(n_{c}\) is 3.5. The refractive index of the grating and fill factor are assumed as \(n_{g}=1.5\), \(F=0.395\) in a) and \(n_{g}=2.0\), \(F=0.414\) in d). In b), c), e), f) the distributions of optical field intensity corresponding to the \(A\) and \(B\) reflection maxima for \(n_{g}=1.5\) b), c) and \(n_{g}=2.0\) e), f) are shown within an ICG illuminated by a plane wave at normal incidence from the cladding side. The parameters of \(A\) and \(B\) maxima are collected in Table 1.
contributing to a single optical field maximum along the \(z\) axis in the region of the grating as illustrated in Figs. 4b, c. Increasing the grating refractive index to \(n_{g}=2.0\) increases the grating reflectance to nearly 100% for \(A\) and \(B\) PRMs and broadens their WRS, as illustrated in Fig. 4d. Light distributions corresponding to \(A\) and \(B\) PRMs in the case of \(n_{g}=2.0\) are illustrated in Figs. 4e, f displaying similar light distribution as in the case of \(n_{g}=1.5\). The geometrical parameters of ICG configurations corresponding to \(A\) and \(B\) PRMs for \(n_{g}=1.5\) and \(n_{g}=2.0\) are collected in Table I. The reflection spectra corresponding to the four maxima are presented in Supplementary Fig. S3.
Closer inspection of the light distributions for the PRMs for \(n_{g}=1.5\) and \(n_{g}=2.0\) reveals that the dominant intensity maximum of the light is located close to the top surface of the stripe. Moreover, the optical field extends into the air above the stripe, independently of its refractive index [30; 31]. There is also a significant build-up of light density in the grating (see Supplementary Materials S3) that may possibly be utilized to enhance light-matter interaction in the region of the grating, as demonstrated in [31]. The light distribution for the \(B\) maximum for \(n_{g}=1.5\) shows additional significant local maxima in the air slit between the stripes (Fig. 4b), which could facilitate interaction of the reflected light with the surroundings, enabling possible sensing applications [32] in proximity to the ICG. Further properties of ICGs are discussed in more detail in the Supplementary Materials S4; here we indicate only the most important conclusions. The first concerns the possibility of high transmission of light incident from the air side. This property, together with the very high reflectance of the zeroth diffraction order when light is incident from the cladding side, is expected to be useful when the ICG constitutes one or both mirrors of a Fabry-Perot cavity subjected to external excitation. Another property of the ICG is the possibility of phase tuning of the reflected light at the level of \(d\phi/d\lambda\approx 10\pi\)rad, which provides a facile method of tuning the resonant wavelength of a cavity with an ICG mirror by modifying the geometrical parameters of the ICG while keeping the cavity thickness constant [33].
A more general picture of the optical performance of the ICG and all possible subwavelength grating configurations is provided in Fig. 5, showing the calculated maximal power reflectance of the gratings in the domain of \(n_{g}\) and \(n_{c}\) for light incident from the cladding side. Magnitude of each point on the map is the largest value for either the \(B\) or \(A\) reflection maximum (see Supplementary Materials S6). The geometrical parameters of the gratings are modified throughout the map, since modifying the refractive index of grating imposes different conditions for the optimal geometrical parameters ensuring the maximal reflectance. Power reflectance of 100% into the zeroth diffraction order is achieved by all HCG configurations that fulfil the condition \(n_{g}>n_{c}\), including the membrane configuration in which the grating is suspended in air (\(n_{c}=1\)). The MHCG configuration (\(n_{g}=n_{c}\)) enables total reflectance when the refractive index of the grating and the cladding is larger than 1.75, in agreement with Ref. [34]. A previously unexplored feature is an apparent ability of the ICG to achieve nearly 100% reflection when \(n_{g}<n_{c}\), which is related to Type 2 reflection as discussed in Section II. The only requirement is that \(n_{g}\) is larger than 1.75. For \(n_{g}>1.75\), the total reflection is found within numerical precision as long as the difference \(n_{c}-n_{g}\) is less than 0.5, while for \(n_{c}-n_{g}>0.5\) the maximal power reflectance into the zeroth diffraction order is not smaller than \(1-10^{-3}\). With a decrease in the refractive index of the grating (\(n_{g}<1.75\)), the power reflectance and WRS also decrease revealing features of Type 3 reflection (see Section II). However, as shown in Fig. 5, an ICG with \(n_{g}<1.75\) still provides power reflectance considerably exceeding the reflectance of the plain surface between the cladding and air. The influence of the refractive index of the grating \(n_{g}\) on the reflection spectrum of ICG is analysed in more detail in the Supplementary Materials S6.
As discussed above, an HCG in which \(n_{g}>n_{c}\) ensures total reflection into the zeroth diffraction order and a wide WRS. However, this configuration requires the implementation of a high refractive index material, such as a semiconductor, on a lower refractive index thick layer, for example a dielectric. This impedes the use of such mirrors in resonant optoelectronic devices, including vertical-cavity surface-emitting lasers, due to practical problems with current injection, heat dissipation, and mechanical stability compared with all-semiconductor configurations. Combining two semiconductor layers of different refractive indices to achieve an HCG is also demanding, due to the
\begin{table}
\begin{tabular}{c c c c c} \hline \hline maximum & \(A\) & \(B\) & \(A\) & \(B\) \\ \hline \(n_{g}\) & 1.5 & 1.5 & 2.0 & 2.0 \\ \(\lambda/L\) & 1.022 & 1.057 & 1.046 & 1.032 \\ \(F\) & 0.395 & 0.397 & 0.414 & 0.447 \\ \(H/L\) & 0.468 & 0.755 & 0.373 & 0.502 \\ \(R\) & 0.840 & 0.810 & 0.996 & \(1-1.8\cdot 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Geometrical parameters of the inverted refractive-index-contrast grating (ICG) for configurations corresponding to \(A\) and \(B\) maxima for \(n_{g}=1.5\) and \(n_{g}=2.0\): \(L\) – period of the grating, \(F\) – fill factor, \(H\) – height of the stripe, \(R\) – optical power reflectance into the \(0^{\text{th}}\) diffraction order for \(n_{c}=3.5\).
typically significant difference in the lattice constants of semiconductor materials with sufficiently high refractive index contrast. Eliminating these problems is possible, in principle, by using an MHCG; however, the fabrication of MHCGs remains a challenge. The concept of an ICG in which a lower refractive index grating is deposited on cladding with a higher refractive index can substantially simplify grating implementation, due to the considerable freedom of forming thin dielectric subwavelength structures on top of semiconductor devices. In particular, the dielectric-semiconductor boundary can be a natural etch-stop that enables better control over the etched structure parameters. An ICG composed of semiconductor cladding and a dielectric grating fabricated using dielectrics with refractive indices higher than 1.75, such as TiO\({}_{2}\) (refractive index in the range of 2.05-2.48 [35]), TaO\({}_{2}\) (2.08-2.3 [36]), or Si\({}_{3}\)N\({}_{4}\) (1.98-2.05 [37]), would allow for nearly 100% reflection into the zeroth diffraction order of the normal incident light from the semiconductor side. If materials of even lower refractive index, such as SiO\({}_{2}\) (1.4-1.5 [38]) or IP-Dip photoresist (1.5-1.58 [25]) are deposited on an arbitrary semiconductor, reflection into the zeroth diffraction order is expected to reach 85% and nearly 98% into all diffraction orders, as will be demonstrated in the next section.
## IV Experimental demonstration of 3D microprinted ICG
To experimentally verify the theoretical model and numerical simulations presented in Sections II and III, we fabricated an ICG using 3D microprinting technique. The low refractive index of IP-Dip \(n_{g}=1.53\)[25] prevents 100% power reflectance into the zeroth diffraction order. However, expected reflectance above 80% is required in a variety of optical and optoelectronic applications, including resonator cavity enhanced light emitting diodes and resonant cavity enhanced photodetectors. Uniquely, 3D microprinting enables flexibility in the fabrication of the ICG, making possible wavelength, phase, and wavefront tuning by tailoring the parameters of the ICG stripes. Figure 6 and Fig. S6 in the Supplementary Materials illustrate an ICG fabricated by 3D microprinting directly on a thick Si wafer with a refractive index of \(n_{c}=3.5\) at a wavelength of 1500 nm.
The ICG was designed for peak TE-polarized reflection at \(\lambda=1500\) nm. The double side polished Si wafer was covered with an antireflective Si\({}_{3}\)N\({}_{4}\) coating, consisting of a single quarter-wavelength thick layer on the surface opposite to the surface on which the ICG was implemented. The ICG was designed with the following parameters: \(\lambda/L=1.022\), \(F=0.4\), \(H/L=0.47\), and \(L=1460\) nm, corresponding to \(A\) PRM in Fig. 4a. The parameters were predicted to provide maximal reflectance, assuming a rectangular cross-section of ICG stripes. The process of grating fabrication is detailed in the section Fabrication Methods.
The actual geometrical dimensions of the processed ICGs were determined by scanning electron microscopy (SEM), and \(H\) was additionally inspected using a confocal optical microscope. For the presented sample, \(L=1460\) nm (determined with 50 nm precision; see Supplementary Materials S7), \(F=0.45\) and \(H/L=0.46\). To validate the numerical analysis, the actual cross sections of the ICG stripes were extracted from the SEM images (see Fig. 6c). The obtained profiles were implemented in the numerical model. In what follows, all numerical results relate to the cross-section shape of the real-world ICG. Reflection maps calculated for the ICG (see Fig. S7a in the Supplementary Materials) show great similarity to the reflection maps of an ICG consisting of stripes with a rectangular cross section (see Fig. 4a). The deviation in the cross-section in our experiment from the rectangular shape does not affect
Figure 5: Map of maximal power reflectance for subwavelength gratings calculated in the (\(n_{c}\), \(n_{g}\)) space. Each point on the map represents maximal reflection of \(B\) and \(A\) PRMs. The white dashed line represents the MHCG configuration (\(n_{c}=n_{g}\)). The region positioned on the left of the dashed line represents an HCG (\(n_{c}<n_{g}\)). The region on the right of the dashed line represents an ICG (\(n_{c}>n_{g}\)). The vertical line at \(n_{c}=1\) represents a membrane suspended in air. The horizontal line at \(n_{g}=1\) corresponds to a plain surface between the cladding and the air.
maximal reflection, but in general it may require modification of the grating parameters to achieve maximal power reflectance [39]. The power reflectance of the ICG with the real-world cross section is discussed in more detail in the Supplementary Materials S7.
The transmission through the ICG sample can be expressed as follows:
\[T_{\mathrm{ICG}} =T_{\mathrm{ar}}e^{-\alpha d}\left(1-R_{\mathrm{ICG}}\right) \tag{1a}\] \[T_{\mathrm{ref}} =T_{\mathrm{ar}}e^{-\alpha d}\left(1-R_{\mathrm{plain}}\right) \tag{1b}\]
where \(T_{\mathrm{ICG}}\) is transmission measured for normal incident light through the ICG, \(T_{\mathrm{ref}}\) is the reference transmission through the neighboring unprocessed plain silicon surface, \(T_{\mathrm{ar}}\) is transmission through the antireflecting coating (which also accounts for any scattering occurring in the wafer and on its surface), \(\alpha\) is the absorption coefficient of the silicon wafer, \(d\) is the thickness of the wafer, and \(R_{\mathrm{plain}}\) is the reflection from the plain interface between Si and air (which is 0.304 at the wavelength of 1500 nm based on Fresnel equations for reflection). With the set of equations (1) the reflectivity of the ICG (\(R_{\mathrm{ICG}}\)) can be extracted directly from the transmission measurements:
\[R_{\mathrm{ICG}}=1-\frac{T_{\mathrm{ICG}}}{T_{\mathrm{ref}}}\left(1-R_{ \mathrm{plain}}\right) \tag{2}\]
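A minimal numerical sketch of this extraction is given below; the two transmission values are placeholder numbers (their absolute scale cancels in the ratio), and the refractive index of silicon at 1500 nm is taken as roughly 3.48, consistent with the \(R_{\mathrm{plain}}\approx 0.304\) quoted in the text.

```python
def fresnel_r_normal(n1, n2):
    """Power reflectance of a plane interface at normal incidence."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def icg_reflectance(T_icg, T_ref, n_si=3.48, n_air=1.0):
    """Extract R_ICG from Eq. (2): R = 1 - (T_ICG / T_ref) * (1 - R_plain)."""
    R_plain = fresnel_r_normal(n_si, n_air)
    return 1.0 - (T_icg / T_ref) * (1.0 - R_plain)

print(fresnel_r_normal(3.48, 1.0))            # ~0.306, close to the quoted 0.304
print(icg_reflectance(T_icg=0.10, T_ref=0.65))  # placeholder transmissions
```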
Figure 7 presents the measured (Fig. 7a) and calculated (Fig. 7b) reflection spectra for various angles of the polarizer rotating from 0 to 90 degrees with a 15 degree step. The extreme angles of rotation represent TE and TM polarisations. In the measurements, only the zeroth diffraction order was transmitted through the grating, due to the dimensions of the subwavelength stripes. Therefore, \(R_{\mathrm{ICG}}\) accounts for all diffraction orders of light reflected by the grating. The experimental reflection spectra for TE polarisation reveal a local maximum at \(\lambda/L=1.027\) (\(\lambda\approx 1500\) nm) that corresponds very well with the numerical results. The measured maximal reflection is close to 90%, which is also close to the numerical simulations, revealing maximal reflection into all diffraction orders at the level of 97%. The TE reflection abruptly reduces towards shorter wavelengths, which is consistent with the calculated reflection map in Fig. 4a, indicating high transmission in this spectral range. Rotation of the polarizer from the position corresponding to TE polarization to the position corresponding to TM polarization reduces the reflectivity to nearly 30% that is a level of a reflectance from the interface between silicon and air. At the wavelength corresponding to the maximal reflectance, TE polarisation reflection is twofold larger than TM polarisation reflection. The calculations show that TE polarization reflection can be fivefold larger than TM polarization reflection. The inconsistencies in the measurements and simulations are typically related to the fabrication precision, which introduces deviations in the grating periodicity. The experimental power reflectance can be enhanced by perfecting the process of ICG fabrication. Overall, our experimental results show very good agreement with calculations and confirm the feasibility of high power reflectance using an ICG.
Figure 6: a), b) Scanning electron microscope (SEM) image of a 3D microprinted IP-Dip grating deposited on Si cladding in two consecutive enlargements; c) profile of the ICG stripes scanned from SEM images and implemented in the software with the grating dimensions indicated.
## V Conclusions
We have presented a new high reflecting mirror design for an inverted refractive index contrast grating, along with a theoretical, numerical, and experimental investigation of its optical performance. By theoretical analysis we demonstrated the possibility of high reflectance independent of the refractive index of the cladding on which the grating is deposited, particularly when refractive index of the grating is lower than the refractive index of the cladding.
By numerical analysis, we showed that the ICG provides almost 100% optical power reflectance for light incident at normal from the cladding layer side toward the air. The only requirement is for the grating to be formed from a material with a refractive index higher than 1.75. The refractive index of the layer below the grating can be arbitrary. When the refractive index of the grating is less than 1.75, the grating still strongly enhances the power reflectance compared to reflection occurring at the plane interface between the cladding layer and air. The ICG enables polarization control of reflected light, with fivefold larger reflection of transverse electric (TE) polarization compared to transverse magnetic (TM) polarization, and facilitates phase tuning of reflected light.
To experimentally verify our numerical analysis, we characterized the optical reflectance of an IP-Dip grating fabricated by 3D microprinting on a thick silicon wafer. Qualitative and quantitative comparison of the measured and calculated power reflectance spectra revealed very good agreement, indicating nearly 90% reflection into all diffraction orders and strong polarization control.
At a more general level, the proposed design and its implementation using an additive-type technique open up new possibilities for the fabrication of subwavelength structures, which are in increasing demand in photonics, optics, and optoelectronics. The fabrication of highly reflective mirrors in the form of 3D microprinted gratings does not require high-vacuum techniques such as vapor deposition or epitaxy, and has the additional advantage of scalability. Thanks to the relaxation of the requirements for the refractive index of the cladding layer hosting the grating, the range of materials that can be applied is extended, making the use of perovskite or organic grating layers possible.
## Numerical Methods
To determine the optical reflectance of the gratings, we use the plane-wave reflection transformation method [40], which is a fully vectorial optical model. Because of the periodicity of the gratings, the electrical field of the electromagnetic wave can be expressed in the form of Bloch waves: \(\Psi(x)=e^{ik_{x}x}f(x)\), where \(f(x)\) is a periodic function with the same period as the grating \(L\), and \(k_{x}\) is the lateral component of the wavevector of the light, ranging from \(-\pi/L\) to \(\pi/L\). In the analysis, we use 60 plane waves that enable numerical relative error below \(10^{-8}\). The model has been shown to have high reliability by comparison with experimental results [41; 42]. In the analysis we consider transverse electric (TE) polarization, where the electric field is parallel to the grating stripes. Transverse magnetic (TM) polarization perpendicular to the grating is not considered here, as the ICG shows significantly lower power reflectance of this polarization.
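As a flavour of how such a plane-wave (Fourier modal) calculation is set up, the sketch below assembles the Toeplitz permittivity matrix of a binary grating and solves the TE eigenproblem for the vertical propagation constants of the grating modes, the quantities behind the two-mode picture of Section II. It is a simplified illustration rather than the full reflection-transformation solver of Ref. [40]; the number of retained orders and the example parameters are assumptions.

```python
import numpy as np

def te_grating_modes(wl_over_L, n_g, fill, n_orders=15):
    """Effective indices n_eff = k_z / k_0 of the vertical modes of a binary
    grating layer (stripes of index n_g, fill factor F, air gaps) for TE
    polarization at normal incidence (k_x = 0), in units where L = 1.
    """
    k0 = 2 * np.pi / wl_over_L                   # free-space wavenumber (L = 1)
    m = np.arange(-n_orders, n_orders + 1)
    kx = 2 * np.pi * m                            # transverse wavevectors of the orders

    eps_g, eps_air = n_g**2, 1.0
    def eps_coef(p):                              # Fourier coefficients of eps(x)
        return np.where(p == 0, fill * eps_g + (1 - fill) * eps_air,
                        (eps_g - eps_air) * fill * np.sinc(p * fill))

    E = eps_coef(m[:, None] - m[None, :])         # Toeplitz permittivity matrix
    A = k0**2 * E - np.diag(kx**2)                # eigenvalues are kz^2
    kz = np.sqrt(np.linalg.eigvalsh(A).astype(complex))
    order = np.argsort(-kz.real)                  # propagating (real kz) modes first
    return kz[order] / k0

# Example: grating of Fig. 2b near its Type-2 maximum
print(te_grating_modes(wl_over_L=1.046, n_g=2.0, fill=0.414)[:4])
```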
Figure 7: Measured a) and calculated b) reflection spectra into all diffraction orders of inverted refractive-index-contrast gratings (ICGs) 3D microprinted using IP-Dip on silicon cladding. The spectra are calculated for the geometrical parameters of the stripes determined experimentally, with the grating defined by the following parameters: \(L=1460\,\mathrm{nm}\), \(F=0.45\), \(H/L=0.46\). The spectra are measured and calculated for polarization modified gradually with a 15-degree step from TE to TM. The spectra of TE and TM polarization correspond to angles of the polarizer of 0 and 90 degrees, respectively.
Fabrication methods
To fabricate the ICG grating, we used the Photonic Professional GT laser lithography system from Nanoscribe GmbH with a 63\(\times\) immersion objective and IP-Dip polymer material. The system uses Er-doped femtosecond frequency-doubled fiber laser emitting pulses at 780 nm wavelength with an approximately 100 MHz repetition rate and 150 fs pulse width. The femtosecond laser is focused into the volume of the IP-Dip photoresist, where the two-photon polymerization process occurs in the volume of the focal spot (voxel). In the fabrication process, the IP-Dip polymer was deposited on top of the silicon substrate and polymerization by laser writing was realized layer by layer in a single-step process. The grating structure on top of the silicon cladding was fabricated using a programmed script in a two-layer arrangement of horizontal stripes, with laser power of 26 mW and a scanning speed of 10000 m/s. For the development of a polymerized structure, PGMEA (propylene glycol monomethyl ether acetate) was applied for 20 min to dissolve and remove the unexposed photoresist. Finally, the sample was rinsed in isopropyl alcohol for 4 min and dried with nitrogen.
## III Measurements
For the transmission measurements, a supercontinuum light source (Leukos SM-30-W; 400-2400 nm) was coupled to the optical fiber, illuminating the sample from the side. The polarizer for 1550 nm was placed between the supercontinuum source and the sample and a rotary stage was used to change the angle of polarization. On the opposite side of the sample, a detection optical fiber was moved precisely toward the sample using an immersion layer. The transmission spectra of the ICG grating were measured using an OceanOptics NIRQuest spectrometer (900-2050 nm) with respect to the reference transmission of the silicon substrate.
## IV Acknowledgements
This work is supported by the Polish National Science Center within the projects OPUS 2018/29/B/ST7/01927, 2017/25/B/ST7/00437 and 2020/39/B/ST7/03502 and by the Slovak National Grant Agency under project No.
Figure 8: Diagram of the experimental setup.
VEGA 1/0363/22.
|
2306.02019 | Generative Adversarial Networks for Data Augmentation | One way to expand the available dataset for training AI models in the medical
field is through the use of Generative Adversarial Networks (GANs) for data
augmentation. GANs work by employing a generator network to create new data
samples that are then assessed by a discriminator network to determine their
similarity to real samples. The discriminator network is taught to
differentiate between actual and synthetic samples, while the generator system
is trained to generate data that closely resemble real ones. The process is
repeated until the generator network can produce synthetic data that is
indistinguishable from genuine data. GANs have been utilized in medical image
analysis for various tasks, including data augmentation, image creation, and
domain adaptation. They can generate synthetic samples that can be used to
increase the available dataset, especially in cases where obtaining large
amounts of genuine data is difficult or unethical. However, it is essential to
note that the use of GANs in medical imaging is still an active area of
research to ensure that the produced images are of high quality and suitable
for use in clinical settings. | Angona Biswas, MD Abdullah Al Nasim, Al Imran, Anika Tabassum Sejuty, Fabliha Fairooz, Sai Puppala, Sajedul Talukder | 2023-06-03T06:33:33Z | http://arxiv.org/abs/2306.02019v2 | # Generative Adversarial Networks for Data Augmentation
###### Abstract
One way to expand the available dataset for training AI models in the medical field is through the use of Generative Adversarial Networks (GANs) for data augmentation. GANs work by employing a generator network to create new data samples that are then assessed by a discriminator network to determine their similarity to real samples. The discriminator network is taught to differentiate
between actual and synthetic samples, while the generator system is trained to generate data that closely resemble real ones. The process is repeated until the generator network can produce synthetic data that is indistinguishable from genuine data. GANs have been utilized in medical image analysis for various tasks, including data augmentation, image creation, and domain adaptation. They can generate synthetic samples that can be used to increase the available dataset, especially in cases where obtaining large amounts of genuine data is difficult or unethical. However, it is essential to note that the use of GANs in medical imaging is still an active area of research to ensure that the produced images are of high quality and suitable for use in clinical settings.
Medical imaging, diagnosis, Generative Adversarial Networks, augmentation, data generation.
## 1 Introduction
Data augmentation is an important technique in medical image analysis for improving the robustness and generalization of models. Numerous studies have applied data augmentation techniques, including Generative Adversarial Networks (GANs), to medical images to generate realistic images. Popular examples are described in the conference papers of Abdelhalim [1] and Sun et al. [27], which show how GANs can be used for data augmentation. Abdelhalim [1] explains how to create generated images of skin lesions using a GAN to enhance the data. Sun et al. [27], on the other hand, employ GANs to create artificial medical images for data augmentation and show how well this performs on a lung nodule identification task. These studies highlight the potential of GANs for data augmentation in medical image analysis, as well as the necessity of rigorously examining the produced images to ensure their quality and appropriateness for clinical application.
Figure 1 represents the process of augmentation using a GAN. In the paper [24], the training and testing procedure shown in Figure 1 utilizes both actual data and data produced by the GAN. The input data is split into three groups: training data, which makes up 70% of the total, testing data, which makes up 30% of the total, and GAN data, which makes up 11.75% of the training data and 8.2% of the fault machine data.
It is crucial to remember that data augmentation is just one aspect of building robust medical image analysis models, and a thorough evaluation of multiple techniques and models is needed for each specific task and dataset. The most important component of AI applications is data. A lack of sufficient labeled data frequently results in overfitting, which prevents the model from generalizing to new samples. This can be lessened through data augmentation, which effectively increases the volume and diversity of data that the network observes. Data augmentation is accomplished by applying changes such as rotation, cropping, shadowing, etc., to an original dataset in order to artificially create new, pseudo-realistic data that resemble the input images. However, figuring out which augmentations will be most effective for a given dataset is not an easy process.
For many years, the method of data augmentation has been employed extensively in machine learning and computer vision. By creating additional samples from the existing ones, data augmentation aims to artificially expand the size of the training dataset. This can lessen overfitting and increase the resilience and generalizability of models. The value of data augmentation in enhancing the performance of deep learning algorithms on computer vision tasks has been demonstrated by numerous studies, such as that of Krizhevsky et al. [18], who used data augmentation methods like random cropping and flipping to expand the ImageNet collection. Since then, data augmentation has become a widely adopted technique in machine learning, with many studies proposing new data augmentation methods and evaluating their performance on various tasks. Among the recent works in data augmentation is that of Cubuk et al. [8], which learns augmentation policies from data: they utilized reinforcement learning to automatically learn optimal data augmentation strategies for a specific task, and they also propose a data augmentation technique based on the random mixing of image patches from different samples. In conclusion, data augmentation has a long history in machine learning and computer vision, and it remains an active field of study with new approaches and methodologies being put forward often. Pseudo images that resemble the original ones are generated by
following data augmentation methodologies; such images can be produced from the input data using the approaches listed below (a minimal code sketch follows the list):
Flipping: This involves flipping the image horizontally or vertically to generate a new sample.
Rotation: This involves rotating the image by a random angle to generate a new sample.
Scaling: This involves resizing the image to generate a new sample.
Translation: This involves shifting the image by a random offset to generate a new sample.
Cropping: This involves randomly cropping a portion of the image to generate a new sample.
Color augmentation: This involves randomly changing the brightness, contrast, or saturation of the image to generate a new sample.
Mixup: This involves mixing two different images to generate a new sample.
Cutout: This involves randomly masking out a portion of the image to generate a new sample.
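The snippet below is a minimal sketch of how several of these transformations could be composed with torchvision, together with a hand-written mixup step; the choice of torchvision, the image size, the transformation parameters, and the mixing coefficient are illustrative assumptions rather than settings recommended by the cited studies.

```python
import torch
from torchvision import transforms

# One possible composition of the classical augmentations listed above
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                     # flipping
    transforms.RandomRotation(degrees=15),                      # rotation
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),        # scaling + cropping
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),   # translation
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),                            # cutout-style masking (on tensors)
])

def mixup(x1, x2, y1, y2, alpha=0.4):
    """Mix two image tensors and their (one-hot or soft) labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```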
These techniques have been widely used in computer vision and machine learning and have proven effective in improving the robustness and generalization of models. Different data augmentation techniques in computer vision have been evaluated in recent studies, including those by Xiao et al. [19] and Cubuk et al. [8]. The work of Xiao et al. [19] examines various data augmentation techniques, including flipping, rotation, and scaling, on image classification tasks. These studies demonstrate the importance of data augmentation in computer vision and machine learning and highlight the need for careful evaluation of different data augmentation techniques for each specific task and dataset.
A more recent approach to data augmentation in computer vision is the generative adversarial network (GAN). A GAN is a deep learning model made up of a generator network and a discriminator network. The discriminator network is taught to discern between generated samples and genuine ones, while the generator network is trained to produce new samples that are indistinguishable from the original ones. The generator and discriminator are trained in an adversarial fashion, with the generator attempting to produce samples that deceive the discriminator and the discriminator attempting to accurately determine whether a sample is genuine or fabricated.
One potential advantage of GANs for data augmentation is that they can generate new samples that are diverse and representative of the original dataset, which can benefit the robustness and applicability of models trained on the augmented data. Recent studies by Cubuk et al. [8] and Chen et al. [7] emphasize the potential of GANs for data augmentation in computer vision, proposing a GAN-based data augmentation approach for classifying medical images and a GAN-based augmentation method for segmenting images, respectively. These works showcase the promise of GANs for data augmentation in computer vision and medical image analysis, while highlighting the need for further investigation and assessment of GANs in these fields.
In recent years [14], both medical image analysis and artificial intelligence have experienced significant growth and development. The development of AI models that can carry out a variety of computer-aided diagnosis tasks, such as
Figure 1: Distribution of various dataset types (a) Dataset with the sufficient sample (b) Dataset with a poor sample size. [24]
classification, segmentation, image registration, and image synthesis, has been made possible by the increasing accessibility of large medical image datasets and advancements in deep learning algorithms. These AI models have the potential to significantly impact healthcare by providing faster, more accurate, and more cost-effective solutions for medical image analysis. However, there are also important challenges that must be addressed, including the need for high-quality annotated datasets, the development of robust and interpretable models, and the need for careful validation and evaluation of the models. Recent studies that specifically evaluated the performance of AI models in medical image analysis discuss these challenges further. For example, Chen et al. [7] evaluate the performance of AI models for disease classification and localization on a large dataset of chest X-rays, and also present a deep learning architecture for pulmonary nodule detection in CT images, evaluating its performance on a large dataset of medical images. These studies demonstrate the potential of AI for medical image analysis and highlight the need for continued research and development in this field.
This chapter will cover the scarcity of medical data and how generative adversarial networks (GANs) may help address it.
The shortage of data in the medical field presents a significant obstacle for artificial intelligence and underscores the challenges of medical image analysis. Gathering high-quality annotated medical images can be time-consuming, costly, and subject to ethical and legal restrictions. This may result in a lack of data for developing and testing AI models, which can affect their effectiveness and generalizability.
Several methods have been put forward to augment medical data and address this problem, including generative models such as Generative Adversarial Networks (GANs) and Variational Auto-encoders (VAEs). This chapter will discuss the problem of limited data and possible remedies using GANs and VAEs. After being trained on a small dataset of medical images, these models can be used to create new synthetic images that are comparable to the original ones. The original dataset may then be supplemented with the synthetic images, expanding the amount and diversity of the data available for developing and testing AI models.
## 2 Literature Review
Medical image analysis and artificial intelligence (AI) have become increasingly important in recent years for improving the accuracy and efficiency of medical diagnosis and treatment [4]. AI algorithms can be trained on large amounts of medical data to automatically detect and diagnose medical conditions, such as diseases, tumors, and abnormalities, based on medical images. The use of AI in medical image analysis has the potential to revolutionize the healthcare industry by providing more accurate and timely diagnoses, reducing the workload of medical professionals, and improving patient outcomes.
Some examples of AI applications in medical image analysis include computer-aided diagnosis (CAD), image segmentation, image registration, and image synthesis. CAD involves using AI algorithms to assist medical professionals in making diagnoses based on medical images. Image segmentation involves separating an image into different regions or
Figure 2: Distribution of various dataset types (a) Dataset with a sufficient sample (b) Dataset with a poor sample size [24]
objects of interest. Image registration involves aligning and matching different images of the same patient over time. Image synthesis involves generating new images from existing images for data augmentation, improved visualization, and model training. Several studies have demonstrated the potential of AI in medical image analysis and have shown promising results in various applications.
### Artificial Intelligence in the Context of Medical Images
The effective utilization of artificial intelligence (AI) on medical images is crucial for realizing the potential benefits of AI in healthcare, such as improved accuracy and efficiency of medical diagnosis and treatment. To achieve this, several factors must be considered, including data quality, model design, and evaluation methods. Data quality is a key factor in the effective utilization of AI on medical images. Medical images should be of high quality and accurately annotated to ensure that AI algorithms can effectively learn from the data. Additionally, large amounts of medical data are needed to train and evaluate AI models, and the data should be diverse and representative of the population of interest. For example, Cai et al. [6] propose a deep learning-based computer-aided diagnosis system and demonstrate its effectiveness on a benchmark dataset, while Chen et al. [7] compare the performance of fully trained deep convolutional neural networks (DCNNs) and fine-tuned DCNNs for medical image analysis and demonstrate the effectiveness of fine-tuning for improved performance.
Model design is another important factor in the effective utilization of AI on medical images. AI models should be designed with the specific application in mind, taking into account the type of medical image and the desired output. For example, AI models for medical image segmentation should be designed to accurately distinguish between different regions or objects of interest in an image. Evaluation methods are also critical for the effective utilization of AI on medical images. Among recent studies, Cai et al. [6] compare the performance of various deep learning algorithms for medical image segmentation and demonstrate the effectiveness of deep learning for this application, and in [13] a collaborative federated learning system has been introduced that enables deep-learning image analysis and classification of diabetic retinopathy without transferring patient data between healthcare organizations. Models generated using artificial intelligence should be evaluated on independent datasets to ensure their generalizability and reliability. Additionally, the evaluation metrics should be relevant to the specific application; for example, sensitivity and specificity for medical diagnosis, and the Dice similarity coefficient for medical image segmentation. These studies demonstrate the potential of AI for the effective utilization of medical images in healthcare and highlight the importance of considering data quality, model design, and evaluation methods.
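For concreteness, the two kinds of evaluation metrics mentioned above (sensitivity/specificity for diagnosis and the Dice similarity coefficient for segmentation) can be computed as in the following NumPy sketch; the small epsilon terms are added only to avoid division by zero and are not part of the metric definitions.
```python
# Simple implementations of common evaluation metrics for medical image tasks.
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

def sensitivity_specificity(pred_labels, true_labels, eps=1e-7):
    """Sensitivity (true positive rate) and specificity (true negative rate)."""
    tp = np.sum((pred_labels == 1) & (true_labels == 1))
    tn = np.sum((pred_labels == 0) & (true_labels == 0))
    fp = np.sum((pred_labels == 1) & (true_labels == 0))
    fn = np.sum((pred_labels == 0) & (true_labels == 1))
    return tp / (tp + fn + eps), tn / (tn + fp + eps)
```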
### Issue of Scarcity of Medical Data
The scarcity of medical data is a major challenge in the field of healthcare and medical research. Medical data refers to the information generated by medical devices, such as imaging equipment, and other sources, such as electronic health records, as described by Nasim et al. [26]. The scarcity of medical data can limit the ability of medical professionals to make accurate diagnoses and develop effective treatments, as well as hinder the development of new medical technologies and research.
The primary reasons for data scarcity in the medical field include data privacy concerns, limited data sharing between institutions, and the cost of data collection and storage. Data privacy concerns make it difficult for medical researchers and companies to access large amounts of medical data for their research and development activities. Limited data sharing between institutions also restricts the availability of medical data, as each institution has its own data collection, storage, and access policies. Finally, the cost of data collection and storage can be prohibitive for many institutions, particularly those with limited resources. Medical images are far fewer in number than other forms of data, such as text and numerical data, for several reasons, some of which are briefly mentioned below:
Data Privacy Concerns: Medical images often contain sensitive personal information and are subject to strict privacy regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States. This can make it difficult for researchers and organizations to access large amounts of medical image data [30].
Cost of Data Collection and Storage: Medical imaging, such as MRI and CT scans, require specialized equipment and trained personnel, and the cost of acquiring and storing these images can be high. This limits the number of medical images that can be collected and stored.
Limited Data Sharing between Institutions: Each institution owns its data and maintains its own policies for accessing these records. The primary reason for such policies is to secure medical information, which is highly personal and which institutions prefer to keep confidential. Accessing these data can therefore be tedious and time-consuming, even for researchers.
Annotation Requirements: Medical images often require manual annotation by medical experts, which can be time-consuming and costly. The need for accurate annotations also limits the number of medical images that can be used for training and evaluating machine learning models.
Despite these challenges, efforts are being made to increase the amount of medical image data available, such as data augmentation, synthetic data generation, and federated learning. These approaches aim to overcome the limitations of the scarce amount of medical images and enable the effective utilization of AI in healthcare.
Data augmentation involves generating new data from existing data, while synthetic data generation involves creating artificial data that mimics real-world data. Federated learning involves training machine learning models on multiple institutions' data without actually sharing the data, thereby addressing privacy concerns. Here are a few advancements in federated learning [28, 29, 21].
Despite these efforts, the scarcity of medical data remains a major challenge and requires ongoing research and development to overcome. This is important for improving medical diagnosis and treatment, as well as advancing the field of medical research and technology.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Reference** & **Publication Year** & **Workflow** & **Outcome** \\ \hline
[7] & 2022 & The goal of this study is to synthesize several retinal (or neuronal) pictures with realistic appearances from a tubular structured annotation that is hidden and contains the binary vascular (or neuronal) shape. It was inspired by current developments in generative adversarial networks (GANs) and visual style transfer. Required 10 training instances. & The effectiveness of the suggested strategy is supported by extensive experimental evaluations on several retinal fundus and neural imaging applications. \\ \hline
[12] & 2022 & Researchers have enhanced classification by supplementing data using noise-to-image or image-to-image GANs, which may synthesize realistic/diverse extra training pictures to fill the data gap in the true image distribution. Two-step GAN-based DA that creates and improves brain Magnetic Resonance (MR) pictures in order to enhance the DA impact using the GAN combinations. & \\ \hline
[22] & 2015 & The authors suggest testing the conditional distributions learnt by applying common classification criteria to a conditional version of their model. Using a nearest neighbor classifier to compare actual data to a collection of artificially created conditional samples, they trained a DC-GAN using MNIST (splitting off a 10K validation set) and a permutation invariant GAN baseline. & \\ \hline
[5] & 2018 & The use of GAN-derived synthetic pictures to supplement training data has been examined in this article, and it has been shown to increase performance on two segmentation tasks. The strategy has been demonstrated to perform well in situations with sparse data, whether due to a dearth of accurate data or as a result of class imbalance. & According to a cautious interpretation of the findings from the tasks investigated here, supplementing 5–50 labelled picture volumes with an extra 10–100\% GAN-derived synthetic patches has the potential to significantly enhance DSC. \\ \hline \end{tabular}
\end{table}
Table 1: Related work in a nutshell
### Data Augmentation
Image augmentation is a technique used to artificially increase the size of a dataset by generating new images based on existing ones. The purpose of image augmentation is to reduce overfitting, improve generalization, and increase the robustness of machine learning models when applied to image recognition tasks. Image augmentation is commonly used in computer vision and medical imaging. There are various types of image augmentation techniques, including rotation, scaling, flipping, cropping, and color transformation. These techniques can be applied to images randomly or in a controlled manner to generate new, augmented images. The choice of augmentation techniques depends on the nature of the task and the type of data being used. Image augmentation is very useful for many reasons. Some of the common problems while dealing with deep learning or machine learning algorithms are Overfitting and Underfitting. Overfitting occurs when a machine learning model is too closely fit to the training data and performs poorly on new, unseen data. Image augmentation can help reduce overfitting by generating new, augmented images that can be used to train the model, making it more robust and generalizable. Generating these pseudo images using data augmentation techniques based on existing ones could help improve the generalization of machine learning models to new, unseen data. This can increase the robustness of the models and reduce the risk of poor performance on real-world data. Image augmentation can be used to simulate real-world scenarios, such as changes in lighting conditions, and to test the robustness of machine learning models to these conditions. This can help ensure that the models are suitable for deployment in real-world applications. Image augmentation can also be used to balance the distribution of classes in a dataset, which can be important for avoiding bias in machine learning models.
In medical and clinical imaging, image augmentation is used to address the scarcity of annotated medical images. In clinical imaging, it is important to have large datasets of annotated images to train machine-learning models for various tasks, such as disease diagnosis, lesion segmentation, and treatment planning. However, obtaining annotated medical images can be challenging for ethical and logistical reasons. Image augmentation can also be used to simulate real-world scenarios, such as changes in lighting conditions, and to test the generalization of models to new, unseen data. Overall, image augmentation is a powerful tool for addressing the scarcity of data in computer vision and medical imaging, and it has the potential to improve the performance and robustness of machine learning models applied to these fields.
Generative Adversarial Networks (GANs) are commonly used for image generation tasks, where the goal is to create artificial images that resemble genuine ones. The generator is trained to produce synthetic images that are indistinguishable from genuine images, while the discriminator is trained to distinguish between real and synthetic images.
In the realm of image generation [2], GANs have been applied to a variety of tasks, including creating synthetic pictures of people, objects, and environments. These applications have shown that GANs are capable of producing high-quality synthetic images that closely resemble genuine photographs and are effective at generating images that are hard to obtain or not readily available in large quantities.
### Concept of General Adversarial Network
A Generative Adversarial Network consists of two parts, a generator and a discriminator, in a deep learning architecture, as described by Ali et al. [3]. The discriminator is taught to distinguish between the synthetic images produced by the generator and actual images from the target dataset, while the generator is trained to produce artificial images that are comparable to the target dataset.
The discriminator seeks to reliably determine whether an image is genuine or synthetic, while the generator strives to create artificial images that are indistinguishable from real ones. The generator and discriminator are trained together in a game-like, adversarial setting: as the discriminator gets better at telling real images from fake ones, the generator gets better over time at creating fake images that look real, as presented by Sauber-Cole et al. [25].
The use of Generative Adversarial Networks (GANs) for image generation tasks is widespread; the goal is to create artificial images that resemble genuine ones. The generator is trained to produce synthetic images that are indistinguishable from genuine images, while the discriminator is trained to discriminate between real and synthetic images. Such applications have shown that GANs can produce high-quality synthetic images that closely resemble genuine photographs and are effective at generating images that are hard to obtain or not readily available in large quantities. These synthetic images can be used to augment the size of a dataset, improving the accuracy and robustness of machine learning models.
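As a concrete illustration of the generator-discriminator pair described above, the following is a minimal PyTorch sketch in the spirit of DCGAN; the latent dimension, layer widths, single-channel 64x64 image size, and activation choices are illustrative assumptions, not an architecture prescribed by the works cited in this chapter.
```python
# Minimal generator/discriminator sketch (DCGAN-style), for illustration only.
import torch
import torch.nn as nn

LATENT_DIM = 100  # size of the input noise vector (illustrative choice)

class Generator(nn.Module):
    """Maps a noise vector to a 64x64 single-channel synthetic image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),          # output in [-1, 1]
        )
    def forward(self, z):
        return self.net(z.view(z.size(0), LATENT_DIM, 1, 1))

class Discriminator(nn.Module):
    """Scores a 64x64 image as real (high logit) or synthetic (low logit)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 1, 8, 1, 0),   # -> one logit per image
        )
    def forward(self, x):
        return self.net(x).view(-1)
```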
Yet conventional data augmentation techniques yield only a limited amount of credible additional data. The performance of CNNs has been enhanced by the use of Generative Adversarial Networks (GANs), which produce fresh data; however, compared to CNNs, data augmentation methods for training GANs themselves are less studied. In this study [20], the authors suggest a novel GAN architecture for augmenting chest X-rays for semi-supervised COVID-19 and pneumonia identification. According to the authors' results, the suggested GAN can efficiently augment data and enhance disease classification accuracy in chest X-rays for pneumonia and COVID-19. The construction of the suggested IAGAN's generator is shown in Fig. 3. The generator (G) receives a batch of actual training images and a Gaussian noise vector as input at each cycle. The authors aim not only to use the full image representation through the discriminator, but also to obtain a lower-dimensional representation of the images fed through the generator, for better generalizability of G in generating image data: the input images are first encoded with convolution and attention layers into a lower-dimensional representation, which is then concatenated with the projected noise vector (the concatenation happens after the noise vector passes through a dense layer and non-linearity). Thanks to this dual input, the trained generator may employ images from several classes and produce a wider variety of images.
Once the training process is complete, the generator can be used to generate synthetic images that are similar to the target dataset. These synthetic images can be used to augment the size of the dataset, improving the accuracy and robustness of machine learning models. GANs have been applied to a wide range of applications, including image generation, data augmentation, and semi-supervised learning [15]. Overall, GANs are a powerful tool for synthesizing new, synthetic images that are similar to real images and are effective for data augmentation in various fields, including medical imaging.
### GAN concept for medical images
The Generative Adversarial Network (GAN) is a popular deep learning architecture that has been used effectively in several fields, including medical imaging. By creating artificial images that closely resemble genuine ones, GANs can be employed in medical imaging to supplement the small quantity of annotated clinical data available, as stated by Goodfellow in his book [11].
A generator and a discriminator are the two primary parts of a GAN, as explained in detail by Han et al. [33]. The generator is trained to produce synthetic images that are indistinguishable from genuine ones, whereas the discriminator tries to discern the difference between real and fake images. The generator and discriminator are trained in an adversarial way, with the generator attempting to deceive the discriminator into believing the fake images are genuine and the discriminator attempting to accurately identify the real images. GANs can be used to supplement the training data, boosting its variety and lowering the likelihood of overfitting. The generated artificial images may also be used to investigate the latent space of the data distribution and to assess how reliable computer vision algorithms are.
GANs have been used for a variety of tasks in medical imaging, including image synthesis, super-resolution, image-to-image translation, and data augmentation. For instance, GANs have been used to convert MR images across different modalities and to create high-resolution MR images from low-resolution MR images [2]. It is important to remember that both the richness of the training data and the complexity of the generator architecture affect the accuracy and realism of the synthetic images produced by GANs, as discussed by Radford in his paper on GANs with unsupervised representation learning [22]. Additionally, GANs are heavily data-driven and may not adapt well to new data distributions, particularly in medical imaging where the data can be quite diverse and sensitive to inter-patient variability.
Figure 3: Distribution of various dataset types (a) Dataset with the sufficient sample (b) Dataset with a poor sample size.[24]
## 3 Methodology
A significant obstacle in the realm of medical image analysis is the dearth of medical data. The purpose of this study is to investigate the use of variational auto-encoders (VAEs) and generative adversarial networks (GANs) to augment medical data [17]. The literature on the application of GANs and VAEs for data augmentation will be thoroughly reviewed. This review will focus on the use of these methods in the area of medical image analysis and will cover recent research publications, technical reports, and conference proceedings.
### Data Collection
For experimentation, the project will employ a publicly accessible medical imaging collection, such as an MRI, CT scan, or chest X-ray dataset. One portion of the dataset will be used for training and the other portion for testing. Data collection is a primary and crucial step before training a deep learning model based on generative adversarial networks (GANs). The effectiveness of GANs and their capacity to produce plausible synthetic data can be considerably affected by the quality and diversity of the data used to train them, as noted by Goodfellow et al. [11]. Gathering data for GANs involves several steps, and the selection of the data is crucial: choosing data pertinent to the task at hand is the first step. A publicly accessible image collection, such as MNIST or a chest X-ray dataset, might be employed in the case of healthcare image analysis. Ensuring that the gathered data are diverse and appropriate for the task at hand is essential, as this allows the GANs to generalize better and provide high-quality synthetic data.
### MRI Pre-processing
Once the data have been selected, they need to be preprocessed to prepare them for training. This may involve resizing the images to a standard size, normalizing the pixel values, and removing any irrelevant information from the images. Image preprocessing is an important step before training Generative Adversarial Networks (GANs), as it can significantly impact the performance and the ability of the GANs to generate realistic synthetic data, as discussed by Karras et al. in their paper on analyzing and improving image quality [16].
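As an illustration, a typical preprocessing pipeline of this kind might look as follows; the grayscale conversion, target resolution, normalization constants, and file name are assumptions chosen to match a small GAN with a Tanh output, not values tied to a specific dataset.
```python
# Illustrative preprocessing: resize to a fixed resolution and map pixels to [-1, 1].
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),   # e.g. chest X-rays are single channel
    transforms.Resize((64, 64)),                   # fixed size expected by a small GAN
    transforms.ToTensor(),                         # scales pixel values to [0, 1]
    transforms.Normalize(mean=[0.5], std=[0.5]),   # maps [0, 1] -> [-1, 1]
])

img = preprocess(Image.open("example_scan.png"))   # hypothetical file name
```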
### Workflow of GAN for Data Augmentation
To implement the GANs and VAEs, deep learning frameworks like TensorFlow or PyTorch will be used. The training dataset will be used to train the GANs and VAEs, and the testing dataset will be used to assess their performance. The preprocessed data may be used with data augmentation techniques to add variation to the training data and to guard against overfitting. Random picture flips, translations, and rotations are common data augmentation techniques.
A training set as well as a validation set should be created from the preprocessed and enhanced data. The validation set is used to assess how well the GANs performed while they were being trained, whereas the training set is used to train the GANs.
Figure 4: The model training process pipeline. Given the supplied image, the generator first creates ground-glass nodules from the background. Second, to determine whether the synthetic image is real or not, the region of interest (ROI) discriminator (red line) and the whole-image discriminator (blue line) extract features from the ROI and the complete image, respectively. [31]
#### 3.3.1 General Algorithm.
A generator and a discriminator are the two primary parts of the deep learning technique known as generative adversarial networks (GANs). To create fake data that mimics genuine data, these elements are trained in an adversarial way.
The basic steps involved in a GAN algorithm are:
1. Data Preparation: The training data is collected and preprocessed.
2. Generator: The generator is a neural network that maps a random noise vector to a synthetic data sample. The generator is trained to generate synthetic data that resembles real data.
3. Discriminator: The discriminator is a neural network that distinguishes between real and synthetic data. The discriminator is trained to correctly identify real data and to reject synthetic data.
4. Adversarial Training: The generator and the discriminator are trained simultaneously, in an adversarial manner. The generator tries to generate synthetic data that the discriminator cannot distinguish from the real data, while the discriminator tries to correctly identify the real data and reject the synthetic data generated by the generator.
5. Synthetic Data Generation: After the training process is completed, the generator can be used to generate synthetic data that resembles real data.
The quality of the synthetic data generated by a GAN depends on the complexity of the generator and discriminator, the quality of the training data, and the number of iterations performed during the training process.
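The five steps above can be condensed into a compact training loop. The sketch below assumes the Generator and Discriminator classes from the earlier sketch and a PyTorch DataLoader named real_loader that yields batches of preprocessed real images; the optimizer settings, label conventions, and epoch count are illustrative choices, not values taken from the cited works.
```python
# Sketch of adversarial training (steps 2-4 above); `real_loader` is assumed to
# yield batches of preprocessed real images, and Generator/Discriminator come
# from the earlier illustrative sketch.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
G, D = Generator().to(device), Discriminator().to(device)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

for epoch in range(100):                       # number of epochs is an arbitrary choice
    for real in real_loader:
        real = real.to(device)
        b = real.size(0)
        z = torch.randn(b, LATENT_DIM, device=device)

        # Discriminator step: real images -> label 1, synthetic images -> label 0.
        fake = G(z).detach()
        loss_d = bce(D(real), torch.ones(b, device=device)) + \
                 bce(D(fake), torch.zeros(b, device=device))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: try to make D label synthetic images as real.
        fake = G(z)
        loss_g = bce(D(fake), torch.ones(b, device=device))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Step 5: after training, G(torch.randn(n, LATENT_DIM, device=device)) yields n samples.
```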
### The Process of Data Augmentation Using GAN
Generative Adversarial Networks (GANs) perform data augmentation by synthesizing new, artificial data samples that resemble the real data [32]. The GAN consists of two main components: a generator and a discriminator. The generator is a neural network that maps a random noise vector to a synthetic data sample.
The discriminator is a neural network that distinguishes between real and synthetic data. The discriminator is trained to correctly identify real data and to reject synthetic data. The generator and the discriminator are trained simultaneously, in an adversarial manner. The generator tries to generate synthetic data that the discriminator cannot distinguish from the real data, while the discriminator tries to correctly identify the real data and reject the synthetic data generated by the generator as presented by author Bowles et al. [5]. After the training process is completed, the generator can be used to generate synthetic data that resembles real data. This synthetic data can be used to augment the real data, increasing the size of the training data set, and improving the performance of machine learning algorithms trained on this data.
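Once trained, the generator can be used to expand the original training set, as in the sketch below; G and LATENT_DIM refer to the earlier illustrative GAN sketch, real_images stands for a tensor of preprocessed real images, and the synthetic sample count is arbitrary (all of these are assumptions of the illustration, not part of the cited methods).
```python
# Augmenting a real training set with GAN-generated samples (illustrative sketch).
import torch
from torch.utils.data import TensorDataset, ConcatDataset

device = next(G.parameters()).device
with torch.no_grad():
    synthetic = G(torch.randn(500, LATENT_DIM, device=device)).cpu()  # 500 is arbitrary

augmented_dataset = ConcatDataset([
    TensorDataset(real_images),   # original, preprocessed real images
    TensorDataset(synthetic),     # GAN-generated pseudo images
])
```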
### Data Augmentation Using Variational Auto-Encoders (VAEs)
Variational Autoencoders (VAEs) can be used for data augmentation by generating synthetic data samples from the same underlying distribution as the real data. VAEs consist of an encoder network that maps the input data to a lower-dimensional representation, and a decoder network that maps the lower-dimensional representation back to the
Figure 5: The network's structure. In place of the input mask, the generator produces a synthetic ground-glass nodule. Batch normalization, the "parametric rectified linear unit" (PReLU) activation function, and convolutional layers with a 3×3 kernel size make up the generator. Batch normalization, the leaky PReLU activation function, and 3×3 convolutional layers make up the discriminator. [31]
original data space [23]. The encoder and decoder are trained together in an unsupervised manner, such that the decoder can reconstruct the original data from the lower-dimensional representation.
To use VAEs for data augmentation, the encoder-decoder architecture can be used to generate synthetic data samples, as presented by Garay-Maestre [10]. This can be done by sampling from the prior distribution over the lower-dimensional representation and then passing the sample through the decoder network to obtain a synthetic data sample in the original data space.
One advantage of VAEs compared to GANs for data augmentation is that they can generate synthetic data that is similar to the real data in terms of structure and distribution, as explained by Fuertes [9]. This can help improve the performance of machine learning algorithms by increasing the size of the training data set and helping to avoid overfitting.
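A minimal sketch of this sampling step is shown below; the decoder here is a small illustrative MLP (in practice it would be the decoder of a VAE already trained on the medical images), and the latent dimension and image size are arbitrary choices made only for the example.
```python
# VAE-based augmentation sketch: sample latent codes from the prior and decode them.
import torch
import torch.nn as nn

LATENT_DIM = 32

decoder = nn.Sequential(          # stand-in for a trained VAE decoder (illustrative MLP)
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, 64 * 64), nn.Sigmoid(),
)

def generate_synthetic(n):
    """Sample n latent codes from the prior N(0, I) and decode them into images."""
    with torch.no_grad():
        z = torch.randn(n, LATENT_DIM)
        return decoder(z).view(n, 1, 64, 64)

augmented_batch = generate_synthetic(16)
```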
### Result Analysis
The results of the study will be analyzed and discussed in terms of the performance of the GANs and VAEs for data augmentation. The results will be compared to existing approaches for data augmentation, and the advantages and disadvantages of the GANs and VAEs will be discussed. The study will conclude with a summary of the findings and a discussion of the implications of the results for the field of medical image analysis. Recommendations for future research in this area will also be provided.
Generative Adversarial Networks (GANs) are effective for image augmentation because they can generate synthetic data that resembles real data. This synthetic data can be used to augment the real data, increasing the size of the training data set and improving the performance of machine learning algorithms trained on this data.
GANs can generate high-quality synthetic data because they are trained on real data. The generator network maps a random noise vector to a synthetic data sample, while the discriminator network is trained to distinguish between real and synthetic data. The generator and discriminator are trained simultaneously in an adversarial manner, with the generator trying to generate synthetic data that the discriminator cannot distinguish from real data, and the discriminator trying to correctly identify real data and reject synthetic data.
This adversarial training process results in the generator learning to generate synthetic data that closely resembles the real data, and the discriminator learning to correctly identify real data. As a result, the synthetic data generated by the GAN can be used to augment the real data and increase the size of the training data set, leading to improved performance of machine learning algorithms.
## 4 Conclusion
Generative Adversarial Networks (GANs) may be utilized for data augmentation by creating synthetic samples that increase the size of the training dataset. In this procedure, new data samples are generated by a generator network and then assessed by a discriminator network to see whether they are sufficiently comparable to the original samples. While the discriminator network is taught to discriminate between real and generated samples, the generator network is trained to improve its capacity to generate data comparable to real data. The procedure is repeated until the generator network can generate synthetic samples that are indistinguishable from genuine ones, and the augmented dataset may then be used to train a model for the desired objective. Medical image analysis has used GANs for a variety of tasks, including data augmentation, image generation, and domain adaptation. In the field of medical imaging, GANs can create synthetic samples that can be utilized to increase the dataset that is
Figure 6: General description of the suggested technique. [10]
already accessible, especially when collecting significant volumes of actual data is challenging or morally problematic. It is important to emphasize that research into the application of GANs in radiography is still ongoing, and a thorough evaluation of the produced pictures is necessary to assure their quality and appropriateness for clinical applications.
|
2310.07687 | Orbital Polarimetric Tomography of a Flare Near the Sagittarius A*
Supermassive Black Hole | The interaction between the supermassive black hole at the center of the
Milky Way, Sagittarius A*, and its accretion disk occasionally produces
high-energy flares seen in X-ray, infrared, and radio. One proposed mechanism
that produces flares is the formation of compact, bright regions that appear
within the accretion disk and close to the event horizon. Understanding these
flares provides a window into accretion processes. Although sophisticated
simulations predict the formation of these flares, their structure has yet to
be recovered by observations. Here we show the first three-dimensional (3D)
reconstruction of an emission flare recovered from ALMA light curves observed
on April 11, 2017. Our recovery shows compact, bright regions at a distance of
roughly six times the event horizon. Moreover, it suggests a clockwise rotation
in a low-inclination orbital plane, consistent with prior studies by GRAVITY
and EHT. To recover this emission structure, we solve an ill-posed tomography
problem by integrating a neural 3D representation with a gravitational model
for black holes. Although the recovery is subject to, and sometimes sensitive
to, the model assumptions, under physically motivated choices, our results are
stable, and our approach is successful on simulated data. | Aviad Levis, Andrew A. Chael, Katherine L. Bouman, Maciek Wielgus, Pratul P. Srinivasan | 2023-10-11T17:36:17Z | http://arxiv.org/abs/2310.07687v2 | # Orbital Polarimetric Tomography of a Flare Near the Sagittarius A\({}^{*}\) Supermassive Black Hole
###### Abstract
The interaction between the supermassive black hole at the center of the Milky Way, Sagittarius A\({}^{*}\), and its accretion disk, occasionally produces high energy flares seen in X-ray, infrared and radio. One mechanism for observed flares is the formation of compact bright regions that appear within the accretion disk and close to the event horizon. Understanding these flares can provide a window into black hole accretion processes. Although sophisticated simulations predict the formation of these flares, their structure has yet to be recovered by observations. Here we show the first three-dimensional (3D) reconstruction of an emission flare in orbit recovered from ALMA light curves observed on April 11, 2017. Our recovery results show compact bright regions at a distance of roughly 6 times the event horizon. Moreover, our recovery suggests a clockwise rotation in a low-inclination orbital plane, a result consistent with prior studies by EHT and GRAVITY collaborations. To recover this emission structure we solve a highly ill-posed tomography problem by integrating a neural 3D representation (an emergent artificial intelligence approach for 3D reconstruction) with a gravitational model for black holes. Although the recovered 3D structure is subject, and sometimes sensitive, to the model assumptions, under physically motivated choices we find that our results are stable and our approach is successful on simulated data. We anticipate that in the future, this approach could be used to analyze a richer collection of time-series data that could shed light on the mechanisms governing black hole and plasma dynamics.
The compact region around the Galactic Center supermassive black hole Sgr A\({}^{*}\) is a unique environment where the magnetized turbulent flow of an accretion disk is subject to extreme gravitational physics. The dynamical evolution of this complex system occasionally leads to the production of energetic flares [10] seen in X-ray [19], infra-red [8], and radio [27]. The physical nature, structure, origin, formation, and eventual dissipation of flares are topics of active research [17, 8, 15, 2, 26] key to our understanding of accretion flows around black holes. One proposed explanation for Sgr A\({}^{*}\) flares is the formation of compact bright regions caused by hot pockets of lower-density plasma within the accretion disk, that are rapidly energized (e.g. through magnetic reconnection [1]). These "bubbles", "hotspots" or "flux tubes", observed in numerical simulations (e.g. [24]), are hypothesized to form in orbit close to the innermost stable circular orbit (ISCO) of Sgr A\({}^{*}\). The association of flares with orbiting hotspots close to the event horizon is consistent with near-infrared detections made by the GRAVITY Collaboration [12, 13] and radio observations of the Atacama Large Millimeter/Submillimeter Array (ALMA) [26].
The context for this work is set by the first images [6] of Sgr A\({}^{*}\) revealed by the Event Horizon Telescope (EHT) collaboration. The images, reconstructed from Very Long Baseline Interferometry (VLBI) observations from April 6-7, 2017, show a ring-like structure with a central brightness depression - a strong suggestion that the source is indeed a supermassive black hole [7]. The presence of synchrotron-radiating matter very close to the horizon of Sgr A\({}^{*}\) could give rise to complex bright 3D structures that orbit and evolve within the accretion disk. While both [12] and [26] employed a strongly constrained parametric hotspot model (essentially 2D) to interpret their observations, the goal of this work is to step out of the 2D image plane and recover the complex 3D structure of flares as they orbit and evolve in the accretion disk around Sgr A\({}^{*}\).
We present the first 3D recovery results of a Sgr A\({}^{*}\) flare from ALMA light curve observations on April 11, 2017 (Fig. 1). In contrast to the quiescent state imaged by EHT on April 6/7 [7], these observations were taken directly after an X-ray flare and exhibit a high degree of variability in radio [4, 27] including distinct coherent patterns in the lin
early polarized light curve component [26]. To achieve this 3D reconstruction result we develop a novel computational approach which we term: _orbital polarimetric tomography_.
Tackling this inverse problem necessitates a change from typical tomography, wherein 3D recovery is enabled by multiple viewpoints. Instead, the tomography setting we propose relies on observing a structure in orbit, traveling through curved spacetime, from a fixed viewpoint. As it orbits the black hole, the emission structure is observed (projected) along different curved ray paths. These observations of the evolving structure over time effectively replace the observations from multiple viewpoints required in traditional tomography. Our approach builds upon prior work on 3D tomography in curved spacetime which showed promising results in _simulated_ future Event Horizon Telescope (EHT) observations [22, 20].
Similar to the _computational images_ recovered by EHT [7] our approach solves an under-constrained inverse problem to fit a model to the data. Nevertheless, ALMA observations do not resolve event horizon scales (\(\sim 10^{5}\) lower resolution), which makes the tomography problem we propose particularly challenging. To put it differently, we seek to recover an evolving 3D structure from a single-pixel observation over time. A key advantage for dynamical studies is the very high signal-to-noise and cadence (four seconds) of the ALMA dataset [27], as well as the inclusion of both total intensity and full polarization information [26]. In order to solve this challenging task, we integrate the emerging approach of neural 3D representations [21, 20] with physics constraints. The robustness of the results thus relies on the validity of the constraints imposed by the gravitational and synchrotron emission models.
Our simulation analysis (Supplementary Material) shows how polarimetric light curves contain information that could constrain both the 3D flare structure and inclination angle of Sgr A\({}^{*}\). While the total intensity light curve is dominated by the accretion disk, such extended emission structures are partially depolarized in an image-average polarization sense [26]. In contrast, compact bright sources, such as a putative hotspot, are characterized by a large fractional linear polarization (LP) and fast evolution on dynamical timescales [14, 26], hence allowing separation of the flare component from the background (accretion).
## Results
### ALMA polarimetric observations of Sgr A\({}^{*}\)
On April 11, 2017, ALMA observed Sgr A\({}^{*}\) at \(\sim 230\mathrm{GHz}\) as part of a larger EHT campaign. The radio observations directly followed a flare seen in the X-ray (Supplementary Material Fig. 12). The LP, measured by ALMA-only light curves [26, 27] as a complex time series \(Q(t)+iU(t)\), appears to evolve in a structured, periodic, manner suggesting a compact emission structure in orbit. The work of [26] hypothesizes a simple bright spot (i.e. idealized point-source [9] or spherical Gaussian [23]) at \(r\sim 11\mathrm{M}^{1}\), however, a rigorous data-fitting was not performed. Furthermore, the proposed parametric model is limited and does not explain all of the data features. The orbital polarimetric tomography approach that we propose enables a rigorous data-fitting and recovery of flexible 3D distributions of the emitting matter, relaxing the assumption of a coherent orbiting feature enforced by prior studies [12, 26]. This opens up a new window into understanding the spatial structure and location of flares relative to the event horizon.
Our model, outlined in the Supplementary Material, is able to fit the ALMA light curve data very accurately (see Fig. 2). The optimization procedure simultaneously constrains the inclination angle of the observer and estimates a 3D distribution of the emitting matter associated with this flaring event (Fig. 1), starting from 9:20 UT, about \(30\) minutes after the
Figure 1: [Left panel] The validation \(\chi^{2}\) indicates a preference toward low inclination angles (red curve) \(\theta_{\mathrm{o}}<18^{\circ}\) with a local minimum around \(\theta_{\mathrm{o}}=12^{\circ}\). This preference towards low inclination is also apparent in the data fits (analysis in Supplementary Material). Furthermore, our analysis is largely insensitive to the black hole spin. [Right panels] A recovered 3D volume visualized from two view angles in intrinsic (flat space) coordinates (the event horizon illustrated for size comparison). The recovery shows two emission regions (blue arrows) at radii of \(\sim 11/13\mathrm{M}^{1}\) in a clockwise orbit.
peak of the X-ray flare [26]. Despite the fact that ALMA observations are unresolved (effectively a single pixel with time-dependent complex LP information) at the horizon scale, our analysis suggests some interesting insights:
* Low inclination angles (\(\theta_{\rm o}<18^{\circ}\)) are preferred by the validation \(\chi^{2}\) (Fig. 1 left panel, blue). While the methodology is different, this result is broadly consistent with EHT findings from April 6/7 [8], which favored low inclination angles of \(\sim 30^{\circ}\) by comparing recovered images with general relativistic magnetohydrodynamic (GRMHD) simulations. The fiducial model of [26] corresponded to an inclination angle of \(\sim 22^{\circ}\). Low inclination was also favored in the analysis of the GRAVITY infra-red flares [12, 13, 14].
* The recovered 3D emission has two compact bright regions at \(r\sim 11M,13M\) (Fig. 1 middle/right panels). The location (radius and azimuthal position) of the bright region is consistent with the qualitative analysis of [26].
### Data fitting
Before solving the tomography problem, we perform preprocessing according to the procedure outlined in [26]. In particular, we subtract a constant (time-averaged) LP component, interpreted as the ring-like accretion disk component observed by the EHT, and de-rotate the electric vector polarization angle (EVPA) to account for Faraday rotation (details in the Supplementary Material). To obtain a model prediction an initial 3D emission structure is adjusted so that, when placed in orbit, the numerically ray-traced LP light curves align with the observations. Mathematically this is formulated by minimizing a \(\chi^{2}\) loss between the observed LP and the model prediction.
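Schematically, assuming independent Gaussian noise of equal standard deviation \(\sigma\) on the \(Q\) and \(U\) components (consistent with the noise level quoted in the caption of Fig. 2; the exact normalization used in the reconstruction may differ), such a data-fit term can be written as
\[\chi^{2}=\frac{1}{2N\sigma^{2}}\sum_{t=1}^{N}\left\{\left[Q_{\rm obs}(t)-Q_{\rm model}(t)\right]^{2}+\left[U_{\rm obs}(t)-U_{\rm model}(t)\right]^{2}\right\},\]
where \(N\) is the number of time samples in the light curve and the model light curves are obtained by ray tracing the current 3D emission estimate.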
Our tomography relies on ray tracing which requires knowledge of the path rays take in 3D curved spacetime. In general, ray paths (geodesics) depend on _unknown_ black hole properties [9]: mass, spin, and inclination. Nevertheless, the mass of Sgr A\({}^{*}\) is well constrained through stellar dynamics [10]; \(M\simeq 4{\times}10^{6}M_{\odot}\) where \(M_{\odot}\) denotes solar mass. Furthermore, Fig. 1 illustrates that the loss is not very sensitive to black hole spin: \(a\in[0,1]\). Thus, the only remaining unknown is the inclination angle. To estimate the inclination we numerically bin \(\theta_{\rm o}\in[0,\pi/2]\) and recover the 3D emission for every given (fixed) angle.
For each inclination angle, we recover a (locally) optimal 3D emission by minimizing a \(\chi^{2}\) loss over the model parameters. Practically, for numerical stability, we avoid the extreme angles of face-on and edge-on by gridding \(\theta_{\rm o}\in[4^{\circ},80^{\circ}]\) (at \(2^{\circ}\) increments). Figure 1 plots a likelihood approximation for \(\theta_{\rm o}\), which appears to favor low inclination angles (\(\theta_{\rm o}<18^{\circ}\)). For each inclination, the recovery is run five times with a random initialization for the 3D structure. Therefore, the error bars are not a measure of posterior uncertainty, rather, they indicate the stability of the locally optimal solution. The 3D structure shown in Fig. 1 is the average structure across all random initialization at an inclination of \(\theta_{\rm o}=12^{\circ}\) which corresponds to the \(\chi^{2}\) minimum (Fig. 2). In the Appendix, we highlight how the key features of the recovered 3D structure are consistent across a range of inclination angles (within the local minimum basin) and random initialization.
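The grid search over inclination can be summarized schematically as follows; fit_emission is a hypothetical placeholder for the full neural-representation fit with ray tracing (not the authors' actual code or interface), and only the gridding and random-restart logic mirror the description above.
```python
# Schematic of the inclination grid search (illustrative only).
import numpy as np

def fit_emission(theta_obs_deg, seed):
    """Placeholder for one 3D recovery at fixed inclination; returns a validation chi^2.
    The real procedure would fit the neural 3D representation via ray tracing."""
    rng = np.random.default_rng(seed)
    return 1.0 + 0.01 * rng.standard_normal()   # dummy value so the sketch executes

inclinations = np.arange(4, 82, 2)               # 4 to 80 degrees at 2-degree increments
results = {}
for theta in inclinations:
    chi2_runs = [fit_emission(theta, seed) for seed in range(5)]   # 5 random initializations
    results[theta] = (np.mean(chi2_runs), np.std(chi2_runs))       # mean value and stability

best_theta = min(results, key=lambda t: results[t][0])             # e.g. a minimum near 12 degrees
```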
### Assumptions and systematic noise
The key assumption for orbital tomography is that the 4D (space and time) emission is in orbit around the black hole and can be modeled as a simple transformation of a canonical (or initial) 3D emission. This enables formulating an inverse problem of estimating the 3D emission from observations. While this assumption does not hold in general, it is well suited for compact bright structures over short time scales, during which complex dynamics could be negligible. We consider orbits characterized by a Keplerian angular velocity profile, accounting for shearing due to differential rotation (ignored by the previous analyses [12, 26]) while neglecting the dynamics of cooling, heating, expansion, and turbulence. Furthermore, in modeling synchrotron emission, we assume a (homogeneous) vertical magnetic field, as pre
Figure 2: The _intrinsic_ LP curves (centered and de-rotated) and a model fit over a period of \(\sim 100\) minutes. The model light curves are produced through ray tracing the estimated 3D volume at a fiducial inclination angle of \(\theta_{\rm o}=12^{\circ}\) (Fig. 1 analysis). The resulting light curves accurately describe the data, including the small looping feature highlighted by the blue arrow (right panel). The data-fit \(\chi^{2}\ll 1\) for a noise level of \(\sigma_{Q}=\sigma_{U}=0.01\) Jy [26].
ferred in the analyses of [12, 26], that is externally fixed and is independent of the flare or accretion disk dynamics. In the Supplementary Material, we examine the effects of other magnetic field configurations (radial, toroidal) and sub-Keplerian orbits on the data-fit and 3D reconstruction. Lastly, we do not model radial (in-fall) or vertical velocity components. We constrain the 3D recovery domain to a region that is best modeled by these assumptions with a radius of \(6\mathrm{M}\leq r\leq 20\mathrm{M}\) and close to the equatorial disk \(|z|\leq 4\mathrm{M}\) (\(6M\) is the innermost stable circular orbit of a non-spinning black hole2). Table 1 summarizes the key assumptions made in the reconstruction shown in Fig. 1.
Footnote 2: Our analysis found that results are only weakly sensitive to black hole spin (Fig. 1 left panel)
Solving an under-constrained inverse problem requires some form of regularization (whether implicit or explicit). The 3D neural representation has an implicit regularization that favors smooth structures [21, 25] (details in the Supplementary Material). We additionally regularize the recovered total intensity of the flare to be around \(0.3\) Jy [26] with a standard deviation of \(0.15\) Jy. The choice to only fit the LP light curves reflects the uncertainty associated with the highly non-polarized intensity of the background accretion disk. In the Supplementary Material, we quantitatively assess the effect of the background accretion disk on simulated reconstruction results.
Although the evolution of the recovered 3D structure well matches the observed light curve, there could be alternative morphologies and orbital models that also match the data. This non-uniqueness of the solution is a general feature of under-constrained inverse problems. Nonetheless, we find that our results are stable under different initial conditions and our approach is successful on synthetic data (see further analysis in the Supplementary Material).
## Conclusions
We present a novel computational approach to image dynamic 3D structures orbiting the most massive objects in the universe. Integrating general relativistic ray tracing and neural radiance fields enables resolving a highly ill-posed tomography in the extremely curved space-time induced by black holes. Applying this approach to ALMA observations of Sgr A\({}^{*}\) reveals a 3D structure of a flare, with a location broadly consistent with the qualitative analysis presented in [26]. This first attempt at a 3D reconstruction of a Sgr A\({}^{*}\) flare suggests an azimuthally elongated bright structure at a distance of \(\sim 11\mathrm{M}\) trailed by a dimmer source at \(\sim 13\mathrm{M}\). Although the recovered 3D structure is subject, and sometimes sensitive, to the gravitational and emission models (see Supplementary Material), under physically motivated choices we find that the 3D reconstructions are stable and our approach is successful on simulated data. Moreover, our data-fit metrics provide constraints favoring low inclination angles and clockwise rotation of the orbital plane, supporting the analyses of [26], EHT [7], and GRAVITY [12].
Orbital polarimetric tomography shows great promise for 3D reconstructions of the dynamic environment around a black hole. Excitingly, extending the approach and analysis to spatially resolved observations (e.g. EHT) could enable relaxing model assumptions to further constrain the underlying physical structures that govern the black hole and plasma dynamics (e.g. black hole spin, orbit dynamics, magnetic fields). Lastly, by adapting orbital polarimetric tomography to other rich sources of black hole time series observations (e.g., quasars, microquasars), this imaging technology could open the door to population statistics and improve our understanding of black holes and their accretion processes.
|
2307.04999 | The GECAM Real-Time Burst Alert System | Gravitational Wave High-energy Electromagnetic Counterpart All-sky Monitor
(GECAM), consisting of two micro-satellites, is designed to detect gamma-ray
bursts associated with gravitational-wave events. Here, we introduce the
real-time burst alert system of GECAM, with the adoption of the BeiDou-3 short
message communication service. We present the post-trigger operations, the
detailed ground-based analysis, and the performance of the system. In the first
year of the in-flight operation, GECAM was triggered by 42 GRBs. GECAM
real-time burst alert system has the ability to distribute the alert within
$\sim$1 minute after being triggered, which enables timely follow-up
observations. | Yue Huang, Dongli Shi, Xiaolu Zhang, Xiang Ma, Peng Zhang, Shijie Zheng, Liming Song, Xiaoyun Zhao, Wei Chen, Rui Qiao, Xinying Song, Jin Wang, Ce Cai, Shuo Xiao, Yanqiu Zhang, Shaolin Xiong | 2023-07-11T03:33:05Z | http://arxiv.org/abs/2307.04999v1 | # The GECAM Real-Time Burst Alert System
###### Abstract
Gravitational Wave High-energy Electromagnetic Counterpart All-sky Monitor (GECAM), consisting of two micro-satellites, is designed to detect gamma-ray bursts associated with gravitational-wave events. Here, we introduce the real-time burst alert system of GECAM, with the adoption of the BeiDou-3 short message communication service. We present the post-trigger operations, the detailed ground-based analysis, and the performance of the system. In the first year of the in-flight operation, GECAM was triggered by 42 GRBs. GECAM real-time burst alert system has the ability to distribute the alert within \(\sim\)1 minute after being triggered, which enables timely follow-up observations.
keywords: gamma-ray burst: general - gravitational waves - methods: data analysis +
Footnote †: journal: Research in Astronomy and Astrophysics
## 1 Introduction
On September 14, 2015, the first detection of gravitational wave (GW) signals from the merger of two stellar-mass black holes, observed by the Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors, inaugurated the era of GW astronomy (Abbott et al. 2016a). This was the first direct evidence of the predictions of general relativity. On August 17, 2017, the Advanced LIGO and Advanced Virgo Gravitational-Wave interferometers detected the first GW, GW 170817, from a binary neutron star merger, significantly promoting the study of gravitational-wave multi-messenger astronomy (Abbott et al. 2017). _Fermi_ and _INTEGRAL_ detected a short gamma-ray burst (GRB), GRB 170817A, 1.7 s after the GW events. The electromagnetic (EM) follow-up observations not only succeeded in localizing the merger to the host galaxy, NGC 4993, but also provided the first unambiguous detection of a kilonova, the broadband signature
and EM observatories, for the first time, validated the merger model proposed decades ago to explain the short GRBs (Paczynski 1986).
The identification of EM counterparts to GW events allows for the precise localization of the GW source, which would further yield rich scientific rewards (see Nakar 2020 for a review). The EM counterpart identification is constrained by the accuracy of the localization of the GW signal, which is usually expected to be a few hundreds of square degrees (Abbott et al. 2020). In general, we expect that searching for high energy EM counterparts to a GW event will play a major role in the discovery of the EM counterpart. This is because, firstly, the luminosity of the high-energy counterpart is large and less likely to be absorbed by the medium; secondly, in the low energy bands, there might be few optical candidates localized within the error region of the GW source (i.e., Abbott et al. 2016b). Since the high energy sky is less "crowded", it is more reasonable to relate a high energy transient to the GW event; thirdly, the time delay between the high-energy emission and the GW emission is assumed to be minimal. Therefore, a precise localization of the high-energy transient could substantially reduce the localization uncertainty of the GW event, which further facilitates the follow-up observations at other wavelengths. In recent years, a large number of observations have been made with hard X-ray and \(\gamma\)-ray telescopes, such as _Fermi_-GBM (Meegan et al. 2009), _Swift_-BAT (Barthelmy et al. 2005), _INTEGRAL_-SPI-ACS (Winkler et al. 2003), _Insight_-HXMT (Zhang et al. 2020; Cai et al. 2021) and Konus-_Wind_ (Aptekar et al. 1995), to search for high energy counterparts to GW sources.
Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor (Li et al. 2020, 2021b) (GECAM, also known as "HuaiRou-1") is a space-based project proposed for the detection of high-energy EM counterparts to GW sources, as well as other high-energy transient sources, i.e., GRBs and magnetars. GECAM consists of two micro-satellites, GECAM-A and GECAM-B, which are designed to operate on identical orbits (600 km altitude and 29\({}^{\circ}\) inclination), on opposite sides of the Earth, in order to get a simultaneous view of the entire sky. Each satellite features a dome-shaped array of 25 Gamma-ray detectors (GRD) and 8 Charged particle detectors (CPD). The GRDs are composed of a LaBr\({}_{3}\) crystal and silicon photomultiplier tube (SiPM) array, covering an energy range from 6 keV to 5 MeV (An et al. 2021). The CPDs are used to monitor the flux of charged particles on the GECAM orbit and help distinguish between astrophysical events and charged particle events. The CPDs use plastic scintillators combined with SiPM, covering an energy range of 300 keV-5 MeV (Xu et al. 2021). In case of a trigger, the flight software (Zhao et al. 2021) determines the incoming direction and provides a preliminary classification of the source, which will be downlinked as a trigger alert to the ground. In order to carry out rapid follow-up observations at other wavelengths, a real-time downlink of the alert data is required. Considering the current status of the real-time downlink resources in China, GECAM adopts the global short message communication service (Li et al. 2021a) of the BeiDou-3 navigation satellite system (Yang et al. 2019) to downlink the trigger alert data to the ground. GECAM is the first satellite to use the BeiDou-3 global short message service on board and the first space astronomy satellite in China capable of real-time downlink.
The GECAM Scientific Ground Segment (Chen et al. 2020; Zheng et al. submitted to RAA) thus includes a section that is devoted to processing the BeiDou short messages upon their arrival. In the following, we describe the onboard triggering and data flow in Section 2 and the real-time burst alert system in Section 3.
## 2 Onboard Triggering and Data Flow
### In-flight Trigger and Localization
The GECAM In-flight Realtime Trigger and Localization software (GIRTLS) (Zhao et al. 2021) continuously monitors the background count rates of all GRDs for significant increases over different energy ranges and timescales, to detect GRBs and other short-timescale transients. The background is accumulated over 20 s pre-trigger, excluding the most recent 5 seconds (by default). The event data are binned to 50 ms and 8 energy channels, which means that the trigger timescales are defined as multiples of 50 ms until reaching 4 s. Except for the 50 ms timescale, all of the triggers include two phases offset by half of the time bin. GECAM supports 64 different trigger algorithms, each of which comes with an adjustable threshold. The trigger algorithms currently implemented include five energy ranges and seven timescales; a detailed description of the 64 algorithms can be found in Zhao et al. (2021). A trigger is only generated when at least three detectors exceed the threshold at the same time. When there is a trigger, the GIRTLS gives an approximate location of the source using the relative rates recorded in the 25 GRDs, accumulated over 4 timescales. Besides GRBs, there are other events, such as solar flares and charged particle events, that can trigger the alert, so the GIRTLS further performs a classification by using the count ratio between CPD and GRD, the localization, the hardness ratio, and the geographic location of the satellite to identify the type of source.
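For illustration, the rate-trigger logic described above can be sketched as follows. The array shapes, the Gaussian approximation of the Poisson significance, and the threshold handling are simplifying assumptions made for this sketch rather than the flight implementation:

```python
import numpy as np

def rate_trigger(counts, timescale_bins, threshold_sigma, min_detectors=3):
    """Toy version of the GIRTLS rate trigger described in the text.

    counts          : array of shape (n_detectors, n_time_bins), 50 ms bins,
                      already summed over the energy channels of one algorithm.
    timescale_bins  : trigger timescale in 50 ms bins (1, 2, 4, ..., 80 for 4 s).
    threshold_sigma : per-detector significance threshold (adjustable per algorithm).
    """
    n_det, n_bins = counts.shape
    # Background: the 20 s before the trigger window, excluding the most recent 5 s
    # (20 s = 400 bins and 5 s = 100 bins of 50 ms); assumes n_bins >= 500.
    bkg_slice = slice(n_bins - 500, n_bins - 100)
    fg_slice = slice(n_bins - timescale_bins, n_bins)

    bkg_rate = counts[:, bkg_slice].mean(axis=1)       # counts per 50 ms bin
    expected = bkg_rate * timescale_bins               # expected counts in the window
    observed = counts[:, fg_slice].sum(axis=1)

    # Simple Gaussian approximation of the Poisson significance (illustrative only).
    significance = (observed - expected) / np.sqrt(np.maximum(expected, 1.0))

    # A trigger is generated only if at least `min_detectors` detectors
    # exceed the threshold at the same time.
    return np.count_nonzero(significance > threshold_sigma) >= min_detectors
```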
Once triggered on board, the GIRTLS produces the trigger alert data that are downlinked to the ground via BeiDou short message. The trigger alert data includes information on the trigger significance, the burst spectrum, on-board localization and classification, and light curves for improving ground localization. There are two algorithms to localize the burst on the ground: one using the relative count rates from the 25 GRDs, which requires a relatively long-time light curve from each detector; the other one using the time delay of the burst between the two satellites, which operates on high temporal resolution light curves (Xiao et al. 2021). Due to the limited capacity of a single BeiDou short message (560 bits per message) and the limited downlink capacity of the BeiDou system (Li et al. 2021), the high temporal resolution light curve is only generated for short bursts that are believed to be related to neutron star mergers (Goodman 1986).
There are two types of trigger alert data: long trigger and short trigger. If the count rate exceeds the threshold at 4 s and 20 s post-trigger, the trigger will be identified as a long trigger. Each long trigger is comprised of 31 BeiDou short messages. The first two messages contain the most important parameters for the rapid follow up observations, i.e., trigger time, burst localization, classification and spectrum, satellite position and attitude at trigger time, with backups. The 3rd and 4th messages contain light curves from three GRDs with the highest and lowest trigger significance, which are binned by different trigger timescales and energy ranges. The light curves provide a quick view of the burst. The 5th message contains the light curve of 8 CPDs, covering from 30 s prior to 180 s following the trigger time, which is used to distinguish particle events from GRBs. The 6th to 30th messages store the light curves from each GRD from \(\sim\)50 s before the trigger (divided into 8 time bins) to 185 s after the trigger (divided into 22 time bins) and are binned by timescales from 50 ms to 50 s, with shorter timescales close to the trigger time. The last message gives the satellite attitude for the 120 s following the trigger time. The BeiDou short messages are transmitted every 17 s.
The difference between the short and long trigger alert data is that the short trigger includes a combined high-resolution (0.4 ms in default) light curve from 25 GRDs with 2500 bins. Each short trigger contains up to 31 short messages, depending on the size of the light curve after compression. The first two messages are the same as the long trigger. The rest of the messages are the compression method and the compressed light curve.
### On-ground Analysis
After being received by the National Space Science Center (NSSC) on ground, the BeiDou short message is forwarded to Scientific Ground Segment at Institute of High Energy Physics (IHEP) and ingested into the Burst Alert System (BAS). The BAS is developed to process the trigger alert data in real-time and transmit the locations and other important information to the astronomy community via the standard communication channel (e.g., the GRB Coordinates Network (GCN) [1]). The types of GECAM notices generated by the BAS are listed below.
1. **GECAM FLIGHT**: trigger time, trigger energy range, trigger significance, on-board localization (RA and Dec), ground refined classification (see Section 3.1), \(\sim\)1 minute after trigger.
2. **GECAM GROUND**: ground localization (RA and Dec, see Section 3.2) and classification (see Section 3.1), \(\sim\)10 minutes after trigger.
The notices are sent only if the BAS classifies the trigger as an astrophysical transient, such as a GRB. From July 15 to the end of 2021, we sent a total of 323 notices, of which 156 were flight and 167 were ground notices, covering 205 triggers.
The BAS provides a refined classification by using an updated algorithm (see Section 3.1). Due to the limitation on memory and computational resources on board, the GIRTLS uses a coarser sky grid (3072 grid points), three pre-defined templates (soft, normal and hard spectra in Band function), and an averaged pre-burst background level to localize the source. Compared to GIRTLS, the BAS provides improved locations by applying a finer sky grid, fitting the burst spectrum, and estimating the background with pre- and post-trigger data (see Section 3.2) or with the time delay calculated based on the Modified Cross-correlation Function (Li-CCF, Xiao et al. 2021) when a burst is observed by both satellites, or GECAM and other satellites (see Section 3.3).
Moreover, GECAM produces time-tagged event data that are transmitted via the X-band ground station. The X-band data are not downlinked in real-time like the alert data, but delayed up to several hours based on the passages over the station. The X-band data are used to determine the final characteristics of the bursts. The continuous event data also enhances the ground-based searching for untriggered GRBs by using the coherent search method, which was initially applied to _Insight_-HXMT (Cai et al. 2021).
## 3 The Burst Alert System (BAS)
### Re-classification of the trigger
GECAM will detect GRBs, solar flares, particle events, soft gamma repeaters (SGRs) and earth occultation of bright sources (e.g., Sco X-1). The GIRTLS in-flight uses the background-subtracted counts ratio between CPD and GRD to identify particle events and further uses the event localization (the error box is 2 \(\sigma\)) and hardness ratio to distinguish known sources. Hence, it is only valid when the background is correctly estimated and a precise location is obtained.
On the other hand, the BAS on-ground provides a refined classification to each trigger. The relevant data applied are event localization, hardness ratio, count rate of CPD, count ratio of CPD and GRD, the location of the spacecraft, and McIlwain magnetic L coordinates. Particle events occur predominantly in trapped particle regions, mostly in the entry or exit of the South Atlantic Anomaly (SAA) region, or at high L values. Thus, they are identified when three of the following four conditions are met: spacecraft geographic location, L value, CPD count rate and the count ratio between CPD and GRD. Like in GIRTLS, the BAS compares the event location with the sun and other known sources, e.g. SGR 1935+2154, with the error box set to 3 \(\sigma\) of the location error and includes the systematic error. If the hardness ratio is in the predefined range, and the source (the sun and other known sources) is not occulted by Earth, the event is classified as a solar flare or burst from known sources. Events which are located near the galactic plane and have a hardness ratio above one will be classified as generic sources. GECAM can also be triggered by bright sources rising from the Earth's limb, and this can be easily identified since the occultation time for each source can be calculated precisely.
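A minimal sketch of the 3-out-of-4 particle-event rule described above is given below. The threshold values and argument names are placeholders chosen for illustration, since the operational thresholds are not specified here:

```python
def is_particle_event(in_trapped_particle_region, mcilwain_l, cpd_rate, cpd_grd_ratio,
                      l_max=2.0, cpd_rate_max=100.0, ratio_max=0.5):
    """Hypothetical encoding of the 3-out-of-4 rule used by the BAS.

    l_max, cpd_rate_max and ratio_max are placeholder thresholds, not the
    values used operationally.
    """
    conditions = [
        in_trapped_particle_region,    # spacecraft geographic location (e.g. near the SAA)
        mcilwain_l > l_max,            # McIlwain magnetic L value
        cpd_rate > cpd_rate_max,       # CPD count rate
        cpd_grd_ratio > ratio_max,     # count ratio between CPD and GRD
    ]
    return sum(conditions) >= 3
```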
### Ground localization using relative rates
#### 3.2.1 Background estimation
The BAS performs background fittings after the BeiDou short messages are complete. The method applied here is recursive non-parametric regression, similar to what is adopted by _Fermi_-GBM RoboBA (Goldstein et al. 2020). First, we fit the data from -49.1 to -4.1 s (divided into 4 time bins, binned by timescales from 5 to 20 s) pre-trigger and 5 to 185 s (divided into 10 time bins, binned by timescales from 5 to 50 s) post-trigger by a polynomial function up to second order for each GRD, respectively. When at least four detectors exceed the predefined signal-to-noise ratio thresholds, the corresponding bins will be removed from the background. The regression is performed repeatedly on the remaining time bins, until the recursive process converges (see Figure 1). When there are fewer than two bins at pre-trigger or post-trigger, the BAS cannot perform background fitting, and the background is instead taken as the average of the pre-trigger data. This usually happens during extreme background fluctuations, i.e., when the satellite is close to the SAA, or when the burst duration is abnormally long. The background fit failed for 6 out of 37 GRBs 2. Five failures result from long burst durations; the other is caused by background fluctuations. The BAS has a success rate of about 84%.
Footnote 2: [http://www.astro.caltech.edu/](http://www.astro.caltech.edu/)
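The recursive background fit can be sketched as follows for a single GRD. The SNR threshold, the iteration cap, and the single-detector simplification (the flight code requires at least four detectors to flag a bin) are assumptions made for illustration:

```python
import numpy as np

def fit_background(times, rates, snr_threshold=3.0, max_iter=10, order=2):
    """Sketch of the recursive non-parametric background fit, for one GRD.

    times, rates : 1-D arrays of the pre/post-trigger bin centres and count rates.
    Returns the polynomial coefficients, or None when too few bins remain
    (in which case the caller falls back to the pre-trigger average).
    """
    mask = np.ones_like(times, dtype=bool)
    coeffs = np.polyfit(times, rates, order)
    for _ in range(max_iter):
        model = np.polyval(coeffs, times)
        snr = (rates - model) / np.sqrt(np.maximum(model, 1.0))
        new_mask = snr < snr_threshold        # keep bins consistent with background
        if new_mask.sum() < 2:                # too few background bins left
            return None
        if np.array_equal(new_mask, mask):    # the recursion has converged
            return coeffs
        mask = new_mask
        coeffs = np.polyfit(times[mask], rates[mask], order)
    return coeffs
```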
#### 3.2.2 Spectrum fit and localization
The GECAM on-board localization system operates on three spectral templates, which leads to an inaccurate localization if the spectral templates do not match the actual spectrum. Ideally, this can be corrected by simultaneously fitting the spectrum and location (Burgess et al. 2018). However, the small number of time and energy bins of the trigger alert data is not suitable for this fitting. Thus, one needs to fit the spectrum and location iteratively. The burst spectrum is constructed by combining the data from the 3 detectors with the highest trigger significance. These detectors usually have a similar incidence angle and therefore response. We added their response files. First, we generate the response file using the on-board location and fit the spectrum with the Band function and cut-off power-law model (see Figure 2). Then, we construct the template for each detector in the 15-1020 keV range over 12,288 grid points in the payload coordinates, with the best-fitting model and parameters. These are compared to the observed counts accumulated in the 25 GRDs, to find a \(\chi^{2}\) minimum. The position is then converted to equatorial coordinates using the spacecraft attitude. The new position is used as input for the next iteration, until the position converges. A full sky HEALPix map of the localization is shown in Figure 3.
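The iterative spectrum-fit and \(\chi^{2}\) grid search can be sketched as follows. The helper functions for response generation, spectral fitting, template prediction, and the attitude transformation are placeholders for this sketch and are not part of any public GECAM software interface:

```python
import numpy as np

def localize(observed_counts, initial_loc, make_response, fit_spectrum,
             predict_counts, to_equatorial, n_grid=12288, max_iter=5):
    """Sketch of the iterative spectrum-fit / chi-square grid localization.

    observed_counts : counts accumulated in the 25 GRDs, shape (25,).
    make_response, fit_spectrum, predict_counts, to_equatorial : placeholder
    callables for the response, spectral fit, template and attitude steps.
    """
    loc = initial_loc                                     # payload-frame grid index
    for _ in range(max_iter):
        response = make_response(loc)                     # summed response of the top-3 GRDs
        spec_params = fit_spectrum(observed_counts, response)

        # Template counts for all 25 GRDs on every grid point (15-1020 keV).
        templates = np.array([predict_counts(g, spec_params) for g in range(n_grid)])
        chi2 = ((templates - observed_counts) ** 2 / np.maximum(templates, 1.0)).sum(axis=1)

        new_loc = int(np.argmin(chi2))
        if new_loc == loc:                                # the position has converged
            break
        loc = new_loc
    return to_equatorial(loc)                             # apply the spacecraft attitude
```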
Figure 1: Background selection for GRB 211102B (upper panel) and a bright burst from SGR J1935+2154 (T0=2021-09-11 05:32:38.65 UT, lower panel). Shown is the data of one GRD in the 15–1020 keV energy range. The orange filled region shows the data chosen to perform the fit and the red line is the background estimate.
### Ground localization using Time delay method
In addition to the spectral fitting method, GRBs can also be located via the time delay method or triangulation technique (Laros et al. 1997). When a GRB arrives at two spacecraft, it can be localized to an annulus characterized by the time delay and spacecraft positions. The time delay and its uncertainty are usually calculated by the cross-correlation function (CCF). However, when the classic CCF method is applied to locate GRBs for low orbit satellites, the localization region becomes too large to give effective constraints. To make an improvement, Xiao et al. (2021) proposed an improved time delay localization method based on a Modified Cross-correlation Function (MCCF, Li-CCF) (Li et al. 1999), which provides an accurate time delay from high time resolution light curves.
Once all the short trigger alert data are received, the BAS decompresses them to obtain a high time resolution light curve (see Figure 4). If a burst is observed by both satellites, the light curves are sent to the MCCF localization algorithm. Xiao et al. (2021) provides a full description of the algorithm and an estimate of the uncertainty (\(1\sigma\): less than 0.3 \({}^{\circ}\)). The Earth-occulted part of the annulus is then excluded, and the remainder is combined with the localization derived by comparing the count rates from different detectors (Xiao et al. 2022).
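The underlying triangulation geometry can be sketched as follows. The sign convention of the delay and the simple error propagation are assumptions of this sketch, and the Earth-occultation masking and the combination with the relative-rate map are omitted:

```python
import numpy as np

C_LIGHT_KM_S = 299792.458

def delay_annulus(pos_a_km, pos_b_km, delay_s, delay_err_s):
    """Triangulation annulus from a measured burst time delay (textbook geometry).

    pos_a_km, pos_b_km : spacecraft positions (km) at the burst time.
    delay_s, delay_err_s : measured delay (arrival at B minus arrival at A) and
    its 1-sigma uncertainty, e.g. from the MCCF.
    Returns the unit baseline vector, the annulus opening angle and its
    1-sigma half-width (radians).
    """
    baseline = np.asarray(pos_b_km, dtype=float) - np.asarray(pos_a_km, dtype=float)
    d = np.linalg.norm(baseline)
    cos_theta = np.clip(C_LIGHT_KM_S * delay_s / d, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    # Propagate the timing uncertainty into the opening angle of the annulus.
    half_width = C_LIGHT_KM_S * delay_err_s / (d * max(np.sin(theta), 1e-6))
    return baseline / d, theta, half_width
```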
Because GECAM-A has not been turned on yet (see Section 4), no GRBs or other bursts have been localized with this method using the two GECAM satellites. However, we have applied this method
Figure 2: An example (GRB 211204C) of the burst spectrum. Data from GRD 15, 23, and 24 are used. The best-fit model (black solid line) and data are shown in the upper panel, the residuals are shown in the lower panel.
(Xiao et al. 2021). The half-width of the annulus region obtained by GECAM-B and _Fermi_-GBM is 0.4 \({}^{\circ}\)
Figure 4: An example (SGR J1935+2154, T0=2021-02-16 22:20:39.650 UT) of the high time resolution (1 ms) light curve. Shown is the sum of the data from all GRDs.
Figure 3: An example (GRB 210520A) of the full sky map of the localization. The localization posterior is shown with a red gradient. The detector pointings at the time of the trigger are shown as light gray circles for all 25 GRDs (Note that the size of the circles do not represent the field of view of the detectors, only the pointing of the detector normal). The Galactic plane is shown as a black line with a circle denoting the Galactic center. The Earth occultation is shown in blue, and the Sun and moon are shown in yellow and blue-gray, respectively. Additionally, several bright sources are shown in different colors.
## 4 In-flight performance
The two GECAM satellites were co-launched on 2020 December 10 (Beijing Time) (Li et al. 2021b). GECAM is scheduled to work in a survey mode, in which the satellites point away from the Earth. Because of a power issue, GECAM-B has been set to the "pointing" mode, with the solar panel oriented towards the Sun, since January 14, 2021, in order to provide the maximum energy to the spacecraft. Unfortunately, as of this writing, GECAM-A has failed to turn on its payload due to a power supply issue. GECAM-B works for about 10 hours per day.
### Trigger statistics and analysis
During its first year (2021) of operation, GECAM was triggered 858 times 3 on a variety of transient events in flight (see Figure 5): 42 of these are verified as GRBs, 32 as bursts from SGRs, 1 as a Type-I X-ray burst (XRB) from the X-ray binary 4U 0614+09 (Chen et al. 2022), and 783 as others (solar flares, charged particles, earth occultation, or instrument effect) by the Burst Advocate (BA). Table 1 shows the number of events classified by the GIRTLS, BAS, and BA. For example, 666 triggers are classified as GRBs by the GIRTLS. Among them, 42 are "real" GRBs, 32 are SGRs, 1 is an XRB, and 591 are Others. The GIRTLS has a 100% success rate classifying GRBs, but only a 24% success rate of not identifying other events as GRBs. Compared to the GIRTLS, 288 triggers are classified as GRBs by the BAS, and 34 of these are "real" GRBs. Eight "real" GRBs were misclassified as Generic sources by the BAS, as they were located near the galactic plane. The BAS has an 80% success rate classifying GRBs, and a 70% success rate of not identifying other events as GRBs. Most of those mis-classified as GRBs are particle events and instrument effects. We will continue to investigate additional improvements to the classification algorithms.
| | Classified As | Total | GRB | SGR | XRB | Others |
|---|---|---|---|---|---|---|
| GIRTLS | GRB | 666 | 42 | 32 | 1 | 591 |
| | Known Source | 185 | - | - | - | 185 |
| | Occultation | 7 | - | - | - | 7 |
| BAS | GRB | 288 | 34 | 5 | - | 249 |
| | Known Source | 25 | - | 25 | - | - |
| | Generic Source | 8 | 8 | - | - | - |
| | Solar Flare | 89 | - | - | - | 89 |
| | Occultation | 174 | - | - | - | 174 |
| | Particles | 141 | - | - | - | 141 |
| | Instrument Effect | 133 | - | 2 | 1 | 130 |

Table 1: The number of events classified by the GIRTLS, BAS, and BA. For each classifier, the "Total" column gives the number of triggers assigned to that class, and the remaining columns break these down by the BA classification. Others include solar flares, particles, occultations and instrument effects.
Figure 5: Monthly trigger statistics for year 2021. The trigger classification reported here are the result of auto-ground analysis.
Figure 6: Monthly bursts for year 2021. GECAM-B detected 42 GRBs (green star), 32 SGRs, 31 of them are from SGR 1935+2154 and the other one is from SGR J1555.2-5402 (blue square), and 1 XRB from 4U 0614+09 (light blue triangle) in 2021.
The monthly trigger statistics over the first year of the mission are shown in Figure 5. The higher rate of triggers in the first six months is due to the SiPM temperatures exceeding the design specifications (\(-20\pm 3^{\circ}\)C) when the spacecraft adjusted its attitude mode. This leads to increased thermal noise and may give false triggers. The SiPM is also prone to significantly increased thermal noise caused by on-orbit radiation damage, thereby decreasing its signal-to-noise ratio. This is clearly suggested by a significant decrease after April in the rate of triggers on the occultation of Sco X-1, which has a soft spectrum (see Figure 5). Thus, we raised the low-energy threshold of the GRDs on December 30, 2020, January 5 and 18, 2021, and February 19, 2021; the current low-energy threshold is about 15 keV. In addition, on January 27, 2021 we presented the first report of the reactivation of SGR J1935+2154 (Huang et al. 2021). GECAM also detected a series of bursts from this source in July and September of 2021.
Table 2 summarizes all 42 in-flight triggered GRBs from the first year's operation of GECAM. Figure 7 shows the sky distribution of the GRBs in celestial coordinates. There are 27 GRBs that are localized by other instruments (e.g., _Swift_-BAT, _Fermi_-GBM) or the IPN. These reference locations are also listed in Table 2. Figure 8 shows the fraction of GECAM in flight and ground localizations within a given offset from the reference location. The vertical dot-dashed line shows that 68% of the reference locations are contained in a \(\sim 9^{\circ}\) region for both in-flight and ground locations.
Figure 9 shows the time delay between the trigger time and the receiving time of the first or second short message of the trigger. The average time delay is 45 s and the minimum time delay is 25 s. About 95% of the triggers have time delays of less than 67 s. This is necessary for follow up observations and has led to several observations, e.g., DeLaunay et al. (2021); Lipunov et al. (2021b). The time delay includes two parts. The first one is the delay from on-board signal processing. For short triggers, it takes \(\sim\)5 s to process the data, while for long triggers, it takes about 20 s. The second one is the delay between the message sending on-board and receiving on ground via the BeiDou short message service. Since the message is transmitted every 17 s, there will be an extra 17 s delay if the previous message failed to be received.
The BeiDou short messages are not only transmitted in real time, but also stored in the on-board storage and transmitted via the X-band ground station. We can thereby estimate the success rate of transmissions by comparing the data from the two methods. Figure 10 shows the total number and the number of lost BeiDou short messages per day in 2021. Around January 15, most of the messages failed to transmit due to the attitude of the satellite. Because of the power supply issue, the satellite has to be frequently turned off, which causes some messages to fail to be sent in time before the satellite shuts down. Regardless of the satellite status, the success rate is 94.6%, which is consistent with the official result given by the BeiDou system.
Figure 8: The fraction of GECAM in-flight (42 GRBs) and ground (37 GRBs) localizations lying within a given offset from the real position. The vertical dot-dashed line indicates the 68% containment radius.
## 5 Conclusions and perspectives
GECAM is China's first transient explorer with a real-time alert system, which is capable of distributing GRB coordinates to ground observers within minutes using the BeiDou-3 short message service. During the first year of operation, GECAM was triggered 858 times in flight, of which 42 were GRBs. The BAS processes the trigger alert data and provides refined classifications and localizations. The burst alert data can be transmitted to our collaborators within \(\sim\)1 minute. As of this writing, we are also collaborating with the GCN team on disseminating the notices via GCN. The in-flight performance shows that the GECAM real-time BAS based on the BeiDou-3 short message service operates stably and efficiently. It has been applied to the subsequent GRB mission High Energy Burst Searcher (HEBS), which is a gamma-ray burst monitor on-board an experimental satellite to be launched in 2022 (Xiao et al. 2022).
The GECAM mission aims to detect and localize GRBs associated with GW events. However, the low luminosity and flux of GRB 170817A suggest that a population of short GRBs may be missed due to the lack of on-board triggers. In addition to the automated flight triggers, GECAM will also provide a targeted coherent search for GRBs associated with GW events, and search for sub-threshold short GRBs, which can be used to search for low-significance GW signals. Moreover, a further dedicated effort is ongoing to improve the ground localization, classification, and automatic alerting procedure. GECAM is going to play a crucial role in the forthcoming fourth observing run (O4) of LIGO, Virgo, and KAGRA to search for and characterize the EM counterparts of GW events. It is also necessary to fully exploit the scientific potential of neutrinos and fast radio bursts, since these events also require high-energy EM observations for identification and further
Figure 9: Histogram of the time delays between the trigger time and the receiving time of the first or second short message of the trigger.
###### Acknowledgements.
The GECAM (Huairou-1) mission is supported by the Strategic Priority Research Program on Space Science of the Chinese Academy of Sciences. The authors thank the support from the Strategic Priority Research Program on Space Science (Grant No. XDA15360000, XDA15360300, XDA15360102, XDA15052700) of the Chinese Academy of Sciences, the National Natural Science Foundation of China (Grant No. U2031205, 12133007), and the National Key R&D Program of China (2021YFA0718500, 2022YFF0711404).
|
2308.03873 | Evaluating and Explaining Large Language Models for Code Using Syntactic
Structures | Large Language Models (LLMs) for code are a family of high-parameter,
transformer-based neural networks pre-trained on massive datasets of both
natural and programming languages. These models are rapidly being employed in
commercial AI-based developer tools, such as GitHub CoPilot. However, measuring
and explaining their effectiveness on programming tasks is a challenging
proposition, given their size and complexity. The methods for evaluating and
explaining LLMs for code are inextricably linked. That is, in order to explain
a model's predictions, they must be reliably mapped to fine-grained,
understandable concepts. Once this mapping is achieved, new methods for
detailed model evaluations are possible. However, most current explainability
techniques and evaluation benchmarks focus on model robustness or individual
task performance, as opposed to interpreting model predictions.
To this end, this paper introduces ASTxplainer, an explainability method
specific to LLMs for code that enables both new methods for LLM evaluation and
visualizations of LLM predictions that aid end-users in understanding model
predictions. At its core, ASTxplainer provides an automated method for aligning
token predictions with AST nodes, by extracting and aggregating normalized
model logits within AST structures. To demonstrate the practical benefit of
ASTxplainer, we illustrate the insights that our framework can provide by
performing an empirical evaluation on 12 popular LLMs for code using a curated
dataset of the most popular GitHub projects. Additionally, we perform a user
study examining the usefulness of an ASTxplainer-derived visualization of model
predictions aimed at enabling model users to explain predictions. The results
of these studies illustrate the potential for ASTxplainer to provide insights
into LLM effectiveness, and aid end-users in understanding predictions. | David N Palacio, Alejandro Velasco, Daniel Rodriguez-Cardenas, Kevin Moran, Denys Poshyvanyk | 2023-08-07T18:50:57Z | http://arxiv.org/abs/2308.03873v1 | # Evaluating and Explaining Large Language Models for Code Using Syntactic Structures
###### Abstract
Large Language Models (LLMs) for code are a family of high-parameter, transformer-based neural networks pre-trained on massive datasets of both natural and programming languages. These models are rapidly being employed in commercial AI-based developer tools, such as GitHub CoPilot. However, measuring and explaining their effectiveness on programming tasks is a challenging proposition, given their size and complexity. The methods for _evaluating_ and _explaining_ LLMs for code are inextricably linked. That is, in order to explain a model's predictions, they must be reliably mapped to fine-grained, understandable concepts. Once this mapping is achieved, new methods for detailed model evaluations are possible. However, most current explainability techniques and evaluation benchmarks focus on model robustness or individual task performance, as opposed to interpreting model predictions.
To this end, this paper introduces AST_xplainer_, an explainability method specific to LLMs for code that enables both new methods for LLM evaluation and visualizations of LLM predictions that aid end-users in understanding model predictions. At its core, AST_xplainer_ provides an automated method for aligning token predictions with AST nodes, by extracting and aggregating normalized model logits within AST structures. To demonstrate the practical benefit of AST_xplainer_, we illustrate the insights that our framework can provide by performing an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects. Additionally, we perform a user study examining the usefulness of an AST_xplainer_-derived visualization of model predictions aimed at enabling model users to explain predictions. The results of these studies illustrate the potential for AST_xplainer_ to provide insights into LLM effectiveness, and aid end-users in understanding predictions.
explainability, interpretability, large language models, dl4se
## I Introduction
The advent and proliferation of online open-source code repositories and rapid advancements in transformer-based neural large language models (LLMs) have served as a catalyst for the advancement of automated Software Engineering (SE) tools with rapidly advancing effectiveness. LLMs for code have demonstrated considerable proficiency across a diverse array of generative SE tasks, inclusive of, but not restricted to, code completion [1, 2], program repair [3, 4], and test case generation [5]. Moreover, these advancements are rapidly being introduced into commercial developer tools such as GitHub CoPilot [6] and Replit's Ghostwriter [7].
However, the sheer complexity and size that enable the often surprising effectiveness of LLMs for code is a double-edged sword. That is, while these attributes enable LLMs to capture important patterns in code that allow them to be applied to a range of programming tasks, effectively _explaining_ and _evaluating_ the capabilities of these models is a challenging proposition -- they effectively function as "black boxes" that derive predictions from exceedingly complex internal model mechanics. Current research in both designing LLMs for code and in applying them to programming tasks typically makes use of existing benchmarks (_e.g._, CodeSearchNet [8], or HumanEval [9]) and metrics that have been adapted from the field of natural language processing (NLP) such as accuracy, BLEU, METEOR, and ROUGE, as well as more recent metrics further tailored for code such as CodeBLEU [10]. However, recent work has illustrated the limitations of benchmarks such as HumanEval [11], and there has been growing criticism of automated metrics within the NLP community [12, 13, 14, 15]. These deficiencies largely stem from the fact that such benchmarks and metrics are often targeted at evaluating functional or syntactic correctness of generated code or task performance, but are not able to _explain_ model predictions or capabilities in an interpretable manner.
Methods for _evaluating_ and _explaining_ LLMs for code are inextricably linked to one another. An informative evaluation requires some degree of explainability of model predictions, such that model behavior can be understood at a fine-grained level. However, the fundamental challenge in achieving explainability of LLMs for code lies in establishing a reliable mapping mechanism that can bridge the gap between a given model's predictions and human-understandable programming
language (PL) concepts that can aid in explaining the model's decisions. As such, designing both effective evaluations and interpretability techniques for LLMs of code requires that one first establish this conceptual mapping.
To overcome the challenges in explaining and evaluating LLMs for code we propose a novel method for enabling a reliable conceptual mapping of LLM predictions to PL concepts, called AST_xplainer_, which collects and aggregates LLM token predictions into a construct that we call Abstract Syntax Concepts (_AsC_), derived from Abstract Syntax Trees (ASTs). By explicitly mapping model predictions to code structure, AST_xplainer_ provides a fine-grained methodology for examining _how_ models perform relative to programming language concepts, and can help model end-users reason about _why_ an LLM may have made a certain set of predictions. AST_xplainer_'s mapping of model predictions to _AsCs_ enables two new types of evaluations for LLMs of code, and one novel interpretability technique that visualizes model _AsCs_ to aid end users (_i.e.,_ developers using LLMs to auto-complete code) in understanding LLM predictions. Fig. 1 illustrates these three main components of AST_xplainer_.
The first evaluation technique, called AsC-_Eval_, is able to estimate the structural performance of a predicted syntax element in order to measure the uncertainty of the downstream code generative process (e.g., for code completion). The second evaluation technique, called AsC-_Causal_, is capable of generating causal explanations that link these structural performance values with canonical model performance (i.e., Cross-Entropy Loss). Finally, AsC-_Viz_ implements a practical interpretability technique by visualizing LLM prediction uncertainty, organized into AST structures, aiding end-users in understanding the reliability of model predictions in practice. We evaluate AsC-_Eval_ and AsC-_Causal_ through a large-scale, comprehensive empirical study that evaluates 12 popular LLMs on a novel dataset of \(\approx\) 10 million tokens that are exclusive of the models' training data. Furthermore, to evaluate the effectiveness of AsC-_Viz_, we conduct a user study examining the utility of multiple visualizations in aiding developers to understand and explain model predictions. The results of our empirical study lead to novel insights regarding the performance of LLMs for code, and the user study illustrates the promising utility of AsC-_Viz_.
The contributions of this paper are as the following:
* An _evaluative metric_ based on Abstract Syntax Concepts and Next-token Predictions (AsC-_Eval_).
* An explainability method (AsC-_Causal_) that links canonical evaluations with our Abstract Syntax Concepts to provide insights into why the cross-entropy loss is being affected by structural elements of code data.
* A user study that shows how AST visualizations (AsC-_Viz_) help to understand generated code.
* A benchmark to evaluate Abstract Syntax Concepts in LLMs, which includes a curated dataset (_Galerus_) of 50K python samples.
* Experimental data, curated datasets, source code, and complementary statistical analysis used in this research are published in an open-source repository, which is available at [https://github.com/WM-SEMERU/CodeSyntaxConcept](https://github.com/WM-SEMERU/CodeSyntaxConcept).
## II Background & Related Work
AST_xplainer_ is an evaluative and explainability approach to quantify the prediction uncertainty of LLMs for code. LLMs are the result of scaling up billions of parameters for context-aware word representations from pre-trained models [16]. This section defines and formalizes the basic elements of our approach. We provide a definition of LLMs and how to evaluate them, the definition of Abstract Syntax Trees (ASTs) and how they were employed for probing, and finally, the explainability methods for LLMs.
### _Large Language Models for Code_
Our research focused on LLMs because of their outstanding performance on code-based generative tasks. While other representations exist, such as graph-based models [17, 18], we focus our discussion on sequence-based representations for simplicity. The goal of sequence-based models is to statistically learn a representation of a software artifact (_e.g.,_ snippet, comments, or test cases). We refer to SE-specific sequence-based data as a software corpus \(\mathcal{S}\). Given the sequential nature of \(\mathcal{S}\), we can decompose \(\mathcal{S}\) into a desired granularity of tokens, words, or sub-words [19] by using a transformation function \(\Gamma(\mathcal{S})=w_{1},...,w_{I}\) (_i.e., tokenizers_). This transformation function is a tokenization method for converting a software corpus into a sequence of discrete objects \(w_{i}\) for \(1\leqslant i\leqslant I\). Note that \(w_{i}\in V\), where the vocabulary \(V\) is a finite set.
Given this definition, a statistical language model is a probability distribution \(P\) over a fixed granularity of sequences of software corpora \(\mathcal{S}\). We can factorize the joint distribution over the \(i-\)dimension as: \(P(\mathcal{S})=P(w_{1},...,w_{I})=\prod_{i=1}^{I}P(w_{i}|w_{<i})\). Due to the discrete nature of the data, the expression \(P(w_{i}|w_{<i})\) can be estimated using a classifier. The classifier, in our particular case, is a LLM [20]. Hence, rather than using _n_-grams or Markov Models to approximate \(P(w_{i}|w_{<i})\)[21], it is convenient to use a latent model \(P(w_{i}|w_{<i})\approx P(w_{i}|h_{i})\), where \(h_{i}\) is known as a _hidden state_ that embeds the sequence information from past observations up to the time step \(i\).
Depending on _how_ the sequence is processed, the hidden state \(h_{i}\) can be computed using either _Encoder-Only_, _Encoder-Decoder_, or _Decoder-Only_ architectures according
Fig. 1: The evaluative and explainability method AST_xplainer_ is composed of AsC-_Eval_, AsC-_Causal_, and AsC-_Viz_.
to the _transformers_' layers [22]. One popular bidirectional objective function used widely in representation learning is _masked language_ modeling [23]. This function aims to predict masked text pieces based on the surrounding context. CodeBERT [24], CuBERT (345M) [25] CodeRoBERTa [26], and GraphCodeBERT [27] are examples of _Encoder-Only_ models for code. In programming contexts, these methods provide useful representations of code sequences for downstream tasks such as code classification, clone and defect detection. CodeT5 [28] and PLBART [4] are examples of _Encoder-Decoder_ models. These models encode an input sequence and, then, this encoded sequence is decoded with a different architecture. Encoder-Decoder models are trained with the goal of reconstructing masked input sequences [29]. Additionally, they have been employed for SE tasks such as code summarization, and code generation using masks [28]. Finally, _Decoder-Only_ models predict the probability of a token given a preceding sequence. CodeGPT [30], CodeParrot [31], GPT-Neo [32], GPT-J [33], Codex [34], GPT-NeoX [35], and Google's left-to-right decoder-only Transformer language models [22, 36] are examples of _Decoder-Only_ models for code.
Although our proposed approach AST_xplainer_ was designed to be compatible with either type of LLMs, this paper concentrated on _Decoder-Only_ models due to their popularity for code-based generative tasks [37]. These models share a common property: _the ability to connect previously processed information to a present task, such as using an initial sequence of tokens to predict new code tokens_. The resulting auto-completed sequence should be coherent with respect to the context of the initial sequence. This property is known as the ability to model _long-range dependencies_[38].
**Definition 1**: _Decoder-Only Transformers._ _Decoder-Only models update the hidden state \(h_{i}=f(h_{i-1},w_{<i})\) using past inputs \(w_{<i}\) and a previous hidden state \(h_{i-1}\). In other words, these models function in a feed-forward manner that predicts future values from historical values directly. LLMs trained on source code have the ability to generate tokens or sub-words given a history. Hence, decoder-only models are employed as generative models \(\hat{w}_{i}\sim P(w_{i}|w_{<i})=\sigma(y)_{i}=\frac{e^{y_{i}}}{\sum_{j}e^{y_{j}}}\). In the previous approximation, the predicted token \(w_{i}\) is _conditioned_ by the previous information. The term \(y_{j}\) represents the _non-normalized log-probabilities_ for each output token \(j\). We extracted and normalized these **log-probabilities** from the last layer of LLMs to estimate the **Next-token Predictions** (_NtP_) in AST_xplainer_ (see Sec.III). This estimation relies on the softmax function. The softmax \(\sigma_{i}\) returns a distribution over predicted output classes, in this case, the classes are each token in the previously introduced vocabulary \(V\). It is expected that the predictions contained in \(\sigma_{i}\) are influenced by previous inputs of the sequence \(w_{<i}\).
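As an illustration of how such normalized _NtP_ values can be extracted in practice, the following sketch uses the Hugging Face transformers API with a generic decoder-only checkpoint; "gpt2" is a stand-in for the models studied here, not the exact setup of AST_xplainer_:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def next_token_probabilities(snippet, model_name="gpt2"):
    """Return, for every position i > 0, the probability the decoder-only
    model assigned to the actual token w_i given the prefix w_{<i}."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    ids = tokenizer(snippet, return_tensors="pt").input_ids        # shape (1, I)
    with torch.no_grad():
        logits = model(ids).logits                                 # shape (1, I, |V|)

    probs = torch.softmax(logits, dim=-1)                          # sigma(y)
    # Position i-1 predicts token i, so gather the probability of the next token.
    ntp = probs[0, :-1, :].gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    tokens = tokenizer.convert_ids_to_tokens(ids[0].tolist())
    return list(zip(tokens[1:], ntp.tolist()))
```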
### _ASTs Probing Approaches_
_Probing_ is a supervised analysis to determine which type of parameters (_e.g.,_ input code snippets, tokenization process, number of hidden layers, and model size) influence the learning process in machine learning models [39]. The purpose of probing is to assess whether hidden representations of machine learning models (_i.e.,_ LLMs) encode specific linguistic properties such as syntactic structures of programming languages. For example, Lopez et al. [40] trained a linear classifier to show that code syntactic structures are encoded in pre-trained models in the form of Abstract Syntax Trees (ASTs). Lopez et al.'s approach demonstrates that the middle layers of pre-trained models contain ASTs' information [40].
Nonetheless, instead of proposing another syntax probe, our approach AST_xplainer_ adapts AST information to evaluate and explain LLMs (see Sec. III). ASTs are defined as a formal representation of syntactical structures built upon linguistic elements of PLs. ASTs are formed according to the production rules defined in Context Free Grammar (CFGs). More precisely, production rules are functions that combine terminal and non-terminal nodes into statements. Terminal nodes are symbols in the source code (_e.g.,_ tokens in region 3 of Fig.5), while non-terminal nodes encapsulate more than one terminal node to define the structure of a statement (_e.g.,_ nodes containing children in region 7 of Fig. 5).
When designing our approach AST_xplainer_ (see Sec.III), we leveraged meaningful and interpretable information defined in Context-Free Grammars (\(CFGs\)). \(CFGs\) are a set of rules containing the syntax and structural information of a language [41]. Ultimately CFGs define instructions that specify how different tokens (_i.e.,_ Lexemes) are put together to form valid statements in every programming language.
**Definition 2**: _Context Free Grammars._\(CFG\)\(\mathbb{G}\) is expressed as \(\mathbb{G}=(\alpha,\lambda,\omega,\beta)\) where \(\alpha\) denotes the finite set of non-terminal symbols, \(\lambda\) the finite set of terminal symbols, \(\omega\) the finite set of production rules and \(\beta\) the start symbol. The set of production rules \(\omega\) for any type of statement (_e.g.,_ conditional, assignation, operator) is expressed in terms of the terminal and non-terminal symbols._
### _Explainability for Code Generation_
LLMs for code can be considered a black box because of their uncertain behavior when predicting tokens. To estimate such uncertainty, we can employ _explainability_ methods on LLMs. Explainability aims to understand how a model operates and comes to decisions either by exploring inner layers or performing perturbation analysis on the models' inputs [42, 43]. For example, Gholizadeh et al. [44] propose a local explainability technique, namely layer-wise relevance propagation (LRP), that computes the importance of an interpretable _n_-gram in classifying a text sequence. LRP calculates a score with the sum of activated weights during the back-propagation to identify the most influential _n_-grams. This score is employed for explaining the importance of a given _n_-gram for a canonical (_i.e.,_ SVM) and a neural model (_i.e.,_ CNN). The authors demonstrated that LRP outperforms the gradient-only-based and permutation-only-based explainability techniques [44]. It is important to clarify that, in our research, _explainability_ and _interpretability_ are used interchangeably.
In the context of pre-trained models for code, Liu et al. experimented with Encoder-Decoder models for code2code
and comment2code tasks (_e.g.,_ T5, CodeText, and CodeTrans). Their research aims at explaining why neural models generate code sequences reliably by identifying tokens that contribute the most to a sequence prediction [15]. Moreover, Vasconcelos et al. propose a technique that highlights generated code using an uncertainty threshold. Their approach points out fragments of the sequence where developers can intervene upon the uncertainty threshold [45]. On the other hand, we can explain pre-trained models for code using structural information. For instance, Wan et al. conducted an interpretability analysis on Encoder-only models (_e.g.,_ CodeBert and GraphCodeBert) focusing on three aspects: 1) how the self-attention weights align with the syntax structure, 2) whether the syntax structure is encoded in the hidden layers, and 3) how pre-trained models induce syntax structure [14].
Even though previous research has introduced explainability techniques to analyze pre-trained models with structural information, those techniques have been tested and designed for modest-size Encoder-Only models (_i.e.,_ less than 1B). Conversely, our study AST_xplainer_ proposes not only an explainability technique that contextualizes canonical metrics (_i.e.,_ cross-entropy loss) based on causal inference (see Fig.4) but also an evaluative metric (AsC-_Eval_) for Decoder-only LLMs that predicts ASTs terminal and non-terminal nodes. More importantly, we introduce and control a set of confounders based on code features (_e.g.,_ AST-levels, AST-nodes, and number of tokens) to properly estimate the relationship between AsC-_Eval_ and canonical metrics (see Tab. II).
Kim et al. [13] introduce a formal mathematical structure known as a **function for explainability** (\(\varphi\)). We use this definition to formally describe what constitutes an explainable method in SE. Most LLMs for code operate by predicting tokens \(P(w_{i}|d_{i})\) that do not _inherently_ match high-level concepts a human can easily understand. Kim et al. claim that such difficulty can be expressed mathematically as representing the state of LLMs as a vector space (\(\vec{m}\)). Conversely, humans or, in our study, developers operate in a different vector space \(\vec{h}\), which corresponds to an unknown set of **human-interpretable concepts** (\(h\)). As such, our main challenge is to map \(\vec{m}\rightarrow\vec{h}\) bridging this gap between the disparate vector spaces. The _key insight_ of AST_xplainer_ is the formalization of an explainability function \(\varphi\) for LLMs of code.
**Definition 3**: **Interpretability Function for Next Token Predictions.** Consider \(\varphi:\vec{m}\rightarrow\vec{h}\). In this formulation, \(\vec{m}\) represents an approximation of a model's vector space as measured through token prediction performance at different granularity levels (_i.e.,_ normalized log-probabilities). This vector space approximation is then mapped to human-understandable concepts \(\vec{h}\) that represent programming language syntactic concepts (_i.e.,_ terminal and non-terminal nodes).
## III The AsC-_Eval_ Component
While LLMs have seen striking advances with regard to code generation and other downstream SE tasks [46, 47], researchers are still not able to evaluate what aspects of code are actually statistically learned by these models. In this section, we propose a new metric, AsC-_Eval_, to showcase the statistical behavior of syntactic elements generated by LLMs. Our proposed AsC-_Eval_ comprises the basic units for explainability (see Fig. 2) as Abstract Syntax Concepts (_AsC_), an alignment function \(\delta\) that links tokens with ASTs, and an aggregation function \(\theta\) that estimates the prediction performance of a terminal and non-terminal nodes. We propose an explainability function \(\varphi\) that relies on the alignment function \(\delta\) and the aggregation function \(\theta\) to perform the mapping from log-probabilites (_i.e.,_ _NtP_) to developer-understandable concepts (_i.e.,_ _AsC_).
### _Abstract Syntax Concepts (AsC)_
AsC-_Eval_ can be formally defined (see Def. 3) as an explainability function \(\varphi\) of token predictions of LLMs using Context Free Grammars. We introduce the term **Abstract Syntax Concepts** (_AsC_) to represent the terminal and non-terminal symbols in a Context Free Grammar (see Def 2). Specifically, to approximate a LLMs' vector space, in \(\vec{m}\), we extract the last layer to calculate _NtP_, which is, in fact, a generative measure of performance. Then in \(\vec{h}\), we map the model's prediction performance at the token level (_NtP_) to _AsC_ (for which we define a set of categories \(\mathcal{H}\)), to make it easier to interpret what aspects of LLMs are _effective_ or _erroneous_ at predicting.
In PLs, terminal and non-terminal nodes retain different semantic meanings. For instance, 'identifier' and 'string' nodes correspond to a common _Natural Language_ concept category. As such, we can group nodes \(n\) into semantically meaningful _categories_ \(\mathcal{H}\). Fig. 3 depicts some of our proposed categories for Python, i.e., some of the concepts used to evaluate LLMs with AsC-_Eval_. These categories will allow AsC-_Eval_ to assign semantic meaning to predicted _AsC_. _AsC_ are the fundamental mathematical units for enabling the evaluation and explainability of LLMs. Concepts \(n\in N\) are types of symbols defined by tree-sitter's \(CFG\) [48]. In summary, each token in a sequence \(s\) can be assigned to a category \(h\in\mathcal{H}\). With our categories \(\mathcal{H}\), researchers and developers can easily associate LLM performance with particular structural code attributes. As such, AsC-_Eval_ allows LLM Next-token Predictions to be explained in a developer-centric way.
Fig 2-A depicts the AST representation of a Python snippet of a naive implementation of the function \(countCharts\). This function counts and returns the number of occurrences of a given character for an input string. In the AST representation, the leaf nodes correspond to the terminal tokens used in the snippet, while the intermediate nodes correspond to non-terminals. Our approach relies on the tree-sitter library [48] to construct the AST representations of the snippets. Once the AST has been parsed, we can access the information for all nodes and retrieve useful properties such as their type, children, and location.
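A minimal sketch of this parsing step is shown below. It assumes the py-tree-sitter bindings and the Python grammar wheel, whose APIs differ slightly between versions, and the example snippet is illustrative:

```python
from tree_sitter import Language, Parser
import tree_sitter_python as tspython   # Python grammar wheel (version-dependent API)

PY_LANGUAGE = Language(tspython.language())
parser = Parser(PY_LANGUAGE)

code = b"def count_chars(string, char):\n    return string.count(char)\n"
tree = parser.parse(code)

def walk(node, depth=0):
    """Print the type, byte offsets and terminal/non-terminal status of every node."""
    kind = "terminal" if len(node.children) == 0 else "non-terminal"
    print("  " * depth, node.type, (node.start_byte, node.end_byte), kind)
    for child in node.children:
        walk(child, depth + 1)

walk(tree.root_node)
```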
### _AST Alignment function (\(\delta\))_
Figure 2-B illustrates the process of aligning terminal and non-terminal nodes in the AST representation with their corresponding tokens. Prior to this alignment process, we split the \(countCharts\) snippet \(s\) into tokens using the model tokenizer \(\Gamma(s)=(w_{1},...,w_{i})\). Since the tokenizer may produce a sequence of tokens where each token does not necessarily matches with a single terminal node, a single node in the AST may contain more than one associated token. In fact, intermediate nodes are aligned with a sub-sequence of the original snippet rather than a single token. We define for this purpose the alignment function \(\delta:N\to s_{<=i}\) where \(s_{<=i}\) corresponds to a subsequence of a snippet and \(N\) is the set of terminal and non-terminal nodes. We leverage the offset property of each AST node to conduct this process, in other words, we search for all the tokens in \(s\) that are located within the offset range of each node. To illustrate how function \(\delta\) works, let's consider the example in Figure 2-B, in the sub-tree the terminal node '(' is aligned with token [] while the sibling node 'identifier' is aligned with tokens [].
The parent node 'parameters' will be consequently aligned with [].
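A simplified version of \(\delta\) is sketched below. The token offsets are assumed to come from a tokenizer that reports character offsets (e.g., a Hugging Face fast tokenizer called with return_offsets_mapping=True), and character and byte offsets are assumed to coincide, which a full implementation must handle explicitly:

```python
def align_tokens_to_nodes(root_node, offsets):
    """Sketch of the alignment function delta: AST node -> indices of its tokens.

    root_node : a tree-sitter node (exposes start_byte, end_byte, children).
    offsets   : list of (start, end) character offsets, one per model token.
    Returns a list of (node, token_indices) pairs covering the whole tree.
    """
    alignment = []

    def visit(node):
        token_ids = [
            i for i, (start, end) in enumerate(offsets)
            if start >= node.start_byte and end <= node.end_byte and start < end
        ]
        alignment.append((node, token_ids))
        for child in node.children:
            visit(child)

    visit(root_node)
    return alignment
```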
### _AST Aggregation function (\(\theta\))_
We design an aggregation function \(\theta\) that computes our proposed metric AsC-_Eval_, which represents how confident a terminal or non-terminal node \(n\) is predicted by an LLM. By relating these node predictions to an actual node symbol, we gain an understanding of how well a studied model is _generating code_. These AsC-_Eval_ performance values can also uncover specific long-range interactions and map them into an AST visual structure (see Sec. V). AsC-_Eval_ performs at two levels of granularity depending on the scope of the analyzed corpus \(\mathcal{S}\). We refer to such granularity as _local_ and _global_ aggregation. Local aggregations operate for a code snippet, while global aggregations operate for a corpus. Although local aggregation can provide a AsC-_Eval_ value for a single snippet, this aggregation allows computing an average of aggregated values at snippet granularity.
Figure 2-C shows the aggregation function used to compute the prediction probability for each node. Once the tokens are aligned with their corresponding nodes using \(\delta\), we traverse the entire AST and aggregate the _NtP_ probabilities of their associated tokens. The aggregation function \(\theta\) can take the form of a statistical average, median or max values depending on the user configuration. In our study, we set the aggregation \(\theta:N\to median(\delta(N))\) for a subset of tokens \(s_{<=i}\). For example, as illustrated in Fig. 2-C, the parent node 'parameters' has an associated average value of \(0.23\). This parent node average was aggregated with its terminal values: '(' with \(0.07\), 'identifier' with \(0.4\), ',' with \(0.5\), 'identifier' with \(0.1\), and ')' with \(0.1\).
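A sketch of \(\theta\), combining the alignment with the median aggregation used in our study, is given below. The node interface follows tree-sitter, and the _NtP_ values are assumed to be ordered consistently with the token offsets; mean or max are drop-in alternatives for the aggregator:

```python
import statistics

def asc_eval(root_node, offsets, ntp_values, aggregate=statistics.median):
    """Sketch of the aggregation theta: per-node AsC-Eval scores.

    offsets    : (start, end) character offsets of each model token.
    ntp_values : Next-token Prediction probability of each token
                 (e.g. from the next_token_probabilities sketch above).
    Returns a list of (node type, aggregated score) pairs.
    """
    scores = []

    def visit(node):
        vals = [
            p for (start, end), p in zip(offsets, ntp_values)
            if start >= node.start_byte and end <= node.end_byte and start < end
        ]
        if vals:
            scores.append((node.type, aggregate(vals)))
        for child in node.children:
            visit(child)

    visit(root_node)
    return scores
```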
## IV The AsC-_Causal_ Component
In this section, we show how AsC-_Causal_ component can be used to explain and contextualize other canonical metrics such as the cross-entropy loss. To achieve that, we propose a causal inference technique to estimate the impact of Abstract Syntax Concepts (_AsC_) predictions on overall LLM performance.
LLMs are more understandable when they _reflect human knowledge_[13]. One way of determining whether an LLM for code reflects human knowledge is testing it to see whether or not it operates _similarly to how a developer would estimate the prediction of a sequence_[49]. For instance, consider the situation where a developer inserts a 'for statement' in a snippet. Inherently, a developer mentally rationalizes several things such as the concept of _Iteration_. If an LLM is able to make a similar prediction, it suggests to us that it has _statistically learned_ some understanding of the syntax structure of a programming cycle. We can consider that this statistical behavior impacts the cross-entropy loss. This impact indicates that Abstract Syntax Concepts (_AsC_) are influencing the quality of an LLM (see Def. 4). In order to estimate such influence, we propose a causal inference technique based on the do-calculus analysis [50, 51, 52]. For instance, in Eq. 1, we compute a causal effect (Eq. 1a) and a correlation (Eq. 1b) for the concept treatment 'for_statement' impacting the cross-entropy loss of a given LLM.
\[p(Y|do(t=forState.)) =\sum_{z\in codeFact.}p(Y|z,t)p(t) \tag{1a}\] \[p(Y|t=forState.) =\sum_{z\in codeFact.}p(Y|z,t)p(t|z) \tag{1b}\]
We can explain the prediction performance of LLMs using AsC-_Eval_ values as treatment effects. These effects are computed from a **Structural Causal Model** (SCM), which
Fig. 2: AsC-_Eval_ Components. Left: Nodes are employed as “concepts”. Center: Each token is aligned to the end nodes of the AST with an offset function. Right: Node probabilities are estimated with an aggregation function.
represents our assumptions about the underlying causal process. In our study, these assumptions take the form of the performance of each _AsC_ (treatments \(T\)), code features (confounders \(Z\)), and the LLMs' canonical performance (outcome \(Y\)). The relationship or directionality information of these causal variables is explicitly stated in the SCM (see Fig. 4). The goal of the causal analysis is to determine the _Average Treatment Effect_ (ATE) that a treatment has on the outcomes after controlling for the confounding variables. In other words, we want to estimate the probability \(p(Y|do(T))\) (see Eq. 1a) to identify cases of _spurious correlations_ (_i.e.,_ association is not causation) [53]. Note that the probability \(p(Y|do(T))\) is different from \(p(Y|T)\) in Eq. 1b. We state that the probability \(p(Y|T)\) represents the correlation between the variables \(Y\) and \(T\) without controlling for any confounder's effects on treatments or outcomes. In our study, we compute the Pearson correlation \(\rho=p(Y|T)\). Conversely, the treatment effect \(p(Y|do(T))\) is estimated with _a linear regression_ after applying _the backdoor criterion_ for controlling confounders [53].
**Definition 4**: _AsC Causal Treatment Effects._ Given a Structural Causal Model where a set of variables \(PA\) denotes the parents (_i.e.,_ code features) of \(T\), the treatment effect of T (_i.e.,_ _AsC_) on Y (_i.e.,_ cross-entropy loss) is given by
\[p(Y=y|do(T=t))= \tag{2a}\] \[\Sigma_{z}p(Y=y|T=t,PA=z)p(PA=z)=\] (2b) \[\Sigma_{z}p(T=t,Y=y,PA=z)/p(T=t|PA=z) \tag{2c}\]
Based on the causal inference definition by Pearl et al. [50], we propose a specific treatment effect for our Abstract Syntax Concepts _AsC_. Eq. 2 depicts the statistical marginalization of confounders. In simple terms, the Average Treatment Effect comprises the _pure_ impact of the treatment on the outcome without the influence of confounding variables. These effects represent the slope of the linear model obtained between the treatment and the output after controlling for confounding. In our study, we controlled for confounders such as _sequence size_, _number of nodes_, _number of tree levels_, and _cyclomatic complexity_.
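A minimal, library-free approximation of this estimation is an ordinary least-squares fit of the outcome on the treatment plus confounders, reading the ATE off the treatment coefficient. The snippet below is only a simplified stand-in for the doWhy-based pipeline used in the study, and the array layout is an assumption made for illustration.

```python
import numpy as np

def average_treatment_effect(t, y, Z):
    """Backdoor-style adjustment via linear regression.

    t : (n,)   AsC-Eval value of one syntax concept per snippet (treatment)
    y : (n,)   cross-entropy loss per snippet (outcome)
    Z : (n, k) confounders, e.g. sequence size, #AST nodes, tree levels, cyclomatic complexity
    Returns the slope of the treatment after controlling for Z.
    """
    t, y, Z = np.asarray(t, float), np.asarray(y, float), np.asarray(Z, float)
    X = np.column_stack([np.ones_like(t), t, Z])   # intercept | treatment | confounders
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]
```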
## V The AsC-_Viz_ Component
The visualization component AsC-_Viz_ is a graphical explainability technique that displays the AsC-_Eval_ performance values of the terminal and non-terminal nodes for a _single_ local evaluation. We take advantage of the hierarchical structure of PLs to visually accommodate the AsC-_Eval_ values into the AST. Fig. 5 illustrates how we accommodate the AsC-_Eval_ values for a code generation task using our analyzed _gpt-3 [1.3B]_ model. Region 1 shows a box with a prompt containing an incomplete snippet, followed by a second box with generated tokens in blue. Then, in region 2, the resulting auto-completed snippet is processed with AsC-_Eval_ and represented as an AST. Each node has information about the AsC-_Eval_ performance after applying local aggregations \(\theta\). The nodes are color-coded. The highest aggregated values (_i.e.,_ best predictions) are displayed in shades of blue. In contrast, nodes with the smallest values (_i.e.,_ worst predictions) are displayed in shades of red. Nodes,
Fig. 4: **Structural Causal Model** to estimate the Average Treatment Effect of _AsC_ to LLM Performance by controlling code features (AsC-_Causal_).
Fig. 3: AsC-_Eval_ for 10 _AsC_ Categories and 2 LLMs (_mono-lang [2B]_ and _gpt-3 [125M]_)
in region 2, encapsulate the code tokens generated by the LLM as presented in region 3. We refer to tokens linearly organized as _sequence representation_.
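A straightforward way to obtain such a color coding is to map each node's aggregated value onto a diverging red-to-blue colormap. The snippet below is an illustrative sketch only; the colormap choice and the neutral color for nodes without aligned tokens are assumptions, not the tool's actual palette.

```python
from matplotlib import colormaps
from matplotlib.colors import to_hex

def node_color(score, cmap_name="RdBu"):
    """Map an aggregated AsC-Eval value in [0, 1] to a hex color:
    low values (worst predictions) are shades of red, high values shades of blue."""
    if score is None:              # node with no aligned tokens
        return "#cccccc"
    return to_hex(colormaps[cmap_name](float(score)))
```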
## VI Experimental Design
In order to illustrate the insights that AST_xplainer_ can enable, we present an empirical evaluation on 12 LLMs, which shows how LLMs behave for each Abstract Syntax Concept, and a user study, which assesses the usability of our approach. This section details the methodological steps we followed to configure, evaluate, and explain our selected LLMs.
**RQ\({}_{1}\)**: _AsC-Eval: To what extent do Large Language Models for code predict syntactic structures?_
**RQ\({}_{2}\)**: _AsC-Causal: How do Abstract Syntax Concepts impact LLMs' canonical prediction performance?_
**RQ\({}_{3}\)**: _AsC-Viz: How useful is our AST evaluation method for developers in a practical scenario?_
### _Study Setup_
_Data Collection._ Our selected LLMs were trained on _BigQuery_[54], _BigPython_[55], and the _Pile_[56]. These datasets include repositories and files from GitHub created before 2021. However, in order to properly evaluate AsC-_Eval_, we must avoid data contamination. That is, we need to avoid evaluating on samples that have already been used for training the LLMs. For the same reason, we cannot evaluate our approach using popular code datasets such as _CodeSearchNet_[8] or _CodeXGLUE_[57]. To solve this data contamination issue, we collected 50k unique Python snippets and created a brand new code dataset, called _Galeras_. _Galeras_ contains only recent commits performed from January 1st, 2022 to January 1st, 2023. We collected Python repositories from GitHub that have more than one hundred stars and extracted snippets of code from new and updated Python methods. We cleaned sample duplicates using the commits' history. Additionally, _Galeras_ includes information about the commit message, comments on the method, the whole AST data structure of the method, the number of nodes, AST levels, AST errors, white spaces, lines of code, cyclomatic complexity, and token counts.
_Model Collection._ We evaluated and explained a total of 12 open Decoder-Only LLMs filtered by popularity. Our largest model has 2.7B parameters. Table I shows the LLMs grouped into four different categories that correspond with the fine-tuning strategy employed. The first category consists of GPT-3-based models trained mostly on natural language (_i.e.,_ Pile [56]). The second category includes models trained on natural language but built upon the _codegen_ architecture [58]. The third category consists of models trained on multiple programming languages using BigQuery [54] on both gpt-2 and codegen architectures. The last category corresponds to both _Multi-Language-Type_ models fine-tuned on BigPython [58], which we refer to as _Mono-Language-Type_, and gpt-2 models (_i.e.,_ codeparrot [31]).
_Machine Configuration._ We performed the experiments on Ubuntu 20.04 with an AMD EPYC 7532 32-Core CPU, an NVIDIA A100 GPU with 40GB VRAM, and 1TB RAM. For the model inference process, we used HuggingFace and PyTorch [59, 60]. All models were loaded into the GPU of the machine to boost the inference time.
### _Rq\({}_{1}\)_ The AsC-Eval Empirical Methodology_
To answer **RQ\({}_{1}\)**, we generated the normalized log-probabilities (see Sec. II) or Next Token Predictions (_NtP_) for each code snippet in \(\mathcal{S}=\)_Galeras_. These log-probabilities were extracted at inference time for each token position for the 12 LLMs. The log-probability distributions have a vector size of \(|V|\) for each token position in \(s\in\mathcal{S}\). These distributions are processed to obtain the log-probability that actually matches the expected token in a position \(i\). Therefore, each token position has an associated prediction value that we save for generating the _NtP_ sequence. This Next-token Prediction sequence is the input for the aggregation function \(\theta\) that generates the corresponding AsC-_Eval_ values (see Sec. III). Additionally, we computed the cross-entropy loss of each snippet \(s\) in our dataset. To obtain the AsC-_Eval Global_ value in Tab. I and Fig. 6, we aggregated AsC-_Eval_ performance values (_i.e.,_ all available _AsC_) by LLM. The values per model are bootstrapped with the median (size of 500 samplings) to enable a fair comparison among models. Similarly, to obtain the AsC-_Eval_ per Abstract Syntax Concept Category (_e.g.,_ Data Str., Decision, or Scope), we globally aggregated performance values of tokens under these categories. We also explored Model-Type aggregations (see Table I).
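The per-position prediction values can be extracted from any causal LLM by shifting the logits by one position and reading off the probability of the token that actually occurs next. The sketch below uses a placeholder checkpoint name and is only meant to illustrate the idea, not to reproduce the exact pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CKPT = "EleutherAI/gpt-neo-125M"          # placeholder; any causal LM checkpoint works
tok = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForCausalLM.from_pretrained(CKPT).eval()

def next_token_predictions(code: str):
    """Return, for each position i > 0, the probability the model assigns to the
    token that actually appears at position i (the NtP sequence for one snippet)."""
    ids = tok(code, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                         # (1, seq_len, |V|)
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    expected = ids[:, 1:].unsqueeze(-1)                    # tokens that actually occur next
    ntp = log_probs.gather(-1, expected).squeeze(-1).exp()
    return ntp[0].tolist()
```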
### _Rq\({}_{2}\)_ The AsC-Causal Empirical Methodology_
To answer **RQ\({}_{2}\)**, we compute both Pearson correlation \(\rho\) values and causal treatment effects (see Def. 4) for a subset of 14 syntactic concepts (see Fig. 4). Specifically, we propose the treatments (\(T\)) _Scope_, _Exceptions_, _Operator_, _Decision_, _Data Structures_, _Functional Programming_, _Natural Language_, _Iterative_, _Types_, and _Testing_. Each _AsC_ was correlated with 4 confounding variables (_i.e.,_ Cyclo, AST Levels, #AST Nodes, and Sequence Size) and the cross-entropy loss of _gpt-3 [125M]_ and _mono-lang [2B]_. We decided to explore only edge-case LLMs (_i.e.,_ the best and worst models by AsC-_Eval_ performance) since we detected that the correlated values were very similar across LLMs. On the other hand, we estimated the probability of the treatment effect \(p(Y|do(T))\) for each _AsC_ and the cross-entropy loss by controlling the 4 previously mentioned confounders. This probability function was estimated using the _doWhy_ tool [53]. Table II summarizes the treatment effects and correlations between the AsC-_Eval_ values locally aggregated (see Sec. III) and the cross-entropy loss grouped by concept categories (\(\mathcal{H}\)).
### _Rq\({}_{3}\) Qualitative User-Study Methodology_
To answer **RQ\({}_{3}\)**, we designed four surveys to understand the perception of software practitioners in regard to the _usability_ of AsC-_Eval_ and AsC-_Viz_. Our goal is to assess the effectiveness of our AsC-_Eval_ and AsC-_Viz_ approaches to explain why and how certain source code tokens are predicted by LLMs trained on code. Leveraging interpretability techniques to explain
the decisions of such models can give software practitioners insights into the behavior and the quality of the predictions.
We introduced a set of code exemplars with their corresponding AsC-_Viz_ explanation. We asked the participants to rate the explanations for four Python samples distributed across treatments. We use a within-subjects design or repeated measures design, in which every individual receives each of the experimental treatments consecutively. Table III contains a summary with the description of each survey. Each individual survey has two sections. The first section of each survey intends to gauge the proficiency of participants using Python and their familiarity with language models for code generation tasks. Participants were also asked about their knowledge of representation of algorithms (AST) and the major problems that they have faced when using LLMs for source code generation.
In the second section, we provide four Python prompts with an incomplete method along with the prediction of the missing lines given by an LLM. Since our goal is to evaluate the usability of AST_xplainer_ rather than the perception of the participant in regard to the model performance, we omit details about the model used for predictions (_gpt-3 [1.3B]_). Each prompt is accompanied by a visualization (_i.e.,_ AST-partial, AST-complete, and sequence) that shows the AsC-_Eval_ and _NtP_ values for the predicted tokens. Then we ask the participant to assess the visualization and rate its usefulness. The visualizations are separated into different surveys.
Figure 5 presents an example of a local evaluation of a code completion task that was presented to the participants. The surveys ask participants to evaluate four different samples processed with AsC-_Eval_. Some of the samples have syntactic errors. Each survey comprises an incomplete Python method (_i.e.,_ prompt) and a complete method with the highlighted portion of the generated code (region 1 in Fig. 5). For each sample in the surveys, we presented a specific visualization. For instance, surveys (\(S2\)) and (\(S3\)) contain an AST-based representation similar to the one in Fig. 5. The AST-complete visualization (\(S3\)) shows the terminal and non-terminal nodes (region 2 in Fig. 5). In contrast, the AST-partial visualization (\(S2\)) only shows the non-terminal nodes. Finally, a sequence-based visualization of the generated logits for each token was presented to the participants in survey (\(S1\)) (region 3 in Fig. 5).
\begin{table}
\begin{tabular}{c c c c|c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{3}{c|}{**Large Language Models (LLMs)**} & \multicolumn{8}{c}{_ASC-Eval Performance (bootstrapped medium)_} \\ \hline _Type_ & _Name_ & _Architecture_ & _Size_ & _Global_ & Data Str. & Decision & Except. & F. Prog. & Iter. & NL & Oper. & Scope & Testing & Types \\ \hline \multirow{4}{*}{**Natural L.**} & get-noe-12sn [32] & _gpt-3_ & 125M & 0.48 & 0.50 & 0.52 & **0.43** & **0.49** & 0.74 & **0.32** & **0.48** & 0.51 & 0.59 & **0.33** \\ & get-noe-12sn [32] & _gpt-3_ & 1.38 & 0.59 & 0.60 & 0.61 & 0.53 & 0.62 & 0.79 & **0.43** & 0.57 & 0.68 & 0.68 & **0.44** \\ & get-noe-2.7B [32] & _gpt-3_ & 2.78 & **0.62** & 0.62 & 0.63 & 0.56 & 0.66 & **0.81** & **0.46** & 0.60 & 0.74 & 0.70 & **0.47** \\ \hline \multirow{4}{*}{**Natural L.**} & codepan-350M+ [61] & _codepan_ & 350M & 0.55 & 0.56 & 0.57 & **0.48** & 0.57 & 0.77 & **0.39** & 0.54 & 0.64 & 0.64 & **0.40** \\ & codepan-2B+ [61] & _codepan_ & 2B & **0.65** & 0.65 & 0.65 & 0.58 & 0.68 & **0.82** & **0.48** & 0.61 & 0.78 & 0.72 & 0.50 \\ \hline \multirow{4}{*}{**Multi-Language**} & codepan-small-neti [31] & _gpt-2_ & 110M & 0.57 & 0.54 & 0.55 & 0.64 & 0.60 & 0.60 & 0.54 & 0.71 & 0.67 & **0.42** \\ & codepan-350M+nuli [61] & _codepan-350M+al_ & 350M & 0.68 & 0.72 & 0.75 & 0.70 & 0.69 & 0.51 & 0.62 & 0.83 & 0.73 & 0.51 \\ & codepan-2B-nuli [61] & _codepan-2B-nl_ & 2B & **0.79** & 0.74 & 0.79 & **0.83** & 0.81 & 0.77 & 0.65 & 0.74 & **0.91** & 0.80 & 0.71 \\ \hline \multirow{4}{*}{**Mons-Language**} & codepan-small [31] & _gpt-2_ & 110M & 0.61 & 0.58 & 0.58 & 0.68 & 0.66 & 0.63 & **0.46** & 0.57 & 0.73 & 0.69 & **0.47** \\ & codepan-350M+nuo [61] & _codepan-350M+multi_ & 350M & 0.73 & 0.68 & 0.76 & 0.78 & 0.76 & 0.73 & 0.57 & 0.68 & **0.86** & 0.77 & 0.58 \\ & codepan-2B-nuo [61] & _codepan-2B-nnuli_ & 2B & **0.84** & 0.79 & **0.84** & **0.90** & **0.85** & **0.81** & 0.73 & **0.82** & **0.94** & **0.85** & **0.83** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Large Language Models characteristics and their associated AsC-_Eval_ performance. Erroneous AsC-_Eval_ values are in red. Confident AsC-_Eval_ values are in blue. Best global AsC-_Eval_ is underlined.
Fig. 5: Local Evaluation for Code Completion (AsC-_Viz_).
## VII Results & Discussion
### _Rq\({}_{1}\) Empirical AsC Performance Evaluation_
In this RQ, we provide an empirical value (bootstrapped median columns in Tab. I) of the prediction of Abstract Syntax Concepts for the 12 LLMs. We set a threshold of \(0.6\) as an acceptable rate of prediction confidence for our AsC-_Eval_ metric. Fig. 3, for example, shows our best and worst LLMs, _mono-lang [2B]_ and _gpt-3 [125M]_ respectively, at every proposed Abstract Syntax Concept. We observe that, in general, scaling the parameters of LLMs plays a fundamental role in the prediction of _AsC_. The dashed green boxes show the largest AsC-_Eval_ performance increments from the worst to the best concepts. Particularly, _Exceptions_, _Natural Language_, _Operators_, _Types_, and _Decisions_ present the biggest jumps in syntactic AsC-_Eval_ performance.
Our empirical evaluation shows that the _AsC_ categories that fulfill the \(0.6\) threshold for the 12 LLMs are _Scope_, with the highest AsC-_Eval_ performance of \(0.94\) for the _Mono-Language-Type_ models, _Iterations_, with \(0.82\) for _codegen-nl [2B]_, and _Testing_, with \(0.85\) for _mono-lang [2B]_ (see Table I). Conversely, we found that some concept categories struggle in terms of AsC-_Eval_ performance. We refer to these categories as _erroneous_ since they are below \(0.5\). Those categories are mainly _Natural Language_, with a largest average median of \(0.46\), and _Data Types_, with a largest average median of \(0.47\), for _NL GPT-3_.
We believe that models behave poorly (_i.e.,_ show low AsC-_Eval_ performance) on concept categories such as _Natural Language_ and _Data Types_ because these categories require more context to be accurately predicted. For instance, the 'string' concept requires a larger window context before being properly predicted. Similarly, the category _Data Types_ is prone to be erroneous since these concepts may appear more frequently at the beginning of the snippets compared to other categories. Also, bear in mind that _Data Types_ are less frequent concepts due to the dynamic typing of Python. In general, none of the evaluated architectures performed well at predicting _Data Types_ accurately except for _mono-lang [2B]_, which was trained with a large number of code samples.
Table I depicts that the _Iteration_ category mostly surpasses the threshold for all our models except for _codeparrot-small-multi_, with an average median AsC-_Eval_ of \(0.6\). Among our smaller models (_i.e.,_ in the range of millions of parameters), the lowest average median, obtained for _gpt-3 [125M]_, is \(0.74\), which also surpasses the threshold. This outstanding behavior of _NL GPT-3_ models could be explained by the fact that Python reserved words for iterations, such as **for** and **while**, also appear in natural language with similar semantics.
Fig. 6 indicates that models trained on natural language have more median variability than models fine-tuned on code datasets. For instance, _NL GPT-3_ and _NL Codegen_ report values in a range from \(0.2\) to \(0.9\). Conversely, fine-tuned models with code such as the _Mono-Language-Type_ models have lower variability than the _NL GPT-3_ and _NL Codegen_ categories. For example, _mono-lang [2B]_ has a global avg. median AsC-_Eval_ of \(0.84\) and a variability range between \(0.7\) and \(1.0\), exceeding the \(0.6\) threshold. Furthermore, _mono-lang [2B]_ is our best model with an average global AsC-_Eval_ of \(0.84\). On one hand, this suggests that fine-tuned models on code are predicting _AsC_ with higher confidence than natural language-only models. On the other hand, although _Multi-Language-Type_ models exhibit high variability (from \(0.5\) to \(0.9\)), their average median AsC-_Eval_ (_i.e.,_\(0.68\) for _multi-lang [110M]_) is even better than that of natural language models (_i.e.,_\(0.48\), with variability from \(0.2\) to \(0.8\), for _gpt-3 [125M]_).
_RQ\({}_{1}\) AsC-Eval:_ The prediction of syntactic structures highly depends on the LLM's parameter size and fine-tuning strategy. More specifically, our largest evaluated model, _mono-lang [2B]_, which was fine-tuned with the BigPython and BigQuery datasets, obtains the highest global average _AsC_ Performance of \(0.84\) with the lowest variability.
### _Rq\({}_{2}\) Empirical Causal Evaluation_
In this research question, we want to quantitatively demonstrate that the cross-entropy loss of LLMs tends to be negatively impacted by the AsC-_Eval_ values at snippet granularity. Therefore, we can explain at a lower granularity on which parts of the code LLMs perform poorly (see red boxes in Tab. I). We showcase empirical evidence that the previous statement holds for both correlation \(\rho\) and causal effect \(p(y|do(t))\) values. For example, Table II shows that, in general, all Abstract Syntax Concept Categories (_i.e.,_ Global Avg. AsC-_Eval_) influence the cross-entropy loss for our best (_i.e., mono-lang [2B]_) and worst (_i.e., gpt-3 [125M]_) models, with average treatment effects of \(1.78\) and \(-1.60\) respectively.
The most outstanding finding is that the _Natural Language_ category has the largest impact on the cross-entropy loss. For example, the _AsC_ concept 'identifier' has a causal effect of \(-1.78\) for _gpt-3 [125M]_ and \(-2.89\) for _mono-lang [2B]_. In contrast, _Functional Programming_ categories present the lowest impact on cross-entropy loss with a subtle 'lambda' positive causal effect of \(0.2\) for _gpt-3 [125M]_. This subtle positive effect was expected as NL-based LLMs have not been fine-tuned on code corpora with 'lambda' expressions. In
Fig. 6: AsC-_Eval_ Performance grouped by specific LLMs and AsC-_Eval_ density by Model Type.
addition, we want to highlight the moderate Pearson correlation value between the 'if statement' concept and the cyclomatic complexity for our best and worst models with the same value of \(\rho=0.58\). This observation is consistent with the definition of the cyclomatic complexity metric since this metric takes into consideration the control flows induced by conditional structures.
_RQ\({}_{3}\)__Usability of AsC-Eval_: We found that the partial visualization of the AST is the most _readable_ representation to showcase local aggregated predictions with 57% agreement. Although AST partial visualization has mixed opinions about its _usefulness_ with an agreement rate of 29%, the AST complete visualization has an agreement rate of 50%.
## VIII Conclusion & Future Work
Our research proposes an Abstract Syntax Concept approach to evaluate and explain the performance of Large Language Models for code. We conducted a rigorous empirical evaluation of 12 popular LLMs using a curated dataset that uncovered novel performance qualities of LLMs for code. Our empirical evaluation revealed that mono-language LLMs outperform multi-language LLMs at predicting all types of syntax concepts. This suggests the importance of fine-tuning strategies over model size. In addition, we demonstrated that Abstract Syntax Concepts influence the cross-entropy loss of LLMs after controlling for code confounders. In fact, this influence persists across models with different parameter sizes and fine-tuning strategies. Additionally, we illustrated the utility of visualizations built upon our defined Abstract Syntax Concepts in a user study. We believe these results illustrate the promise of blending explainability and evaluation techniques for LLMs of code, and signal the potential for future work to further integrate explainability techniques into future LLM benchmarks.
|
2306.15565 | A proof of Guo-Wang's conjecture on the uniqueness of positive harmonic
functions in the unit ball | Guo-Wang [Calc.Var.Partial Differential Equations,59(2020)] conjectured that
for $1<q<\frac{n}{n-2}$ and $0<\lambda\leq \frac{1}{q-1}$, the positive
solution $u\in C^{\infty}(\bar B)$ to the equation \[ \left\{ \begin{array}{ll}
\Delta u=0 &in\ B^n,\\ u_{\nu}+\lambda u=u^q&on\ S^{n-1}, \end{array} \right.
\] must be constant. In this paper, we give a proof of this conjecture. | Pingxin Gu, Haizhong Li | 2023-06-27T15:44:02Z | http://arxiv.org/abs/2306.15565v1 | # A proof of Guo-Wang's conjecture on the uniqueness of positive harmonic functions in the unit ball
###### Abstract.
Guo-Wang [_Calc.Var.Partial Differential Equations_, **59** (2020)] conjectured that for \(1<q<\frac{n}{n-2}\) and \(0<\lambda\leq\frac{1}{q-1}\), the positive solution \(u\in C^{\infty}(\bar{B})\) to the equation
\[\left\{\begin{array}{ll}\Delta u=0&in\ B^{n},\\ u_{\nu}+\lambda u=u^{q}&on\ S^{n-1},\end{array}\right.\]
must be constant. In this paper, we give a proof of this conjecture.
Key words and phrases: Uniqueness, Positive harmonic function, Sobolev inequality, Obata type identity, Pohozaev identity. 2020 Mathematics Subject Classification: 58J90, 35B33
## 1. Introduction
In the past decades, a great deal of mathematical effort has been devoted to the best constants in Sobolev inequalities. For \(n\geq 3\), a well-known problem is to determine the best constant in the Sobolev trace inequality
\[||u||_{L^{\frac{2(n-1)}{n-2}}(\partial\mathbb{R}^{n}_{+})}\leq C||\nabla u||_ {L^{2}(\mathbb{R}^{n}_{+})},\quad\forall u\in C^{\infty}_{0}(\bar{\mathbb{R}}^ {n}_{+}). \tag{1.1}\]
A key issue for this study is to investigate the extremal value of the Sobolev quotient. Escobar [3] showed by conformal transformations that the best constant is given by
\[Q(B^{n}):=\inf_{u\in C^{\infty}(\bar{B}^{n})}\frac{\int_{B^{n}}|\nabla u|^{2}+ \frac{n-2}{2}\int_{S^{n-1}}u^{2}}{\big{(}\int_{S^{n-1}}|u|^{\frac{2(n-1)}{n-2} }\big{)}^{\frac{n-2}{n-1}}}. \tag{1.2}\]
Lions [9] proved that (1.2) can be achieved by a positive \(u\) satisfying the Euler-Lagrange equation
\[\left\{\begin{array}{ll}\Delta u=0&in\ B^{n},\\ u_{\nu}+\frac{n-2}{2}u=u^{\frac{n}{n-2}}&on\ S^{n-1},\end{array}\right. \tag{1.3}\]
where \(\nu\) is the unit outer normal vector on \(S^{n-1}\). With this conclusion, Escobar [3] classified all positive solutions of (1.3) by an integral method and hence proved in [4] that
\[|S^{n-1}|^{\frac{1}{n-1}}\bigg{(}\int_{S^{n-1}}u^{\frac{2(n-1)}{n-2}}\bigg{)} ^{\frac{n-2}{n-1}}\leq\frac{2}{n-2}\int_{B^{n}}|\nabla u|^{2}+\int_{S^{n-1}} u^{2},\quad\forall u\in C^{\infty}(\bar{B}^{n}). \tag{1.4}\]
Different from Escobar, using harmonic analysis, Beckner [1] derived a family of inequalities
\[|S^{n-1}|^{\frac{q-1}{q+1}}\bigg{(}\int_{S^{n-1}}u^{q+1}\bigg{)}^{\frac{2}{q+ 1}}\leq(q-1)\int_{B^{n}}|\nabla u|^{2}+\int_{S^{n-1}}u^{2},\quad\forall u\in C ^{\infty}(\bar{B}^{n}), \tag{1.5}\]
provided \(1<q<\infty\), if \(n=2\), and \(1<q\leq\frac{n}{n-2}\), if \(n\geq 3\). The corresponding Euler-Lagrange equation to (1.5) is
\[\left\{\begin{array}{ll}\Delta u=0&in\ B^{n},\\ u_{\nu}+\frac{1}{q-1}u=u^{q}&on\ S^{n-1}.\end{array}\right. \tag{1.6}\]
It is apparent that the cases \(n\geq 3\) and \(q=\frac{n}{n-2}\) of (1.5) and (1.6) are just (1.4) and (1.3), respectively. Also, in the same paper, Beckner [1] confirmed
\[|S^{n-1}|^{\frac{q-1}{q+1}}\bigg{(}\int_{S^{n-1}}u^{q+1}\bigg{)}^{ \frac{2}{q+1}}\leq\frac{q-1}{n-1}\int_{S^{n-1}}|\nabla u|^{2}+\int_{S^{n-1}}u^ {2},\quad\forall u\in C^{\infty}(S^{n-1}), \tag{1.7}\]
provided \(1<q<\infty\), if \(n=2\) or \(3\), and \(1<q\leq\frac{n+1}{n-3}\), if \(n\geq 4\). By considering the Euler-Lagrange equation and using an integral method, Bidaut-Veron and Veron [2] gave a new proof of (1.7).
A natural question is: Now that (1.7) can be proved by the method of integration, can one prove (1.5) with the same strategy? Inspired by the arguments, Guo-Wang [7] proposed the following conjecture.
**Conjecture** ([7]).: _If \(u\in C^{\infty}(\bar{B}^{n})\) is positive solution of the following equation_
\[\left\{\begin{array}{ll}\Delta u=0&\mbox{in }B^{n},\\ u_{\nu}+\lambda u=u^{q}&\mbox{on }S^{n-1}.\end{array}\right.\]
_Then \(u\) is constant provided \(1<q<\frac{n}{n-2}\) and \(0<\lambda\leq\frac{1}{q-1}\)._
In recent years, there have been some partial results on the conjecture; see [5, 7, 8]. A remarkable one is that Guo-Hang-Wang [5] confirmed the conjecture for \(n=2\). In this paper, we investigate suitable Obata type identities. Combining them with auxiliary functions, we give a proof of the conjecture when \(n\geq 3\).
**Theorem 1.1**.: _For \(n\geq 3\), suppose \(u\in C^{\infty}(\bar{B}^{n})\) is a positive solution of the following equation_
\[\left\{\begin{array}{ll}\Delta u=0&\mbox{in }B^{n},\\ u_{\nu}+\lambda u=u^{q}&\mbox{on }S^{n-1}.\end{array}\right. \tag{1.8}\]
_If \(1<q\leq\frac{n}{n-2}\) and \(0<\lambda\leq\frac{1}{q-1}\), then \(u\) must be the constant \(\lambda^{\frac{1}{q-1}}\), unless \(q=\frac{n}{n-2}\) and \(\lambda=\frac{1}{q-1}\), in which case \(u\) is given by the following formula_
\[u_{\xi}(x)=\bigg{(}\frac{n-2}{2}\frac{1-|\xi|^{2}}{1+|\xi|^{2}|x|^{2}-2\langle \xi,x\rangle}\bigg{)}^{\frac{n-2}{2}}, \tag{1.9}\]
_for some \(\xi\in B^{n}\)._
From Theorem 1.1, we immediately get
**Corollary 1.2**.: _The conjecture holds for \(n\geq 3\)._
The paper is organized as follows. In Sect. 2, we establish a Pohozaev identity by calculating the divergence of a given tensor field with two parameters, and designate one of the parameters to make the Pohozaev identity work. In Sect. 3, we give an Obata type identity and determine the other parameter to ensure the effectiveness of Obata's skill. In Sect. 4, we introduce auxiliary functions involving the length of the position vector with one parameter to obtain improved identities. In Sect. 5, we give the proof of Theorem 1.1 and a new proof of Beckner's inequality (1.5) by an integral method.
## 2. A Pohozaev identity
In this section, we use the divergence theorem to prove an identity, and use a Pohozaev identity to simplify it. During the simplification, we will determine the parameter as desired.
### Preliminary
To begin with, let \(u=v^{-\frac{1}{q-1}}\) in (1.8), then \(v\) satisfies
\[\left\{\begin{array}{ll}\Delta v=\frac{q}{q-1}\frac{|\nabla v|^{2}}{v}&in\ B^{ n},\\ v_{\nu}=(q-1)(\lambda v-1)&on\ S^{n-1}.\end{array}\right. \tag{2.1}\]
In this case, the boundary condition is concise: it involves only the normal derivative and a linear expression of the solution \(v\). In the text that follows, we define
\[M:=\operatorname{div}(v^{a}\nabla_{\nabla v}\nabla v),\qquad N:=\operatorname {div}(v^{a}\Delta v\nabla v), \tag{2.2}\]
and focus on the quantity \(M-bN\), where parameters \(a,b\in\mathbb{R}\) are to be determined later.
In order to integrate by parts and handle the boundary term, we choose an orthonormal frame \(\{e_{i}\}_{i=1}^{n}\) such that \(e_{n}=\nu\) is the unit outward normal vector on \(S^{n-1}\) and the second fundamental form of \(S^{n-1}\) equals to identity. Thus on \(S^{n-1}\), by use of Reilly's formula, see [10], we have
\[v_{\alpha n}= \big{(}\lambda(q-1)-1\big{)}v_{\alpha},\qquad\forall\ 1\leq \alpha\leq n-1, \tag{2.3}\] \[\sum_{\alpha=1}^{n-1}v_{\alpha\alpha}= \Delta_{S^{n-1}}v+(n-1)v_{n}, \tag{2.4}\]
and
\[v_{nn}= \Delta v-\sum_{\alpha=1}^{n-1}v_{\alpha\alpha}=\frac{q}{q-1}\frac {|\nabla v|^{2}}{v}-\Delta_{S^{n-1}}v-(n-1)v_{n}. \tag{2.5}\]
For simplicity of presentation, we always set subscripts \(1\leq\alpha\leq n-1\) and \(1\leq i,j\leq n\). Einstein summation convention for these subscripts is always used in what follows.
### Choice of parameter a
With these preliminaries, we integrate \(M\) and \(N\) in \(B^{n}\) respectively. Combining with (2.3), (2.5) and (2.1), we obtain by use of divergence theorem
\[\int_{B^{n}}M= \int_{B^{n}}(v^{a}v_{ij}v_{i})_{j}=\int_{S^{n-1}}v^{a}v_{nn}v_{n} +\int_{S^{n-1}}v^{a}v_{\alpha n}v_{\alpha}\] \[= \frac{q}{q-1}\int_{S^{n-1}}v^{a-1}|\nabla v|^{2}v_{n}-\int_{S^{n- 1}}v^{a}\Delta_{S^{n-1}}vv_{n}-(n-1)\int_{S^{n-1}}v^{a}v_{n}^{2}\] \[+\Big{(}\lambda(q-1)-1\Big{)}\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1} }v|^{2}, \tag{2.6}\] \[\int_{B^{n}}N= \int_{B^{n}}(v^{a}\Delta vv_{i})_{i}=\int_{S^{n-1}}v^{a}\Delta vv _{n}=\frac{q}{q-1}\int_{S^{n-1}}v^{a-1}|\nabla v|^{2}v_{n}. \tag{2.7}\]
Now we deal with the term \(\int_{S^{n-1}}v^{a}\Delta_{S^{n-1}}vv_{n}\) in (2.6) by applying the divergence theorem on \(S^{n-1}\):
\[\int_{S^{n-1}}v^{a}\Delta_{S^{n-1}}vv_{n}= (q-1)\lambda\int_{S^{n-1}}v^{a+1}\Delta_{S^{n-1}}v-(q-1)\int_{S^ {n-1}}v^{a}\Delta_{S^{n-1}}v\] \[= -(q-1)(a+1)\lambda\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}+(q- 1)a\int_{S^{n-1}}v^{a-1}|\nabla_{S_{n-1}}v|^{2}. \tag{2.8}\]
Combining formulas (2.6), (2.7) through (2.8) and splitting \(|\nabla v|^{2}\) into \(|\nabla_{S^{n-1}}v|^{2}+v_{n}^{2}\) on \(S^{n-1}\), we derive
\[\int_{B^{n}}(M-bN)= (1-b)\frac{q}{q-1}\int_{S^{n-1}}v^{a-1}|\nabla_{S^{n-1}}v|^{2}v_{ n}+(1-b)\frac{q}{q-1}\int_{S^{n-1}}v^{a-1}v_{n}^{3}\] \[+\lambda(q-1)(a+1)\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}-(q-1 )a\int_{S^{n-1}}v^{a-1}|\nabla_{S^{n-1}}v|^{2} \tag{2.9}\] \[-(n-1)\int_{S^{n-1}}v^{a}v_{n}^{2}+\Big{(}\lambda(q-1)-1\Big{)} \int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}.\]
Using (2.1) to eliminate \(v_{n}\) in the term \((1-b)\frac{q}{q-1}\int_{S^{n-1}}v^{a-1}|\nabla_{S^{n-1}}v|^{2}v_{n}\) and one of the three \(v_{n}\)'s in the term \((1-b)\frac{q}{q-1}\int_{S^{n-1}}v^{a-1}v_{n}^{3}\), (2.9) becomes
\[\begin{split}\int_{B^{n}}(M-bN)=&\lambda q(1-b) \int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}-q(1-b)\int_{S^{n-1}}v^{a-1}|\nabla_{ S^{n-1}}v|^{2}\\ &+\lambda q(1-b)\int_{S^{n-1}}v^{a}v_{n}^{2}-q(1-b)\int_{S^{n-1}} v^{a-1}v_{n}^{2}\\ &+\lambda(q-1)(a+1)\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}-(q -1)a\int_{S^{n-1}}v^{a-1}|\nabla_{S^{n-1}}v|^{2}\\ &-(n-1)\int_{S^{n-1}}v^{a}v_{n}^{2}+\left(\lambda(q-1)-1\right) \int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}.\end{split} \tag{2.10}\]
We obtain from (2.10) that
\[\begin{split}\int_{B^{n}}(M-bN)=&\Big{(}\lambda q (1-b)+\lambda(q-1)(a+2)-1\Big{)}\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}\\ &-\Big{(}q(1-b)+a(q-1)\Big{)}\int_{S^{n-1}}v^{a-1}|\nabla_{S^{n- 1}}v|^{2}\\ &+\Big{(}\lambda q(1-b)-(n-1)\Big{)}\int_{S^{n-1}}v^{a}v_{n}^{2} \\ &-q(1-b)\int_{S^{n-1}}v^{a-1}v_{n}^{2}.\end{split} \tag{2.11}\]
The term \(\int_{S^{n-1}}v^{a-1}|\nabla_{S^{n-1}}v|^{2}\) in (2.11) appears because the boundary condition of (2.1) is not homogeneous. It is desirable to eliminate it with some equalities. The key idea is to derive Pohozaev identities from the conditions (2.1). Only the choice \(a=-\frac{q+1}{q-1}\) makes it work.
**Proposition 2.1**.: _Let \(v\) be a positive solution of (2.1). For \(a=-\frac{q+1}{q-1}\), we derive the following Pohozaev identity_
\[\int_{S^{n-1}}v^{a-1}|\nabla_{S^{n-1}}v|^{2}=\int_{S^{n-1}}v^{a-1}v_{n}^{2}+(n -2)\int_{S^{n-1}}v^{a}v_{n}^{2}-(n-2)(q-2)\lambda\int_{B^{n}}v^{a}|\nabla v|^{ 2}. \tag{2.12}\]
Proof.: We note \(x_{ij}=\delta_{ij}\) in \(B^{n}\). Thus, we have the following calculation
\[\begin{split}\int_{B^{n}}\operatorname{div}(v^{a-1}|\nabla v|^{2 }x)=\\ (a-1)\int_{B^{n}}v^{a-2}|\nabla v|^{2}x_{i}v_{i}+2\int_{B^{n}}v^{ a-1}v_{ij}x_{i}v_{j}+n\int_{B^{n}}v^{a-1}|\nabla v|^{2}.\end{split} \tag{2.13}\]
On the other hand, taking (2.1) into consideration, we obtain
\[\begin{split}\int_{B^{n}}\operatorname{div}(v^{a-1}& \langle\nabla v,x\rangle\nabla v)=\\ &\Big{(}a-1+\frac{q}{q-1}\Big{)}\int_{B^{n}}v^{a-2}|\nabla v|^{2} x_{i}v_{i}+\int_{B^{n}}v^{a-1}v_{ij}x_{i}v_{j}+\int_{B^{n}}v^{a-1}|\nabla v|^{2}. \end{split} \tag{2.14}\]
Note that \(a=-\frac{q+1}{q-1}\) is the only choice satisfying
\[\frac{a-1}{a-1+\frac{q}{q-1}}=2.\]
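For completeness, this condition can be solved for \(a\) directly: clearing the denominator gives

\[a-1=2\Big{(}a-1+\frac{q}{q-1}\Big{)}\;\Longrightarrow\;a-1=-\frac{2q}{q-1}\;\Longrightarrow\;a=1-\frac{2q}{q-1}=-\frac{q+1}{q-1}.\]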
Then (2.13)\(-\)2\(\times\)(2.14) implies
\[(n-2)\int_{B^{n}}v^{a-1}|\nabla v|^{2}=\int_{B^{n}}\operatorname{div}(v^{a-1}| \nabla v|^{2}x)-2\int_{B^{n}}\operatorname{div}(v^{a-1}\langle\nabla v,x \rangle\nabla v). \tag{2.15}\]
Notice on \(S^{n-1}\), we have \(x=\nu\), thus \(\langle x,\nu\rangle=1\) and \(\langle\nabla v,x\rangle=\langle\nabla v,\nu\rangle=v_{n}\). By divergence theorem, we have from (2.15) that
\[\begin{split}(n-2)\int_{B^{n}}v^{a-1}|\nabla v|^{2}=& \int_{S^{a-1}}v^{a-1}|\nabla v|^{2}\langle x,\nu\rangle-2\int_{S^{n-1}}v^{a- 1}\langle\nabla v,x\rangle\langle\nabla v,\nu\rangle\\ =&\int_{S^{a-1}}v^{a-1}|\nabla v|^{2}-2\int_{S^{n-1 }}v^{a-1}v_{n}^{2}\\ =&\int_{S^{a-1}}v^{a-1}|\nabla_{S^{n-1}}v|^{2}-\int_ {S^{n-1}}v^{a-1}v_{n}^{2}.\end{split} \tag{2.16}\]
Now it remains to deal with the term \(\int_{B^{n}}v^{a-1}|\nabla v|^{2}\) on the left-hand side of (2.16). A key observation follows from divergence theorem and \(a=-\frac{q+1}{q-1}\) that
\[\int_{S^{n-1}}v^{a}v_{n}=\int_{B^{n}}\operatorname{div}(v^{a}\nabla v)=\Big{(} a+\frac{q}{q-1}\Big{)}\int_{B^{n}}v^{a-1}|\nabla v|^{2}=-\frac{1}{q-1}\int_{B^{n} }v^{a-1}|\nabla v|^{2}, \tag{2.17}\]
and
\[\int_{S^{n-1}}v^{a+1}v_{n}=\int_{B^{n}}\operatorname{div}(v^{a+1}\nabla v)= \Big{(}a+1+\frac{q}{q-1}\Big{)}\int_{B^{n}}v^{a}|\nabla v|^{2}=\frac{q-2}{q-1 }\int_{B^{n}}v^{a}|\nabla v|^{2}. \tag{2.18}\]
Using the fact that the boundary condition of (2.1) yields
\[\int_{S^{n-1}}v^{a}v_{n}^{2}=(q-1)\int_{S^{n-1}}v^{a}v_{n}(\lambda v-1)= \lambda(q-1)\int_{S^{n-1}}v^{a+1}v_{n}-(q-1)\int_{S^{n-1}}v^{a}v_{n}. \tag{2.19}\]
Putting (2.17) and (2.18) into (2.19), we conclude
\[\int_{S^{n-1}}v^{a}v_{n}^{2}=\lambda(q-2)\int_{B^{n}}v^{a}|\nabla v|^{2}+\int_ {B^{n}}v^{a-1}|\nabla v|^{2}. \tag{2.20}\]
Then (2.12) follows from substituting (2.20) into (2.16) to eliminate the term \(\int_{B^{n}}v^{a-1}|\nabla v|^{2}\).
By means of Proposition 2.1, we eliminate \(\int_{S^{n-1}}v^{a-1}|\nabla_{S^{n-1}}v|^{2}\) in (2.11) and derive
**Corollary 2.2**.: _Let \(v\) be a positive solution of (2.1). The quantities \(M,N\) are defined as (2.2). Then for \(a=-\frac{q+1}{q-1}\),_
\[\begin{split}\int_{B^{n}}(M-bN)=&\Big{(}\lambda q(1 -b)+\lambda(q-3)-1\Big{)}\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}\\ &+\Big{(}q(1-b)-(q+1)\Big{)}(n-2)(q-2)\lambda\int_{B^{n}}v^{a}| \nabla v|^{2}\\ &+\Big{(}\lambda q(1-b)+(n-2)qb-1\Big{)}\int_{S^{n-1}}v^{a}v_{n} ^{2}\\ &-\Big{(}2q(1-b)-(q+1)\Big{)}\int_{S^{n-1}}v^{a-1}v_{n}^{2}.\end{split} \tag{2.21}\]
In the next section, we will determine the value of \(b\), which is a rather subtle issue. One can guess that we should control the coefficient of \(\int_{S^{n-1}}v^{a-1}v_{n}^{2}\) as
\[2q(1-b)-(q+1)\geq 0, \tag{2.22}\]
since we have no other ways to decompose it. In fact, we will make the equality sign in (2.22) hold. More logical reasons for such choice will be explained later.
## 3. On Obata's skill
### Obata type identities
It is time to expand \(M-bN\), which gives
\[M-bN= (v^{a}v_{ij}v_{i})_{j}-b(v^{a}\Delta vv_{j})_{j}\] \[= \Big{(}v^{a}v_{ij}v_{ij}+av^{a-1}v_{ij}v_{i}v_{j}+v^{a}(\Delta v) _{i}v_{i}\Big{)}-b\Big{(}av^{a-1}\Delta v|\nabla v|^{2}+v^{a}(\Delta v)_{j}v_{j }+v^{a}(\Delta v)^{2}\Big{)}\] \[= v^{a}v_{ij}v_{ij}+av^{a-1}v_{ij}v_{i}v_{j}+(1-b)v^{a}(\Delta v) _{i}v_{i}-abv^{a-1}\Delta v|\nabla v|^{2}-bv^{a}(\Delta v)^{2}. \tag{3.1}\]
Recall that \(\Delta v\) satisfies (2.1), so
\[v^{a}(\Delta v)_{i}v_{i}=\frac{q}{q-1}v^{a}\Big{(}\frac{|\nabla v|^{2}}{v} \Big{)}_{i}v_{i}=\frac{2q}{q-1}v^{a-1}v_{ij}v_{i}v_{j}-\frac{q}{q-1}v^{a-2}| \nabla v|^{4}. \tag{3.2}\]
If we put (3.2) and (2.1) into (3.1), we will obtain
\[M-bN= v^{a}v_{ij}v_{ij}+\Big{(}a+(1-b)\frac{2q}{q-1}\Big{)}v^{a-1}v_{ ij}v_{i}v_{j}\] \[-\Big{(}(1-b)\frac{q}{q-1}+ab\frac{q}{q-1}+b(\frac{q}{q-1})^{2} \Big{)}v^{a-2}|\nabla v|^{4}. \tag{3.3}\]
For simplicity, we define \(d\) as
\[d:=\frac{a}{2}+(1-b)\frac{q}{q-1}=\frac{2q(1-b)-(q+1)}{2(q-1)}.\]
After putting \(a=-\frac{q+1}{q-1}\), (3.3) becomes
\[M-bN= v^{a}v_{ij}v_{ij}+2dv^{a-1}v_{ij}v_{i}v_{j}-\Big{(}\frac{q}{q-1}d+ \frac{q}{2(q-1)}\Big{)}v^{a-2}|\nabla v|^{4}\] \[= v^{a}\Big{|}v_{ij}+d\frac{v_{i}v_{j}}{v}\Big{|}^{2}-\Big{(}d^{2} +\frac{q}{q-1}d+\frac{q}{2(q-1)}\Big{)}v^{a-2}|\nabla v|^{4}. \tag{3.4}\]
By means of the technique of Obata, we define a trace-free 2-tensor
\[E_{ij}:=v_{ij}+d\frac{v_{i}v_{j}}{v}-\frac{1}{n}\Big{(}\frac{q}{q-1}+d\Big{)} \frac{|\nabla v|^{2}}{v}\delta_{ij}. \tag{3.5}\]
The tensor satisfies
\[\Big{|}v_{ij}+d\frac{v_{i}v_{j}}{v}\Big{|}^{2}=\Big{|}E_{ij}+\frac{1}{n}\Big{(} \frac{q}{q-1}+d\Big{)}\frac{|\nabla v|^{2}}{v}\delta_{ij}\Big{|}^{2}=|E|^{2}+ \frac{1}{n}\Big{(}\frac{q}{q-1}+d\Big{)}^{2}\frac{|\nabla v|^{4}}{v^{2}}. \tag{3.6}\]
As anticipated, (3.4) can be expressed as an Obata type identity.
**Proposition 3.1**.: _Let \(v\) be a positive solution of (2.1). The quantities \(M,N\) are defined as (2.2). Then for \(a=-\frac{q+1}{q-1}\) and \(b,d\) satisfies \(d=\frac{2q(1-b)-(q+1)}{2(q-1)}\), we have_
\[M-bN=v^{a}|E|^{2}+\bigg{(}\frac{1}{n}\Big{(}\frac{q}{q-1}+d\Big{)}^{2}-\Big{(} d^{2}+\frac{q}{q-1}d+\frac{q}{2(q-1)}\Big{)}\bigg{)}v^{a-2}|\nabla v|^{4}. \tag{3.7}\]
### Choice of parameter b
As mentioned above, (2.22) amounts to requiring \(d\geq 0\). However, to make Obata's skill work, we apply
**Proposition 3.2**.: _For \(n\geq 3\), \(1<q<\frac{n}{n-2}\), we have_
\[\frac{1}{n}\Big{(}\frac{q}{q-1}+d\Big{)}^{2}-\Big{(}d^{2}+\frac{q}{q-1}d+\frac{ q}{2(q-1)}\Big{)}\geq 0, \tag{3.8}\]
_provided_
\[-\frac{2q}{(q-1)(q+1)}\leq d\leq 0. \tag{3.9}\]
Proof.: The left-hand side of (3.8) is decreasing in \(n\), so it suffices to verify the case \(n\to\frac{2q}{q-1}\), i.e.
\[\frac{q-1}{2q}\Big{(}\frac{q}{q-1}+d\Big{)}^{2}-\Big{(}d^{2}+\frac{q}{q-1}d+ \frac{q}{2(q-1)}\Big{)}\geq 0,\]
equivalently,
\[-\frac{q+1}{2q}d^{2}-\frac{1}{q-1}d\geq 0.\]
The last expression factors as \(d\big{(}-\frac{q+1}{2q}d-\frac{1}{q-1}\big{)}\), which is non-negative precisely when (3.9) holds. We complete the proof.
Combining (2.22) and Proposition 3.2, it is clear that we need \(d=0\), i.e. \(b=\frac{q-1}{2q}\). In this case, we both eliminate the term \(\int_{S^{n-1}}v^{a-1}v_{n}^{2}\) in (2.21) and ensure that the coefficient of \(v^{a-2}|\nabla v|^{4}\) in (3.7) is non-negative. To summarize what we have stated, from (3.7) and (2.21),
**Corollary 3.3**.: _Let \(v\) be a positive solution of (2.1). The quantities \(M,N\) are defined as (2.2). Then for \(a=-\frac{q+1}{q-1}\) and \(b=\frac{q-1}{2q}\), we have_
\[M-bN=v^{a}|E|^{2}+\frac{q}{2n(q-1)^{2}}\Big{(}n-(n-2)q\Big{)}v^{a-2}|\nabla v| ^{4}\geq 0, \tag{3.10}\]
_and_
\[\begin{split}\int_{B^{n}}(M-bN)=&\frac{\lambda(3q-5)-2}{2}\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}\\ &-(n-2)(q-2)\frac{q+1}{2}\lambda\int_{B^{n}}v^{a}|\nabla v|^{2}\\ &+\left(\lambda\frac{q+1}{2}+\frac{(n-2)q-n}{2}\right)\int_{S^{n-1}}v^{a}v_{n}^{2}.\end{split} \tag{3.11}\]
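The coefficient of \(v^{a-2}|\nabla v|^{4}\) in (3.10) follows from (3.7) with \(d=0\) by a direct computation:

\[\frac{1}{n}\Big{(}\frac{q}{q-1}\Big{)}^{2}-\frac{q}{2(q-1)}=\frac{q\big{(}2q-n(q-1)\big{)}}{2n(q-1)^{2}}=\frac{q}{2n(q-1)^{2}}\Big{(}n-(n-2)q\Big{)}.\]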
_Remark 3.4_.: The proof of Proposition 3.2 also suggests that for \(a=-\frac{q+1}{q-1}\) and any \(b^{\prime}\in\mathbb{R}\) satisfying
\[\frac{q-1}{2q}\leq b^{\prime}\leq\frac{q^{2}+4q-1}{2q(q+1)}, \tag{3.12}\]
we always have
\[M-b^{\prime}N\geq 0, \tag{3.13}\]
where \(v\) is a positive solution of (2.1) and \(M,N\) are defined as (2.2). In fact, if we define
\[d^{\prime}:=\frac{2q(1-b^{\prime})-(q+1)}{2(q-1)}, \tag{3.14}\]
then correspondingly, the range of \(d^{\prime}\) is precisely
\[-\frac{2q}{(q-1)(q+1)}\leq d^{\prime}\leq 0. \tag{3.15}\]
It is easy to check that (3.15) is equivalent to (3.12).
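Indeed, from (3.14), the condition \(d^{\prime}\leq 0\) reads \(2q(1-b^{\prime})\leq q+1\), i.e. \(b^{\prime}\geq\frac{q-1}{2q}\), while \(d^{\prime}\geq-\frac{2q}{(q-1)(q+1)}\) reads

\[2q(1-b^{\prime})\geq(q+1)-\frac{4q}{q+1}=\frac{(q-1)^{2}}{q+1},\qquad\text{i.e.}\qquad b^{\prime}\leq 1-\frac{(q-1)^{2}}{2q(q+1)}=\frac{q^{2}+4q-1}{2q(q+1)}.\]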
## 4. Auxiliary function and Improved identities
To deal with more complicated situations, the introduction of an auxiliary function \(\phi\) is essential here. Set
\[\phi(x)=\frac{|x|^{2}+c}{2} \tag{4.1}\]
with \(c>0\) to be determined. It is evident that
\[\phi_{i}=x_{i},\quad\phi_{ij}=\delta_{ij}\quad\text{and}\quad\phi(x)\in\Big{[} \frac{c}{2},\frac{1+c}{2}\Big{)}\quad\text{in }B^{n}, \tag{4.2}\]
and
\[\phi(x)\equiv\frac{1+c}{2},\quad\text{and}\quad\nabla\phi=\nu,\quad\text{on }S^{n-1}. \tag{4.3}\]
### Improved inequalities involving auxiliary function
We define
\[P:=\text{div}(v^{a}|\nabla v|^{2}\nabla\phi),\qquad Q:=\text{div}(v^{a}\langle \nabla v,\nabla\phi\rangle\nabla v), \tag{4.4}\]
where \(a=-\frac{q+1}{q-1}\). Direct computations lead to
\[P= 2v^{a}v_{ij}v_{i}x_{j}-\frac{q+1}{q-1}v^{a-1}|\nabla v|^{2}v_{i }x_{i}+nv^{a}|\nabla v|^{2}, \tag{4.5}\] \[Q= v^{a}v_{ij}v_{i}x_{j}-\frac{1}{q-1}v^{a-1}|\nabla v|^{2}v_{i}x_{ i}+v^{a}|\nabla v|^{2}. \tag{4.6}\]
If we focus on the quantity \((q-3)P+4Q\), then by (4.5)-(4.6),
\[(q-3)P+4Q= 2(q-1)v^{a}v_{ij}v_{i}x_{j}-(q-1)v^{a-1}|\nabla v|^{2}v_{i}x_{i} +\Big{(}4-(3-q)n\Big{)}v^{a}|\nabla v|^{2}. \tag{4.7}\]
Recall the definition of \(M,N\) in (2.2), and note that \(\phi\equiv\frac{1+c}{2}\) on \(S^{n-1}\). If we multiply \(M\) and \(N\) by \(\phi\) and integrate them over \(B^{n}\) respectively, with the help of the divergence theorem and (4.2)-(4.3), we obtain
\[\int_{B^{n}}M\phi= \int_{B^{n}}(v^{a}v_{ij}v_{i})_{j}\phi=\int_{B^{n}}(v^{a}v_{ij}v_ {i}\phi)_{j}-\int_{B^{n}}v^{a}v_{ij}v_{i}x_{j}\] \[= \int_{S^{n-1}}v^{a}v_{in}v_{i}\phi-\int_{B^{n}}v^{a}v_{ij}v_{i}x_ {j}\] \[= \frac{1+c}{2}\int_{S^{n-1}}v^{a}v_{in}v_{i}-\int_{B^{n}}v^{a}v_{ ij}v_{i}x_{j}\] \[= \frac{1+c}{2}\int_{B^{n}}M-\int_{B^{n}}v^{a}v_{ij}v_{i}x_{j}, \tag{4.8}\]
and
\[\int_{B^{n}}N\phi= \int_{B^{n}}(v^{a}\Delta vv_{i})_{i}\phi=\int_{B^{n}}(v^{a}\Delta v v _{i}\phi)_{i}-\int_{B^{n}}v^{a}\Delta vv_{i}x_{i}\] \[= \int_{S^{n-1}}v^{a}\Delta vv_{n}\phi-\frac{q}{q-1}\int_{B^{n}}v^{ a-1}|\nabla v|^{2}v_{i}x_{i}\] \[= \frac{1+c}{2}\int_{S^{n-1}}v^{a}\Delta vv_{n}-\frac{q}{q-1}\int_{ B^{n}}v^{a-1}|\nabla v|^{2}v_{i}x_{i}\] \[= \frac{1+c}{2}\int_{B^{n}}N-\frac{q}{q-1}\int_{B^{n}}v^{a-1}| \nabla v|^{2}v_{i}x_{i}. \tag{4.9}\]
Putting (4.8)-(4.9) into the integration of (4.7) and noting \(b=\frac{q-1}{2q}\), we arrive at a neat result that
\[\int_{B^{n}}\Big{(}(q-3)P+4Q\Big{)}= 2(q-1)\int_{B^{n}}(M-bN)\Big{(}\frac{1+c}{2}-\phi\Big{)}-\Big{(}(3 -q)n-4\Big{)}\int_{B^{n}}v^{a}|\nabla v|^{2}. \tag{4.10}\]
On the other hand, if we apply the divergence theorem directly to the integrals of \(P,Q\) over \(B^{n}\), we derive by use of (4.3) that
\[\int_{B^{n}}P= \int_{S^{n-1}}v^{a}|\nabla v|^{2}\langle\nabla\phi,\nu\rangle= \int_{S^{n-1}}v^{a}|\nabla v|^{2}=\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}+ \int_{S^{n-1}}v^{a}v_{n}^{2}, \tag{4.11}\] \[\int_{B^{n}}Q= \int_{S^{n-1}}v^{a}\langle\nabla v,\nabla\phi\rangle\langle \nabla v,\nu\rangle=\int_{S^{n-1}}v^{a}v_{n}^{2}. \tag{4.12}\]
Combining (4.11) and (4.12), we conclude
\[\int_{B^{n}}\Big{(}(q-3)P+4Q\Big{)}= (q-3)\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}+(q+1)\int_{S^{n- 1}}v^{a}v_{n}^{2}. \tag{4.13}\]
Thus (4.10) and (4.13) imply
\[\int_{B^{n}}(M-bN)\phi= \frac{1+c}{2}\int_{B^{n}}(M-bN)+\frac{3-q}{2(q-1)}\int_{S^{n-1}}v ^{a}|\nabla_{S^{n-1}}v|^{2} \tag{4.14}\] \[-\frac{(3-q)n-4}{2(q-1)}\int_{B^{n}}v^{a}|\nabla v|^{2}-\frac{q+1 }{2(q-1)}\int_{S^{n-1}}v^{a}v_{n}^{2}.\]
Putting (3.11) into (4.14), we obtain an improved identity
**Corollary 4.1**.: _Let \(v\) be a positive solution of (2.1). The quantities \(M,N\) are defined as (2.2) where \(a=-\frac{q+1}{q-1}\), \(b=\frac{q-1}{2q}\). Set \(\phi(x)=\frac{|x|^{2}+c}{2}\) with \(c>0\) to be determined. Then we have_
\[\int_{B^{n}}(M-bN)\phi= \bigg{(}\frac{1+c}{2}\Big{(}\frac{\lambda(3q-5)-2}{2}\Big{)}+ \frac{3-q}{2(q-1)}\bigg{)}\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2} \tag{4.15}\] \[-\bigg{(}\frac{1+c}{2}(n-2)(q-2)\frac{q+1}{2}\lambda+\frac{(3-q) n-4}{2(q-1)}\bigg{)}\int_{B^{n}}v^{a}|\nabla v|^{2}\] \[+\bigg{(}\frac{1+c}{2}\Big{(}\lambda\frac{q+1}{2}+\frac{(n-2)q-n }{2}\Big{)}-\frac{q+1}{2(q-1)}\bigg{)}\int_{S^{n-1}}v^{a}v_{n}^{2}.\]
## 5. Proofs of Theorem 1.1
After having addressed all the preceding conclusions, we are now in a position to establish the proof of Theorem 1.1. Integrating (4.6) in \(B^{n}\) and combining with (4.12), (4.8) and (4.9), it is obvious that
\[\int_{S^{n-1}}v^{a}v_{n}^{2}=\int_{B^{n}}Q=\int_{B^{n}}(M-\frac{1}{q}N)(\frac{ 1+c}{2}-\phi)+\int_{B^{n}}v^{a}|\nabla v|^{2}. \tag{5.1}\]
Choose \(c=\frac{2}{\lambda(q-1)}-1\geq 1\) in Corollary 4.1. In this case, (4.15) becomes
\[\int_{B^{n}}(M-bN)\phi= \bigg{(}1-\frac{1}{\lambda(q-1)}\bigg{)}\int_{S^{n-1}}v^{a}| \nabla_{S^{n-1}}v|^{2} \tag{5.2}\] \[-\frac{1}{2}\Big{(}(n-2)q-n\Big{)}\int_{B^{n}}v^{a}|\nabla v|^{2}\] \[+\frac{(n-2)q-n}{2\lambda(q-1)}\int_{S^{n-1}}v^{a}v_{n}^{2}.\]
Putting (5.1) into (5.2) to eliminate \(\int_{B^{n}}v^{a}|\nabla v|^{2}\), we obtain
\[\begin{split}\int_{B^{n}}(M-bN)\phi=&\frac{1}{2}((n-2 )q-n)\int_{B^{n}}(M-\frac{1}{q}N)(\frac{1+c}{2}-\phi)\\ &+\left(1-\frac{1}{\lambda(q-1)}\right)\int_{S^{n-1}}v^{a}|\nabla _{S^{n-1}}v|^{2}\\ &-\frac{(n-(n-2)q)(1-\lambda(q-1))}{2\lambda(q-1)}\int_{S^{n-1}}v ^{a}v_{n}^{2}.\end{split} \tag{5.3}\]
Recall that when \(n\geq 3\), we have \(1<q\leq\frac{n}{n-2}\leq 3\). Thus
\[\frac{q-1}{2q}\leq\frac{1}{q}\leq\frac{q^{2}+4q-1}{2q(q+1)}.\]
By Remark 3.4, we conclude that
\[M-\frac{1}{q}N\geq 0.\]
So the conditions \(1<q\leq\frac{n}{n-2}\) and \(0<\lambda\leq\frac{1}{q-1}\) imply that the right-hand side of (5.3) is no greater than \(0\). Combining this with (3.10), which shows that the left-hand side of (5.3) is no less than \(0\), we conclude that both sides of (5.3) equal \(0\). Then the continuity of the non-negative function \((M-bN)\phi\) implies
\[(M-bN)\phi\equiv 0,\qquad\forall x\in B^{n}. \tag{5.4}\]
Taking into account that \(\phi\) is strictly positive, we obtain
\[M-bN\equiv 0,\qquad\forall x\in B^{n}. \tag{5.5}\]
Again by (3.10), if \(1<q<\frac{n}{n-2}\), we derive \(|\nabla v|\equiv 0\) in \(B^{n}\). Thus \(v\) is constant.
On the other hand, if \(0<\lambda<\frac{1}{q-1}\), then the vanishing of the right-hand side of (5.3) forces
\[\int_{S^{n-1}}v^{a}|\nabla_{S^{n-1}}v|^{2}\equiv 0.\]
Thus \(v\) is constant on \(S^{n-1}\). So \(u\) is a harmonic function with constant boundary value, which must be constant by the maximum principle.
It remains to discuss the case \(q=\frac{n}{n-2}\) and \(\lambda=\frac{1}{q-1}\). In this case, (5.5) and (3.10) only imply \(|E|=0\). Putting \(a=-\frac{q+1}{q-1}\) and \(b=\frac{q-1}{2q}\) into the definition (3.5) of \(E\), we have for any \(1\leq i,j\leq n\),
\[v_{ij}=\frac{\Delta v}{n}\delta_{ij}. \tag{5.6}\]
By differentiating (5.6) and taking a sum, we derive for any \(1\leq i\leq n\),
\[(\Delta v)_{i}=\sum_{j=1}^{n}v_{jji}=\sum_{j=1}^{n}v_{ijj}=\sum_{j=1}^{n}\left( \frac{\Delta v}{n}\delta_{ij}\right)_{j}=\frac{1}{n}\sum_{j=1}^{n}(\Delta v)_ {j}\delta_{ij}=\frac{1}{n}(\Delta v)_{i}.\]
Since \(n\geq 3\), we have \((\Delta v)_{i}=0\) for any \(1\leq i\leq n\). Thus \(\Delta v\) is constant. Suppose \(\Delta v\equiv 2nr\) for some \(r\in\mathbb{R}\), then from (5.6), we obtain
\[v(x)=r|x|^{2}+\langle\zeta,x\rangle+s\qquad\forall x\in B^{n}, \tag{5.7}\]
for some \(s\in\mathbb{R}\) and \(\zeta\in\mathbb{R}^{n}\). Direct computations imply
\[\nabla v=2rx+\zeta\qquad\forall x\in B^{n}. \tag{5.8}\]
Putting (5.7) and (5.8) into (2.1), we get the relations between \(r,s,\zeta\) that
\[4rs=|\zeta|^{2}, \tag{5.9}\] \[r=s-\frac{2}{n-2}. \tag{5.10}\]
Since \(v\) is a positive function, necessarily we have \(v(0)>0\), i.e. \(s>0\). Combining with (5.9) and (5.10), we solve
\[r=\sqrt{\Big{(}\frac{1}{n-2}\Big{)}^{2}+\frac{1}{4}|\zeta|^{2}}- \frac{1}{n-2},\qquad s=\sqrt{\Big{(}\frac{1}{n-2}\Big{)}^{2}+\frac{1}{4}|\zeta |^{2}}+\frac{1}{n-2}. \tag{5.11}\]
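In more detail, substituting (5.10) into (5.9) gives the quadratic equation

\[r^{2}+\frac{2}{n-2}r-\frac{|\zeta|^{2}}{4}=0,\]

and since \(s>0\) together with (5.9) forces \(r\geq 0\), the admissible root is exactly the one stated in (5.11).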
We use the fact that for any \(\zeta\in\mathbb{R}^{n}\), there is a unique \(\xi\in B^{n}\) such that
\[\zeta=-\frac{4}{n-2}\frac{\xi}{1-|\xi|^{2}}. \tag{5.12}\]
Putting (5.12) into (5.11), we derive
\[r=\frac{2}{n-2}\frac{|\xi|^{2}}{1-|\xi|^{2}},\qquad s=\frac{2}{n -2}\frac{1}{1-|\xi|^{2}}. \tag{5.13}\]
Then putting (5.12) and (5.13) into (5.7), we obtain
\[v(x)=\frac{2}{n-2}\frac{1+|\xi|^{2}|x|^{2}-2\langle\xi,x\rangle }{1-|\xi|^{2}}, \tag{5.14}\]
i.e.
\[u(x)=\bigg{(}\frac{n-2}{2}\frac{1-|\xi|^{2}}{1+|\xi|^{2}|x|^{2}- 2\langle\xi,x\rangle}\bigg{)}^{\frac{n-2}{2}}, \tag{5.15}\]
for some \(\xi\in B^{n}\). This completes the proof of Theorem 1.1.
Combining Guo-Hang-Wang [5] for \(n=2\) and Theorem 1.1 for \(n\geq 3\), the conjecture is true, and thus we obtain a new proof of Beckner's inequality (1.5) by an integral method.
**Corollary 5.1**.: _The following inequalities hold_
\[|S^{n-1}|^{\frac{q-1}{q+1}}\bigg{(}\int_{S^{n-1}}u^{q+1}\bigg{)}^{\frac{2}{q+1}}\leq(q-1)\int_{B^{n}}|\nabla u|^{2}+\int_{S^{n-1}}u^{2},\quad\forall u\in C^{\infty}(\bar{B}^{n}),\]
_provided \(1<q<\infty\), if \(n=2\), and \(1<q\leq\frac{n}{n-2}\), if \(n\geq 3\)._
Proof.: We define the Sobolev quotient of a function \(u\in H^{1}(B^{n})-\{0\}\) as
\[Q_{\lambda,q}(u):=\frac{\int_{B^{n}}|\nabla u|^{2}+\lambda\int_{ S^{n-1}}u^{2}}{\Big{(}\int_{S^{n-1}}|u|^{q+1}\Big{)}^{\frac{2}{q+1}}}. \tag{5.16}\]
For the case \(1<q<\frac{n}{n-2}\), the trace operator \(H^{1}(B^{n})\to L^{q+1}(S^{n-1})\) is compact. Thus the minimization problem
\[S_{\lambda,q}:=\inf_{u\in H^{1}(B^{n})}Q_{\lambda,q}(u) \tag{5.17}\]
is achieved by smooth positive functions which satisfy (1.8). By use of the conjecture, such a minimizer \(u\) must be
\[u(x)\equiv\lambda^{\frac{1}{q-1}},\qquad\forall x\in B^{n}. \tag{5.18}\]
Putting (5.18) into (5.16), since the constant achieves \(S_{\lambda,q}\), we have for any \(u\in C^{\infty}(\bar{B}^{n})\),
\[\frac{\int_{B^{n}}|\nabla u|^{2}+\lambda\int_{S^{n-1}}u^{2}}{ \Big{(}\int_{S^{n-1}}|u|^{q+1}\Big{)}^{\frac{2}{q+1}}}\geq\lambda|S^{n-1}|^{ \frac{q-1}{q+1}},\qquad\forall 0<\lambda\leq\frac{1}{q-1}. \tag{5.19}\]
Letting \(\lambda=\frac{1}{q-1}\), we obtain (1.5) for \(1<q<\frac{n}{n-2}\).
For the critical case \(q=\frac{n}{n-2}\), we follow the method of continuity of Aubin and Trudinger, see for example [6]. It suffices to show that the function \(q\mapsto S_{\lambda,q}\) is continuous on the left at \(q=\frac{n}{n-2}\). To prove this, note that for any \(\epsilon>0\), there exists \(u_{1}\in H^{1}(B^{n})\) such that
\[S_{\lambda,\frac{n}{n-2}}\geq Q_{\lambda,\frac{n}{n-2}}(u_{1})-\frac{ \epsilon}{2}. \tag{5.20}\]
Since the function \(q\mapsto Q_{\lambda,q}(u_{1})\) is continuous in \((1,\frac{n}{n-2}]\), there exists \(\delta>0\) such that for all \(q^{\prime}\in(\frac{n}{n-2}-\delta,\frac{n}{n-2}]\), we have
\[Q_{\lambda,\frac{n}{n-2}}(u_{1})\geq Q_{\lambda,q^{\prime}}(u_{1})-\frac{ \epsilon}{2}. \tag{5.21}\]
Combining with (5.20) and (5.21), we obtain
\[S_{\lambda,\frac{n}{n-2}}\geq Q_{\lambda,\frac{n}{n-2}}(u_{1})-\frac{ \epsilon}{2}\geq Q_{\lambda,q^{\prime}}(u_{1})-\epsilon\geq S_{\lambda,q^{ \prime}}-\epsilon. \tag{5.22}\]
On the other hand, assume \(u_{2}\in H^{1}(B^{n})\) satisfies
\[S_{\lambda,q^{\prime}}\geq Q_{\lambda,q^{\prime}}(u_{2})-\frac{\epsilon}{2}. \tag{5.23}\]
By the Hölder inequality, we have
\[Q_{\lambda,q^{\prime}}(u_{2})=\frac{\int_{B^{n}}|\nabla u_{2}|^{2}+\lambda \int_{S^{n-1}}u_{2}^{2}}{\Big{(}\int_{S^{n-1}}|u_{2}|^{q^{\prime}+1}\Big{)}^{ \frac{2}{q^{\prime}+1}}}\geq\frac{\int_{B^{n}}|\nabla u_{2}|^{2}+\lambda\int_ {S^{n-1}}u_{2}^{2}}{\Big{(}\int_{S^{n-1}}|u_{2}|^{\frac{2(n-1)}{n-2}}\Big{)}^ {\frac{n-2}{n-1}}}|S^{n-1}|^{\left(\frac{n-2}{n-1}-\frac{2}{q^{\prime}+1} \right)}. \tag{5.24}\]
Since the function \(q^{\prime}\mapsto|S^{n-1}|^{\left(\frac{n-2}{n-1}-\frac{2}{q^{\prime}+1} \right)}\) is continuous in \(q^{\prime}\), we may decrease the above \(\delta>0\), s.t. \(\forall q^{\prime}\in(\frac{n}{n-2}-\delta,\frac{n}{n-2}]\), the following inequalities hold
\[\frac{\int_{B^{n}}|\nabla u_{2}|^{2}+\lambda\int_{S^{n-1}}u_{2}^{2}}{\Big{(} \int_{S^{n-1}}|u_{2}|^{\frac{2(n-1)}{n-2}}\Big{)}^{\frac{n-2}{n-1}}}|S^{n-1}| ^{\left(\frac{n-2}{n-1}-\frac{2}{q^{\prime}+1}\right)}\geq S_{\lambda,\frac{ n}{n-2}}|S^{n-1}|^{\left(\frac{n-2}{n-1}-\frac{2}{q^{\prime}+1}\right)}\geq S_{ \lambda,\frac{n}{n-2}}-\frac{\epsilon}{2}. \tag{5.25}\]
Combining (5.23), (5.24) and (5.25), we obtain
\[S_{\lambda,q^{\prime}}\geq S_{\lambda,\frac{n}{n-2}}-\epsilon. \tag{5.26}\]
By (5.22), (5.26) and the arbitrariness of \(\epsilon\), the function \(q\mapsto S_{\lambda,q}\) is continuous on the left at \(q=\frac{n}{n-2}\). Thus
\[S_{\lambda,\frac{n}{n-2}}=\lim_{q\rightarrow(\frac{n}{n-2})^{-}}S_{\lambda,q}= \lim_{q\rightarrow(\frac{n}{n-2})^{-}}\lambda|S^{n-1}|^{\frac{q-1}{q+1}}= \lambda|S^{n-1}|^{\frac{1}{n-1}}. \tag{5.27}\]
This finishes the new proof of Beckner's inequality.
**Acknowledgment**.: The authors were partially supported by NSFC Grant No.11831005. We would like to express our thanks to Yao Wan for his helpful discussions and suggestions.
|
2307.02330 | Bibliometric Analysis of NIME References and Citations | This paper presents a bibliometric analysis that examines the works cited in,
as well as those citing, NIME papers; for brevity, we refer to these as
`references` and `citations`. Utilizing existing tools, we have computationally
extracted data from the NIME proceedings archive and retrieved metadata from an
academic database, including details of associated references and citations.
From this data, we computed a range of metrics and statistics, which we present
in this paper. We offer quantitative insights into NIME as a scholarly
publication venue, its connections to other venues, and its relationship with
various fields of study and authors. Based on our data interpretations, we
provide several recommendations for the community's future. In sharing the
software we developed for this study, and the summarized raw data, we enable
other NIME researchers to conduct more in-depth investigations and examine
specific trends. | Stefano Fasciani | 2023-07-05T14:40:23Z | http://arxiv.org/abs/2307.02330v3 | # References in and citations to NIME papers
###### Abstract
This paper presents a bibliographic study that analyzes works cited in as well as works that cite NIME papers. We build on existing tools to computationally analyze data retrieved from publicly available databases. We present a variety of metrics, statistics, visualizations and trends aiming to provide quantitative figures on the scholarly impact of NIME, influential authors, related publication venues, associated fields of study, and key works published in other venues.
## 1 Introduction
References are essential components in scientific literature and scholarly publications. Despite the differences in citation styles used across disciplines, referencing is commonly used to acknowledge others' work, to differentiate one's work from existing ones, or as a supporting argument [1]. Moreover, through references readers can access original sources and get informed about related theories, methods, data or results. Almost 50 years ago, Gilbert [2] argued that references also serve more subtle purposes: coping with intellectual property issues and increasing the persuasiveness of a paper. Indeed, the quality of a paper is often assessed considering also the size of the reference list and the reputation of the associated publishers. Cross-checking the accuracy of citations and of the reference list is a complex task, and these are often not verified during the revision process. Errors are common and often propagate across papers, as 70 to 90% of citations are copied from the lists of references used in other papers [3]. The poor practice of copying citations and the favorable reputation that some papers have acquired over time significantly contribute to skewing the number of citations that papers receive within a certain field or publication venue [1, 2, 3]. Indeed, the assumption that citations within journals or conference proceedings are normally distributed is generally wrong, and therefore the use of their impact factors to measure the quality of individual articles is fundamentally flawed [4]. To date, a number of theories on citations in scientific literature have been proposed [5, 6, 7, 8], which also attempt to frame and study "science" as a social system. Associated methodologies require the systematic analysis of reference-related data in a selected academic discipline or publication venue. Generalization is not possible because figures and statistics change significantly across fields of study [9].
References represent the previous works upon which a paper builds. In turn, the same paper will in the future be the basis for other works that cite it. Each paper is a point of convergence in a network of existing knowledge as well as a point of divergence for new knowledge, which unfolds over time. From the analysis of the works cited in and citing a paper, it is possible to estimate how the cited and citing works are connected and have indirectly influenced each other. Extending the analysis to an entire publication venue, we can gain deeper insights into the impact of journals or conference proceedings.
Since 2001, more than two thousand papers have been published in the proceedings of the international conference on New Interfaces for Musical Expression (NIME). Fasciani and Goode analyzed the proceedings of the first 20 editions of NIME, providing a variety of figures and metrics on the papers, authorship, their affiliations, travel and topics [10]. This work includes statistics on the number of citations received by NIME papers, such as their count and distribution over the different editions of the conference. We have extended this work by performing a more comprehensive and extensive analysis of works cited in NIME and works that cite NIME papers. In particular, we extract metadata and unique identifiers for all works appearing in the reference lists of NIME papers, as well as for the works in which a NIME paper appears in the list of references. The metadata includes information such as authorship, field of study, year of publication, and embeddings, which, when analyzed, allows us to discover:
* disciplines and publication venues that have an impact on NIME or on which NIME has an impact;
* key works and authors strongly related to NIME including those published in other journals or proceedings;
* self-referentiality in the NIME community and authorship;
* distribution of citations and references against time and against the NIME corpus itself.
In this paper, to avoid verbosity, we use the following terminology: a _reference_ is a work that is referenced (i.e. cited) in a NIME paper, whereas a _citation_ is a work that cites (i.e. references) a NIME paper. The rest of the paper is organized as follows: Section 2 summarizes the methodology, techniques and data used for this study. Section 3 and Section 4 focus on references in and citations to NIME papers, respectively. Conclusions and final remarks are included in Section 6.
## 2 Methodology
The corpus of papers published in NIME proceedings has a size that makes manual data extraction practically infeasible. Therefore, we employ computational means to mine existing databases and analyze references- and citations-related data. As a starting point, we used the _NIME Proceedings Analyzer1_ (NIME PA) [11], a collection of Python methods that aggregate, scrape and retrieve metadata related to NIME papers starting uniquely from the public list of BibTeX entries2, compiled by the community in a single file. The extracted data is arranged in a tabular data structure, while the body text of the papers is arranged in a structured collection of text files. These are then further analyzed to return a variety of bibliometric figures and statistics. With respect to the data needed for this study, the tabular data structure generated by the PA includes only the total number of citations and the total number of highly influential citations (i.e. when the cited publication has a significant impact on the citing publication) retrieved from Semantic Scholar3. Among the various academic search engines, Semantic Scholar was selected because it provides the most comprehensive and accurate indexing of NIME papers, and because it
provides an Application Programming Interface (API), as explained by Goode and Fasciani [11].
In Semantic Scholar, papers and authors are associated with a unique identifier, which allows us to reliably mine the database across the network of references and citations. However, it is not trivial to computationally identify the right paper given the information included in the BibTeX file, which is primarily authors and title. Indeed, although a paper is present in Semantic Scholar, the search may fail because of: errors in how the paper is registered in the BibTeX file; errors in how the paper is registered in the Semantic Scholar database (e.g. only the first author is registered, or the paper is duplicated for each author); inconsistent handling of non-ASCII characters in titles and authors' names. Including 'NIME' or 'New Interfaces for Musical Expression' in the search query has a detrimental effect because publication venues are not systematically indexed and not searchable in Semantic Scholar. The NIME PA integrates an iterative search algorithm which we further improved in terms of robustness and accuracy. This progressively attempts up to 12 different search strings to identify the right paper, verified against the number and last names of the authors. This is a critical aspect for this study: results can be significantly altered by just one or a few wrong papers, especially if these are from popular disciplines where citation counts are generally much higher than in NIME. Moreover, after finding the correct paper we use its unique identifier to perform a lookup in the Semantic Scholar database, retrieving and adding to the tabular data structure of the NIME PA the following data for all NIME papers (a minimal sketch of this search-and-lookup step is given after the list below):
* unique identifier of the paper;
* unique identifiers of all authors;
* TLDRs (Too Long; Didn't Read) short summary [12];
* SPECTER (Scientific Paper Embeddings using Citation-informed TransformERs) 768-dimensional vector representing the paper [13];
* number of references;
* number of citations and highly influential citations;
* lists of references and citations, including for each paper:
* title and unique identifier of the paper;
* names and unique identifiers of all authors;
* publication year;
* publication venue and type;
* fields of study (estimated by Semantic Scholar).
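To illustrate this search-then-lookup step, a minimal sketch is given below. It is not the NIME PA implementation: the fallback query strings and the author-matching rule are simplified assumptions, and the endpoint and field names follow the publicly documented Semantic Scholar Graph API, which may change over time.

```python
# Minimal sketch (not the NIME PA code) of the search-then-lookup step.
# Endpoint and field names follow the public Semantic Scholar Graph API
# documentation; the fallback query strings are illustrative assumptions.
import requests

API = "https://api.semanticscholar.org/graph/v1"

def find_paper_id(title, last_names):
    """Try progressively simpler queries until author count and last names match."""
    queries = [title, title.split(":")[0], " ".join(title.split()[:8])]
    for q in queries:
        r = requests.get(f"{API}/paper/search",
                         params={"query": q, "fields": "title,authors", "limit": 5})
        r.raise_for_status()
        for hit in r.json().get("data", []):
            hit_names = {a["name"].split()[-1].lower() for a in hit["authors"]}
            # accept the hit only if the set of author last names agrees
            if hit_names == {n.lower() for n in last_names}:
                return hit["paperId"]
    return None

def lookup_paper(paper_id):
    """Retrieve, for one NIME paper, the fields listed above."""
    fields = ("title,year,tldr,embedding,fieldsOfStudy,"
              "referenceCount,citationCount,influentialCitationCount,"
              "references.title,references.authors,references.year,references.venue,"
              "citations.title,citations.authors,citations.year,citations.venue")
    r = requests.get(f"{API}/paper/{paper_id}", params={"fields": fields})
    r.raise_for_status()
    return r.json()

pid = find_paper_id("The NIME Proceedings Analyzer", ["Goode", "Fasciani"])
record = lookup_paper(pid) if pid else None
```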
Semantic Scholar can index only papers with a publicly accessible PDF. However, a paper with a missing PDF may still appear in the database because it is cited elsewhere, but the retrieved information is likely incomplete or unreliable. In this study, especially when analyzing references, we consider only data retrieved from NIME papers with a non-empty list of references. Indeed, a non-empty list of references suggests that the PDF is available and was appropriately analyzed.
The lists of references and citations are processed and consolidated into a few more tabular data structures, in which each referenced or cited paper appears only once, including all information retrieved from Semantic Scholar, and with the following added:
* total citing or referencing count;
* distribution of the citing or referencing count over the years;
* whether the paper belongs to the NIME corpus.
The lists of references and citations are further processed to generate another tabular data structure in which each referenced or citing author appears only once, together with:
* author name and unique identifier;
* total citing or referencing count;
* distribution of the citing or referencing count over the years;
* whether the author has ever published in NIME.
Authors' citing and referencing counts are processed at the individual level; therefore, if a paper has multiple authors, the tally is incremented for all of them. All these tabular data structures are further intersected, merged and mined to generate the figures and metrics presented in this paper. The source code producing the results included in this paper is available as open-source software, published as an extension of the NIME PA4.
Footnote 4: [https://github.com/stefanofasciani/NIME-proceedings-analyzer](https://github.com/stefanofasciani/NIME-proceedings-analyzer)
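The consolidation step described above can be sketched as follows; the input format and column names are hypothetical and only illustrate the grouping logic, not the actual NIME PA code.

```python
# Hypothetical sketch of consolidating per-paper reference lists into the
# deduplicated table described above (column names are assumptions).
import pandas as pd

def consolidate_references(papers, nime_ids):
    """papers: list of dicts as returned by the Semantic Scholar lookup."""
    rows = []
    for p in papers:
        for ref in p.get("references", []):
            if ref.get("paperId"):
                rows.append({"ref_id": ref["paperId"],
                             "ref_title": ref.get("title"),
                             "citing_year": p.get("year")})
    df = pd.DataFrame(rows)
    # one row per referenced work: total count plus per-year distribution
    totals = df.groupby("ref_id").agg(title=("ref_title", "first"),
                                      total=("ref_id", "size"))
    per_year = pd.crosstab(df["ref_id"], df["citing_year"])
    table = totals.join(per_year)
    table["in_nime_corpus"] = table.index.isin(set(nime_ids))
    return table.sort_values("total", ascending=False)
```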
### Limitations
The following affect the accuracy of the figures presented in this paper.
First, the computational analysis has to handle exceptions in the NIME corpus and changes in the proceedings' publication over the different editions of the conference. Although we have built on top of reliable software that has been developed for more than two years, we cannot exclude that minor bugs are still present or that some exceptions have not been properly handled.
Second, the NIME conference accepts different types of works, such as papers (short and full), music or performances, installations, demos, work-in-progress, and workshops. The diversity in types of accepted submissions has also increased over the years. All types of work are generally accompanied by a text document submitted by the authors. However, there is a noticeable inconsistency in which types have been included in the proceedings over the years, which affects some of the metrics presented in this paper. Automatically filtering out non-paper works from the NIME proceedings BibTeX file is not possible, because entries do not include information on the submission type. However, since the non-paper works are generally a minority in the NIME proceedings, since they include no or few references, and since they are seldom cited, their impact on this study is almost negligible.
Third, we cannot assume absolute completeness and correctness of the information in the Semantic Scholar database. The natural language processing and machine learning techniques used to extract information from the PDFs of indexed NIME papers may be inaccurate. A recent study estimated that Semantic Scholar is 98.88% accurate [14]. Moreover, these techniques are more likely to fail if PDF files are malformed, and the NIME PA suggests that approximately 34 papers are affected by this issue. The issue of having non-paper works included in the NIME proceedings is somewhat mitigated by Semantic Scholar. Indeed, these are often missing from the database, likely due to their different formatting, short size or absence of references. Also, paper metadata such as publication venue and publication type are generally detected only for journal articles, whose PDF files are consistently formatted and of high quality, as they are not generated by the authors. Therefore, figures on the appearance of conference proceedings in NIME references and citations may have limited accuracy. Finally, the rate at which the Semantic Scholar database can be mined using their API is limited to 100 requests per 5 minutes. Therefore, we are unable to perform a lookup for all NIME references and citations to retrieve further information and build a more extensive citation network graph.
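A simple client-side throttle for staying within the rate limit mentioned above could look like the following sketch; it is an illustrative helper and not part of the NIME PA.

```python
# Illustrative client-side throttle for the rate limit quoted above
# (100 requests per 5-minute window); not part of the NIME PA.
import time

def throttled(items, max_requests=100, window_s=300):
    """Yield items while keeping at most max_requests per window_s seconds."""
    stamps = []
    for item in items:
        now = time.monotonic()
        stamps = [t for t in stamps if now - t < window_s]
        if len(stamps) >= max_requests:
            # wait until the oldest request leaves the window
            time.sleep(window_s - (now - stamps[0]) + 0.1)
            now = time.monotonic()
            stamps = [t for t in stamps if now - t < window_s]
        stamps.append(time.monotonic())
        yield item

# usage: for ref_id in throttled(all_reference_ids): lookup_paper(ref_id)
```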
Fourth, the 2021 and 2022 NIME proceedings have been published in PubPub, while previous proceedings have been published in Zenodo. PubPub is a relatively new platform which allows media-rich publications and lets readers comment on articles. With respect to the NIME PA, PubPub presents key theoretical advantages, as papers are available as XML files (whereas papers published as PDF files are converted to XML with tools that may fail). However, this platform still presents several shortcomings, such as the absence of essential author information in the XML files, recent changes that no longer allow data scraping, and problems with indexing in several academic search engines. For example, in the popular Google Scholar, NIME papers published in PubPub often have missing information (such as title or authors) and are often duplicated or triplicated, which in turn |
2305.17622 | Semi-inclusive decays of $B$ meson into a dark anti-baryon and baryons | Using the recently developed $B$-Mesogenesis scenario, we studied the
semi-inclusive decays of $B$ meson into a dark anti-baryon $\psi$ plus any
possible states $X$ containing $u/c$ and $d/s$ quarks with unit baryon number.
The two types of effective Lagrangians proposed by the scenario are both
considered in the study. The semi-inclusive decay branching fractions of $B\to
X \psi$ are calculated by the method of heavy quark expansion, where the
non-perturbative contributions from the matrix elements of dimension-5
operators are included. We obtained the branching fractions as functions of the
dark anti-baryon mass. Using the experimental upper limits of the branching
fractions, we presented the constraints of the coupling constants in the
$B$-Mesogenesis scenario. | Yu-Ji Shi, Ye Xing, Zhi-Peng Xing | 2023-05-28T03:57:12Z | http://arxiv.org/abs/2305.17622v2 | # Semi-inclusive decays of \(B\) meson into a dark anti-baryon and baryons
###### Abstract
Using the recently developed \(B\)-Mesogenesis scenario, we studied the semi-inclusive decays of \(B\) meson into a dark anti-baryon \(\psi\) plus any possible states \(X\) containing \(u/c\) and \(d/s\) quarks with unit baryon number. The two types of effective Lagrangians proposed by the scenario are both considered in the study. The semi-inclusive decay branching fractions of \(B\to X\psi\) are calculated by the method of heavy quark expansion, where the non-perturbative contributions from the matrix elements of dimension-5 operators are included. We obtained the branching fractions as functions of the dark anti-baryon mass. Using the experimental upper limits of the branching fractions, we presented the constraints of the coupling constants in the \(B\)-Mesogenesis scenario.
## I Introduction
The Standard Model of particle physics and the standard cosmological model are two highly successful frameworks for describing the most microscopic and macroscopic physics, respectively. However, these two models are not consistent with each other, which leaves many unanswered questions, including the existence of dark matter and the asymmetry of matter and anti-matter. To answer these questions, many mechanisms have been proposed since Sakharov first introduced the conditions necessary for baryogenesis [1]. The traditional mechanisms generally involve high scales and extremely massive particles, which makes them difficult to test experimentally. Recently, a new \(B\)-Mesogenesis scenario was proposed by Refs. [2; 3; 4], which can simultaneously explain the relic dark matter abundance and the baryon asymmetry in our Universe. The main advantage of this scenario is that it is not only directly testable at hadron colliders and \(B\)-factories [3; 5], but also indirectly testable at Kaon and Hyperon factories [6; 7]. Nowadays, the search for \(B\) meson decays into a baryon with missing energy through \(B\)-Mesogenesis has been independently started by the Belle-II collaboration [8] and the LHCb collaboration [9].
In the \(B\)-Mesogenesis scenario, a new mechanism for Baryogenesis and DM production is proposed. The \(b,\bar{b}\) quarks are produced by decays of some heavy scalar field \(\Phi\) during a late era in
the history of the early universe. The produced \(b,\bar{b}\) quarks hadronize to charged and neutral \(B\)-mesons. The neutral ones \(B^{0},\bar{B}^{0}\) quickly undergo CP violating oscillations, and then decay into a dark sector baryon with baryon number \(-1\) as well as visible hadron states with baryon number \(+1\). As a result, the asymmetry between baryons and anti-baryons is produced in \(B\)-Mesogenesis without violating baryon number. The exclusive decay \(B\to p\psi\) in the framework of \(B\)-Mesogenesis was first studied by Ref.[10] using light-cone sum rules (LCSR). After that, with the use of LCSR, a more complete study of \(B\) meson decays into an octet baryon or charmed anti-triplet baryon and \(\psi\) was given by Ref.[11].
To date, there have been no rigorous theoretical studies of inclusive \(B\) meson decays in \(B\)-Mesogenesis. Compared with exclusive decays, inclusive decay branching fractions are more likely to be measured in experiments. On the other hand, from the theoretical point of view, another advantage of inclusive decays is that the summation over various hadronic final states eliminates bound-state effects of individual hadrons, due to the hypothesis of quark-hadron duality [12]. In Ref.[3], using the data of bottom hadron decays with missing energy from the ALEPH experiment [13; 14; 15], the authors obtained the upper limits on the inclusive decay branching fractions of \(B\to X_{u/c,d/s}\psi\), where \(X_{u/c,d/s}\) denotes any possible hadron states containing \(u/c\) and \(d/s\) quarks with unit baryon number. Therefore, compared with the experimental upper limits, a rigorous theoretical calculation of the \(B\to X\psi\) branching fraction enables us to determine the upper limits on the coupling constants in \(B\)-Mesogenesis. Nowadays the heavy quark expansion (HQE) [16; 17; 18; 19] has been successfully applied to the studies of inclusive decays as well as lifetime calculations of heavy hadron decays [20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. In this work, we will use the HQE to calculate the inclusive decay branching fractions of \(B\to X_{u/c,d/s}\psi\), where the bound-state effects related to the initial state can be systematically accounted for by introducing matrix elements of higher-dimension operators.
This article is organized as follows: Section II is a brief introduction to the \(B\)-Mesogenesis scenario proposed by Refs. [2; 3; 4]. Section III presents a detailed HQE calculation for the \(B\to X_{u/c,d/s}\psi\) decays. Section IV gives the numerical results for the decay branching fractions and the constraints on the coupling constants in \(B\)-Mesogenesis.
## II \(B\)-Mesogenesis scenario
The \(B\)-Mesogenesis scenario first proposed by Refs. [2; 3; 4] aims to simultaneously explain the baryon asymmetry and the existence of dark matter in our Universe. This \(B\)-Mesogenesis model offers a mechanism where an anti-\(b\) quark can decay into \(u/c,d/s\) quarks and a dark anti-baryon \(\psi\). Although the baryon number is conserved, \(\psi\) is invisible, so that only the baryons composed of \(u,d/s\) quarks can be detected in experiments. There are two types of effective Lagrangians
given by the \(B\)-Mesogenesis model:
\[\mathcal{L}^{I}_{\rm eff}= -y_{ub}\epsilon_{ijk}Y^{*i}\bar{u}^{j}_{R}b^{c,k}_{R}-y_{cb}\epsilon _{ijk}Y^{*i}\bar{c}^{j}_{R}b^{c,k}_{R}-y_{\psi d}Y_{i}\bar{\psi}d^{c,i}_{R}-y_{ \psi s}Y_{i}\bar{\psi}s^{c,i}_{R}+{\rm h.c},\] \[\mathcal{L}^{II}_{\rm eff}= -y_{ud}\epsilon_{ijk}Y^{*i}\bar{u}^{j}_{R}d^{c,k}_{R}-y_{us} \epsilon_{ijk}Y^{*i}\bar{u}^{j}_{R}s^{c,k}_{R}-y_{cd}\epsilon_{ijk}Y^{*i}\bar{ c}^{j}_{R}d^{c,k}_{R}-y_{cs}\epsilon_{ijk}Y^{*i}\bar{c}^{j}_{R}s^{c,k}_{R}\] \[-y_{\psi b}Y_{i}\bar{\psi}b^{c,i}_{R}+{\rm h.c}, \tag{1}\]
where all the quark fields are taken as right-handed and the superscript \(c\) indicates charge conjugation. \(Y\) is a heavy color-triplet scalar with electric charge \(Q_{Y}=-1/3\) and mass \(M_{Y}\). The \(y\)'s are unknown coupling constants. In the Type-I model the \(b\) quark couples with the \(u,c\) quarks, while in the Type-II model the \(b\) quark couples with the dark anti-baryon \(\psi\). Integrating out the heavy boson \(Y\), one arrives at the effective Hamiltonian for the two types of models as:
\[\mathcal{H}^{I,uq}_{\rm eff}=-\frac{y_{ub}y_{\psi q}}{M_{Y}^{2}}i \epsilon_{ijk}(\bar{\psi}q^{c,i}_{R})(\bar{u}^{j}_{R}b^{c,k}_{R})=-G^{I}_{(uq) }\tilde{\mathcal{O}}^{I}_{(uq)}\psi^{c},\] \[\mathcal{H}^{II,uq}_{\rm eff}=-\frac{y_{\psi b}y_{uq}}{M_{Y}^{2}} i\epsilon_{ijk}(\bar{\psi}b^{c,i}_{R})(\bar{u}^{j}_{R}d^{c,k}_{R})=-G^{II}_{(uq) }\tilde{\mathcal{O}}^{II}_{(uq)}\psi^{c}. \tag{2}\]
Here, for simplicity, \(q\) denotes \(s\) or \(d\), and \(u\) denotes the \(u\) or \(c\) quark. We have defined three-quark operators \(\tilde{\mathcal{O}}^{I}_{(q)}=-i\epsilon_{ijk}(\bar{b}^{i}_{R}u^{c,j}_{R}) \bar{q}^{k}_{R}\) and \(\tilde{\mathcal{O}}^{II}_{(q)}=-i\epsilon_{ijk}(\bar{q}^{i}_{R}u^{c,j}_{R}) \bar{b}^{k}_{R}\), which transform an anti-\(b\) quark into two light quarks \(u,q\). In this work, we will calculate the semi-inclusive decay width of \(B\to X_{uq}\psi\) induced by \(\mathcal{H}^{I,uq}_{\rm eff}\) and \(\mathcal{H}^{II,uq}_{\rm eff}\) respectively, with \(X_{uq}\) being the summation of any states containing \(u,q\) quarks.
## III \(B\to X_{uq}\psi\) decay in heavy quark expansion
### Differential decay width of \(B\to X_{uq}\psi\)
In the rest frame of the \(B\) meson, denoting the momentum and energy of the outgoing dark anti-baryon \(\psi\) as \(q\) and \(E\), we can express the differential decay width of \(B\to X_{uq}\psi\) as
\[\frac{d}{dE}\Gamma(b\to uq\psi)= \int\frac{d^{4}q}{(2\pi)^{4}}(2\pi)\delta(q^{2}-m_{\psi}^{2}) \delta(E-q^{0})\] \[\times\sum_{X,s_{\psi}}\frac{1}{2m_{B}}|\langle X(p_{X})\psi(q,s_{ \psi})|\mathcal{H}^{uq}_{\rm eff}(0)|B(p_{B})\rangle|^{2}(2\pi)^{4}\delta^{4} (p_{B}-q-p_{X}), \tag{3}\]
where the spin of \(\psi\) and any possible \(X_{uq}\) states with momentum \(p_{X}\) are summed. The integration of \(E\) is equivalent to averaging over a range of final-state hadronic masses. Since \(\psi\) has no strong interaction with quarks, the matrix element in Eq. (3) can be factorized as
\[\langle X(p_{X})\psi(q,s_{\psi})|\mathcal{H}^{uq}_{\rm eff}(0)|B(p_{B})\rangle =-G_{(uq)}\langle X(p_{X})|\bar{\mathcal{O}}_{(uq),a}(0)|B(p_{B})\rangle u^{c}_ {\psi,a}(q,s_{\psi}), \tag{4}\]
with \(a\) being a spinor index. For simplicity we have omitted the superscripts \(I,II\) here. Now we introduce a rank-two tensor \(W\) with two spinor indexes:
\[W_{ba}= \sum_{X}(2\pi)^{3}\delta^{4}(p_{B}-q-p_{X})\frac{1}{2m_{B}} \langle B(p_{B})|\bar{\mathcal{O}}^{\dagger}_{(uq),b}(0)|X(p_{X})\rangle \langle X(p_{X})|\bar{\mathcal{O}}_{(uq),a}(0)|B(p_{B})\rangle, \tag{5}\]
which can be generally parameterized as
\[W=\gamma^{0}\left[A_{1}\frac{\not{q}}{m_{B}}+A_{2}\frac{\not{p}_{B}}{m_{B}} \right]P_{L}. \tag{6}\]
Note that the appearance of \(P_{L}\) on the right hand side is due to the identity \(\bar{\mathcal{O}}_{(uq)}P_{R}=0\). Now the differential decay width can be expressed in terms of \(W\) or \(A_{1,2}\) as
\[\frac{d}{dE}\Gamma(b\to uq\psi)= \frac{G_{(uq)}^{2}}{(2\pi)^{2}}\int d^{4}q\delta(q^{2}-m_{\psi}^{2})\delta(E-q^{0})\text{Tr}\left[(\not{q}-m_{\psi})\gamma^{0}W\right]\] \[= \frac{G_{(uq)}^{2}}{\pi\ m_{B}}\sqrt{E^{2}-m_{\psi}^{2}}\left[A_{1}(m_{\psi},E)m_{\psi}^{2}+A_{2}(m_{\psi},E)m_{B}E\right]. \tag{7}\]
It is difficult to calculate the \(W\) tensor directly due to the infinite summation on the \(X_{uq}\) states. Actually, the \(W\) tensor can be extracted from the imaginary part of a correlation function:
\[W_{ba}=-\frac{1}{\pi}\text{Im}T_{ba} \tag{8}\]
with
\[T_{ba}= -i\int d^{4}x\ e^{-iq\cdot x}\frac{1}{2m_{B}}\langle B(p_{B})|T \left\{\bar{\mathcal{O}}_{(uq),b}^{\dagger}(x)\bar{\mathcal{O}}_{(uq),a}(0) \right\}|B(p_{B})\rangle. \tag{9}\]
The correlation function defined in Eq. (9) can be calculated by HQE, where it is expanded according to powers of \(1/m_{b}\). Each term in the expansion is factorized into a perturbative part and a non-perturbative part. The former can be calculated perturbatively, while the latter is parameterized by matrix elements of the \(B\) meson. We will perform an explicit calculation of \(T_{ba}\) by HQE in the next section.
### Heavy quark expansion in the Type-I model
We first consider the type-I model. The \(T_{ba}\) is calculated by HQE as an expansion in powers of \(1/m_{b}\). Using the explicit form of \(\bar{\mathcal{O}}_{(uq)}^{I}\)
\[\bar{\mathcal{O}}_{(uq)}^{I}=-i\epsilon_{ijk}(\bar{b}_{R}^{i}u_{R}^{c,j})\bar{ q}_{R}^{k},\quad\bar{\mathcal{O}}_{(uq)}^{I\dagger}=-i\epsilon_{ijk}(\bar{u}_{R}^ {c,i}b_{R}^{j})\gamma^{0}q_{R}^{k} \tag{10}\]
and free quark propagators, one can obtain
\[T_{ba}= \frac{i}{m_{B}}\int d^{4}xe^{-iq\cdot x}\int\frac{d^{4}l_{1}}{(2\pi)^{4}}\frac{d^{4}l_{2}}{(2\pi)^{4}}e^{-il_{1}\cdot x}e^{-il_{2}\cdot x}\] \[\times\left[\gamma^{0}P_{R}\frac{i(\not{l}_{1}+m_{q})}{l_{1}^{2}-m_{q}^{2}}P_{L}\right]_{ba}\langle B(p_{B})|\bar{b}^{i}(0)\frac{i\not{l}_{2}}{l_{2}^{2}-m_{u}^{2}}P_{R}b^{i^{\prime}}(x)|B(p_{B})\rangle. \tag{11}\]
To extract the perturbative part of the matrix element above, one can temporarily replace the initial and final \(B\) meson with a free \(\bar{b}\) quark, namely \(|B(p_{B})\rangle\rightarrow|p_{b}\rangle\) with \(p_{b}=m_{b}v+k\) and \(k\) of order \(\Lambda_{QCD}\). Accordingly we can do the replacement
\[\langle p_{b}|\bar{b}^{i}(0)\not{l}_{2}P_{R}b^{i^{\prime}}(x)|p_{b}\rangle \rightarrow-e^{ip_{b}\cdot x}\ \bar{b}^{i}(p_{b})\not{l}_{2}P_{R}b^{i}(p_{b})\to e^{ip_{b}\cdot x} \ \langle B(p_{B})|\bar{b}(0)\not{l}_{2}P_{R}b(0)|B(p_{B})\rangle, \tag{12}\]
where \(b(p_{b})\) denotes the \(\bar{b}\) quark spinor. In the last step the external states are transformed back to the \(B\) meson. The diagram of the correlation function \(T\) is shown in Fig.1, where the two crossed dots denote \(\bar{\cal O}^{\dagger}_{(q)}(x)\) and \(\bar{\cal O}_{(q)}(0)\) respectively. The \(W_{ba}\) can be calculated by extracting the discontinuity part of \(T_{ba}\) using cutting rules, namely all the internal quark lines in Fig.1 are set on-shell: \(1/(l_{1}^{2}-m_{q}^{2})\rightarrow(-2\pi i)\delta(l_{1}^{2}-m_{q}^{2})\), \(1/(l_{2}^{2}-m_{u}^{2})\rightarrow(-2\pi i)\delta(l_{2}^{2}-m_{u}^{2})\). Then we arrive at
\[W_{ba}= -\frac{1}{\pi}{\rm Im}\ T_{ba}=-\frac{1}{2\pi i}{\rm Disc}\ T_{ba}\] \[= -\frac{(2\pi)^{3}}{m_{B}}\left\{A_{\rm 2bd}[(Q+k)^{2},m_{q}^{2},m_ {u}^{2}](Q+k)^{\mu}(Q+k)^{\nu}+B_{\rm 2bd}[(Q+k)^{2},m_{q}^{2},m_{u}^{2}](Q+k)^{2}g^{ \mu\nu}\right\}\] \[\times\left[\gamma^{0}\gamma_{\mu}P_{L}\right]_{ba}\langle B(p_{B })|\bar{b}(0)\gamma_{\nu}P_{R}b(0)|B(p_{B})\rangle, \tag{13}\]
where \(Q=m_{b}v-q\). The \(A_{\rm 2bd},B_{\rm 2bd}\) are the two scalar functions of the rank-2 two-body phase space integration, which is generally defined as:
\[\int\frac{d^{4}l_{1}}{(2\pi)^{3}}\frac{d^{4}l_{2}}{(2\pi)^{3}} \delta^{4}(P-l_{1}-l_{2})\delta(l_{1}^{2}-m_{1}^{2})\delta(l_{2}^{2}-m_{2}^{2} )l_{1}^{\mu}l_{2}^{\nu}\] \[= A_{\rm 2bd}[P^{2},m_{1}^{2},m_{2}^{2}]P^{\mu}P^{\nu}+B_{\rm 2bd} [P^{2},m_{1}^{2},m_{2}^{2}]P^{2}g^{\mu\nu}. \tag{14}\]
The explicit expression of \(A_{\rm 2bd}\) and \(B_{\rm 2bd}\) are given in the Appendix A.
The \(1/m_{b}\) expansion is equivalent to the expansion in terms of the small momentum \(k\). At \({\cal O}(k^{0})\) all the \(k\)'s in Eq. (13) vanish and the \(b\) quark field is replaced by the effective one \(b_{v}\). The axial-vector matrix element in Eq. (13) vanishes due to parity. The vector matrix element can be calculated straightforwardly as
\[\langle B(p_{B})|\bar{b}_{v}(0)\gamma_{\nu}b_{v}(0)|B(p_{B})\rangle=-2m_{B}v_{\nu} \tag{15}\]
since \(\bar{b}(0)\gamma_{\nu}b(0)\) is the conserved \(b\) quark number current and the \(b\) quark number of \(B\) meson is \(-1\). In terms of the \(k\) expansion, the lowest order of \(W_{ba}\) reads as
\[W_{ba}^{k^{0}}= (2\pi)^{3}\left\{A_{\rm 2bd}[Q^{2},m_{q}^{2},m_{u}^{2}]Q^{\mu}v \cdot Q+B_{\rm 2bd}[Q^{2},m_{q}^{2},m_{u}^{2}]Q^{2}v^{\mu}\right\}\left[\gamma^{0} \gamma_{\mu}P_{L}\right]_{ba}, \tag{16}\]
Figure 1: The diagram of \(T_{ba}\). The initial and final \(B\) mesons are replaced by free \(\bar{b}\) quarks with momentum \(p_{b}=m_{b}v+k\). The two crossed dots denote \(\bar{\cal O}^{\dagger}_{(q)}(x)\) and \(\bar{\cal O}^{I}_{(uq)}(0)\) respectively.
where \(v\cdot Q=E\) and \(Q^{2}=m_{b}^{2}+m_{\psi}^{2}-2m_{b}E\). The explicit expression of \(A_{1,2}\) at \({\cal O}(k^{0})\) are given in the Appendix B.
The \({\cal O}(k^{1})\) contribution to \(W_{ba}\) comes from the terms linear in \(k\) in Eq. (13). The procedure to extract the perturbative part by temporarily changing the external \(B\) mesons to a free \(\bar{b}\) quark is almost the same as that at \({\cal O}(k^{0})\). However, now the non-perturbative matrix element becomes \(\langle B(p_{B})|\bar{b}\gamma_{\nu}k_{\rho}P_{R}b|B(p_{B})\rangle\), which can be written as \(\langle B(p_{B})|\bar{b}\gamma_{\nu}(iD_{\rho}-m_{b}v_{\rho})P_{R}b|B(p_{B})\rangle\) if transferred to coordinate space. Note that the \(\gamma_{5}\) term vanishes again due to parity conservation. Changing the \(b\) quark field into the heavy quark field in HQET, one arrives at
\[\frac{1}{2}\langle B(p_{B})|\bar{b}\gamma_{\nu}(iD_{\rho}-m_{b}v _{\rho})b|B(p_{B})\rangle\] \[= \frac{1}{2}\langle B(p_{B})|\bar{b}_{v}\gamma_{\nu}iD_{\rho}b_{v }|B(p_{B})\rangle+\frac{i}{2}\int d^{4}x\langle B(p_{B})|{\rm T}\left\{\bar{b} _{v}\gamma_{\nu}iD_{\rho}b_{v}(0){\cal L}_{1}(x)\right\}|B(p_{B})\rangle\] \[+\frac{1}{2}\langle B(p_{B})|\bar{b}_{v}\frac{-i\overleftarrow{ \cal D}}{2m_{b}}\gamma_{\nu}iD_{\rho}b_{v}|B(p_{B})\rangle+\frac{1}{2}\langle B (p_{B})|\bar{b}_{v}\gamma_{\nu}iD_{\rho}\frac{i\overleftarrow{\cal D}}{2m_{b} }b_{v}|B(p_{B})\rangle, \tag{17}\]
where
\[{\cal L}_{1}=-\bar{b}_{v}\frac{D^{2}}{2m_{b}}b_{v}-\bar{b}_{v}\frac{g}{4m_{b}} G_{\alpha\beta}\sigma^{\alpha\beta}b_{v} \tag{18}\]
is the \({\cal O}(1/m_{b})\) interaction term of the HQET Lagrangian. The matrix element of the first term in Eq. (17) vanishes because of the equation of motion, while the second term in Eq. (17) can be parameterized as [16]:
\[\langle B(p_{B})|\frac{i}{2}\int d^{4}x{\rm T}\left\{\bar{b}_{v}\gamma_{\nu}iD _{\rho}b_{v}(0){\cal L}_{1}(x)\right\}|B(p_{B})\rangle=\frac{1}{2}m_{B}A~{}v_{ \nu}v_{\rho} \tag{19}\]
with
\[A=-\langle B(v)|{\cal L}_{1}(0)|B(v)\rangle=-\frac{\lambda_{1}}{m_{b}}-\frac{ 3\lambda_{2}}{m_{b}}, \tag{20}\]
where \(\sqrt{m}_{B}|B(v)\rangle=|B(p_{B})\rangle\). The matrix element of the last two terms in Eq. (17) can be parameterized as
\[\frac{1}{2}\langle B(p_{B})|\bar{b}_{v}\frac{-i\overleftarrow{\cal D}}{2m_{b} }\gamma_{\nu}iD_{\rho}b_{v}+\bar{b}_{v}\gamma_{\nu}iD_{\rho}\frac{i\overleftarrow {\cal D}}{2m_{b}}b_{v}|B(p_{B})\rangle=m_{B}\frac{Y-Z}{4m_{b}}(g_{\nu\rho}-v_{ \nu}v_{\rho}), \tag{21}\]
where \(Y=(2/3)\lambda_{1},Z=-4\lambda_{2}\)[16]. Using the non-perturbative matrix elements defined in Eq. (19) and Eq. (21), we obtain the \({\cal O}(k^{1})\) contribution to \(W_{ba}\) as
\[W_{ba}^{k^{1}}= -(2\pi)^{3}\big{\{}A_{\rm 2bd}[Q^{2},m_{q}^{2},0](Q^{\mu}g^{\nu\rho}+Q ^{\nu}g^{\mu\rho})+A_{\rm 2bd}[Q^{2},m_{q}^{2},0]^{(1)}2Q^{\rho}Q^{\mu}Q^{\nu}\] \[+B_{\rm 2bd}[Q^{2},m_{q}^{2},0]2Q^{\rho}g^{\mu\nu}+B_{\rm 2bd}[Q^{2},m_{q}^{2},0]^{(1)}2Q^{\rho}Q^{2}g^{\mu\nu}\big{\}}\] \[\times\left[\gamma^{0}\gamma_{\mu}P_{L}\right]_{ba}\left\{\frac{1} {2}Av_{\nu}v_{\rho}+\frac{Y-Z}{4m_{b}}(g_{\nu\rho}-v_{\nu}v_{\rho})\right\}. \tag{22}\]
Similarly, the \({\cal O}(k^{2})\) contribution to \(W_{ba}\) comes from:

\[W_{ba}^{k^{2}}= -\frac{(2\pi)^{3}}{m_{B}}\Big\{A_{\rm 2bd}[Q^{2},m_{q}^{2},0]g^{\mu\rho}g^{\nu\sigma}+A_{\rm 2bd}^{(1)}[Q^{2},m_{q}^{2},0]2Q^{\rho}(Q^{\mu}g^{\nu\sigma}+Q^{\nu}g^{\mu\sigma})\] \[+\left[g^{\rho\sigma}A_{\rm 2bd}^{(1)}[Q^{2},m_{q}^{2},0]+2Q^{\rho}Q^{\sigma}A_{\rm 2bd}^{(2)}[Q^{2},m_{q}^{2},0]\right]Q^{\mu}Q^{\nu}+B_{\rm 2bd}[Q^{2},m_{q}^{2},0]g^{\mu\nu}g^{\rho\sigma}\] \[+B_{\rm 2bd}^{(1)}[Q^{2},m_{q}^{2},0]4Q^{\rho}Q^{\sigma}g^{\mu\nu}+\left[g^{\rho\sigma}B_{\rm 2bd}^{(1)}[Q^{2},m_{q}^{2},0]+2Q^{\rho}Q^{\sigma}B_{\rm 2bd}^{(2)}[Q^{2},m_{q}^{2},0]\right]Q^{2}g^{\mu\nu}\Big\}\] \[\times\left[\gamma^{0}\gamma_{\mu}P_{L}\right]_{ba}\langle B(p_{B})|\bar{b}(0)\gamma_{\nu}k_{\rho}k_{\sigma}P_{R}b(0)|B(p_{B})\rangle, \tag{23}\]
where \(A_{\rm 2bd}^{(n)}=\partial^{n}/(\partial Q^{2})^{n}A_{\rm 2bd}\) and similar for \(B_{\rm 2bd}^{(n)}\). Each \(k\) in the matrix element above is replaced by \(iD-mv\) when transferred to coordinate space. Transforming the \(b\) into \(b_{v}\) and using the results given in Ref.[16], we have
\[\langle B(p_{B})|\bar{b}\gamma_{\nu}k_{\rho}k_{\sigma}P_{R}b|B(p_{B})\rangle= \frac{1}{2}\langle B(p_{B})|\bar{b}_{v}v_{\nu}(g_{\rho\sigma}-v_{\rho}v_{ \sigma})b_{v}|B(p_{B})\rangle=\frac{1}{2}m_{B}Yv_{\nu}(g_{\rho\sigma}-v_{\rho} v_{\sigma}). \tag{24}\]
The explicit expressions of \(A_{1,2}\) at \({\cal O}(k^{1})\) and \({\cal O}(k^{2})\) are given in the Appendix B.
Up to now we have only considered the case of free quark propagation when calculating the \(T_{ba}\), as shown in Fig.1. When considering the interaction of the internal quarks with the background gluon fields, one has to calculate the one-gluon emission diagrams as shown in Fig. 2. Here the external \(B\) mesons are also replaced by free \(\bar{b}\) states. We have set the incoming and outgoing \(b\) quark momenta as \(p_{1}=m_{b}v+k/2\) and \(p_{2}=m_{b}v-k/2\) respectively. Taking the \(u\) quark emission as an example, the corresponding \(T_{ba}\) tensor is:
\[T_{ba}^{ug}= \frac{ig}{2m_{B}}t_{ij}^{a}\epsilon_{\mu}^{a*}(k)\frac{1}{(2\pi) ^{4}}\int d^{4}l_{1}\frac{[\gamma^{0}\gamma_{\rho}P_{L}]_{ba}}{(l_{1}^{2}-m_{ s}^{2})\left[\left(Q-l_{1}+\frac{k}{2}\right)^{2}-m_{u}^{2}\right]\left[\left(Q-l_ {1}-\frac{k}{2}\right)^{2}-m_{u}^{2}\right]}\] \[\times\left\{\left(Q-l_{1}+\frac{k}{2}\right)_{\alpha}\left(Q-l_ {1}-\frac{k}{2}\right)_{\beta}\bar{b}^{i}(p_{2})\gamma^{\alpha}\gamma^{\mu} \gamma^{\beta}P_{R}b^{j}(p_{1})+m_{u}^{2}\bar{b}^{i}(p_{2})\gamma^{\mu}P_{R}b ^{j}(p_{1})\right\}. \tag{25}\]
The emitted gluon has momentum \(k\) and note that now the \({\cal O}(k^{1})\) term in the denominator vanishes. The \({\cal O}(k^{1})\) contribution to \(T_{ba}^{ug}\) is
\[T_{ba}^{ug,k^{1}}= \frac{ig}{2m_{B}}t_{ij}^{a}\epsilon_{\mu}^{a*}(k)\frac{1}{2(2\pi )^{4}}\frac{\partial}{\partial M^{2}}\int d^{4}l_{1}d^{4}l_{2}\delta^{4}(Q-l_ {1}-l_{2})\frac{1}{(l_{1}^{2}-m_{s}^{2})(l_{2}^{2}-M^{2})}\]
Figure 2: One-gluon emission diagrams of \(T_{ba}\). The incoming and outgoing \(B\) mesons are replaced by free \(\bar{b}\) states, with momenta \(p_{1}=m_{b}v+k/2\) and \(p_{2}=m_{b}v-k/2\) respectively.
\[\times(k_{\alpha}l_{1\rho}l_{2\beta}-k_{\beta}l_{1\rho}l_{2\alpha})\, \bar{b}^{i}(p_{2})\gamma^{\alpha}\gamma^{\mu}\gamma^{\beta}P_{R}b^{j}(p_{1})[ \gamma^{0}\gamma_{\rho}P_{L}]_{ba}, \tag{26}\]
where we have used the trick \(1/(l_{2}^{2}-m_{2}^{2})^{2}\to\partial_{M^{2}}\{1/(l_{2}^{2}-M^{2})\}|_{M^{2}=m _{2}^{2}}\). The corresponding \(W_{ba}\) can still be extracted by cutting rules, and thus we obtain
\[W_{ba}^{ug,k^{1}}= \frac{ig(2\pi)^{3}}{4m_{B}}t_{ij}^{a}\epsilon_{\mu}^{a*}(k) \partial_{M^{2}}\left[A_{2\rm bd}[Q^{2},m_{q}^{2},M^{2}]Q^{\rho}Q_{\sigma} \epsilon^{\mu\nu\alpha\sigma}-B_{2\rm bd}[Q^{2},m_{q}^{2},M^{2}]Q^{2}\epsilon ^{\mu\nu\rho\alpha}\right]\] \[\times\langle B(p_{B})|\bar{b}^{i}(0)\gamma_{\nu}k_{\alpha}(1+ \gamma_{5})b^{j}(0)|B(p_{B})\rangle[\gamma^{0}\gamma_{\rho}P_{L}]_{ba}. \tag{27}\]
The combination of \(k\) and \(\epsilon^{a*}(k)\) can be replaced by the gluon tensor field when transferred to coordinate space. Explicitly, we can do the replacement \(k_{\alpha}\epsilon_{\mu}^{a*}t_{ij}^{a}\to(-i/2)G_{\alpha\mu}^{a}t_{ij}^{a}\) and \(b\to b_{v}\), and also note that \(\langle B(p_{B})|\bar{b}^{i}_{v}(0)\gamma_{\nu}gG_{\alpha\mu}b^{j}_{v}(0)|B(p_ {B})\rangle=0\) and
\[\langle B(p_{B})|\bar{b}^{i}_{v}(0)\gamma_{\nu}gG_{\alpha\mu}\gamma_{5}b^{j}_{ v}(0)|B(p_{B})\rangle=m_{B}N\epsilon_{\alpha\mu\nu\kappa}v^{\kappa}, \tag{28}\]
then we obtain the \(W_{ba}\) for \(u\) and \(q\) emission as
\[W_{ba}^{ug,k^{1}}= -\frac{3}{4}(2\pi)^{3}N\partial_{M^{2}}\Big{\{}A_{2\rm bd}[Q^{2},m_{q}^{2},M^{2}](v\cdot Q)Q^{\rho}\] \[+B_{2\rm bd}[Q^{2},m_{q}^{2},M^{2}]Q^{2}v^{\rho}\Big{\}}[\gamma^{ 0}\gamma_{\rho}P_{L}]_{ba}\Big{|}_{M^{2}\to m_{u}^{2}}\] \[W_{ba}^{qg,k^{1}}= -\frac{(2\pi)^{3}}{2}N\partial_{M^{2}}\Big{\{}3B_{2\rm bd}[Q^{2},M^{2},m_{u}^{2}]Q^{2}v^{\rho}\] \[-A_{2\rm bd}[Q^{2},M^{2},m_{u}^{2}](Q\cdot v\ Q^{\rho}-Q^{2}v^{ \rho})\Big{\}}[\gamma^{0}\gamma_{\rho}P_{L}]_{ba}\Big{|}_{M^{2}\to m_{q}^{2}}. \tag{29}\]
The corresponding explicit expressions of their contribution to \(A_{1,2}\) are given in the Appendix B.
### Heavy quark expansion in the Type-II model
In this section we will consider the type-II model. Now the HQE calculation of \(T_{ba}\) is almost the same as that in the type-I model. Using the explicit form of \(\bar{\cal O}_{(uq)}^{II}\)
\[\bar{\cal O}_{(uq)}^{II}=-i\epsilon_{ijk}(\bar{q}_{R}^{i}u_{R}^{c,j})\bar{b}_{R}^{k},\quad\bar{\cal O}_{(uq)}^{II\dagger}=-i\epsilon_{ijk}(\bar{u}_{R}^{c,i}q_{R}^{j})\gamma^{0}b_{R}^{k}, \tag{30}\]
and extracting the imaginary part of \(T_{ba}\) as shown in Fig.1, we can obtain the corresponding \(W_{ba}\) through \(W=-(1/\pi){\rm Im}T\):
\[W_{ba}=\frac{(2\pi)^{4}}{\pi m_{B}}C_{2bd}[(Q+k)^{2},m_{q}^{2},m_{u}^{2}](Q+k) ^{2}\langle B(p_{B})|[\gamma^{0}P_{R}b^{i}(0)]_{b}[\bar{b}^{i}(0)P_{L}]_{a}|B(p _{B})\rangle, \tag{31}\]
where \(C_{2bd}=A_{2bd}+4B_{2bd}\). Note that the spinor structure of the matrix element above is now different from that of Eq. (13), and it is not straightforward to read out the \(A_{1,2}\) defined in Eq. (6). However, one can instead use the following trick:
\[{\rm tr}[\gamma_{\mu}\gamma^{0}W]=2A_{1}\frac{q_{\mu}}{m_{B}}+2A_{2}v_{\mu}. \tag{32}\]
to extract \(A_{1,2}\). At \({\cal O}(k^{0})\), \({\cal O}(k^{1})\) and \({\cal O}(k^{2})\) we have
\[\mbox{tr}[\gamma_{\mu}\gamma^{0}W^{k^{0}}]= \frac{(2\pi)^{4}}{\pi}C_{2bd}[Q^{2},m_{q}^{2},m_{u}^{2}]Q^{2}v_{\mu},\] \[\mbox{tr}[\gamma_{\mu}\gamma^{0}W^{k^{1}}]= -\frac{(2\pi)^{4}}{\pi}\left(C_{2bd}[Q^{2},m_{q}^{2},m_{u}^{2}]+C _{2bd}^{(1)}[Q^{2},m_{q}^{2},m_{u}^{2}]\right)Q^{\rho}\] \[\times\left[Av\cdot Qv_{\mu}+\frac{Y-Z}{2m_{b}}(Q_{\mu}-v\cdot Qv _{\mu})\right],\] \[\mbox{tr}[\gamma_{\mu}\gamma^{0}W^{k^{2}}]= -(2\pi)^{3}Yv_{\mu}\left[3C_{2bd}[Q^{2},m_{q}^{2},m_{u}^{2}]+2C _{2bd}^{(2)}[Q^{2},m_{q}^{2},m_{u}^{2}](Q^{2})^{2}\right.\] \[\left.+Q^{2}\left(7C_{2bd}^{(1)}[Q^{2},m_{q}^{2},m_{u}^{2}]-2C_{2 bd}^{(2)}[Q^{2},m_{q}^{2},m_{u}^{2}](v\cdot Q)^{2}\right)\right.\] \[\left.-4C_{2bd}^{(1)}[Q^{2},m_{q}^{2},m_{u}^{2}](v\cdot Q)^{2} \right]. \tag{33}\]
On the other hand, it can be found that in the Type-II model the \({\cal O}(k^{1})\) contribution from the one-gluon emission diagrams shown in Fig. 2 vanishes. The explicit expressions of \(A_{1,2}\) at \({\cal O}(k^{0})\), \({\cal O}(k^{1})\) and \({\cal O}(k^{2})\) are given in Appendix C; they are proportional to \(m_{q}^{2}\). Therefore, in the Type-II model the decay width of \(B\to X_{ud}/X_{cd}\psi\) vanishes in the chiral limit \(m_{u,d}=0\), and the decay width of \(B\to X_{us}/X_{cs}\psi\) is suppressed compared with that in the Type-I model.
## IV Numerical results
In this section, we will present the numerical results for the various \(B\to X_{uq}\psi\) branching fractions as functions of the \(\psi\) mass. The mass parameters are \(m_{B}=5.28\) GeV, \(m_{s}=87\) MeV, \(m_{c}=1.0\) GeV, \(m_{b}=4.47\) GeV [30], where the quark masses are chosen at \(\mu=3\) GeV as in Ref. [10]. The non-perturbative parameters \(\lambda_{1,2}\) are related to the kinetic term \(\mu_{\pi}^{2}\) and the chromo-magnetic term \(\mu_{G}^{2}\) of the \(B\) meson as \(\lambda_{1}=-\mu_{\pi}^{2}=-0.414\pm 0.078\) GeV\({}^{2}\) and \(\lambda_{2}=\mu_{G}^{2}/3=0.117\pm 0.023\) GeV\({}^{2}\), respectively [31; 32].
Before calculating the decay width by Eq. (7), one has to determine the integration range of \(E\). Obviously, the lower bound of \(E\) must be \(m_{\psi}\). On the other hand, the upper bound of \(E\) seems to be \(E_{\rm upper}=[m_{b}^{2}+m_{\psi}^{2}-(m_{q}+m_{u})^{2}]/2m_{b}\), which is reached when the invariant momentum square \(Q^{2}\) flowing into the loop bubble shown in Fig.1 and Fig.2 becomes \(Q^{2}=(m_{q}+m_{u})^{2}\). However, it can be found that the terms proportional to \(\lambda_{1,2}\) in the results of \(A_{1,2}\) contain end-point singularities at \(E=E_{\rm upper}\), which can be seen from the pole structures \(1/[Q^{2}-(m_{q}+m_{u})^{2}]^{(n)}\) of \(A_{2bd}^{(1,2)}\) and \(B_{2bd}^{(1,2)}\), with \(n\) being \(1/2\) or \(3/2\). Note that although \(A_{2bd}\) and \(B_{2bd}\) also have such pole structures, they are actually finite in the limit \(Q^{2}\rightarrow(m_{q}+m_{u})^{2}\). The reason this end-point singularity emerges is that the HQE breaks down in this region, where single states or resonances dominate. Before the expansion in \(k\), the \(W_{ba}\) contains terms like \(1/[Q^{2}-(m_{q}+m_{u})^{2}+2Q\cdot k+k^{2}]^{(n)}\). When \(Q^{2}-(m_{q}+m_{u})^{2}\) is large, the expansion in \(k\) is valid. However, when \(Q^{2}\sim(m_{q}+m_{u})^{2}\), this expansion breaks down.
It should be noted that the final states \(X_{uq}\) observed in experiment are baryons, not free quarks. In practice, one has to sum the inclusive states \(X_{uq}\) starting from the lowest baryon state \({\cal B}_{uq}\). For
example, in terms of the \(B_{0}\to X_{ud}\psi\) decay, \({\cal B}_{uq}\) is a proton or neutron. Accordingly, the lower bounds of \(Q^{2}\) should be set as the mass square of the corresponding lowest baryon state, namely \(Q^{2}>m_{{\cal B}_{uq}}^{2}\) or equivalently \(E_{\rm upper}=[m_{b}^{2}+m_{\psi}^{2}-m_{{\cal B}_{uq}}^{2}]/2m_{b}\), and thus the end point singularity is avoided. Here we have omitted the contribution of the spectator quark in \(B\) meson to \(X_{uq}\), because the energy of \(X_{uq}\) is mostly given by the heavy \(\bar{b}\) quark. The lower bounds of \(Q^{2}\) corresponding to various of \(b\to uq\psi\) transitions are listed in Table.1.
Now, integrating \(E\) in the region \(m_{\psi}<E<E_{\rm upper}\), and using the lifetime of \(B_{0}\): \(\tau_{B_{0}}=1.519\times 10^{-12}\) s, we can obtain the branching fractions of \(B_{0}\to X_{uq}\psi\) as functions of \(m_{\psi}\). The branching fractions calculated in the type-I model are shown in Fig. 3 in the unit of \(G_{uq}^{2}\times 10^{10}\). The band width shows the uncertainty coming from the uncertainties of \(\lambda_{1,2}\) and \(m_{b}\). In fact, the
\begin{table}
\begin{tabular}{|c|c c c c|} \hline \hline Decay & \(\bar{b}\to ud\psi\) & \(\bar{b}\to us\psi\) & \(\bar{b}\to cd\psi\) & \(\bar{b}\to cs\psi\) \\ \hline Lowest \(X_{uq}\) & \(p/n\) & \(\Lambda\) & \(\Lambda_{c}\) & \(\Xi_{c}\) \\ \(m_{{\cal B}_{uq}}\)(GeV) [30] & 1.0 & 1.115 & 2.286 & 2.468 \\ \hline \end{tabular}
\end{table}
Table 1: The lower bound of \(Q^{2}\) is set as the mass squared of the lowest baryon state in the summation of \(X_{uq}\): \(Q^{2}=m_{{\cal B}_{uq}}^{2}\). The contribution of the spectator quark in the \(B\) meson to \(X_{uq}\) is omitted since the energy of \(X_{uq}\) is mostly given by the heavy \(\bar{b}\) quark.
results are insensitive to the values of \(\lambda_{1,2}\), and most of the uncertainties come from the \(b\) quark mass since in the type-I model \(A_{1,2}\) are proportional to \(m_{b}^{n}\). The maximum \(m_{\psi}\) is reached when \(m_{\psi}=E_{\rm upper}\). The branching fractions calculated in the type-II model are shown in Fig. 4 in the unit of \(G_{uq}^{2}\times 10^{8}\). It should be mentioned that in the type-II model, the \(A_{1,2}\) are proportional to \(m_{q}\), and thus vanishes in the case of \(B_{0}\to X_{ud}\psi\) and \(B_{0}\to X_{cd}\psi\). In Fig. 4 only the branching fractions of \(B_{0}\to X_{us}\psi\) and \(B_{0}\to X_{cs}\psi\) are presented. The uncertainty mainly comes from \(\lambda_{1,2}\), which is tiny and can be ignored. Since the masses and lifetimes of \(B^{\pm}\) and \(B_{s}\) are similar to those of \(B_{0}\), in this work we only present the branching fractions of \(B_{0}\) decays and the decay branching fractions of \(B^{\pm}\) and \(B_{s}\) are assumed to be the same.
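To make the integration described above concrete, the following sketch evaluates Eq. (7) numerically between \(m_{\psi}\) and \(E_{\rm upper}\). The functions \(A_{1}\) and \(A_{2}\) are placeholders standing for the expressions collected in Appendices B and C, and the conversion \({\cal B}=\Gamma\,\tau_{B_{0}}/\hbar\) is part of this illustration rather than a formula quoted above.

```python
# Illustrative numerical evaluation of Eq. (7) and the E-integration above.
# A1 and A2 are placeholders for the expressions in Appendices B and C;
# masses and widths are in GeV, and BR = Gamma * tau / hbar.
import numpy as np
from scipy.integrate import quad

m_B, m_b = 5.28, 4.47            # GeV, values quoted in the text
tau_B0 = 1.519e-12               # s
hbar = 6.582119569e-25           # GeV * s

def dGamma_dE(E, m_psi, G2, A1, A2):
    """Differential decay width of Eq. (7); G2 stands for G_{(uq)}^2 in GeV^-4."""
    return (G2 / (np.pi * m_B) * np.sqrt(max(E**2 - m_psi**2, 0.0))
            * (A1(m_psi, E) * m_psi**2 + A2(m_psi, E) * m_B * E))

def branching_fraction(m_psi, m_baryon, G2, A1, A2):
    """Integrate from E = m_psi up to E_upper = (m_b^2 + m_psi^2 - m_B_uq^2)/(2 m_b)."""
    E_upper = (m_b**2 + m_psi**2 - m_baryon**2) / (2.0 * m_b)
    Gamma, _ = quad(dGamma_dE, m_psi, E_upper, args=(m_psi, G2, A1, A2))
    return Gamma * tau_B0 / hbar
```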
In Ref. [3], 95% CL constraints on the inclusive \(B\) meson decays into baryons and missing energy are estimated according to the ALEPH analysis [13]. Using these constraints and the branching fractions predicted in Fig. 3 and Fig. 4, we can estimate the upper limits of the coupling constants \(G_{uq}\). The restriction on the range of \(m_{\psi}\) is given by [2] as \(1.5\,{\rm GeV}<m_{\psi}<4.2\,{\rm GeV}\). From Fig. 3 and Fig. 4, it can be found that the branching fraction decreases with increasing \(m_{\psi}\). Therefore, setting \(m_{\psi}\) to its minimum value \(m_{\psi}=1.5\) GeV and using the constraints set by Ref. [3]:
\[{\cal B}(B\to X_{ud}\psi) <3.67\times 10^{-4},\] \[{\cal B}(B\to X_{us}\psi) <7.1\times 10^{-4},\] \[{\cal B}(B\to X_{cd}/X_{cs}\psi) <3.8\times 10^{-3}, \tag{34}\]
one can determine the upper limits of \(G_{uq}\) as
\[{\rm Type\ I:} G_{ud}^{2}<1.8\times 10^{-14}{\rm GeV}^{-4}, G_{us}^{2}<3.6\times 10^{-14}{\rm GeV}^{-4},\] \[G_{cd}^{2}<1.1\times 10^{-12}{\rm GeV}^{-4}, G_{cs}^{2}<1.7\times 10^{-12}{\rm GeV}^{-4};\]
Figure 4: The type-II model branching fractions of \(B_{0}\to X_{us}\psi\) and \(B_{0}\to X_{cs}\psi\) as functions of \(m_{\psi}\) in the unit of \(G_{uq}^{2}\times 10^{8}\). The maximum \(m_{\psi}\) is reached when \(m_{\psi}=E_{\rm upper}\). The branching fractions of \(B_{0}\to X_{ud}\psi\) and \(B_{0}\to X_{cd}\psi\) vanish due to \(m_{u}=m_{d}=0\). The uncertainty mainly comes from \(\lambda_{1,2}\), which is tiny and can be ignored.
\[{\rm Type\ II}:\quad\ G_{us}^{2}<1.0\times 10^{-11}{\rm GeV}^{-4},\quad G_{cs}^{2}<3.7\times 10^{-10}{\rm GeV}^{-4}. \tag{35}\]
Note that the branching fractions of \(B_{0}\to X_{ud}\psi\) and \(B_{0}\to X_{cd}\psi\) vanish in the type-II model due to the chiral limit, thus they cannot be used to constrain \(G_{ud}^{2}\) and \(G_{cd}^{2}\). The branching fractions of \(B_{0}\to X_{us}\psi\) and \(B_{0}\to X_{cs}\psi\) are suppressed by \(m_{s}^{2}\), so they produce larger upper limits for \(G_{uq}^{2}\).
## V Conclusion
In this work, using the recently developed \(B\)-Mesogenesis scenario, we have studied the semi-inclusive decays of \(B\) meson into a dark anti-baryon \(\psi\) plus any possible states \(X\) containing \(u/c\) and \(d/s\) quarks with unit baryon number. The two types of effective Lagrangians proposed by the scenario are both considered in this work. The semi-inclusive decay branching fractions of \(B\to X\psi\) are calculated by the method of heavy quark expansion, where the non-perturbative contributions from the matrix elements of dimension-5 operators are included. We obtained the branching fractions as functions of the dark anti-baryon mass. Using the experimental upper limits of the branching fractions, we provided the upper limits on the coupling constants in the \(B\)-Mesogenesis scenario. In the Type-I model, the upper limits on \(G_{ud}^{2}\) and \(G_{us}^{2}\) are around \(10^{-14}{\rm GeV}^{-4}\), while the upper limits on \(G_{cd}^{2}\) and \(G_{cs}^{2}\) are around \(10^{-12}{\rm GeV}^{-4}\). The upper limits on \(G_{us}^{2}\) and \(G_{cs}^{2}\) in the Type-II model are around \(10^{-11}{\rm GeV}^{-4}\) and \(10^{-10}{\rm GeV}^{-4}\) respectively.
## Acknowledgements
The work of Y.J. Shi is supported by Opening Foundation of Shanghai Key Laboratory of Particle Physics and Cosmology under Grant No.22DZ229013-2. The work of Y. Xing is supported by National Science Foundation of China under Grant No.12005294. The work of Z.P. Xing is supported by China Postdoctoral Science Foundation under Grant No.2022M72210.
## Appendix A Two-body phase space integration
In this work, the rank-0 two-body phase space integration is defined as
\[\int\frac{d^{4}l_{1}}{(2\pi)^{3}}\frac{d^{4}l_{2}}{(2\pi)^{3}} \delta^{4}(P-l_{1}-l_{2})\delta(l_{1}^{2}-m_{1}^{2})\delta(l_{2}^{2}-m_{2}^{2})\] \[= \frac{\pi\sqrt{(P^{2}-(m_{1}+m_{2})^{2})(P^{2}-(m_{1}-m_{2})^{2} )}}{2(2\pi)^{6}P^{2}}. \tag{36}\]
the rank-2 two-body phase space integration is defined as
\[\int\frac{d^{4}l_{1}}{(2\pi)^{3}}\frac{d^{4}l_{2}}{(2\pi)^{3}} \delta^{4}(P-l_{1}-l_{2})\delta(l_{1}^{2}-m_{1}^{2})\delta(l_{2}^{2}-m_{2}^{2} )l_{1}^{\mu}l_{2}^{\nu}\] \[= A_{\rm 2bd}[P^{2},m_{1}^{2},m_{2}^{2}]P^{\mu}P^{\nu}+B_{\rm 2bd}[ P^{2},m_{1}^{2},m_{2}^{2}]P^{2}g^{\mu\nu}, \tag{37}\]
with
\[A_{\rm 2bd} = \frac{\pi\sqrt{(P^{2}-(m_{1}+m_{2})^{2})(P^{2}-(m_{1}-m_{2})^{2})}}{6(2\pi)^{6}(P^{2})^{3}}\left[m_{1}^{4}+m_{1}^{2}(P^{2}-2m_{2}^{2})+(P^{2}-2m_{2}^{2})^{2}\right],\] \[B_{\rm 2bd} = -\frac{\pi\left[(P^{2}-(m_{1}+m_{2})^{2})(P^{2}-(m_{1}-m_{2})^{2})\right]^{3/2}}{24(2\pi)^{6}(P^{2})^{3}}. \tag{38}\]
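For numerical cross-checks, the two scalar functions above can be transcribed directly, for example as in the following sketch (helper names are ours):

```python
# Direct transcription of the scalar functions A_2bd and B_2bd above,
# for numerical cross-checks; inputs are P^2 and the squared masses (GeV^2).
import numpy as np

def _lam(P2, m1sq, m2sq):
    """(P^2-(m1+m2)^2)(P^2-(m1-m2)^2), the factor under the square root above."""
    m1, m2 = np.sqrt(m1sq), np.sqrt(m2sq)
    return (P2 - (m1 + m2)**2) * (P2 - (m1 - m2)**2)

def A_2bd(P2, m1sq, m2sq):
    poly = m1sq**2 + m1sq * (P2 - 2*m2sq) + (P2 - 2*m2sq)**2
    return np.pi * np.sqrt(_lam(P2, m1sq, m2sq)) * poly / (6 * (2*np.pi)**6 * P2**3)

def B_2bd(P2, m1sq, m2sq):
    return -np.pi * _lam(P2, m1sq, m2sq)**1.5 / (24 * (2*np.pi)**6 * P2**3)
```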
## Appendix B Expressions of \(A_{1,2}\) in the type-I model
For convenience we define the following dimensionless variables to simplify the expressions: \(s=m_{q}/m_{b},u=m_{u}/m_{b},q_{s}=Q^{2}/m_{b}^{2}\) and \(\Delta=q_{s}^{2}-2q_{s}(s+u)+(s-u)^{2}\). The expressions of \(A_{1,2}\) in the type-I model read as
\[A_{1}^{k^{0}}= \frac{m_{B}m_{b}\sqrt{\Delta}}{48\pi^{2}q_{s}^{3}}(\epsilon-1) \left(q_{s}^{2}+q_{s}(s-2u)+(s-u)^{2}\right), \tag{11}\] \[A_{2}^{k^{0}}= -\frac{m_{b}^{2}\sqrt{\Delta}}{192\pi^{2}q_{s}^{3}}\left[q_{s}^{2 }(4\epsilon-2(s+u+2))+q_{s}\left(s(4\epsilon-2u-4)+u(-8\epsilon+u+8)+s^{2} \right)\right.\] \[\left.+4(\epsilon-1)(s-u)^{2}+q_{s}^{3}\right];\] \[A_{1}^{k^{1}}= \frac{m_{B}}{96\pi^{2}q_{s}^{4}\sqrt{\Delta}}A\left[2q_{s}^{3}u \left(3\epsilon^{2}-6\epsilon-s+3u+3\right)-q_{s}^{2}\left(-su\left(6\epsilon ^{2}-12\epsilon+7u+6\right)\right.\right.\] \[\left.\left.+2u^{2}\left(9\epsilon^{2}-18\epsilon+2u+9\right)+s^ {3}+2s^{2}u\right)+q_{s}(s-u)^{2}\left(2s\left(3\epsilon^{2}-6\epsilon-u+3 \right)\right.\right.\] \[\left.+u\left(18\epsilon^{2}-36\epsilon+u+18\right)+s^{2}\right) -6(\epsilon-1)^{2}(s-u)^{4}+q_{s}^{5}-q_{s}^{4}(s+4u)\right]\] \[+\frac{m_{B}}{384\pi^{2}m_{b}q_{s}^{4}\sqrt{\Delta}}(Y-Z)\left[-1 2\epsilon^{2}q_{s}^{3}u-12\epsilon^{2}q_{s}^{2}su+36\epsilon^{2}q_{s}^{2}u^{2 }-12\epsilon^{2}q_{s}s^{3}\right.\] \[\left.-12\epsilon^{2}q_{s}s^{2}u+60\epsilon^{2}q_{s}su^{2}-36 \epsilon^{2}q_{s}u^{3}+12\epsilon^{2}s^{4}-48\epsilon^{2}s^{3}u+72\epsilon^{2 }s^{2}u^{2}-48\epsilon^{2}su^{3}\right.\] \[\left.+12\epsilon^{2}u^{4}+24\epsilon q_{s}^{3}u+24\epsilon q_{s} ^{2}su-72\epsilon q_{s}^{2}u^{2}+24\epsilon q_{s}s^{3}+24\epsilon q_{s}s^{2}u -120\epsilon q_{s}su^{2}\right.\] \[\left.+72\epsilon q_{s}u^{3}-24\epsilon s^{4}+96\epsilon s^{3}u- 144\epsilon s^{2}u^{2}+96\epsilon su^{3}-24\epsilon u^{4}+7q_{s}^{5}-7q_{s}^ {4}s\right.\] \[\left.-19q_{s}^{4}u+3q_{s}^{3}s^{2}-2q_{s}^{3}su+15q_{s}^{3}u^{2} -12q_{s}^{3}u-q_{s}^{2}s^{3}+q_{s}^{2}s^{2}u+q_{s}^{2}su^{2}-12q_{s}^{2}su\right.\] \[\left.-q_{s}^{2}u^{3}+36q_{s}^{2}u^{2}-2q_{s}s^{4}+8q_{s}s^{3}u-1 2q_{s}s^{3}-12q_{s}s^{2}u^{2}-12q_{s}s^{2}u+8q_{s}su^{3}\right.\] \[\left.+60q_{s}su^{2}-2q_{s}u^{4}-36q_{s}u^{3}+12s^{4}-48s^{3}u+72 s^{2}u^{2}-48su^{3}+12u^{4}\right],\] \[A_{2}^{k^{1}}= \frac{m_{b}}{192\pi^{2}q_{s}^{4}\sqrt{\Delta}}A\left[q_{s}^{3} \left(3u\left(-4\epsilon^{2}+\epsilon(5u+8)-9u-4\right)+3(\epsilon-1)s^{2}-2 (\epsilon-3)su\right)\right.\] \[\left.+q_{s}^{2}\left(su\left(-12\epsilon^{2}+19\epsilon u+24 \epsilon-33u-12\right)+u^{2}\left(36\epsilon^{2}-\epsilon(13u+72)+21u+36 \right)\right.\right.\] \[\left.+(9-7\epsilon)s^{3}+(\epsilon+3)s^{2}u\right)+2q_{s}(s-u)^ {2}\left(-2s\left(3\epsilon^{2}+2\epsilon(u-3)-3u+3\right)\right.\] \[\left.+u\left(-18\epsilon^{2}+2\epsilon(u+18)-3(u+6)\right)+(2 \epsilon-3)s^{2}\right)\] \[\left.+(\epsilon-3)q_{s}^{5}+q_{s}^{4}((15-7\epsilon)u-(\epsilon-3 )s)+12(\epsilon-1)^{2}(s-u)^{4}\right]\] \[+\frac{1}{384\pi^{2}q_{s}^{4}\sqrt{\Delta}}(Y-Z)\left[12\epsilon^{2 }q_{s}^{3}u+12\epsilon^{2}q_{s}^{2}su-36\epsilon^{2}q_{s}^{2}u^{2}+12\epsilon^{2 }q_{s}s^{3}+12\epsilon^{2}q_{s}s^{2}u\right.\] \[\left.-60\epsilon^{2}q_{s}su^{2}+36\epsilon^{2}q_{s}u^{3}-12 \epsilon^{2}s^{4}+48\epsilon^{2}s^{3}u-72\epsilon^{2}s^{2}u^{2}+48\epsilon^{2}su ^{3}-12\epsilon^{2}u^{4}\right.\]
\[-\epsilon q_{s}^{5}+\epsilon q_{s}^{4}s+7\epsilon q_{s}^{4}u-3 \epsilon q_{s}^{3}s^{2}+2\epsilon q_{s}^{3}su-15\epsilon q_{s}^{3}u^{2}-24\epsilon q _{s}^{3}u+7\epsilon q_{s}^{2}s^{3}-\epsilon q_{s}^{2}s^{2}u\] \[-19\epsilon q_{s}^{2}su^{2}-24\epsilon q_{s}^{2}su+13\epsilon q_{s }^{2}u^{3}+72\epsilon q_{s}^{2}u^{2}-4\epsilon q_{s}s^{4}+16\epsilon q_{s}s^{3 }u-24\epsilon q_{s}s^{3}\] \[-24\epsilon q_{s}s^{2}u^{2}-24\epsilon q_{s}s^{2}u+16\epsilon q_{s }su^{3}+120\epsilon q_{s}su^{2}-4\epsilon q_{s}u^{4}-72\epsilon q_{s}u^{3}+24 \epsilon s^{4}\] \[-96\epsilon s^{3}u+144\epsilon s^{2}u^{2}-96\epsilon su^{3}+24 \epsilon u^{4}-6q_{s}^{5}+6q_{s}^{4}s+12q_{s}^{4}u+12q_{s}^{3}u-6q_{s}^{2}s^{3}\] \[+18q_{s}^{2}su^{2}+12q_{s}^{2}su-12q_{s}^{2}u^{3}-36q_{s}^{2}u^{2} +6q_{s}s^{4}-24q_{s}s^{3}u+12q_{s}s^{3}+36q_{s}s^{2}u^{2}\] \[+12q_{s}s^{2}u-24q_{s}su^{3}-60q_{s}su^{2}+6q_{s}u^{4}+36q_{s}u^{ 3}-12s^{4}+48s^{3}u-72s^{2}u^{2}\] \[+48su^{3}-12u^{4}]\;; \tag{10}\] \[A_{1}^{k^{2}}= \frac{m_{B}}{32\pi^{2}m_{b}q_{s}^{5}\Delta^{3/2}}Y(1-\epsilon) \left[q_{s}^{5}u\left(4\epsilon^{2}-8\epsilon-5s-u+4\right)\right.\] \[\left.-3q_{s}^{4}\left(2u^{2}\left(4\epsilon^{2}-8\epsilon+u+4 \right)+s^{3}-3su^{2}\right)+q_{s}^{3}\left(s^{3}\left(8\epsilon^{2}-16 \epsilon-11u+8\right)\right.\right.\] \[\left.\left.-3su^{2}\left(8\epsilon^{2}-16\epsilon+7u+8\right)+14u ^{3}\left(4\epsilon^{2}-8\epsilon+u+4\right)+9s^{4}+9s^{2}u^{2}\right)\right.\] \[\left.-q_{s}^{2}(s-u)^{2}\left(s^{2}\left(24\epsilon^{2}-48 \epsilon-7u+24\right)+su\left(32\epsilon^{2}-64\epsilon-13u+32\right)\right.\right.\] \[\left.+u^{2}\left(64\epsilon^{2}-128\epsilon+11u+64\right)+9s^{3} \right)+3q_{s}(s-u)^{4}\left(2s\left(4\epsilon^{2}-8\epsilon-u+4\right)\right.\] \[\left.+u\left(12\epsilon^{2}-24\epsilon+u+12\right)+s^{2}\right)- 8(\epsilon-1)^{2}(s-u)^{6}+q_{s}^{6}u\right],\] (11) \[A_{2}^{k^{2}}= \frac{1}{128\pi^{2}q_{s}^{5}\Delta^{3/2}}Y\left[4q_{s}^{6}\left( u\left(2\epsilon^{2}-3\epsilon+u+1\right)+s^{2}\right)\right.\] \[\left.-12(\epsilon-1)q_{s}(s-u)^{4}\left(-2s\left(4\epsilon^{2}+ \epsilon(u-8)-2u+4\right)\right.\right.\] \[\left.\left.+u\left(-12\epsilon^{2}+\epsilon(u+24)-2(u+6)\right)+( \epsilon-2)s^{2}\right)-2q_{s}^{5}\left(s^{2}\left(2\epsilon^{2}-4\epsilon-3u +2\right)\right.\right.\] \[\left.+su\left(4\epsilon^{2}+2\epsilon-3(u+2)\right)+u\left(-8 \epsilon^{3}+\epsilon^{2}(22u+24)-6\epsilon(7u+4)+3u^{2}+20u+8\right)+3s^{3} \right)\] \[+3q_{s}^{4}\left(4s^{3}\left(2\epsilon^{2}-5\epsilon-u+3\right)- 4su^{2}\left(2\epsilon^{2}-7\epsilon+u+5\right)\right.\] \[\left.+u^{2}\left(-32\epsilon^{3}+32\epsilon^{2}(u+3)-24\epsilon(3 u+4)+3u^{2}+40u+32\right)+3s^{4}+2s^{2}u^{2}\right)\] \[-q_{s}^{3}\left(3s^{4}\left(16\epsilon^{2}-44\epsilon-7u+28\right) +2s^{2}u^{2}\left(12\epsilon^{2}-42\epsilon+7u+30\right)\right.\] \[\left.-2s^{3}\left(16\epsilon^{3}+16\epsilon^{2}(u-3)-6\epsilon(9 u-8)-7u^{2}+38u-16\right)\right.\] \[\left.-3su^{2}\left(-32\epsilon^{3}+48\epsilon^{2}(u+2)-4 \epsilon(31u+24)+7u^{2}+76u+32\right)\right.\] \[\left.+u^{3}\left(-224\epsilon^{3}+8\epsilon^{2}(13u+84)-24 \epsilon(11u+28)+7u^{2}+160u+224\right)+7s^{5}\right)\] \[+2q_{s}^{2}(s-u)^{2}\left(s^{3}\left(20\epsilon^{2}-58\epsilon-4u +38\right)\right.\] \[\left.-2s^{2}\left(24\epsilon^{3}+6\epsilon^{2}(u-12)+\epsilon(7 2-19u)-3u^{2}+13u-24\right)\right.\] \[\left.-2su\left(32\epsilon^{3}+6\epsilon^{2}(3u-16)+\epsilon(96-49 u)+2u^{2}+31u-32\right)\right.\] \[\left.+u^{2}\left(-128\epsilon^{3}+4\epsilon^{2}(7u+96)-6\epsilon( 13u+64)+u^{2}+50u+128\right)+s^{4}\right)\] 
\[-32(\epsilon-1)^{3}(s-u)^{6}+q_{s}^{8}-3q_{s}^{7}(s+u)\right];\] (12) \[A_{1}^{ug}= -\frac{3m_{B}}{64\pi^{2}m_{b}q_{s}^{3}\sqrt{\Delta}}N(1-\epsilon)(q _{s}+s-u)\left[q_{s}^{2}-q_{s}(s+2u)+(s-u)^{2}\right],\] (13) \[A_{2}^{ug}= -\frac{3}{256\pi^{2}q_{s}^{3}\sqrt{\Delta}}N(q_{s}+s-u)\left[4 \epsilon q_{s}^{2}-4\epsilon q_{s}s-8\epsilon q_{s}u+4\epsilon s^{2}-8\epsilon su +4\epsilon u^{2}+q_{s}^{3}\right.\]
\[-2q_{s}^{2}s-2q_{s}^{2}u-4q_{s}^{2}+q_{s}s^{2}-2q_{s}su+4q_{s}s+q_{s }u^{2}+8q_{s}u-4s^{2}+8su-4u^{2}\right]; \tag{111}\] \[A_{1}^{qg}= \frac{m_{B}}{32\pi^{2}m_{b}q_{s}^{3}\sqrt{\Delta}}N(1-\epsilon)(q_ {s}-s+u)\left[q_{s}^{2}-q_{s}(2s+u)+(s-u)^{2}\right],\] (112) \[A_{2}^{qg}= \frac{1}{128\pi^{2}q_{s}^{3}\sqrt{\Delta}}N(q_{s}-s+u)\left[4 \epsilon q_{s}^{2}-8\epsilon q_{s}s-4\epsilon q_{s}u+4\epsilon s^{2}-8\epsilon su +4\epsilon u^{2}+q_{s}^{3}\right.\] \[\left.-2q_{s}^{2}s+2q_{s}^{2}u-4q_{s}^{2}+q_{s}s^{2}-2q_{s}su+8q _{s}s+q_{s}u^{2}+4q_{s}u-4s^{2}+8su-4u^{2}\right].\]
Here the superscripts \(ug\) and \(qg\) denote the cases of gluon emission from the \(u\) and \(q\) quarks, respectively.
## Appendix C Expressions of \(A_{1,2}\) in the type-II model
For convenience we define the following dimensionless variables to simplify the expressions: \(s=m_{q}/m_{b},u=m_{u}/m_{b},q_{s}=Q^{2}/m_{b}^{2}\) and \(\Delta=q_{s}^{2}-2q_{s}(s+u)+(s-u)^{2}\). The expressions of \(A_{1,2}\) in the type-II model read as
\[A_{1}^{k^{0}}= 0, \tag{113}\] \[A_{2}^{k^{0}}= \frac{m_{b}^{2}s\sqrt{\Delta}}{16\pi^{2}q_{s}};\] (114) \[A_{1}^{k^{1}}= \frac{m_{B}}{32\pi^{2}m_{b}q_{s}^{2}\sqrt{\Delta}}(Y-Z)\left[q_{ s}(s+u)-(s-u)^{2}\right],\] (115) \[A_{2}^{k^{1}}= \frac{1}{32\pi^{2}q_{s}^{2}\sqrt{\Delta}}\left[2(\epsilon-1)m_{b} A+\epsilon(Z-Y)\right]\left[q_{s}(s+u)-(s-u)^{2}\right];\] (116) \[A_{1}^{k^{2}}= 0,\] (117) \[A_{2}^{k^{2}}= \frac{s}{32\pi^{2}q_{s}^{3}\Delta^{3/2}}Y\left[-4\epsilon^{2}q_{ s}^{3}s-4\epsilon^{2}q_{s}^{3}u+12\epsilon^{2}q_{s}^{2}s^{2}+12\epsilon^{2}q_{s}^{2 }u^{2}-12\epsilon^{2}q_{s}s^{3}+12\epsilon^{2}q_{s}s^{2}u\right.\] \[\left.+12\epsilon^{2}q_{s}su^{2}-12\epsilon^{2}q_{s}u^{3}+4 \epsilon^{2}s^{4}-16\epsilon^{2}s^{3}u+24\epsilon^{2}s^{2}u^{2}-16\epsilon^{2 }su^{3}+4\epsilon^{2}u^{4}\right.\] \[\left.+8\epsilon q_{s}^{3}s+8\epsilon q_{s}^{3}u-24\epsilon q_{s} ^{2}s^{2}-24\epsilon q_{s}^{2}u^{2}+24\epsilon q_{s}s^{3}-24\epsilon q_{s}s^{2 }u-24\epsilon q_{s}su^{2}+24\epsilon q_{s}u^{3}\right.\] \[\left.-8\epsilon s^{4}+32\epsilon s^{3}u-48\epsilon s^{2}u^{2}+32 \epsilon su^{3}-8\epsilon u^{4}+q_{s}^{4}s+q_{s}^{4}u-3q_{s}^{3}s^{2}+6q_{s}^{ 3}su-4q_{s}^{3}s\right.\] \[\left.-3q_{s}^{3}u^{2}-4q_{s}^{3}u+3q_{s}^{2}s^{3}-3q_{s}^{2}s^{2 }u+12q_{s}^{2}s^{2}-3q_{s}^{2}su^{2}+3q_{s}^{2}u^{3}+12q_{s}^{2}u^{2}-q_{s}s^{4}\right.\] \[\left.+4q_{s}s^{3}u-12q_{s}s^{3}-6q_{s}s^{2}u^{2}+12q_{s}s^{2}u+4 q_{s}su^{3}+12q_{s}su^{2}-q_{s}u^{4}-12q_{s}u^{3}+4s^{4}\right.\] \[\left.-16s^{3}u+24s^{2}u^{2}-16su^{3}+4u^{4}\right], \tag{118}\]
and \(A_{1,2}^{ug}=A_{1,2}^{qg}=0\) in the chiral limit.
|
2310.02927 | Joint Network Lifetime Maximization and Relay Selection Design in
Underwater Acoustic Sensor Networks | The paper proposes a new approach to minimize the number of relays while
maximizing the lifetime of underwater acoustic sensor networks (UASNs). This
involves formulating the relay node placement (RNP) problem as a
multi-objective optimization problem and employing the multi-objective
lexicographic method (MOLM) to solve it. To achieve the optimal solution, the
MOLM consists of two steps. First, the problem of lifetime maximization is
tackled to find RNP solutions. This transforms the RNP into a non-convex
optimization problem which is then converted into a convex programming
equivalent. The proposed method has the same computational complexity as
previous relay-node adjustment (RA) and difference convex algorithm (DCA)
methods. The second step introduces a novel relay node selection to reach the
optimal number of relays. Simulation results demonstrate that it has superior
network lifetime and efficiency compared to RA and DCA. | Z. Mohammadi, M. Soleimanpour-Moghadam, S. Talebi, H. Ahmadi | 2023-10-04T16:06:18Z | http://arxiv.org/abs/2310.02927v1 | Joint Network Lifetime Maximization and Relay Selection Design in Underwater Acoustic Sensor Networks
###### Abstract
The paper proposes a new approach to minimize the number of relays while maximizing the lifetime of underwater acoustic sensor networks (UASNs). This involves formulating the relay node placement (RNP) problem as a multi-objective optimization problem and employing the multi-objective lexicographic method (MOLM) to solve it. To achieve the optimal solution, the MOLM consists of two steps. First, the problem of lifetime maximization is tackled to find RNP solutions. This transforms the RNP into a non-convex optimization problem which is then converted into a convex programming equivalent. The proposed method has the same computational complexity as previous relay-node adjustment (RA) and difference convex algorithm (DCA) methods. The second step introduces a novel relay node selection to reach the optimal number of relays. Simulation results demonstrate that it has superior network lifetime and efficiency compared to RA and DCA.
Underwater sensor node, relay node, critical node, network lifetime, convex optimization, energy hole.
## I Introduction
Underwater acoustic sensor networks (UASNs) have attracted a great deal of attention for various applications including off-shore oil and gas extraction, oil spills, and natural calamities like tsunami and hurricane forecasts [1]. These networks are composed of multiple nodes that use acoustic transceivers, which are more suitable for underwater environments than electromagnetic ones as acoustic signals experience less attenuation in the water. Each node collects data from its surroundings and transmits it to a surface buoy (SB). Unfortunately, because of the sparse deployment of UASNs, energy consumption is high, particularly for the nodes closest to the SB that send a large amount of data. This can lead to an _energy hole_, where the nodes closest to the SB die before the other nodes, making it impossible to forward the remaining data to the SB [2]. To prolong the lifetime of the network and avoid the energy hole, much research has been conducted to reduce energy consumption in recent years.
### _Literature review_
The work in [3] studied cluster-based data forwarding to deal with energy efficiency in UASNs. In this approach, the sensor nodes are grouped into clusters, and the cluster heads (CHs) are responsible for gathering data. The residual energy and location of sensor nodes are taken into account to select the optimal CHs and prevent the energy hole problem. Using an autonomous underwater vehicle (AUV) to collect data from sensor nodes is another solution to reduce energy consumption in UASNs. For instance, [4, 5] demonstrate that by employing an AUV, data can be collected from gateways and the gateways can be rotated over time to balance energy consumption. However, the transmission delays caused by the AUV are very long, making it difficult to use in time-sensitive applications such as temperature and salinity evaluations for red tide forecasts [6]. Power control schemes can also be used to adjust the power level of sensors during communication depending on the channel status and network conditions [7, 8]. Additionally, sleeping schemes can be implemented for sensor nodes [9, 10] to save energy by keeping them in sleep mode for as long as possible. However, these techniques face new challenges when combined with emerging technologies such as energy harvesting. To this end, [11, 12] present methods for optimizing the time allotted to energy harvesting and prolonging the lifetime of the network, although energy harvesting-based networks struggle with the unpredictability of the harvested energy. There has also been research on designing optimal routing protocols. For example, routing protocols can balance energy consumption among nodes by assigning more energy to nodes that have higher traffic loads [13], using residual energy as the routing criterion [14], and splitting the transmission range into different power levels [15].
The high costs and intricate characteristics of underwater networks often result in their sparse deployment, leading to increased energy consumption. To overcome this limitation and extend the network's lifetime, some authors propose the utilization of micro-relay nodes. These relay nodes have the same size and power supply as the sensor nodes. One area of focus to enhance the network's lifetime is the strategic placement of these relay nodes, known as relay node placement (RNP). This aspect has garnered significant attention among researchers. For example, the study by Das et al. [16] proposed the utilization of relay nodes that dynamically move between communicating nodes, acting as intermediaries. This strategy aims to decrease the communication distance
between nodes, thereby improving overall energy efficiency. The authors in [17] proposed a heuristic relay-node adjustment (RA) scheme for positioning relay nodes in 3-dimensional UASNs. This scheme consists of two steps: initially randomizing the relay nodes on the water surface and then adjusting their depth. However, the heuristic approach employed in [17] may yield suboptimal solutions due to its reliance on simplified rules and assumptions. In contrast, our mathematical solutions, as presented in [18, 19], effectively overcome these limitations, resulting in improved accuracy and efficiency. Specifically, [18] introduced the concept of line-segment relay node placement (LSRNP), which entails positioning a relay node between a critical node and its farthest neighbor. However, it was demonstrated that LSRNP poses challenges as a complex, non-convex problem, making practical implementation difficult. Building upon this concept, [19] explored the joint deployment of relay and sensor nodes, employing a difference convex algorithm (DCA) to derive a low-complexity solution. In contrast to previous relay node placement schemes, which often imposed simplified restrictions on the feasible space of relay nodes (i.e., a line between the critical node and its farthest neighbor), our approach overcomes this limitation by searching for the optimal position of the relay nodes, ensuring the best possible arrangement.
### _Contribution_
The above-mentioned RNP solutions contribute to extending the network lifetime but still fall short of the optimal lifetime. In this paper we introduce a framework that addresses the inherent shortcomings of the LSRNP. By exploring a broader range of potential relay node positions and leveraging advanced optimization techniques, we aim to significantly improve the accuracy and optimality of RNP. Additionally, the number of relay nodes deployed in the RNP is an important factor that is not considered in previous works. The research in [20] investigated the effect of the number of relay nodes on the LSRNP and found that redundant relay nodes do not increase the network lifetime, while too many can even decrease network performance due to the extra time and energy spent on packet forwarding and data reception. Motivated by these observations, we propose a framework that solves the RNP efficiently without placing redundant relay nodes in the network. To the authors' best knowledge, while RNP is becoming a crucial topic in UASNs, this is one of the first works in which network lifetime maximization is combined with minimization of the number of relay nodes. Overall, the paper's core contributions are described in the following.
1. We formulate the joint problem of maximizing network lifetime and minimizing the number of relay nodes (NLMA-RNMI) in relay-assisted UASNs as a multi-objective optimization problem (MOOP). We employ the multi-objective lexicographic method (MOLM) to achieve Pareto optimality for the given NLMA-RNMI MOOP. Based on the MOLM, the objectives are prioritized in order of importance, with network lifetime given the highest priority. Thus, our first aim is to maximize the network lifetime and subsequently minimize the number of relay nodes to prevent resource wastage.
2. To ensure that the network lifetime is maximized, we propose the Optimal Relay Node Setting (ORNS) algorithm, which models RNP as a mathematical optimization problem. This problem considers two criteria in the maximization process: balancing the energy consumption among sensor nodes and preventing the relay nodes from being placed in outlying positions. We have taken this into account in our problem formulation, leading to a MOOP RNP. To solve the proposed MOOP RNP, we use the \(\epsilon\)-constraint approach. By employing this approach, we prioritize the network lifetime of the critical node as the objective function to address the energy hole in the network. Additionally, we incorporate the relay node's lifetime as a constraint to ensure an improvement in the overall network lifetime. We demonstrate that the resulting RNP is non-convex and introduce a transformation to convert it into a convex optimization problem.
3. We then formulate a mixed-integer convex programming model to obtain the optimal number of relay nodes. To ensure a strong coupling relationship between network lifetime and the number of relay nodes, we consider the optimal value of network lifetime as a constraint in the problem of minimizing relay nodes. This ensures direct communication without the assistance of intermediate relay nodes when the energy consumption is low. Therefore, this approach is referred to as the relay selection design.
To better highlight the contribution of this paper, Table I presents a comparison of this work with different works in the literature. We evaluate the time complexity of our RNP method, as well as the DCA and RA approaches. The results indicate that these approaches exhibit the same order of complexity. Additionally, we conduct several comprehensive simulations to assess their performance. Through these simulations, we compare the effectiveness of our RNP approach with the others. The outcomes demonstrate that our approach outperforms the alternative methods in terms of network lifetime.
The rest of the paper is organized as follows: In section II, the system model and problem definition are described. Preliminaries on the MOOP, the MOLM, convex optimization problems, and difference convex functions used in this work are given in section III. The proposed RNP method, its equivalent convex programming scheme, and the relay node selection scheme are presented in detail in section IV. In section V, the complexity of the proposed RNP is evaluated. Simulation and comparison results are provided in section VI and, finally, the paper concludes in section VII, recapping its contribution.
Lightface letters denote scalars. Boldface lowercase letters denote vectors and boldface uppercase letters denote matrices. The operations \(\mathrm{E}(.)\), \((.)^{\mathrm{T}}\) and \(\|.\|_{p}\) denote the expectation operator, transpose, and p-norm, respectively. \([\mathbf{A}]_{n*}\) and \(\mathbf{a}[n]\) stand for the \(n\)-th row and \(n\)-th column of matrix \(\mathbf{A}\), respectively, and to denote the \((m,n)\) entry of matrix \(\mathbf{A}\) we use \([\mathbf{A}]_{mn}\). The symbol \(\mathbf{1}\) refers to a vector with all elements equal to one, while \(\mathbf{e}_{i}\) denotes a unit vector with element \(i\) equal to one and zeros everywhere else. We also denote the size, i.e., cardinality, of a set \(S\) by \(|S|\). \(\mathrm{dom}\) denotes the domain of a function, so \(\mathrm{dom}(g)\) is the set of values on which the function \(g\) is defined.
## II System model and problem formulation
Consider a 3-dimensional multi-hop UASN, as depicted in Fig. 1, which consists of \(\mathrm{N}\) sensor nodes and \(\mathrm{M}\) relay nodes deployed in a designated search field to gather data about the environment. The sensor nodes are constrained by limited battery power and their positions are randomly determined. The relay nodes, on the other hand, are strategically placed to maximize the network lifetime, but are not capable of sensing information from the environment. Additionally, the SB is situated on the ocean surface to communicate with a satellite that forwards the data gathered by the sensor nodes to the onshore sink. The SB is responsible for making RNP decisions in the field, and all the sensor nodes are required to communicate with it in order to forward parameters. This scheme is effective in maximizing the network lifetime while minimizing the number of relay nodes. It should be noted that the origin of the Cartesian coordinate system is located at the SB. In the proposed scheme, all nodes have the communication range \(C_{R}\), so any two nodes outside this range cannot communicate with each other. The rate array \(\mathbf{R}\) is given as
\[\mathbf{R}=\left[\mathbf{R}_{1*}^{T},\ldots,\mathbf{R}_{i*}^{T},\ldots, \mathbf{R}_{|\mathcal{N}|*}^{T}\right]^{T}, \tag{1}\]
where \(\mathcal{N}\) is the set of all nodes, including sensor nodes (\(\mathcal{S}\)), relay nodes (\(\mathcal{R}\)), and the SB, and \(\mathbf{R}_{i*}\in\mathbb{Z}_{+}^{|\mathcal{N}|\times 1}\) is the outgoing flow vector from node \(i\) to other nodes. Additionally, \(\mathbf{r}\left[i\right]\in\mathbb{Z}_{+}^{|\mathcal{N}|\times 1}\) is the incoming flow vector from other nodes to node \(i\), for \(i=1,\ldots,|\mathcal{N}|\). In order to ensure that the flow rate array meets certain specifications, there are several conditions that must be taken into account.
1. At each sensor node \(k\), the sum of outgoing flow rates must be equal to the sum of incoming flow rates and the generation rate, given as \[\mathbf{R}_{k*}\mathbf{1}=\mathbf{1}^{T}\mathbf{r}\left[k\right]+g_{k},\] (2) where \(g_{k}\) is the generation rate of sensor node \(k\). Additionally, at each relay node, the sum of outgoing flow rates should be equal to the sum of incoming flow rates, \[\mathbf{R}_{i*}\mathbf{1}=\mathbf{1}^{T}\mathbf{r}\left[i\right].\] (3)
2. Underwater acoustic communication faces challenges such as limited bandwidth, long propagation delays, multipath fading, and high signal attenuation. These factors affect the achievable data rate and overall link capacity in underwater acoustic sensor networks. In the context of UASNs, the node's link capacity (\(L_{c}\)) refers to the maximum data rate or throughput that can be achieved over a communication link between two nodes in the network [21]. Therefore, the sum of outgoing flow rates of each node must be less than the node's link capacity (\(L_{c}\)), \[\mathbf{R}_{n*}\mathbf{1}\leq L_{c},\] (4) for each node \(n\in\mathcal{R}\cup\mathcal{S}\); a small sketch of how conditions (2)-(4) can be checked is given below.
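For concreteness, a minimal Python sketch of checking conditions (2)-(4) for a toy rate array is given below; the node indices, rates, and routing used here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Toy example: 3 sensor nodes (0-2), 1 relay node (3), surface buoy (4).
R = np.zeros((5, 5))            # R[i, j] = flow rate (bit/s) from node i to node j
g = np.array([50., 80., 120.])  # assumed generation rates of the sensor nodes
L_c = 10_000                    # assumed link capacity (bit/s)

# Assumed routing: sensors 0 and 1 send to the relay, sensor 2 and the relay send to the buoy.
R[0, 3], R[1, 3] = 50., 80.
R[2, 4] = 120.
R[3, 4] = 130.

# Condition (2): at each sensor k, outgoing flow = incoming flow + generation rate.
for k in range(3):
    assert np.isclose(R[k, :].sum(), R[:, k].sum() + g[k])

# Condition (3): at each relay, outgoing flow = incoming flow.
assert np.isclose(R[3, :].sum(), R[:, 3].sum())

# Condition (4): outgoing flow of every sensor/relay node is bounded by the link capacity.
assert np.all(R[:4, :].sum(axis=1) <= L_c)
```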
### _Energy consumption model_
Based on the Urick model [22], the energy consumption of sending one bit of data from one node, \(i\), to another node, \(j\) is expressed as [23, 17]
\[p_{ij}=\begin{cases}p_{s}+\epsilon_{fs}d_{ij}^{2}&d_{ij}<d_{t}\\ p_{s}+\epsilon_{mp}d_{ij}^{4}&d_{ij}\geq d_{t}\end{cases}, \tag{5}\]
where \(d_{ij}\) is the 3-dimensional Euclidean distance between nodes \(i\) and \(j\), given by \(\left\|\mathbf{l}_{i}-\mathbf{l}_{j}\right\|\), with \(\mathbf{l}_{i}\) being the position vector of node \(i\). Additionally, \(d_{t}\) is a threshold distance to transmit data; \(p_{s}\) is the power consumption for processing in sending data; \(\epsilon_{fs}\) and \(\epsilon_{mp}\) represent the transmit amplifier
Fig. 1: Network model with 3-dimensional Cartesian coordinates
\begin{table}
\begin{tabular}{l l l l} \hline Ref. & Methods Adoption & Problem Description & Performance Metrics \\ \hline
[3] & Cluster-based data gathering & CH selection & Network lifetime \\
[4, 5] & AUV-based data gathering & Path optimization of AUV & Collection delay \\
[7, 8] & Adjusting the power level of sensors & Power allocation & Achievable throughput \\
[9, 10] & Changing the network topology & Link scheduling optimization & Energy consumption \\
[11, 12] & Energy harvesting & Optimizing time to harvest energy & Network lifetime \\
[14] & Designing routing protocol & Optimizing the routing criteria & Balanced energy consumption \\
[18, 19] & Line-segment RNP & Optimizing the position of relay nodes & Network lifetime \\ This work & Multi-objective RNP & Network lifetime maximization and the number of relay nodes & Network lifetime and number of relay nodes \\ \hline \end{tabular}
\end{table} TABLE I: Comparisons between related studies on energy management in UASNs
coefficients of the free-space and multipath models, respectively. If \(d_{ij}<d_{t}\), the amplifier coefficient of the free-space model \(\epsilon_{fs}\) is adopted. Otherwise, the amplifier coefficient of the multipath model \(\epsilon_{mp}\) is adopted. In UASNs, for both the free-space and multipath models, the amplifier coefficient is defined as \(\alpha(f)^{d_{ij}}\) [23, 17], where \(\alpha(f)\) is the absorption coefficient derived from Thorp's formula [24] as \(10\mathrm{log}\,\alpha(f)=0.1\frac{f^{2}}{1+f^{2}}+\frac{40f^{2}}{4100+f^{2}}+2.75\times 10^{-4}f^{2}+0.003\) for acoustic frequencies above a few hundred Hertz, with \(f\) the frequency of the acoustic signal. Therefore, the expression (5) is rewritten as:
\[p_{ij}=\begin{cases}p_{s}+\alpha(f)^{d_{ij}}d_{ij}^{2}&d_{ij}<d_{t}\\ p_{s}+\alpha(f)^{d_{ij}}d_{ij}^{4}&d_{ij}\geq d_{t}\end{cases}, \tag{6}\]
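As an illustration, the per-bit cost in (6), together with Thorp's formula for \(\alpha(f)\), can be transcribed directly as follows. The sketch assumes \(f\) in kHz and uses \(p_{s}\) and \(d_{t}\) from Table IV; it is an illustrative transcription rather than the authors' code.

```python
import numpy as np

def thorp_alpha(f_khz):
    """Absorption coefficient alpha(f) from Thorp's formula (f assumed in kHz)."""
    f2 = f_khz ** 2
    alpha_db = 0.1 * f2 / (1 + f2) + 40 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003
    return 10 ** (alpha_db / 10)                  # since 10*log(alpha(f)) = alpha_db

def p_ij(d_ij, f_khz=1.0, p_s=1e-3, d_t=87.0):
    """Per-bit transmit energy from node i to node j according to eq. (6)."""
    exponent = 2 if d_ij < d_t else 4             # free-space vs. multipath spreading
    return p_s + thorp_alpha(f_khz) ** d_ij * d_ij ** exponent
```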
The lifetime of an underwater acoustic sensor network (UASN) is defined as the duration until the death of the first node, as effective communication is only possible until then [25, 26]. The lifetime of a node, \(i\), is expressed as the ratio of its residual energy, \(\epsilon_{i}\), to its total energy consumption [17]:
\[\tau_{i}=\frac{\epsilon_{i}}{\sum_{j\in\mathcal{N}}p_{ij}[\mathbf{R}]_{ij}+p_ {r}\sum_{k\in\mathcal{S}\cup\mathcal{R}}^{k\neq i}[\mathbf{R}]_{ki}}, \tag{7}\]
where \(p_{r}\) is the energy consumption for receiving one bit, and \([\mathbf{R}]_{ij}\) is the outgoing flow from node \(i\) to node \(j\), and \([\mathbf{R}]_{ki}\) is the incoming flow from node \(k\) to node \(i\).
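Using the `p_ij` helper sketched above, the node lifetime in (7) can be evaluated directly. The snippet below is a simplified illustration that assumes node positions are given as 3-D coordinates and that the last index corresponds to the SB, which does not transmit data to the other nodes.

```python
import numpy as np

def node_lifetime(i, R, pos, energy, p_r=1e-3):
    """Lifetime of node i according to eq. (7): residual energy over total consumption."""
    n = R.shape[0]
    d = lambda a, b: np.linalg.norm(pos[a] - pos[b])
    tx = sum(p_ij(d(i, j)) * R[i, j] for j in range(n) if j != i)   # transmit energy
    rx = p_r * sum(R[k, i] for k in range(n - 1) if k != i)         # receive energy (SB excluded)
    return energy[i] / (tx + rx)
```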
The following assumptions are made in the investigation: (1) All sensor and relay nodes have the same communication range; (2) the movement of sensors is predictable, and the position of the nodes is known through the localization process; (3) Link capacity (\(L_{c}\)) is considered constant and equal for all sensor and relay nodes [17]; and (4) Before placing relay nodes, the network is connected, meaning each sensor node has a route to the SB.
### _Problem formulation_
Our goal is to maximize the UASN lifetime while jointly minimizing the number of required relay nodes. The multi-objective optimization problem is formulated as below:
\[\{\min\mathrm{M},\max\{\min_{i\in\mathcal{S}\cup\mathcal{R}}\tau_{i}\}\},\] (8a) s.t. \[\tau_{i}=\frac{\epsilon_{i}}{\sum_{j\in\mathcal{N}}^{i\neq j}p_{ij}[\mathbf{R}]_{ij}+p_{r}\sum_{k\in\mathcal{S}\cup\mathcal{R}}^{k\neq i}[\mathbf{R}]_{ki}},i\in\mathcal{S}\cup\mathcal{R} \tag{8b}\] \[|\mathcal{R}|=\mathrm{M}\] (8c) \[p_{ij}=\begin{cases}p_{s}+\alpha^{d_{ij}}d_{ij}^{2}&d_{ij}<d_{t}\\ p_{s}+\alpha^{d_{ij}}d_{ij}^{4}&d_{ij}\geq d_{t}\end{cases},\] (8d) \[d_{ij}^{2}=\left\lVert\mathbf{l}_{i}-\mathbf{l}_{j}\right\rVert^{2},\forall i,j\in\mathcal{N},\] (8e) \[\mathbf{l}_{r_{i}}\in X_{c},\forall r_{i}\in\mathcal{R}. \tag{8f}\]
Constraint (8c) sets the number of relay nodes to \(\mathrm{M}\). Constraint (8d) and (8e) calculate the energy consumption between node \(i\) and \(j\) based on the 3-dimensional Euclidean distance between them. Lastly, constraint (8f) requires that relay nodes must be positioned within the cylindrical area of the surveillance field with radius \(R_{s}\) and depth \(H_{s}\).
## III Preliminaries
In this section, we introduce preliminaries that we will use in the rest of our study.
**Definition 1**.: _We consider a general MOOP as_
\[\min\{f_{1}(\mathbf{x}),...,f_{K}(\mathbf{x})\}\] (9a) _s.t_ \[\mathbf{x}\in\mathcal{X} \tag{9b}\]
_with \(K\) objectives and a feasible set \(\mathcal{X}\). Using the MOLM, the objectives are ranked from most to least important. The procedure then starts with the most important objective and continues with the remaining objectives in order of importance. Specifically, in step \(i\), \(f_{i}^{*}\) is obtained by minimizing the objective \(f_{i}\). It is worth noting that the computed optimal value of each objective is added as a constraint for the subsequent optimization steps._
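To make the procedure concrete, a minimal two-objective lexicographic sketch is given below using Python/CVXPY; the toy objectives and feasible set are assumptions chosen only to illustrate the mechanism of Definition 1, not the paper's RNP problem.

```python
import numpy as np
import cvxpy as cp

x = cp.Variable(2)
feasible = [x >= 0, cp.sum(x) <= 4]

# Step 1: optimize the most important objective over the feasible set.
f1 = cp.sum_squares(x - np.array([3.0, 0.0]))
prob1 = cp.Problem(cp.Minimize(f1), feasible)
prob1.solve()
f1_star = prob1.value

# Step 2: optimize the next objective while keeping f1 within a tolerance of its optimum.
f2 = cp.sum_squares(x - np.array([0.0, 3.0]))
prob2 = cp.Problem(cp.Minimize(f2), feasible + [f1 <= f1_star + 1e-6])
prob2.solve()
```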
**Definition 2**.: _In a convex optimization problem, we minimize a convex objective function over a convex set. This problem is of the form_
\[\min f_{0}(\mathbf{x})\] (10a) _s.t._ \[f_{i}(\mathbf{x})\leq 0,i=1,\ldots,m \tag{10b}\] \[h_{i}(\mathbf{x})=0,i=1,\ldots,p \tag{10c}\]
_where \(\mathbf{x}\in\mathbb{R}^{n}\) and \(f_{0},\ldots,f_{m}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) are convex functions and \(h_{i}(\mathbf{x}):\mathbb{R}^{n}\rightarrow\mathbb{R}\) are affine functions [27]. The important property of this problem is that any locally optimal solution is also globally optimal._
**Definition 3**.: _Let \(\Omega\) be a convex set in \(\mathbb{R}^{n}\). We say that a function \(f\) is a difference convex (DC) function if it can be expressed as the difference of two convex functions on \(\Omega\), i.e. if \(f(\mathbf{x})=f_{1}(\mathbf{x})-f_{2}(\mathbf{x})\), where \(f_{1}\) and \(f_{2}\) are convex functions on \(\Omega\) [28]. The function \(f(\mathbf{x})\) is convex when \(f_{1}(\mathbf{x})\) and \(f_{2}(\mathbf{x})\) are convex and affine functions, respectively. In general, every convex function is a DC function, but the reverse is not true._
**Lemma 1**.: _The reciprocal of a positive function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) reaches its minimum at \(\mathbf{x}_{0}\in\mathbb{R}^{n}\) if \(\mathbf{x}_{0}\) is a maximum point of \(f\). The converse also holds._
Proof.: See Theorem 1.46 in [29].
## IV Proposed method
Using the lexicographic optimization algorithm, the lifetime of a network and the number of relay nodes are optimized in a hierarchical manner. This process involves two steps to arrive at the optimal solution.
### _Step 1 - Network lifetime maximization: ORNS scheme_
In this case, the problem can be defined as
\[\max\{\min_{i\in\mathcal{S}\cup\mathcal{R}}\tau_{i}\}\] (11a) s.t. \[\tau_{i}=\frac{\epsilon_{i}}{\sum_{j\in\mathcal{N}}^{i\neq j}p_{ij}[\mathbf{R}]_{ij}+p_{r}\sum_{k\in\mathcal{S}\cup\mathcal{R}}^{k\neq i}[\mathbf{R}]_{ki}},i\in\mathcal{S}\cup\mathcal{R}, \tag{11b}\] \[|\mathcal{R}|=\mathrm{M}_{0}\] (11c) \[p_{ij}=\begin{cases}p_{s}+\alpha^{d_{ij}}d_{ij}^{2}&d_{ij}<d_{t}\\ p_{s}+\alpha^{d_{ij}}d_{ij}^{4}&d_{ij}\geq d_{t}\end{cases},\] (11d) \[d_{ij}^{2}=\left\|\mathbf{l}_{i}-\mathbf{l}_{j}\right\|^{2},\forall i,j\in\mathcal{N},\] (11e) \[\mathbf{l}_{r_{i}}\in X_{c},\forall r_{i}\in\mathcal{R}. \tag{11f}\]
The problem at hand is a challenging, non-convex NP-hard problem, which we address through the use of ORNS. This method is designed to determine the ideal location of a single relay node (\(r_{i}\)) and can be extended to identify the best coordinates of all the relay nodes. Furthermore, ORNS determines which nodes (\(N_{c_{i}}^{u}\)) receive data from the most critical node (\(c_{i}\)) of the network. This set can be expressed as:

\[N_{c_{i}}^{u}=\{j;[\mathbf{R}]_{c_{i}j}>0\}, \tag{12}\]

where \(j\) indexes the nodes connected to \(c_{i}\). With the optimal coordinates of \(r_{i}\) determined, it can then act as a router to facilitate the connection between \(c_{i}\) and the elements of \(N_{c_{i}}^{u}\). In other words, after placing \(r_{i}\) in the network the following equations can be written:
\[[\mathbf{R}]_{c_{i}r_{i}}=\sum_{j\in N_{c_{i}}^{u}}[\mathbf{R}]_{c_{i}j}, \tag{13}\]
\[[\mathbf{R}]_{r_{i}j}=[\mathbf{R}]_{c_{i}j},\forall j\in N_{c_{i}}^{u}. \tag{14}\]
According to the above equations, \(r_{i}\) acts as an intermediate node to forward the information of node \(c_{i}\) and thus:
\[[\mathbf{R}]_{c_{i}j}=0,\forall j\in N_{c_{i}}^{u}. \tag{15}\]
From the network lifetime perspective, mathematically, two remarks must be made in regard to the RNP:
**Remark 1**.: _In a valid RNP, less energy should be consumed than in a direct transmission, and the network lifetime is maximized when the \(c_{i}\) lifetime is maximized, which can be expressed as_
\[\mathbf{l}_{r_{i}}=\operatorname{argmax}\left(\tau_{c_{i}}\right). \tag{16}\]
**Remark 2**.: _By defining the residual energy factor of \(c_{i}\) as \(\mathrm{RF}_{i}\triangleq\frac{\epsilon_{c_{i}}}{\epsilon_{p}}\), where \(\epsilon_{p}\) is the primary energy of \(c_{i}\), the efficient RNP (ERNP) will take into account the lifetime of relays, especially when \(\mathrm{RF}_{i}\) is high, which can be expressed as:_
\[\mathbf{l}_{r_{i}}=\operatorname{argmax}\left(\tau_{r_{i}}\right)\ \ s.t.\ p_{r_{i}j}<p_{c_{i}j} \tag{17}\]
To further illustrate this concept, an example of an RNP in two system scenarios is given in Fig. 2 where the ERNP considers the case when the relay node is located within the convex hull of \(c_{i}\) and its upper neighbors \(\left(N_{c_{i}}^{u}\right)\), mathematically expressed as follows:
\[\mathbf{l}_{r_{i}}\in\mathrm{conv}\left\{\mathbf{l}_{j},\ j\in N_{c_{i}}^{u} \cup c_{i}\right\}\triangleq\mathrm{conv}_{i}, \tag{18}\]
where \(\mathrm{conv}_{i}\) is the set of convex combinations of these \(\left|N_{c_{i}}^{u}\right|+1\) vectors. This membership holds if and only if there is a solution to the following system:
\[\mathbf{l}_{r_{i}}=\Big{\{}\sum_{j\in N_{c_{i}}^{u}\cup c_{i}}\theta_{j} \mathbf{l}_{j}|\theta_{j}\geq 0,\sum_{j\in N_{c_{i}}^{u}\cup c_{i}}\theta_{j}=1 \Big{\}}. \tag{19}\]
The feasible space created by (19) for \(r_{i}\) is shown in Fig. 3.
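In practice, the membership condition (19) can be enforced by optimizing over the weights \(\theta_{j}\) instead of over \(\mathbf{l}_{r_{i}}\) directly. A small CVXPY sketch of this encoding is shown below; the neighbor coordinates and the illustrative objective are assumptions for demonstration only.

```python
import numpy as np
import cvxpy as cp

# Assumed positions (rows): critical node c_i followed by its upper neighbors N^u_{c_i}.
L = np.array([[0.0, 0.0, -300.0],
              [100.0, 50.0, -150.0],
              [-80.0, 60.0, -140.0],
              [20.0, -90.0, -120.0]])

theta = cp.Variable(L.shape[0], nonneg=True)   # convex-combination weights of eq. (19)
l_r = L.T @ theta                              # candidate relay position inside conv_i

constraints = [cp.sum(theta) == 1]
# Any convex objective in l_r can be optimized over the hull; here the relay is simply
# pulled towards the centroid of the neighbors, purely for illustration.
objective = cp.Minimize(cp.sum_squares(l_r - L[1:].mean(axis=0)))
cp.Problem(objective, constraints).solve()
```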
In summary, to meet all the above conditions, the problem of network lifetime maximization can be given as
\[\mathbf{y}_{i}=\arg\left(\max\tau_{c_{i}},\max\tau_{r_{i}}\right), \tag{20a}\] \[s.t.\] \[\tau_{c_{i}}=\frac{\epsilon_{c_{i}}}{p_{c_{i}r_{i}}[\mathbf{R}]_{ c_{i}r_{i}}+p_{r}\sum_{k\in\mathcal{S}\cup\mathcal{R}}[\mathbf{R}]_{kc_{i}}},\] (20b) \[\tau_{r_{i}}=\frac{\epsilon_{r_{i}}}{\sum_{j\in N_{c_{i}}^{u}}p_{ rj}[\mathbf{R}]_{rij}+p_{r}[\mathbf{R}]_{c_{i}r_{i}}},\] (20c) \[p_{c_{i}r_{i}}=\begin{cases}p_{s}+\alpha\left(f\right)^{d_{c_{i}r_{i}}}d_ {c_{i}r_{i}}^{2},\ d_{c_{i}r_{i}}<d_{t}\\ p_{s}+\alpha\left(f\right)^{d_{c_{i}r_{i}}}d_{c_{i}r_{i}}^{4},\ d_{c_{i}r_{i}} \geq d_{t}\end{cases},\] (20d) \[d_{c_{i}r_{i}}^{2}=\left\|\mathbf{l}_{r_{i}}-\mathbf{l}_{c_{i}} \right\|^{2},\] (20e) \[p_{r,i,j}=\begin{cases}p_{s}+\alpha\left(f\right)^{d_{c_{i}j}}d_ {r,i}^{2}\,d_{r,j}<d_{t}\\ p_{s}+\alpha\left(f\right)^{d_{c_{i}j}}d_{r,i}^{4},\ d_{r,j}\geq d_{t}\end{cases}, \forall j\in N_{c_{i}}^{u},\] (20f) \[d_{r,i}^{2}=\left\|\mathbf{l}_{r_{i}}-\mathbf{l}_{j}\right\|^{2}, \ \forall j\in N_{c_{i}}^{u},\] (20g) \[\mathbf{l}_{r_{i}}\in\mathrm{conv}_{i},\] (20h) \[\{p_{r,i},d_{r,i},p_{c_{i}r_{i}},d_{c_{i}r_{i}}\}\in\mathbb{R}_{+}, \ \forall j, \tag{20i}\]
Fig. 3: Construction of the feasible space to locate \(r_{i}\) using convex hull when \(|N_{c_{i}}^{u}|=3\)
where \(\mathbf{y}_{i}\) denotes the decision variable, \(\mathbf{y}_{i}\triangleq\left[\mathbf{p}_{r_{i}},\mathbf{d}_{r_{i}},p_{c_{i}r_{i}},d_{c_{i}r_{i}},\mathbf{l}_{r_{i}},\mathbf{\Theta}\right]\in\mathbb{R}^{3\left|N_{c_{i}}^{u}\right|+6}\), with \(\mathbf{\Theta}=\left[\theta_{1},\ldots,\theta_{\left|N_{c_{i}}^{u}\right|+1}\right]^{T}\), \(\mathbf{p}_{r_{i}}=\left[p_{r_{i}1},\ldots,p_{r_{i}\left|N_{c_{i}}^{u}\right|}\right]^{T}\), and \(\mathbf{d}_{r_{i}}=\left[d_{r_{i}1},\ldots,d_{r_{i}\left|N_{c_{i}}^{u}\right|}\right]^{T}\).
It is important to note that solely maximizing the lifetime of the critical node can solve the energy hole problem among sensors but does not guarantee improving the entire network lifetime. Therefore, our proposed method takes into account the lifetimes of both relay nodes and critical nodes simultaneously. To achieve this, we employ multi-objective optimization techniques to identify optimal values by exploring trade-offs and finding desirable solutions across multiple objectives. In the following, we utilize the \(\epsilon\)-constraint 1 method to solve the problem (20). Based on this approach, the lifetime of the critical node serves as the main objective function, and the lifetime of the relay node is the secondary objective. The relay node's lifetime, as the secondary objective, must be greater than that of the critical node and is added as a constraint in the optimization problem. Indeed, we incorporate the relay node's lifetime as a constraint in the optimization problem to ensure that the entire network lifetime is improved. By employing this method, we guarantee resolving the energy hole by maximizing the lifetime of critical nodes while preserving the lifetime of relay nodes and avoiding their placement in outlying positions. This approach effectively resolves the energy hole problem among sensor nodes and enhances the overall network lifetime. To sum up, the RNP is given as
Footnote 1: Based on the \(\epsilon\)-constraint method, a main and a secondary objective function are selected, and the purpose is to optimize the main objective function while limiting the secondary function by some allowable amount.
\[\mathbf{x}_{i}=\arg\max\tau_{c_{i}}, \tag{21a}\] \[s.t.\] \[\tau_{c_{i}}=\frac{\epsilon_{c_{i}}}{p_{c_{i}r_{i}}[\mathbf{R}]_{c_{i}r_{i}}+p_{r}\sum_{k\in\mathcal{S}\cup\mathcal{R}}[\mathbf{R}]_{kc_{i}}},\] (21b) \[\tau_{r_{i}}\geq\tau_{c_{i}},\] (21c) \[p_{c_{i}r_{i}}=\begin{cases}p_{s}+\alpha\left(f\right)^{d_{c_{i}r_{i}}}d_{c_{i}r_{i}}^{2},\ d_{c_{i}r_{i}}<d_{t}\\ p_{s}+\alpha\left(f\right)^{d_{c_{i}r_{i}}}d_{c_{i}r_{i}}^{4},\ d_{c_{i}r_{i}}\geq d_{t}\end{cases},\] (21d) \[d_{c_{i}r_{i}}^{2}=\left\|\mathbf{l}_{r_{i}}-\mathbf{l}_{c_{i}}\right\|^{2},\] (21e) \[\mathbf{l}_{r_{i}}=\sum_{j\in N_{c_{i}}^{u}\cup c_{i}}\theta_{j}\mathbf{l}_{j},\] (21f) \[\mathbf{\Theta}\succcurlyeq\mathbf{0},\] (21g) \[\mathbf{1}^{T}\mathbf{\Theta}=1,\] (21h) \[\mathbf{p}_{r_{i}}\succcurlyeq\mathbf{0},\] (21i) \[\mathbf{d}_{r_{i}}\succcurlyeq\mathbf{0}. \tag{21j}\]
where the decision variable is updated as \(\mathbf{x}_{i}\triangleq\left[\mathbf{p}_{r_{i}},p_{c_{i}r_{i}},d_{c_{i}r_{i}},\mathbf{l}_{r_{i}},\mathbf{\Theta}\right]\in\mathbb{R}^{2\left|N_{c_{i}}^{u}\right|+6}\). It can be shown that (21c) can be rewritten as the linear constraint
\[\mathbf{1}^{T}\mathbf{D}\mathbf{x}_{i}\leq\gamma_{0}, \tag{22}\]
where
\[\gamma_{0}=\left(p_{r}\sum_{k\in\mathcal{S}\cup\mathcal{R}}[\mathbf{R}]_{kc_{i}}\right)\left(\frac{\epsilon_{r_{i}}}{\epsilon_{c_{i}}}\right)-p_{r}[\mathbf{R}]_{c_{i}r_{i}}, \tag{23}\] \[\mathbf{D}=\mathrm{diag}\left(a_{1},\ldots,a_{2\left|N_{c_{i}}^{u}\right|+6}\right), \tag{24}\]
and
\[a_{j}=\begin{cases}[\mathbf{R}]_{r_{i}j},\ j\in\{1,\ldots,\left|N_{c_{i}}^{u}\right|\}\\ -\left(\frac{\epsilon_{r_{i}}}{\epsilon_{c_{i}}}\right)[\mathbf{R}]_{c_{i}r_{i}},\ j=\left|N_{c_{i}}^{u}\right|+1\\ 0,\ \mathrm{o.w.}\end{cases} \tag{25}\]
Problem (21) is a non-convex problem with a convex objective function and convex (and/or linear) and DC constraints. Since constraint (21e) is DC and non-convex, the proposed RNP is non-convex. However, we derive a convex programming model equivalent to this problem. To do so, we apply a novel transformation whose detailed expressions are developed in Appendix A. Based on the proposed transformation, Lemma 1, and the definition of the variable \(t\), the proposed problem to place \(r_{i}\) in the network is of the form
\[[\mathbf{x}_{i},t]=\arg\min p_{c_{i}r_{i}} \tag{26a}\] \[s.t.\] \[p_{c_{i}r_{i}}=\left\{\begin{matrix}p_{s}+\alpha\left(f\right)^{ d_{c_{i}r_{i}}}d_{c_{i}r_{i}}^{2}\,d_{c_{i}r_{i}}<d_{t}\\ p_{s}+\alpha\left(f\right)^{d_{c_{i}r_{i}}}d_{c_{i}r_{i}}^{4},\ d_{c_{i}r_{i}} \geq d_{t}\end{matrix}\right.\] (26b) \[\mathbf{1}^{T}\mathbf{D}\mathbf{x}_{i}\leq\gamma_{0},\] (26c) \[\left\|\mathbf{l}_{r_{i}}-\mathbf{l}_{c_{i}}\right\|^{2}-t=0,\] (26d) \[d_{c_{i}r_{i}}^{2}-t=0\] (26e) \[\mathbf{l}_{r_{i}}=\sum_{j\in N_{c_{i}}^{*}\cup c_{i}}\theta_{j} \mathbf{l}_{j},\] (26f) \[\mathbf{\Theta}\succcurlyeq\mathbf{0},\] (26g) \[\mathbf{1}^{T}\mathbf{\Theta}=1,\] (26h) \[\mathbf{p}_{r_{i}}\succcurlyeq\mathbf{0},\] (26i) \[\mathbf{d}_{r_{i}}\succcurlyeq\mathbf{0}. \tag{26j}\]
which is known as the epigraph form of the problem:
\[[\mathbf{x}_{i},t]=\arg\min\left\{\begin{matrix}&p_{s}+\alpha\left(f\right)^{ d_{c_{i}r_{i}}}d_{c_{i}r_{i}}^{2}\,d_{c_{i}r_{i}}<d_{t}\\ &p_{s}+\alpha\left(f\right)^{d_{c_{i}r_{i}}}d_{c_{i}r_{i}}^{2}\,d_{c_{i}r_{i}}^{2}\geq d_{t}\end{matrix}\right. \tag{27a}\] \[s.t.\] \[\mathbf{1}^{T}\mathbf{D}\mathbf{x}_{i}\leq\gamma_{0},\] (27b) \[\|\mathbf{l}_{r_{i}}-\mathbf{l}_{c_{i}}\|^{2}-t\leq 0,\] (27c) \[d_{c_{i}r_{i}}^{2}-t\leq 0\] (27d) \[\mathbf{l}_{r_{i}}=\sum_{j\in N_{c_{i}}^{*}\cup c_{i}}\theta_{j} \mathbf{l}_{j},\] (27e) \[\mathbf{\Theta}\succcurlyeq\mathbf{0},\] (27f) \[\mathbf{1}^{T}\mathbf{\Theta}=1,\] (27g) \[\mathbf{p}_{r_{i}}\succcurlyeq\mathbf{0},\] (27h) \[\mathbf{d}_{r_{i}}\succcurlyeq\mathbf{0}. \tag{27i}\]
Our proposed RNP belongs to the class of non-differentiable convex optimization problems. The advantage of convex problems over non-convex counterparts is that, in general, a global optimum can be computed with good precision and within a reasonable time, independent of initialization [30]. To obtain the optimal position of the relay node we resort to the off-the-shelf convex solver CVX, a MATLAB-based modeling system for convex optimization. CVX can handle complex convex optimization problems, including non-differentiable functions; it is built on interior-point methods and returns numerical solutions to the convex optimization problem. By solving it, a set of optimal solutions \((\mathbf{l}_{r_{1}},\ldots,\mathbf{l}_{r_{\mathrm{M}_{0}}})\) and \(\mathbf{p}_{r}^{*}=[\mathbf{1}^{T}\mathbf{p}_{r_{1}},\ldots,\mathbf{1}^{T}\mathbf{p}_{r_{\mathrm{M}_{0}}}]^{T}\) is obtained, representing the position vectors of the relay nodes and their overall transmit energy consumption, respectively. This optimal solution yields the optimal value \(\tau^{*}\). The pseudo-code of the proposed network lifetime maximization is given in Algorithm 1.
```
Data: The set of sensor nodes (\(\mathcal{S}\)) Result: Position of relay nodes, rate array
1:for each relay node \(r_{i}\)do
2:for each node \(n\in\mathcal{S}\cup\mathcal{R}\)do
3: Compute \(\tau_{n}\) using (7)
4:endfor
5:\(c_{i}=\arg\min\tau_{n}\)
6: Construct the set \(N_{c_{i}}^{u}\) using (12)
7: Define the convex hull system as given in (19)
8: Define the multi-objective problem (20)
9: Apply the \(\epsilon\)-constraint approach to convert the multi-objective problem (20) into the single-objective problem (21)
10: Apply the transformation (35) and Lemma 1 to form problem (26)
11: Define the convex-based RNP (27) by using the epigraph form of (26)
12: Solve problem (27) using CVX tool to obtain \(\mathbf{l}_{r_{i}}\)
13: Update \(\mathbf{R}\) based on Eqs. (13)-(15)
14:endfor
15:return\(\mathbf{R},\mathbf{l}_{r_{i}},i=1,\ldots,\mathrm{M}\)
```
**Algorithm 1** Summary of the proposed ORNS approach
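As a concrete illustration of step 12 of Algorithm 1, a highly simplified CVXPY sketch of the epigraph-form problem (27) is given below. It keeps only the main structural ingredients (the epigraph variable \(t\), the convex-hull constraint on \(\mathbf{l}_{r_{i}}\), and the squared-distance constraint), replaces the transmit-power objective by the monotone surrogate of minimizing \(t\) (reasonable because (6) is increasing in \(d_{c_{i}r_{i}}\)), and omits the linear lifetime constraint (27b); it is not the authors' exact CVX implementation, and all numerical data are assumptions.

```python
import numpy as np
import cvxpy as cp

# Assumed data: position of the critical node c_i and its upper neighbors (illustrative).
l_c = np.array([0.0, 0.0, -300.0])
L_nb = np.array([[100.0, 50.0, -150.0],
                 [-80.0, 60.0, -140.0],
                 [20.0, -90.0, -120.0]])
L_hull = np.vstack([L_nb, l_c])            # vertices of conv_i, cf. (19)

theta = cp.Variable(L_hull.shape[0], nonneg=True)
l_r = L_hull.T @ theta                     # relay position as a convex combination
t = cp.Variable(nonneg=True)               # epigraph variable, t >= d_{c_i r_i}^2

constraints = [
    cp.sum(theta) == 1,                    # (27f)-(27g): l_r lies in conv_i
    cp.sum_squares(l_r - l_c) <= t,        # (27c): ||l_r - l_c||^2 <= t
    # The linearized lifetime constraint (27b), 1^T D x_i <= gamma_0, would be added
    # here once the rate array and residual energies are available; it is linear in x_i.
]

# Minimizing t acts as a monotone surrogate for the transmit-power objective (27a).
prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve()
relay_position = l_r.value
```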
### _Step 2 - Relay nodes minimization: RNMI scheme_
By definition, \(\mathrm{M}\) denotes the number of selected relay nodes that yield an acceptable network lifetime extension, and we have \(\mathrm{M}\leq\mathrm{M}_{0}\). Moreover, let \(\mathbf{p}_{c}\) contain the total transmitting energy consumption of nodes \(c_{i},i=1,\ldots,\mathrm{M}_{0}\), before placing relays. Considering the effectiveness of relay nodes in extending the network lifetime, a relay should be introduced only if it is necessary in terms of the network lifetime. Toward this, we resort to the zero-norm and one-norm principles to define the relay node selection problem as:
\[\gamma=\arg\{\min\left\|\mathbf{p}_{r}\right\|_{0},\max\left\| \mathbf{p}_{r}-\mathbf{p}_{c}\right\|_{1}\}, \tag{28a}\] \[s.t.\] \[\min\tau_{i}=\tau^{*},i\in\mathcal{N},\] (28b) \[[\mathbf{p}_{r}]_{i}\in\{0,[\mathbf{p}_{r}^{*}]_{i}\},\] (28c) \[[\mathbf{p}_{c}]_{i}=\begin{cases}p_{c_{i}r_{i}}\times\sum_{j\in N _{c_{i}}^{u}}[\mathbf{R}]_{c_{i}j}&[\mathbf{p}_{r}]_{i}=[\mathbf{p}_{r}^{*}]_{ i}\\ \sum_{j\in N_{c_{i}}^{u}}p_{c_{i}j}[\mathbf{R}]_{c_{i}j}&[\mathbf{p}_{r}]_{i}=0 \end{cases},i=1,\ldots,\mathrm{M}_{0}. \tag{28d}\]
where the zero-norm, as the cardinality function, returns the number of non-zero entries in \(\mathbf{p}_{r}\). In addition, \(\tau^{*}\) is the optimal value from the first optimization step, added as a constraint. By employing the scalarization method to combine the zero-norm and one-norm functions, the problem can be given as:
\[\gamma=\arg\{\min\omega_{1}\left\|\mathbf{p}_{r}\right\|_{0}- \omega_{2}\left\|\mathbf{p}_{r}-\mathbf{p}_{c}\right\|_{1}\}, \tag{29a}\] \[s.t.\] \[\min\tau_{i}=\tau^{*},i\in\mathcal{N},\] (29b) \[[\mathbf{p}_{r}]_{i}\in\{0,[\mathbf{p}_{r}^{*}]_{i}\},\] (29c) \[[\mathbf{p}_{c}]_{i}=\begin{cases}p_{c_{i}r_{i}}\times\sum_{j\in N _{c_{i}}^{u}}[\mathbf{R}]_{c_{i}j}&[\mathbf{p}_{r}]_{i}=[\mathbf{p}_{r}^{*}]_{ i}\\ \sum_{j\in N_{c_{i}}^{u}}p_{c_{i}j}[\mathbf{R}]_{c_{i}j}&[\mathbf{p}_{r}]_{i}=0 \end{cases},i=1,\ldots,\mathrm{M}_{0}. \tag{29d}\]
where the weights \(\omega_{1}\) and \(\omega_{2}\) can be chosen according to the kind of trade-off we are willing to make, with \(\omega_{1}+\omega_{2}=1\). Recalling the step function \(s(x):\mathbb{R}\rightarrow\mathbb{R}^{+}\) with \(s(x)=1\) for \(x>0\) and \(s(x)=0\) for \(x\leq 0\), the zero-norm can be written as a sum of discontinuous step functions:
\[\left\|\mathbf{p}_{r}\right\|_{0}=\sum_{i=1}^{\mathrm{M}_{0}}s([\mathbf{p}_{r} ]_{i}) \tag{30}\]
Here, by exploiting the nonnegativity of \([\mathbf{p}_{r}]_{i}\), we use the following continuously differentiable concave approximation of the step function for nonnegative variables [31]:
\[s([\mathbf{p}_{r}]_{i})=1-\exp(-\eta[\mathbf{p}_{r}]_{i}) \tag{31}\]
where \(\eta>0\). Therefore, problem (29) is equivalently presented as:
\[\gamma=\arg\min\{\omega_{1}\sum_{i=1}^{M_{0}}\left(1-\exp(-\eta[ \mathbf{p}_{r}]_{i})\right)-\omega_{2}\left\|\mathbf{p}_{r}-\mathbf{p}_{c} \right\|_{1}\}, \tag{32a}\] \[s.t.\] \[\tau_{i}\geq\tau^{*},i\in\mathcal{N},\] (32b) \[[\mathbf{p}_{r}]_{i}\in\{0,[\mathbf{p}_{r}^{*}]_{i}\},\] (32c) \[[\mathbf{p}_{c}]_{i}=\begin{cases}p_{c_{i}r_{i}}\times\sum_{j\in N _{c_{i}}^{u}}[\mathbf{R}]_{c_{i}j}&[\mathbf{p}_{r}]_{i}=[\mathbf{p}_{r}^{*}]_{ i}\\ \sum_{j\in N_{c_{i}}^{u}}p_{c_{i}j}[\mathbf{R}]_{c_{i}j}&[\mathbf{p}_{r}]_{i}=0 \end{cases},i=1,\ldots,\mathrm{M}_{0}. \tag{32d}\]
The obtained model is a mixed-integer convex programming model. Models with integer and binary variables must still obey all of the same disciplined convex programming rules
that CVX enforces for continuous models. The above approximation model is a smooth optimization problem with tolerable complexity, and it shows that a relay node should not be deployed if the transmit power consumption with the cooperation of the relay node exceeds the direct transmit power consumption, and should be deployed otherwise. Let us now evaluate our approach against the RA [17] and DCA [19] schemes in terms of complexity and network lifetime.
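Before turning to that evaluation, the decision rule implied by the relay-selection model can be sketched as follows. This is a much-simplified illustration that assumes the objective of (32) separates over relays and ignores constraint (32b); `p_with_relay` and `p_direct` are assumed per-relay transmit-power totals, and the smooth step surrogate of (31) is included for completeness.

```python
import numpy as np

def step_surrogate(p, eta=1e3):
    """Smooth concave surrogate of the step function, eq. (31)."""
    return 1.0 - np.exp(-eta * np.asarray(p))

def select_relays(p_with_relay, p_direct):
    """Keep relay i only if cooperation needs less transmit power than direct transmission."""
    return np.asarray(p_with_relay) < np.asarray(p_direct)   # boolean deployment mask
```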
## V Complexity analysis
In this section, we compute the time complexity of our proposed RNP, DCA and RA methods. We do this by calculating the number of computations as shown in Tables II and III. One can notice that \(\mathrm{iter}_{i},i\in\{1,2,3\}\), represents the number of iterations required for each RNP scheme to converge, which is significantly lower than \(\mathrm{(N+M)}\) in practical scenarios. Additionally, \(\alpha\) and \(\beta\) are small constant values. Consequently, Table II demonstrates that the worst-case time complexity of our proposed method is limited to \(\mathcal{O}((\mathrm{N+M})^{2})\). Furthermore, based on Table III, it can be concluded that the time complexity of DCA and RA is also bounded by \(\mathcal{O}((\mathrm{N+M})^{2})\). Subsequently, we will assess the performance of the proposed ORNS method through simulations.
## VI Algorithm evaluation results
In this section, we evaluate and compare the proposed RNP with the heuristic RA method [17] and the DCA approach [19] using multiple simulation scenarios. The experiments are performed in MATLAB 2017b. The simulation parameters and their notations are provided in Table IV. The depth of water was taken as \(2000\) m, the generation rate of each sensor node was set randomly between \(10\) and \(200\) \(\mathrm{bit/sec}\), and the primary energy of nodes was set to \(4\times 10^{5}\) J [32]. The frequency of the acoustic signal was set to \(1\,\mathrm{kHz}\). Similar to [17, 19], we consider the deployment of 3-D underwater sensor networks in a cylindrical sensing field where they sense the environment. The gathered data is then transmitted to the SB, which is positioned at the origin. Additionally, Table V outlines the different RNP cases, using \(\gamma_{r}=\frac{|\mathcal{R}|}{|\mathcal{S}|}\) to denote the percentage of employed relays and \(\mathrm{RF}=\frac{\epsilon_{c_{i}}}{\epsilon_{p}}\) as the residual energy factor of the most critical node in the sensor network.
The Imbalanced Factor of Energy Consumption (IEC) was calculated as
\[\mathrm{IEC}=\frac{\frac{1}{\mathrm{N}}\sum_{i\in\mathcal{S}}\left(\mathrm{E} \left(\epsilon_{i}\right)-\mathrm{E}\left(\bar{\epsilon}\right)\right)^{2}}{ \sigma_{0}^{2}} \tag{33}\]
where \(\bar{\epsilon}=\frac{1}{\mathrm{N}}\sum_{i\in\mathcal{S}}\epsilon_{i}\), and \(\sigma_{0}^{2}\) is the normalization factor. In the following we present a three-fold approach to evaluate the effectiveness of our relay node placement method. Firstly, we investigate the regulation of the positions of relay nodes within networks of different scales. Secondly, we conduct a comprehensive performance evaluation of our proposed method by comparing it with existing approaches. Thirdly, we delve into the details of our relay node selection design, which aims to minimize resource wastage in UASNs.
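Before turning to the results, a small sketch of how the IEC in (33) can be computed from per-node residual energies is shown below; it treats a single deployment, so the expectations in (33) are approximated by the sampled values.

```python
import numpy as np

def iec(residual_energy, sigma0_sq=1.0):
    """Imbalanced factor of energy consumption, eq. (33), for one deployment."""
    e = np.asarray(residual_energy, dtype=float)
    return np.mean((e - e.mean()) ** 2) / sigma0_sq
```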
### _Regulation of relay nodes_
In this section, we examine the regulation of relay nodes in the network. To do so, we consider Case A with \(\mathrm{RF}=0.25\) and \(\gamma_{r}=0.3\).
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Operation & Computations of formulating (27) & Computations of solving (27) & Total computations \\ \hline Linear search & \(3\) & \(\mathrm{iter}_{1}\) & \(3(\mathrm{N+M-1})+\mathrm{iter}_{1}(\mathrm{N+M-1})\) \\ \hline Assignment & \(3\) & — & \(3\) \\ \hline Addition & \(\mathrm{(N+M-1)(N+M)}\) & \(\mathrm{(N+M+13)~{}iter}_{1}\) & \(\mathrm{(N+M)(N+M-1)+(N+M+13)~{}iter}_{1}\) \\ \hline Division & \(\mathrm{N+M-1}\) & \(\mathrm{iter}_{1}\) & \(\mathrm{(N+M-1)~{}iter}_{1}\) \\ \hline Multiplication & \(\mathrm{(N+M-1)^{2}}\) & \(\mathrm{(N+M+12)~{}iter}_{1}\) & \(\mathrm{(N+M-1)^{2}+(N+M+12)~{}iter}_{1}\) \\ \hline \end{tabular}
\end{table} TABLE II: Computations of proposed ORNS method
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Notation & Definition & Value \\ \hline \(H_{s}\) & Depth of the water & 2000 m \\ \hline \(C_{R}\) & Communication range & 500 m \\ \hline \(f\) & Frequency of acoustic signal & 1 kHz \\ \hline \(p_{s}\) & The power consumption for processing in sending data & 1 mW/bit \\ \hline \(p_{r}\) & The power consumption for processing in receiving data & 1 mW/bit \\ \hline \(d_{t}\) & Distance threshold & 87 m \\ \hline \(L_{c}\) & Link capacity & 10 kbit/sec \\ \hline \(\epsilon_{p}\) & Primary energy of a typical node & \(4\times 10^{5}\) J \\ \hline \(g_{i}\) & Generation rate of sensor node \(i\) & \(10\to 200\) bit/sec \\ \hline \end{tabular}
\end{table} TABLE IV: List of parameters
\begin{table}
\begin{tabular}{|p{90.0pt}|p{90.0pt}|p{90.0pt}|p{90.0pt}|p{90.0pt}|} \hline Operation & Computations of DCA [19] & Computations of initializing the position of relay nodes on the surface of the water & Computations of adjusting the depth & Total computations \\ \hline Linear search & \(2(\mathrm{N+M-1})\) & — & \(\mathrm{N+2M-2}\) & \(\mathrm{N+2M-2}\) \\ \hline Assignment & \(4\) & \(2\mathrm{M+4\alpha}\) & \(\beta\) & \(2\mathrm{M+4\alpha+\beta}\) \\ \hline Addition & \(\mathrm{(N+M)(N+M-1)~{}}+\mathrm{(N+M+17)~{}iter}_{2}\) & \(\mathrm{N(2N-4+\alpha)+\alpha+\alpha^{2}}\) & \(2(\mathrm{N+M)(M+N-2)~{}+}\) & \(2(\mathrm{N+M)(M+N-2)~{}+}\) \\ \hline Division & \(\mathrm{(N+M-1)+5~{}iter}_{2}\) & N+3\(\alpha\) & N + M & \(\mathrm{2N+M+3}\) \\ \hline \end{tabular}
\end{table} TABLE III: Computations of previous RNP schemes
Fig. 4: Position of relay nodes in the network when \(|\mathcal{S}|=20\), \(\gamma_{r}=0.3\), and \(\mathrm{RF}=0.25\) in different deployments
Fig. 5: Position of relay nodes in the network when \(|\mathcal{S}|=40\), \(\gamma_{r}=0.3\), and \(\mathrm{RF}=0.25\) in different deployments
We consider two network scales in this example, each with a different number of sensor nodes. For each scale, we present the results for two network deployments, as shown in Figs. 4 and 5. In Fig. 4, we assume that there are 20 sensor nodes, while in Fig. 5, we assume that there are 40 sensor nodes.
We observe that in a multi-hop UASN, a higher percentage of relay nodes are positioned near the SB due to the larger amount of data collected by nodes in that area. Comparing Fig. 4 with Fig. 5, as the number of sensor nodes increases, the percentage
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Case & A & B & C & D \\ \hline RF & 0.25 & 0.75 & 0.25 & 0.25 \\ \(\gamma_{r}\) & 0.3 & 0.3 & 0.6 & 0.9 \\ \hline \end{tabular}
\end{table} TABLE V: Considered RNP characteristics
Fig. 8: Investigation of the performance in different network scales
Fig. 6: Residual energy of sensor nodes when \(|\mathcal{S}|=40\), \(\gamma_{r}=0.3\), and \(\mathrm{RF}=0.25\)
Fig. 7: Imbalanced factor of energy consumption for different \(\mathrm{RFs}\)
of relay nodes near the SB also increases. This observation can be attributed to the fact that an increase in the number of sensors leads to a larger amount of information being gathered by nodes near the SB. In the previous RA method, most relay nodes are not adjusted in depth because of their random location on the water's surface. Furthermore, previous line-segment relay node placement (LSRNP) approaches, such as RA and DCA, resulted in relay nodes being positioned too closely together or even directly on top of sensor nodes, as seen in Fig. 4(a) and Fig. 4(d). In contrast, our proposed approach suggests a more sensible placement for relay nodes by considering a feasible search space based on the convex hull.
### _Performance evaluation_
In this part, we evaluate the performance of the proposed RNP method and compare it with the RA and DCA approaches in terms of balanced energy consumption and network lifetime. To do so, we design several simulation cases and obtain all the numerical results from 50 deployments. The first case assumes 40 sensor nodes, \(\gamma_{r}=0.3\), and \(\mathrm{RF}=0.25\), as shown in Case A in Table V. Figs. 6(a), 6(b) and 6(c) show the distribution of residual energy among sensor nodes when applying the RA, DCA, and ORNS methods, respectively, along with a scenario without relay nodes for comparison. It is observed that imbalanced energy consumption creates an energy hole in the network that limits the performance of UASNs. This, in turn, prevents the collected data from being forwarded to the SB. Employing relay nodes alleviates the energy hole issue in the network. Furthermore, the proposed ORNS method highlights the advantage of obtaining a more balanced energy distribution among nodes through optimal positioning of the relay nodes, thereby addressing the energy hole issue. To illustrate further, Fig. 7 depicts the IEC factor for different RF values. It can be seen that the proposed method outperforms previous schemes in terms of energy balance, as evidenced by the smaller IEC values.
We present the network lifetime as a function of the number of sensor nodes in the second case, with the parameters set according to Cases A and B in Table V. The results for these cases are shown in Fig. 8(a) and 8(b), respectively. Several important observations can be made based on these results. Firstly, it can be seen that the network lifetime decreases as the number of sensor nodes increases. This is due to the increased amount of data that needs to be relayed, leading to higher energy consumption. Secondly, the proposed RNP method outperforms RA and DCA by utilizing convex programming to determine the optimal location of the relay nodes. Lastly, without relay nodes, the transmission distance between the sensor nodes becomes long, resulting in a shorter network lifetime.
### _Relay node selection design_
So far, we have assumed a fixed number of relay nodes in our scenario. However, in this section, we want to highlight the importance of selecting relay nodes in our proposed approach. To address this concern, we conducted a thorough analysis by comparing the network lifetime with varying numbers of sensor nodes and relay nodes. The results, shown in Fig. 9 for cases C and D (where \(\gamma_{r}=0.6\) and \(\gamma_{r}=0.9\)), clearly demonstrate that when the network has a high number of nodes, especially when the communication distance between sensor nodes is small, the need for additional relay nodes decreases. This is because the close proximity of sensor nodes allows for direct communication without relying on intermediate relay nodes. In such cases, the presence or absence of extra relay nodes does not significantly affect the network lifetime.
To gain further insight, consider Fig. 10 where we assume there are 80 sensor nodes and plot the positions of relay nodes in the network. It can be observed that the relay nodes are placed very closely together (DCA) or have specific positions beyond a certain number (proposed approach). In this situation, our relay node selection design, which strategically places five relay nodes, proves effective in achieving optimal network performance while minimizing the overall number of required relay nodes. On the other hand, when relay node selection is not considered, there are no criteria to limit the placement of relay nodes. In conclusion, by carefully selecting these relay nodes based on our proposed criteria, we were able to achieve significant improvements in both maximizing network lifetime and minimizing the number of required relay nodes.
## VII Conclusion
This paper aimed to address the joint optimization of maximizing network lifetime and minimizing relay node deployment in relay-assisted UASNs. To achieve a Pareto optimal solution, a multi-objective lexicographic method was employed in which the primary goal was to optimize the network lifetime in the RNP followed by the reduction of the active relay nodes. To accomplish this, a two-step process was employed. First, the ORNS algorithm was utilized to formulate the position of each relay node as a non-convex programming problem. This was then converted into an equivalent convex programming problem using a novel transformation and epigraph form scheme. Subsequently, a relay selection procedure utilizing a mixed-integer convex programming model was applied to
Fig. 9: Effect of increasing the number of relay nodes on the performance
minimize the number of active relay nodes. The proposed approach proved to be more efficient in terms of network lifetime than the existing models (RA and DCA).
## Appendix A Equivalent convex form of (21)
Here, we present a transformation of a non-convex optimization problem with a convex objective function and convex (and/or linear) and DC constraints into an equivalent convex optimization problem. The non-convex problem can be expressed as
\[\mathbf{x}=\arg\min f_{0}(\mathbf{x})\] (34a) s.t. \[f_{i}(\mathbf{x})\leq 0,\qquad i=1,2,\ldots,I, \tag{34b}\] \[h_{m}(\mathbf{x})=0,\qquad m=1,2,\ldots,M, \tag{34c}\]
where \(\mathbf{x}\in\mathbb{R}^{n}\), \(f_{0},\ldots,f_{I}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) are convex functions and \(h_{1},\ldots,h_{M}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) are DC functions. We take advantage of the fact that each DC constraint can be expressed as the difference of convex functions, \(h_{m}(\mathbf{x})=\psi_{m}(\mathbf{x})-\phi_{m}(\mathbf{x})\) with \(\psi_{m}\) and \(\phi_{m}\) convex, and introduce a new variable \(\mathbf{t}=[t_{1},t_{2},\ldots,t_{M}]^{T}\in\mathbb{R}^{M}\). This yields a convex optimization problem,
\[(\mathbf{x},\mathbf{t})=\arg\min f_{0}(\mathbf{x})\] (35a) s.t. \[f_{i}(\mathbf{x})\leq 0,\qquad i=1,2,\ldots,I, \tag{35b}\] \[\psi_{m}(\mathbf{x})-t_{m}=0,\qquad m=1,2,\ldots,M,\] (35c) \[\phi_{m}(\mathbf{x})-t_{m}=0,\qquad m=1,2,\ldots,M. \tag{35d}\]
Since (35c) and (35d) are both convex, this problem is the desired convex programming equivalent of the non-convex one.
|
2307.16345 | Full-Stack Quantum Software in Practice: Ecosystem, Stakeholders and
Challenges | The emergence of quantum computing has introduced a revolutionary paradigm
capable of transforming numerous scientific and industrial sectors.
Nevertheless, realizing the practical utilization of quantum software in
real-world applications presents significant challenges. Factors such as
variations in hardware implementations, the intricacy of quantum algorithms,
the integration of quantum and traditional software, and the absence of
standardized software and communication interfaces hinder the development of a
skilled workforce in this domain. This paper explores tangible approaches to
establishing quantum computing software development process and addresses the
concerns of various stakeholders. By addressing these challenges, we aim to
pave the way for the effective utilization of quantum computing in diverse
fields. | Vlad Stirbu, Majid Haghparast, Muhammad Waseem, Niraj Dayama, Tommi Mikkonen | 2023-07-30T23:44:22Z | http://arxiv.org/abs/2307.16345v1 | # Full-Stack Quantum Software in Practice: Ecosystem, Stakeholders and Challenges
###### Abstract
The emergence of quantum computing has introduced a revolutionary paradigm capable of transforming numerous scientific and industrial sectors. Nevertheless, realizing the practical utilization of quantum software in real-world applications presents significant challenges. Factors such as variations in hardware implementations, the intricacy of quantum algorithms, the integration of quantum and traditional software, and the absence of standardized software and communication interfaces hinder the development of a skilled workforce in this domain. This paper explores tangible approaches to establishing quantum computing software development process and addresses the concerns of various stakeholders. By addressing these challenges, we aim to pave the way for the effective utilization of quantum computing in diverse fields.
quantum computing, software development process, operations, quantum software engineering
## I Introduction
Quantum computing holds great promise as a revolutionary technology that has the potential to transform various fields. By harnessing the principles of quantum mechanics, quantum computers can perform complex calculations and solve problems that are currently intractable for classical computers. This promises breakthroughs in areas such as cryptography, optimization, drug discovery, materials science, and machine learning. Quantum computing's ability to leverage quantum mechanics properties like superposition, interference and entanglement can unlock exponential speedups and enable more accurate simulations of quantum systems.
The development of quantum software faces numerous challenges that need to be addressed for harnessing the power of quantum computing effectively. Firstly, the limited availability and instability of quantum hardware pose significant obstacles. Quantum computers are prone to errors and noise, necessitating the development of robust error correction techniques. Additionally, quantum programming languages and tools are still in their nascent stages, requiring advancements to facilitate efficient software development. Furthermore, the scarcity of skilled quantum software developers and a lack of standardization hinder the widespread adoption of quantum software. As quantum systems scale, the complexity of designing and optimizing quantum algorithms increases, demanding novel approaches to algorithm design and optimization. Addressing these challenges is crucial for realizing the full potential of quantum computing and enabling the development of practical quantum software applications.
This paper explores the challenges and approaches to establishing a quantum computing software development process. It highlights the obstacles in realizing practical utilization of quantum software, such as hardware variations, algorithm complexity, integration with traditional software, and the lack of standardized interfaces. Furthermore, the paper emphasizes the need to address these challenges to enable effective utilization of quantum computing.
## II Background
### _Qubit implementation_
The current candidates for building general-purpose quantum computers, as listed in Table I, fall under the category of Noisy Intermediate-Scale Quantum (NISQ) systems. Although these quantum computers are not yet advanced enough to achieve fault-tolerance or reach the scale required for quantum supremacy, they provide an experimentation platform to develop new generations of hardware, develop quantum algorithms, and validate quantum technology in real-world use cases. Whether a quantum computer is general-purpose or specialized, the choice of qubit implementation technology can significantly enhance hardware efficiency for specific problem classes. To make effective use of the hardware, application developers must consider these differences when designing and optimizing the software's functionality and operations.
### _Quantum algorithms_
Quantum algorithms are computational techniques specifically designed to harness the unique properties of quantum systems [1]. They offer significant advantages over classical algorithms in certain computational tasks. One key advantage is
the ability to solve complex problems exponentially faster. For example, Shor's algorithm enables efficient factoring of large numbers, posing a potential threat to current encryption methods. Also, Grover's algorithm provides substantial speedup in searching large databases. Moreover, quantum algorithms can address optimization problems more effectively, leading to improved solutions in areas like portfolio optimization, logistics, and drug discovery.
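As a hedged illustration of how compactly such algorithms can be expressed with current tooling, the sketch below implements two-qubit Grover search for a single marked state; it assumes the `qiskit` and `qiskit-aer` packages are installed and is an illustrative example, not code from any cited work.

```python
# Minimal sketch: two-qubit Grover search for the marked state |11>.
# One Grover iteration is exact for a search space of size four.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

grover = QuantumCircuit(2)
grover.h([0, 1])        # uniform superposition over all four basis states

grover.cz(0, 1)         # oracle: phase-flip the marked state |11>

grover.h([0, 1])        # diffusion operator (inversion about the mean)
grover.x([0, 1])
grover.cz(0, 1)
grover.x([0, 1])
grover.h([0, 1])

grover.measure_all()
counts = AerSimulator().run(grover, shots=1024).result().get_counts()
print(counts)           # ideally, every shot returns '11'
```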
### _Software_
A typical quantum program performs a specialized task as part of a larger classical program, see Fig. 1. The quantum program is submitted as a batch task to a classical computer that controls the operation of the quantum computer. The classical computer schedules the task execution and provides the result to the classical program when the job completes.
Application developers use tools like Qiskit1 and Cirq2 for writing, manipulating and optimizing quantum circuits. These Python libraries let researchers and application developers interact with today's NISQ computers, run quantum programs on a variety of simulators and hardware designs, and abstract away the complexities of low-level operations so that they can focus on algorithm design and optimization.
Footnote 1: [https://qiskit.org](https://qiskit.org)
Tools like TensorFlow Quantum3 and PennyLane4 play a crucial role in facilitating the development of machine learning quantum software. These frameworks provide the high-level abstractions and interfaces that bridge the gap between quantum computing and classical machine learning. They allow researchers and developers to integrate quantum algorithms seamlessly into the machine learning development process by providing access to quantum simulators and hardware, as well as offering a range of quantum-friendly classical optimization techniques. TensorFlow Quantum leverages the power of Google's TensorFlow ecosystem, enabling the combination of classical and quantum neural networks for hybrid quantum-classical machine learning models. PennyLane offers a unified framework for developing quantum machine learning algorithms, supporting various quantum devices and seamlessly integrating them with classical machine learning libraries. These tools provide a foundation for researchers to explore and experiment with quantum machine learning, accelerating the progress and adoption of quantum computing in the field of machine learning.
Footnote 2: [https://quantum.com/praket/](https://quantum.com/praket/)
Footnote 3: [https://quantum.google/cirq](https://quantum.google/cirq)
Footnote 4: [https://pennylane.ai](https://pennylane.ai)
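To make the hybrid quantum-classical workflow described above more concrete, the following is a small hypothetical sketch using PennyLane (assuming the `pennylane` package is installed); the circuit, parameters, and observable are illustrative choices rather than part of any cited work.

```python
# Hypothetical sketch: a two-qubit variational circuit whose parameters can be
# optimized with classical, gradient-based machine learning tooling.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(theta):
    qml.RY(theta[0], wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(theta[1], wires=1)
    return qml.expval(qml.PauliZ(1))

theta = np.array([0.1, 0.2], requires_grad=True)
print(circuit(theta))            # expectation value of Z on the second qubit
print(qml.grad(circuit)(theta))  # gradient with respect to the circuit parameters
```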
Jupyter Notebooks and quantum simulators play a vital role in supporting developers of quantum programs. Jupyter provides an interactive and collaborative environment where developers can write, execute, and visualize their quantum code in an accessible manner. They allow for the combination of code, explanatory text, and visualizations, making it easier to experiment, iterate, and document the development process. Quantum simulators, on the other hand, enable developers to simulate the behavior of quantum systems without the need for physical quantum hardware. These simulators provide a valuable testing ground for verifying and debugging quantum algorithms, allowing developers to gain insights into their performance and behavior before running them on actual quantum devices. Developers can iterate quickly, gain a deeper understanding of quantum concepts, and refine their quantum programs efficiently.
Fig. 1: Quantum computing model

Traditional cloud computing providers, such as AWS Braket5, Azure Quantum6, Google Quantum AI7 or IBM Quantum8, offer comprehensive quantum development services. These services are designed to optimize the development process, with integrated tools like Jupyter notebooks and task schedulers. Developers can create quantum applications and algorithms across multiple hardware platforms simultaneously. This approach ensures flexibility, allowing developers to fine-tune algorithms for specific systems while maintaining the ability to develop applications that are compatible with various quantum hardware platforms.
### _Operations_
The software development lifecycle (SDLC) of quantum programs involves a series of stages tailored to the unique challenges of quantum computing [2]. It typically begins with requirements gathering and problem formulation, where developers identify the specific problem that the quantum program aims to solve. During algorithm design, developers devise quantum algorithms that leverage the unique capabilities of quantum systems. In the implementation stage, the designed algorithm is translated into quantum code using quantum programming languages and frameworks like Qiskit or Cirq. After implementation, the program undergoes rigorous testing and debugging, using quantum simulators to validate its functionality and behavior. The tested program is executed on actual quantum hardware, with careful consideration given to the limitations and noise inherent in quantum systems. Finally, ongoing maintenance and optimization are crucial, as quantum hardware, software frameworks, and algorithms evolve rapidly.
Simulators and virtualization offer significant advantages to quantum computing from an operations perspective. Simulators provide a virtual environment for testing and debugging quantum programs without the need for physical quantum hardware. Ops teams can validate code, identify errors, and optimize performance in a controlled and reproducible manner. Simulators also allow ops teams to simulate larger-scale quantum systems than currently available in physical hardware, providing insights into the behavior and scalability of quantum programs. Additionally, virtualization techniques enable the efficient allocation and management of quantum resources, allowing multiple users to access and share quantum computing resources securely. Ops teams can provision virtualized quantum environments, manage access controls, and monitor resource utilization effectively.
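As a hedged illustration of this testing workflow, the sketch below attaches a simple depolarizing noise model to a simulator run; it assumes the `qiskit` and `qiskit-aer` packages, and the error rates are arbitrary placeholders rather than calibrated hardware values.

```python
# Sketch: validate a circuit against NISQ-like noise before running on hardware.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error

noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.001, 1), ["h", "x"])
noise_model.add_all_qubit_quantum_error(depolarizing_error(0.01, 2), ["cx"])

circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure_all()

noisy_backend = AerSimulator(noise_model=noise_model)
print(noisy_backend.run(circuit, shots=2048).result().get_counts())
# besides '00' and '11', a small fraction of '01'/'10' outcomes now appears
```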
## III Full Stack Quantum Computing
In this section we explore _full-stack_ quantum computing from two perspectives: the development process - looking at how quantum applications are developed - and composition - looking at how quantum applications are structurally organised and the factors that need to be considered when operationalizing the execution of applications utilizing quantum computing components.
### _Development process_
The SDLC of applications incorporating quantum technology involves streams of activities encompassing both classical and quantum components, see Fig. 2. At the top level, the classical software development process begins by identifying user needs and deriving system requirements. These requirements are transformed into a design and implemented, followed by verification against the requirements and validation against user needs. Once the software system enters the operational phase, any detected anomalies are used to inform potential new system requirements, if necessary. Concurrently, a dedicated track for quantum components is followed within the SDLC, specific to the implementation of quantum technology. The requirements for these components are converted into a design, which is subsequently implemented, verified, and integrated into the larger software system. The development occurs on simulators running on classical computers, which can simulate the noise characteristic of actual quantum hardware. During the operational phase, the quantum software components are executed on real hardware. Scheduling ensures efficient utilization of scarce quantum hardware, while monitoring capabilities enable the detection of anomalies throughout the process.
This workflow enables the development of products that include quantum technology using both plan-based and iterative development practices. However, when it comes to the DevOps aspects of quantum computing [3], it becomes crucial to focus on practices and activities that facilitate effective monitoring of the quantum components operating in the production environment.
### _Composition_
From an architecture perspective, we can identify the following three layers: user, infrastructure and hardware (depicted in Fig. 3). The _user_ software refers to the end user programs and the components developed by third parties, such as general purpose (e.g. Qiskit Terra9) or specialised (e.g. OpenFermion10 or TensorFlow Quantum11) libraries of quantum algorithms and circuits (e.g. Cirq and Qiskit). The _infrastructure_ layer contains the software needed to develop (e.g. simulators), test under realistic scenarios (e.g. simulate the noise of NISQ hardware) and run quantum programs at scale (e.g. task schedulers). The _hardware_ layer contains the software specific for each hardware architecture, such as the software that drives the control circuits.
Footnote 9: [https://github.com/Qiskit/qiskit-terra](https://github.com/Qiskit/qiskit-terra)
Footnote 10: [https://github.com/quantumlib/OpenFermion](https://github.com/quantumlib/OpenFermion)
Footnote 11: [https://www.tensorflow.org/quantum](https://www.tensorflow.org/quantum)
## IV Goals, challenges and future research directions
Our exploration of full-stack quantum computing focuses on identifying the challenges and difficulties in quantum software development. By leveraging the principles and practices of continuous software engineering, such as DevOps, which enable small, multidisciplinary teams to iterate quickly and deliver high-quality traditional software, we aim to pinpoint the specific components and interfaces that facilitate the transfer and application of these practices in the context of quantum software applications. Through this exercise, we seek to enhance our understanding of the pain points and opportunities for improvement in quantum software development, ultimately striving to foster the seamless integration of best practices
from traditional software engineering into the emerging field of quantum computing [4].
The main challenges emerge from two areas: technical - integrating classical and quantum components, and process - aligning the technical solution with user needs and requirements. These observations highlight the need to address technical and process-related hurdles in order to successfully utilize quantum technology while effectively meeting user expectations. From a development perspective, debugging quantum software is fundamentally different from debugging classical software. The black box nature of the quantum computer, with its limited observability, limits the debugging capabilities. Although new quantum debugging techniques are being developed [5], they are far from offering the ability to stop execution and inspect the program state at any point in time that is typical of classical computing. Overcoming these limitations requires new development approaches based on modular software design and reliable intermediate verification.
Multiple stakeholders contribute various software and hardware components at both the classical and quantum levels. While most stakeholders focus on specific areas like quantum algorithm or hardware development, influential entities such as Google and IBM have a significant presence and influence across the entire technology stack. They are driven by diverse economic and technological interests, which can either align or conflict with one another. Similar to the design principles behind the internet [6], the full stack of quantum software must be designed to accommodate these inherent conflicts by establishing well-defined trust boundaries and open interfaces. Such an approach, which works along the tussles among stakeholders, is crucial for fostering the development of a robust commercial environment that encourages continuous investments from both public and private entities [7].
## V Conclusion
Despite the novelty of quantum computing and its fundamentally new approach, quantum software development shares many characteristics with classical software engineering. Making reliable quantum software requires careful design that incorporates best practices from classical computing, while focusing the development effort on specific high-value components that improve the development experience and lower operational costs.
## Acknowledgement
This work has been supported by the Academy of Finland (project DEQSE 349945) and Business Finland (project TORQS 8582/31/2022).
|
2305.18511 | Contextual Bandits with Budgeted Information Reveal | Contextual bandit algorithms are commonly used in digital health to recommend
personalized treatments. However, to ensure the effectiveness of the
treatments, patients are often requested to take actions that have no immediate
benefit to them, which we refer to as pro-treatment actions. In practice,
clinicians have a limited budget to encourage patients to take these actions
and collect additional information. We introduce a novel optimization and
learning algorithm to address this problem. This algorithm effectively combines
the strengths of two algorithmic approaches in a seamless manner, including 1)
an online primal-dual algorithm for deciding the optimal timing to reach out to
patients, and 2) a contextual bandit learning algorithm to deliver personalized
treatment to the patient. We prove that this algorithm admits a sub-linear
regret bound. We illustrate the usefulness of this algorithm on both synthetic
and real-world data. | Kyra Gan, Esmaeil Keyvanshokooh, Xueqing Liu, Susan Murphy | 2023-05-29T16:18:28Z | http://arxiv.org/abs/2305.18511v3 | # Contextual Bandits with Budgeted Information Reveal
###### Abstract
Contextual bandit algorithms are commonly used in digital health to recommend personalized treatments. However, to ensure the effectiveness of the treatments, patients are often requested to take actions that have no immediate benefit to them, which we refer to as _pro-treatment_ actions. In practice, clinicians have a limited budget to encourage patients to take these actions and collect additional information. We introduce a novel optimization and learning algorithm to address this problem. This algorithm effectively combines the strengths of two algorithmic approaches in a seamless manner, including 1) an online primal-dual algorithm for deciding the optimal timing to reach out to patients, and 2) a contextual bandit learning algorithm to deliver personalized treatment to the patient. We prove that this algorithm admits a sub-linear regret bound. We illustrate the usefulness of this algorithm on both synthetic and real-world data.
## 1 Introduction
In digital health, to ensure the effectiveness of treatments, patients are often requested to take actions that have no immediate benefit to them. We will refer to these actions as _pro-treatment actions_. For example, in personalized treatment for addiction, if patients do not complete self-reports, then the effectiveness of the treatment might be compromised (Carpenter et al., 2020). Another example can be found when a commercial sensor is utilized or when we want to pool data across patients, the data-collecting device (e.g., wearables, electronic toothbrush) may only be able to communicate with the intervention-delivery device (e.g., smartphones) through the cloud. To ensure the proper delivery of personalized treatments, patients might have to open an App on their phones, thus allowing the App to download the most recent treatment recommendation from the cloud (Trella et al., 2022). When patients do not take the pro-treatment actions, clinicians might be able to take _a limited number of_ expensive nudges, e.g., having a clinician follow up _and_ nudge patients to take a pro-treatment action.
We are interested in answering the following question: _given a limited budget for expensive nudges for use when patients fail to take pro-treatment actions, when should these nudges be used?_ To answer this question, we reformulate this problem by introducing _two agents_. The first agent is a **recommender**. This agent is a learning agent that takes all _revealed_ information on the patient up to the current time as input and recommends the treatment action for the next time step. The second agent is a **revealer**. This agent has access to _current_ (or some surrogate of current) and prior patient information and decides whether to reveal this information to the recommender, enabling the learning of personalized treatment. In digital health, the recommender is often a _reinforcement learning_ (RL) algorithm, and the revealer could be a staff member. At each decision point, the staff could observe the sensor data collected from the patient so far and decide whether to remind the patient to take pro-treatment actions. Once the pro-treatment action is taken, the entire history of sensor data is revealed to the recommender.
**Our Contributions** In this work, we provide an algorithm for deciding the "optimal" timing for the revealer to take action when the number of actions that it can take is limited. We focus on the special case where the recommender is a _linear contextual bandit_ algorithm when the revealer decides to reveal information and a _multi-armed bandit_ (MAB) algorithm when no additional information is revealed to the recommender (due to the missing context). However, our method could be generalized to other RL algorithms. We show that our problem can be decomposed into two parts: 1) an online primal-dual optimization algorithm addressing the decision of the revealer, and 2) a contextual bandit learning algorithm with delayed feedback modeling the decision of the recommender.
In the online primal-dual algorithm, we introduce a novel learning constraint and prove that the value of the objective function of our proposed algorithm, Algorithm 1, is at least \(\eta_{\min}(1-1/c)\) times that of an offline clairvoyant benchmark, where \(\eta_{\min}\) is a problem-dependent constant and \(1-1/c\) is a budget-dependent constant that approaches \(1-1/e\) as the budget grows. Furthermore, by introducing the novel learning constraint in the primal-dual algorithm, we are able to separate out the effect of delayed feedback in the bandit learning loss (formally defined in § 2), thus removing the dependency of the delayed feedback effect on the context dimension.
**Related Work** Our work is related to three streams of literature: (i) online optimization algorithms, (ii) contextual bandits under resource constraints, and (iii) contextual bandits with delayed feedback. Studies in (i) typically focus on two arrival settings: stochastic and adversarial. In the stochastic setting, online algorithms either rely on the forecasted arrival pattern using historical data or assume a stochastic arrival pattern (Goel and Mehta, 2008; Karande et al., 2011; Mahdian and Yan, 2011; Feldman et al., 2009; Jaillet and Lu, 2014; Devanur et al., 2019), while in the adversarial setting, algorithms are robust to possible changes in the arrival pattern (Mehta et al., 2007; Buchbinder et al., 2007; Aggarwal et al., 2011; Keyvanshokooh et al., 2021; Devanur and Jain, 2012). The online primal-dual mechanism is one class of algorithms that use the dual program to guide the decisions of the online algorithm (Buchbinder et al., 2009). In our work, we introduce a new class of online primal-dual mechanisms with a learning component and incorporate it as an online allocation sub-routine in our proposed framework. We evaluate its performance using the _competitive ratio_, which compares the relative performance of our online sub-routine to a clairvoyant policy on the worst-case input instance.
Studies in (ii) usually assume that each action consumes a certain amount of resources. Previous works have proposed online algorithms for standard MAB (Agrawal and Devanur, 2014; Badanidiyuru et al., 2018; Ferreira et al., 2018) and contextual bandit (Badanidiyuru et al., 2014; Agrawal et al., 2016; Agrawal and Devanur, 2016; Wu et al., 2015; Pacchiano et al., 2021) under resource constraints. In contrast, in our algorithm, the recommender is required to take an action at each time step regardless of whether it observes the current context. A few works formulate contextual bandit with resource constraints by integrating bandit algorithms with online optimization algorithms (Cheung et al., 2022; Zhaelchian et al., 2022). In comparison, our work incorporates a learning component into the online allocation mechanisms while learning only happens in the bandit part of their algorithms. This additional level of learning helps our algorithms achieve a better performance.
Studies in (iii) include both delayed feedback in MAB (Joulani et al., 2013; Bistritz et al., 2019; Pike-Burke et al., 2018) and contextual bandit (Zhou et al., 2019; Vernade et al., 2020). In the case of _bounded_ delays, Zhou et al. (2019) derived a regret bound of \(\widehat{\mathcal{O}}((d+\sqrt{dD_{\max}})\sqrt{T})\), where \(D_{\max}\) is an upper bound on delays and \(d\) is the context dimension. In the case of _stochastic_ delays, Zhou et al. (2019) established a regret bound of \(\widehat{\mathcal{O}}((\sqrt{d\mu}+\sqrt{d\sigma}+d)\sqrt{T})\), where \(\mu\) and \(\sigma\) are the delay mean and a parameter to characterize the tail of delay, respectively, and Vernade et al. (2020) provide a regret bound of \(\widehat{\mathcal{O}}(1/\tau_{m}\sqrt{dT})\), where \(\tau_{m}=\mathbb{P}(D_{1}\leq m)\), \(D_{1}\) is the reward delay of the first action, and \(m\) is upper bound on delay. In terms of the lower bounds, Dani et al. (2008) and Abbasi-Yadkori et al. (2011) developed UCB-based algorithms with \(\widehat{\mathcal{O}}(d\sqrt{T})\) worst-case regret bound, where \(d\) is the context dimension, and this is minimax optimal up to logarithmic factors, as proved by Dani et al. (2008) for the infinite number of arms. Chu et al. (2011) derived a lower bound \(\Omega(\sqrt{dT})\) for the finite number of arms. The order of our regret bound is tight (optimal) up to a logarithmic factor compared to the lower bound of Chu et al. (2011). Also, the delayed feedback only impacts our regret bound by an _additive_ factor of \(\sum_{t=1}^{T}\beta_{t}\), which is of order \(\mathcal{O}(\sqrt{T})\). Therefore, unlike Zhou et al. (2019) and Vernade et al. (2020), our theoretical result removes the dependency of the delayed feedback effect on the context dimension \(d\).
## 2 Problem Formulation
We start with the worst-case setting where the recommender _never_ observes any additional information _unless_ the revealer takes an action at each time step. When patients occasionally take pro-treatment actions on their own, we expect the relative performance of our algorithm (with respect to the benchmark algorithms in § 6) to stay the same. In this section, we first introduce the contextual bandit problem and then discuss the setup of each agent. Lastly, we provide an overview of our proposed framework.
**Linear Contextual Bandit** Let \(\mathcal{S}=\{1,...,K\}\) denote the set of contexts. Given time horizon \(T\), at each time \(t\in[T]\), a context \(S_{t}\) arrives. We assume that the contexts are drawn i.i.d. from a known distribution \(\mathbf{p}^{*}\), where \(\mathbf{p}^{*}_{k}:=\mathbb{P}(S_{t}=k)\). (See § 5, § 6, and Appx E for the situation where \(\mathbf{p}^{*}\) is unknown.) However, the ordering of the realized contexts can be _adversarially_ chosen, that is, the adversary can choose the ordering in which the contexts appear.1
Footnote 1: As seen in § 3, the performance of our proposed online algorithm depends on the context arrival _sequence_.
Let \(\mathcal{A}\) denote the set of discrete actions that can be taken by the recommender. The reward \(X_{t}\) under context \(S_{t}\) and action \(A_{t}\in\mathcal{A}\) is generated according to \(X_{t}=\langle\theta_{*},\phi(S_{t},A_{t})\rangle+\eta_{t}\), where \(\theta_{*}\in\mathbb{R}^{d}\) is an _unknown_ true reward parameter, \(\phi:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}^{d}\) is a _known_ feature mapping, and the noise \(\eta_{t}\) is conditional mean-zero \(1\)-sub-Gaussian.
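A minimal simulation sketch of this reward model is given below; the dimensions, the random feature mapping, and the Gaussian noise scale are illustrative assumptions rather than quantities from the paper.

```python
# Sketch of the linear contextual bandit reward model X_t = <theta*, phi(S_t, A_t)> + eta_t.
import numpy as np

rng = np.random.default_rng(0)
K, n_actions, d = 10, 5, 8                    # assumed sizes of S, A, and the feature dimension

theta_star = rng.uniform(0, 1, size=d)        # unknown true reward parameter
phi = rng.normal(size=(K, n_actions, d))      # assumed known feature mapping phi(s, a)
p_star = rng.uniform(0, 1, size=K)
p_star /= p_star.sum()                        # context distribution p*

s_t = rng.choice(K, p=p_star)                 # context drawn i.i.d. from p*
a_t = rng.integers(n_actions)                 # some action chosen by the recommender
x_t = phi[s_t, a_t] @ theta_star + rng.normal(scale=0.1)  # observed noisy reward
print(s_t, a_t, x_t)
```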
**Recommender** For a given patient, when the recommender has access to the context \(S_{t}\), it takes action according to a contextual bandit algorithm. When the recommender does _not_ observe the current context \(S_{t}\), it takes actions by treating the bandit problem as a multi-armed bandit problem, where the expected reward of each action is now weighted by the context distribution. We elaborate on this structure in § 3 and describe our UCB-based bandit algorithms in § 4 (Alg 2). We note that this problem structure does _not_ affect the reward-generating process, but rather it affects whether the expected reward averages out over contexts or not.
**Revealer** The revealer is given a budget of \(B\) information reveals to the recommender throughout the horizon \(T\). We assume that \(B>2|\mathcal{A}|\) for technical ease. At each time, the binary decision variable for the revealer is \(O_{t}\in\{0,1\}\). Consider the general case where the revealer only observes part of the context. Namely, we can partition each state into two components: \(S_{t}=[S_{t}^{1},S_{t}^{2}]\). Let \(S_{t}^{1}\) be the part of the state that is _always_ observed by the revealer at each time \(t\), and let \(S_{t}^{2}\) be the part of the state that can _only_ be observed when the revealer takes the action \(O_{t}=1\). Let \(\ell(t)\) be the time of the last reveal. At each decision time \(t\), the revealer observes the history \(\mathcal{H}_{t}^{\text{vl}}=\{A_{1},...,A_{t-1},X_{1},...,X_{t-1},O_{1},...,O_{t-1},S_{1},...,S_{\ell(t)},S_{\ell(t)+1}^{1},...,S_{t}^{1}\}\), and _decides_ the revealing probability \(o_{t}\); then, \(O_{t}\sim\text{Bernoulli}(o_{t})\). If \(O_{t}=1\), then the revealer additionally observes \(\{S_{\ell(t)+1}^{2},...,S_{t}^{2}\}\) and the recommender observes \(\mathcal{H}_{t}^{\text{cd}}=\{A_{1},...,A_{t-1},X_{1},...,X_{t-1},O_{1},...,O_{t-1},S_{1},...,S_{t}\}\). Otherwise, the recommender observes the history up to time \(\ell(t)\), \(\mathcal{H}_{\ell(t)}^{\text{cd}}\). For ease of exposition, we will focus on the special case where the revealer observes the entire context at time \(t\), i.e., \(S_{t}^{1}=S_{t}\), from now on. Further discussion on how our theoretical guarantees apply to the above-mentioned general setting is included in Appendix A.
**Framework Overview and Regret Decomposition** Given that the number of actions that the revealer can take is limited, our objective is to develop a data-driven optimization and learning framework that can 1) decide the optimal timing for the revealer to reveal, and 2) learn the optimal treatment for the recommender. We achieve the former by designing an online primal-dual algorithm with a novel learning constraint (Alg 1) and achieve the latter by applying the UCB algorithm (Alg 2), which uses the online primal-dual algorithm (Alg 1) as a subroutine.
There are two main sources of uncertainty in this problem: (1) the unknown reward parameter, \(\theta_{*}\), and (2) the _sequence of future context arrivals_, \(\{s_{t},...,s_{T}\}\). In § 5, we discuss the case where the context distribution \(\mathbf{p}^{*}\) is unknown. We benchmark the performance of our algorithm using an _offline clairvoyant_ benchmark, in which both the revealer and the recommender know the reward parameter \(\theta_{*}\), and additionally the revealer knows the entire context arrival sequence \(\{s_{1},...,s_{T}\}\). Note that _no_ algorithm can ever achieve this performance since in practice the future contexts are unknown.
We introduce a novel general regret analysis to evaluate the theoretical performance of our algorithm with respect to the clairvoyant problem. Our analysis allows for the seamless integration of a _competitive ratio_ bound for bounding the sub-optimality gap of the revealer and a _regret_ bound of the recommender. Such integration necessitates 1) defining an _auxiliary problem_ and 2) using a _bridging argument_. In the auxiliary problem, the unknown model parameter (\(\theta_{*}\)) is known, but the _future_ context arrival sequence is _unknown_. Specifically, the auxiliary problem is the _online_ version of the clairvoyant problem where the contexts arrive online one by one.2
Footnote 2: We note that this terminology also has been used in the existing literature (see § 3.2 of Cheung et al. (2022)).
Let \(V^{\text{Auxiliary}}\) and \(V^{\text{ALG}}\) be the respective value functions of Algorithm 2 when \(\theta_{*}\) is known and unknown, and let \(V^{\text{Clairvoyant}}\) be the value function of the clairvoyant problem; see § 3 and § 4 for formal definitions. We decompose the regret by establishing the following bridging argument:
\[\mathrm{Regret}_{T}\leq\mathbb{E}\left[V^{\mathrm{Clairvoyant}}\right]-\mathbb{E} \left[V^{\mathrm{ALG}}\right]=\underbrace{\mathbb{E}\left[V^{\mathrm{Auxiliary}}-V^{ \mathrm{ALG}}\right]}_{\text{Bandit Learning Loss}}+\underbrace{\mathbb{E}\left[V^{ \mathrm{Clairvoyant}}-V^{\mathrm{Auxiliary}}\right]}_{\text{Information Reveal Loss}},\]
where expectations are over the stochasticity in the algorithms. In the above decomposition, the first expression presents the loss due to contextual bandit learning, i.e., learning the unknown reward, and the second presents the loss due to the optimality gap of the information revealing mechanism for solving the auxiliary problem.
## 3 Bounding Information Reveal Loss
In this section, we first formally introduce the clairvoyant problem. Then, by developing an online primal-dual approach for solving the clairvoyant problem in an online fashion, we provide a feasible solution to the auxiliary problem. Finally, we provide an upper bound on the information reveal loss.
**Clairvoyant Problem** Recall that in the clairvoyant problem, both the revealer and recommender know \(\theta_{*}\), and the revealer additionally knows the entire context sequence \(\{s_{1},...,s_{T}\}\). The optimal strategy for the recommender is to take the optimal action corresponding to context \(s_{t}\) when the revealer _reveals_ the history \(\mathcal{H}_{t}^{\text{vl}}\) to the recommender at time \(t\) (i.e., \(O_{t}=1\)), and to take the action with the highest expected reward (where the expectation is taken over the context distribution) when the revealer decides _not to reveal_ at time \(t\) (i.e., \(O_{t}=0\)). Let \(u_{s_{t}}^{*}=\max_{a\in\mathcal{A}}\left\langle\theta_{*},\phi(s_{t},a)\right\rangle\), and \(v^{*}=\max_{a\in\mathcal{A}}\langle\theta_{*},\bar{\phi}(a)\rangle\), where \(\bar{\phi}(a)\) is the weighted feature mapping, i.e., \(\bar{\phi}(a)=\sum_{k=1}^{K}\phi(k,a)\mathbf{p}_{k}^{*}\).
Using this optimal strategy for the recommender, a natural objective of the revealer is to maximize the expected reward collected throughout the horizon: \(\max_{o_{t}}\sum_{t=1}^{T}o_{t}\cdot u_{s_{t}}^{*}+(1-o_{t})\cdot v^{*}\). By removing the constant \(v^{*}\), we obtain the following formulation for the clairvoyant problem:
\[\left\{\max_{o_{t}}\sum_{t=1}^{T}o_{t}\cdot u_{s_{t}}^{*}-o_{t}\cdot v^{*}: \sum_{t=1}^{T}o_{t}\leq B,o_{t}\in[0,1],\ \forall t\in[T].\right\}\] ( _Clairvoyant_ )
The optimal policy of the revealer in (_Clairvoyant_) is first to select the contexts that yield more reward than \(v^{*}\), i.e., with positive \(u_{s_{t}}^{*}-v^{*}\), and second, to set \(o_{t}=1\) for the top \(B\) contexts that have the highest expected reward, \(u_{s_{t}}^{*}\).3 Thus, \(V^{\mathrm{Clairvoyant}}=\max_{a\in\mathcal{A}}\sum_{t=1}^{T}(o_{t}\left\langle\theta_{*},\phi(s_{t},a)\right\rangle+(1-o_{t})\left\langle\theta_{*},\bar{\phi}(a)\right\rangle),\) where the sequence of \(\{o_{t}\}_{t\in[T]}\) is the solution to (_Clairvoyant_). We note that _without_ knowledge of current context \(S_{t}\), each decision point becomes identical to the revealer, resulting in a trivial optimal solution for the revealer - randomly selecting \(B\) times to reveal \(\mathcal{H}_{t}^{\text{vl}}\).4
Footnote 3: If there are not enough contexts \(s_{t}\)’s with a positive \(u_{s_{t}}^{*}-v^{*}\), then we do not use the entire budget \(B\). We note that our objective function is suitable for the low-budget regime, i.e., when \(B\) is less than or equal to the number of contexts with positive \(u_{s_{t}}^{*}-v^{*}\). For larger \(B\)’s, one could remove \(-o_{t}v^{*}\) from the objective function, and the rest of the result still holds.
Footnote 4: An alternative objective function in this problem is \(\max_{o_{t}}\sum_{t=1}^{T}o_{t}u_{s_{t}}^{*}\). Indeed, when the budget is low, these two objectives yield the same optimal solution to the clairvoyant problem. However, as we will see in our online primal-dual algorithm (Alg 1), when we do not have access to the _future_ context arrival sequence, \(v^{*}\) serves as a regularization term for spending the budget \(B\) (by ignoring the contexts that yield negative \(u_{s_{t}}^{*}-v^{*}\)), yielding better algorithmic performance.
In (_Clairvoyant_), the optimal strategy for the revealer depends on the entire context arrival sequence, including the future arrivals \(\{s_{t+1},...,s_{T}\}\). While no algorithm in practice can achieve this performance, (_Clairvoyant_) has two critical advantages. First, it provides an _upper bound_ for the optimal solution to the oracle problem: the oracle problem can be viewed as (_Clairvoyant_) with the additional constraint that the context sequence is observed up to time \(t\), \(\{s_{1},...,s_{t}\}\). Second, it naturally provides insight into how to incorporate online primal-dual mechanisms.
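The clairvoyant revealer's policy described above is straightforward to compute once the full arrival sequence is known; the snippet below is a small illustrative sketch under assumed values of \(u^{*}_{s}\) and \(v^{*}\), not an implementation from the paper.

```python
# Sketch of the clairvoyant revealer: among arrivals with positive u*_{s_t} - v*,
# reveal at the B time steps with the largest expected reward u*_{s_t}.
import numpy as np

rng = np.random.default_rng(1)
T, B, K = 20, 5, 10
u_star = rng.uniform(0, 1, size=K)      # assumed per-context optimal rewards u*_s
v_star = float(u_star.mean())           # assumed value of the best context-averaged action v*
arrivals = rng.integers(0, K, size=T)   # the full context sequence s_1, ..., s_T

gains = np.maximum(u_star[arrivals] - v_star, 0.0)
top = np.argsort(-gains)[:B]                          # candidate reveal times
reveal_times = [int(t) for t in top if gains[t] > 0]  # only reveal when the gain is positive
o = np.zeros(T)
o[reveal_times] = 1.0
value = np.sum(o * u_star[arrivals] + (1 - o) * v_star)
print("reveal at t =", sorted(reveal_times), "collected reward =", round(value, 3))
```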
**Auxiliary Problem** The _auxiliary problem_ is an _online_ version of (_Clairvoyant_), where the contexts arrive sequentially, and the context-reveal decisions should be made to hedge against the adversarial context arrival sequence in the future. In the auxiliary problem, both agents know \(\theta_{*}\), and neither has access to the future context arrival sequence (which might as well be _adversarial_). We develop an _online primal-dual_ algorithm (Alg 1) to provide a feasible solution for the revealer in the auxiliary problem. We rigorously analyze it using _competitive ratio analysis_.
Intuitively, the probability \(o_{t}\) of revealing at each time step should depend on both (1) the budget that we have spent so far and (2) the rate at which we learn the reward and context distributions. However, as it currently stands,
the dual of (_Clairvoyant_) lacks a mechanism to connect the quality of the estimates that the _recommender_ has at time \(t\) to the revealing decision \(o_{t}\). Ideally, we would like \(o_{t}\) to increase as the time since the last reveal increases.
To solve this technical challenge, we next incorporate a novel **learning constraint**. We provide a road map for deriving this learning constraint, Constraint (1), and proving Proposition 1 in Appendix B. In Appendix B, we provide an algorithm that only takes the budget into account (Alg B.1), and then provide its theoretical guarantee (Prop B.1).
The online primal-dual algorithm that we develop in this section serves as a subroutine in our bandit learning algorithm. The bandit algorithm provides estimates of \(u^{*}_{s_{t}}\) and \(v^{*}\) to the online primal-dual algorithm. We make the following critical observation: since at each time \(t\) the revealer has access to both \(\mathcal{H}^{\text{vl}}_{t}\) and \(\mathcal{H}^{\text{cd}}_{t}\), it can calculate both the recommender's optimal action, \(\hat{a}_{t}\), if the revealer were _not_ to reveal \(\mathcal{H}^{\text{vl}}_{t}\) (\(O_{t}=0\)), and the optimal action, \(\tilde{a}_{t}\), when \(\mathcal{H}^{\text{vl}}_{t}\) is revealed (\(O_{t}=1\)). We describe this calculation in detail in § 4. Next, we introduce a constraint that forces the revealing probability to increase when the estimated optimal treatment differs between the two agents, i.e., \(\hat{a}_{t}\neq\tilde{a}_{t}\), _and_ the distance between the weighted feature mappings, \(\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\|_{2}\) (recall \(\bar{\phi}(a)=\sum_{k=1}^{K}\phi(k,a)\mathbf{p}^{*}_{k}\)), is large:
\[\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\mathbb{1 }\left(\hat{a}_{t}\neq\tilde{a}_{t}\right)(1-o_{t})\leq\beta_{t}(\mathcal{H}^{ \text{vl}}_{t},\mathcal{H}^{\text{cd}}_{t}),\ \forall t\in[T], \tag{1}\]
where \(\beta_{1}(\mathcal{H}^{\text{vl}}_{1},\mathcal{H}^{\text{cd}}_{1}),...,\beta _{T}(\mathcal{H}^{\text{vl}}_{T},\mathcal{H}^{\text{cd}}_{T})\) is a sequence of positive constants that can be _initialized adaptively_ by the expert and _auto-adjusted_ by our algorithm using the histories, to guarantee the feasibility of (_Clairvoyant_) with the above constraint. To ease notation, we abbreviate the \(\beta\)'s using \(\beta_{1},...,\beta_{T}\) from now on. Appendix C.1 includes the updated primal problem. The resulting dual:
\[\min_{y,z_{t},e_{t}}\quad By+\sum_{t=1}^{T}z_{t}+\sum_{t=1}^{T}\left(\beta_{t}-\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\mathbb{1}\left(\hat{a}_{t}\neq\tilde{a}_{t}\right)\right)e_{t}\]
\[\text{s.t.}\quad y+z_{t}-\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\mathbb{1}\left(\hat{a}_{t}\neq\tilde{a}_{t}\right)e_{t}\geq u^{*}_{s_{t}}-v^{*},\qquad\forall t\in[T],\]
\[y,z_{t},e_{t}\geq 0,\qquad\forall t\in[T],\] (_Modified Clairvoyant Dual_)
where \(y\), \(z_{t}\)'s, and \(e_{t}\)'s are the dual variables. At the margin, \(y\delta\) corresponds to how the value of the optimal solution to the primal changes if we were to change the budget \(B\) by \(\delta\), \(z_{t}\) is the marginal value for revealing information at time step \(t\), and \(e_{t}\) is the minimum value that we need to increase \(o_{t}\) to satisfy Constraint (1). We note that in the above dual problem, we have a separate constraint for each \(z_{t}\) and \(e_{t}\). Thus, when we do not know the context arrival sequence ahead of time (as in the clairvoyant problem), the constraints in the dual are arriving one-by-one.
Let \(u_{\max}=\max_{s\in\mathcal{S}}u^{*}_{s}\) and \(u_{\min}=\min_{s\in\mathcal{S}}u^{*}_{s}\). Let \(\eta_{\min}\) be the smallest positive difference between \(u^{*}_{s_{t}}\) and \(v^{*}\), i.e., \(\eta_{\min}=\min_{s\in\mathcal{S}}\max(u^{*}_{s}-v^{*},0)\). We assume that an _upper bound_ on \(u_{\max}\) and a _lower bound_ on \(u_{\min}\) are known to the algorithm by applying domain knowledge. _Without loss of generality_ (WLOG), we assume that \(0\leq u^{*}_{s_{t}}-v^{*}\leq 1\) for all \(s_{t}\in\mathcal{S}\). Otherwise, we could scale \(u^{*}_{s_{t}}-v^{*}\) by \(u_{\max}\) and \(u_{\min}\) for all \(s_{t}\in\mathcal{S}\). We outline the online primal-dual algorithm in Algorithm 1. This algorithm provides a feasible solution to the auxiliary problem, and only depends on the history it has observed so far. Define \(V^{\text{Auxiliary}}=\max_{a\in\mathcal{A}}\sum_{t=1}^{T}\left(o_{t}\left\langle\theta_{*},\phi(s_{t},a)\right\rangle+(1-o_{t})\left\langle\theta_{*},\bar{\phi}(a)\right\rangle\right),\) where the sequence of \(\{o_{t}\}_{t\in[T]}\) is chosen according to Algorithm 1, and let \(V^{\text{Modified Clairvoyant}}=\max_{a\in\mathcal{A}}\sum_{t=1}^{T}(o_{t}\left\langle\theta_{*},\phi(s_{t},a)\right\rangle+(1-o_{t})\left\langle\theta_{*},\bar{\phi}(a)\right\rangle),\) where the sequence of \(\{o_{t}\}_{t\in[T]}\) is the solution to (_Modified Clairvoyant_) in Appendix C.1. We first show the following result:
**Proposition 1**.: _For any \(u^{*}_{s_{t}}\), \(v^{*}\), and context arrival sequence, \(V^{\text{Auxiliary}}\) is at least \(\eta_{\min}(1-1/c)\) times \(V^{\text{Modified Clairvoyant}}\)._
The proof of Proposition 1 (Appx C.2) consists of proving that Algorithm 1 is (1) both primal and dual feasible, and (2) that in each iteration, the ratio between the change in the primal and dual objective functions is bounded by \(\eta_{\min}(1-1/c)\). By weak duality, this implies that Algorithm 1 is \(\eta_{\min}(1-1/c)\)-competitive. Moreover, note that when the ratio \(\tau=1/B\) tends to zero, the competitive ratio of the algorithm tends to the best-possible competitive ratio of \((1-1/e)\) (Buchbinder et al., 2007). In addition, we note that as \(t\) increases, Algorithm 1 increasingly favors revealing information when the expected difference in rewards, \(u^{*}_{s_{t}}-v^{*}\), is large. Thus, even though \(\eta_{\min}\) appears in the competitive ratio, the performance of our algorithm is much better in practice. We illustrate this in § 6.
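For intuition on the budget-dependent factor, the short calculation below (a sketch using the definition \(c=(1+1/B)^{B}\) from Algorithm 1) shows how quickly \(1-1/c\) approaches \(1-1/e\) as the budget grows.

```python
# How the budget-dependent factor 1 - 1/c, with c = (1 + 1/B)^B, approaches 1 - 1/e.
import math

for B in [2, 5, 10, 50, 200]:
    c = (1 + 1 / B) ** B
    print(f"B={B:4d}  1-1/c={1 - 1 / c:.4f}")
print(f"limit 1-1/e={1 - 1 / math.e:.4f}")
```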
Moreover, Algorithm 1 contains two competing constraints for deciding \(o_{t}\), and the key technical challenge in designing our algorithm is to ensure primal feasibility. Some key observations include: 1) to avoid negative competitive ratios, we increase \(o_{t}\) only if \(u^{*}_{s_{t}}-v^{*}\) is positive; 2) when running out of budget, i.e., \(u^{*}_{s_{t}}-v^{*}-y\leq 0\), we increase \(\beta_{t}\) such that the second constraint is always satisfied; 3) when \(u^{*}_{s_{t}}-v^{*}<y\) and \(e_{t}\) is not high enough to make \(o_{t}\) positive, we increase \(\beta_{t}\) such that the second constraint is satisfied.
To complete the regret decomposition, we show the following corollary (proof in Appx C.3) holds:
**Corollary 1**.: _For any \(u_{s_{t}}^{*},v^{*}\), and context arrival sequence, \(V^{\text{Auxiliary}}\geq\eta_{\min}(1-1/c)V^{\text{Clairvoyant}}\)._
## 4 Bounding Bandit Learning Loss
Recall from § 2 that the reward under action \(A_{t}\) and context \(S_{t}\) is \(X_{t}(S_{t},A_{t})=\langle\theta_{*},\phi(S_{t},A_{t})\rangle O_{t}+\eta_{t},\) where \(\theta_{*}\) is the unknown reward parameter, and the noise \(\eta_{t}\) is conditional mean-zero 1-sub-Gaussian. For the purpose of proofs, we assume there exists a finite \(W\) and finite \(L\) for which \(\|\theta_{*}\|_{2}\leq W\) with Q-probability one and \(\max_{a\in\mathcal{A},s\in S}\|\phi(s,a)\|_{2}\leq L\) and \(\max_{a\in\mathcal{A},s\in S}\langle\phi(s,a),\theta_{*}\rangle\leq 1\) with Q-probability one. Let \(x_{t}\) be the observed reward at time \(t\).
The objective of this section is to learn the unknown parameter \(\theta_{*}\) while making _a limited number of_ context-revealing decisions. We propose Algorithm 2, an online learning and optimization algorithm that strikes a two-way balance between (i) the exploration-exploitation dilemma for learning the unknown reward, and (ii) hedging against an adversarially chosen context arrival sequence.
```
Input: \(B\), \(\{u_{s}^{*}\}_{s\in\mathcal{S}}\), \(v^{*}\), \(c=(1+1/B)^{B}\), \(\{\bar{\phi}(a)=\sum_{k=1}^{K}\phi(k,a)\mathbf{p}_{k}^{*}\}_{a\in\mathcal{A}}\).
Initialize: \(y\gets 0\), \(\{\beta_{t}\}_{t\in[T]}\), \(e_{t}\gets 0,\ \forall t\in[T]\).
for \(t=1,\ldots,T\) do
  The new context \(s_{t}\) arrives, and we observe \(\tilde{a}_{t},\hat{a}_{t}\).
  if \(\tilde{a}_{t}\neq\hat{a}_{t}\) and \(\beta_{t}<\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\) and \(u_{s_{t}}^{*}>v^{*}\) then
    \(e_{t}=\frac{1}{\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}}-\frac{\beta_{t}}{\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}^{2}}\)
    if \(u_{s_{t}}^{*}-v^{*}-y\leq 0\) then
      \(\beta_{t}=\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\), \(e_{t}=0\).
    end if
  else
    \(\beta_{t}=\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\), \(e_{t}=0\).
  end if
  if \(y<1\) and \(u_{s_{t}}^{*}-v^{*}-y+\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\mathbb{1}(\hat{a}_{t}\neq\tilde{a}_{t})e_{t}>0\) then
    \(\beta_{t}=\max\left(\beta_{t},\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\left(1-B+\sum_{i=1}^{t-1}o_{i}\right)\right)\)
    \(z_{t}=u_{s_{t}}^{*}-v^{*}-y+\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\mathbb{1}(\hat{a}_{t}\neq\tilde{a}_{t})e_{t}\)
    \(o_{t}=\min\left(u_{s_{t}}^{*}-v^{*}+\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\mathbb{1}(\hat{a}_{t}\neq\tilde{a}_{t})e_{t},\ B-\sum_{i=1}^{t-1}o_{i},\ 1\right)\)
    \(y\gets y(1+o_{t}/B)+o_{t}/((c-1)\cdot B)\).
  else
    \(\beta_{t}=\left\|\bar{\phi}(\tilde{a}_{t})-\bar{\phi}(\hat{a}_{t})\right\|_{2}\), \(z_{t}=0\), \(o_{t}=0\).
  end if
end for
```
**Algorithm 1** Online Primal-Dual Algorithm Reveal with Learning Component
It consists of (i) a contextual UCB mechanism for learning the unknown \(\theta_{*}\) for making the treatment decisions, and (ii) the online primal-dual subroutine (Alg 1) for making context-revealing decisions. At each iteration \(t\), we maintain two uncertainty sets \(\tilde{C}_{t}\) and \(\hat{C}_{t}\) for the unknown \(\theta_{*}\), using histories \(\mathcal{H}_{t}^{\text{vl}}\) and \(\mathcal{H}_{t}^{\text{cd}}\), respectively, and the high-probability confidence bound that we derived in Proposition D.2. Upon observing a new context \(s_{t}\), the revealer finds _optimistic_ treatments and parameters \((\tilde{A}_{t},\tilde{\theta}_{t})\) and \((\hat{A}_{t},\hat{\theta}_{t})\) given \(\tilde{C}_{t}\) and \(\hat{C}_{t}\), respectively, and derives optimistic reward estimates \(\tilde{u}_{s_{t}}^{t}\) and \(\tilde{v}^{t}\). Given these values, the revealer deploys the online primal-dual subroutine (Alg 1) to decide the probability \(o_{t}\) of revealing \(\mathcal{H}_{t}^{\text{vl}}\) to the recommender. If \(O_{t}=1\), the recommender updates its uncertainty set, i.e., \(\hat{C}_{t}=\tilde{C}_{t}\), and takes the corresponding optimistic action \(\tilde{A}_{t}\). Otherwise, the recommender exploits its latest uncertainty set and chooses the latest treatment. At each iteration, we update \(\hat{C}_{t+1}\) after observing the new reward feedback \(X_{t}\) and context \(S_{t}\). Let \(\phi(s_{0},a_{0}^{*})=0\). We outline the detailed steps in Algorithm 2.
**Regret** To analyze the regret of Algorithm 2, we first develop a high-probability confidence bound on the regularized least-squares estimator of \(\theta_{*}\) for the revealer at time \(t\) (Prop D.2). We note that because of Constraint (1), the bandit learning loss relies _only_ on the concentration of \(\tilde{C}_{t}\). We then bound the bandit learning loss (\(\mathrm{BLL}_{T}\)) associated with learning the unknown reward (Prop 2). Leveraging our bridging argument (§ 2), we prove our main result on bounding the regret by combining the bandit learning loss (Prop 2) and the information reveal loss (Cor 1) in Theorem 1. Using the notation included in Appendix D and with \(\lambda=1/W^{2}\), we obtain (proof in Appx D.2):
**Proposition 2**.: _With probability \(1-\delta\), the bandit learning loss of Algorithm 2 is bounded by:_
\[\mathrm{BLL}_{T} =\mathbb{E}\left[\sum_{t=1}^{T}\left\langle\phi(S_{t},A_{t}^{*})- \phi(S_{t},\tilde{A}_{t}^{\prime}),\theta_{*}\right\rangle O_{t}+\sum_{t=1}^{T} \left\langle\bar{\phi}(A^{*})-\bar{\phi}(\hat{A}_{t}),\theta_{*}\right\rangle(1 -O_{t})\right]\] \[\leq\sqrt{8Td\gamma^{2}\log\left(\frac{d+TW^{2}L^{2}}{d}\right)}+ W\sum_{t=1}^{T}\beta_{t},\]
_where \(A_{t}^{*}=\operatorname*{arg\,max}_{a\in\mathcal{A}}\left\langle\theta_{*}, \phi(s_{t},a)\right\rangle\), \(A^{*}=\operatorname*{arg\,max}_{a\in\mathcal{A}}\left\langle\theta_{*},\sum_{ k=1}^{K}\phi(S_{t}=k,a)\mathbf{p}_{k}\right\rangle\), \(\tilde{A}_{t}^{\prime}\) and \(\hat{A}_{t}\) are the respective Algorithm 2's treatments given the history is revealed or not to the recommender, and \(\gamma=1+\sqrt{2\log\left(\frac{1}{\delta}\right)+d\log(1+\frac{TW^{2}L^{2}}{d })}\), \(\beta_{t}=\mathcal{O}(1/(\sqrt{t}\log(B)))\)._
We note that for every budget level \(B\) and horizon length \(T\), there exists some constant \(\alpha\) such that with \(\beta_{T}=\alpha/(\sqrt{T}\log(B))\), Constraint (1) is satisfied at time \(T\) without increasing \(\beta_{T}\). Thus, with suitable choices of \(\beta_{t}\)'s, we achieve a sub-linear regret (see a more detailed discussion in Remark D.1). It is worth noting that the sublinearity of the regret is not the most crucial aspect of this problem. While it is possible to achieve sublinear regret by setting sufficiently large \(\beta_{t}\)'s, this approach may result in a high constant in the regret. Therefore, in § 6, we conduct a numerical analysis to evaluate the performance of our algorithm in comparison to two benchmarks, demonstrating the superior performance of our algorithm.
Recall that we have \(\mathrm{BLL}_{T}=\mathbb{E}[V^{\text{Auxiliary}}-V^{\text{ALG}}]\), and also \(V^{\text{ALG}}=\sum_{t=1}^{T}(\langle\phi(S_{t},\tilde{A}_{t}^{\prime}),\theta_{*}\rangle O_{t}+\langle\bar{\phi}(\hat{A}_{t}),\theta_{*}\rangle(1-O_{t}))\), where the sequence of \(\{O_{t}\}_{t\in[T]}\) is chosen according to Algorithm 2. Finally, the regret of Algorithm 2 (proof in Appx D.3) is then bounded by combining the results of Corollary 1 and Proposition 2. With \(\gamma\) and \(\beta_{t}\) defined above, and \(\alpha=\left(1+\frac{1}{c-1}\right)\frac{1}{\eta_{\text{min}}}\), we have our main result:
**Theorem 1**.: _With probability \(1-\delta\), the regret of Algorithm 2 is bounded as follows:_
\[\mathrm{Regret}_{T}\leq\sqrt{8Td\gamma^{2}\log\left(\frac{d+TW^{2}L^{2}}{d} \right)}+W\sum_{t=1}^{T}\beta_{t}+(1-\alpha)\,V^{\text{Clairvupant}}.\]
## 5 Extension to Unknown Context Distribution
We next extend our online learning setting to the case where we also learn the unknown context distribution, \(\mathbf{p}^{*}\), in Algorithm E.2. To derive the regret, we leverage the _empirical Bernstein inequality_ to build a high-probability confidence bound \(\tilde{P}_{t}\) for the latent context distribution (see Lemma E.2). Let \(\tilde{\mathbf{p}}_{k}^{*}\) be the empirical average estimate of \(\mathbf{p}_{k}^{*}\). With \(\gamma\) and \(\beta_{t}\) defined above, and \(\zeta_{t}=\sqrt{\frac{2\,\tilde{\mathbf{p}}_{k}^{*}\left(1-\tilde{\mathbf{p}}_{k}^{*}\right)\,\log\left(2\delta\mathcal{I}\right)}{\max\{m(k,t),1\}}}+\frac{7\log\left(2\delta\mathcal{I}\right)}{3(\max\{m(k,t)-1,1\})}\), we bound the bandit learning loss of Algorithm E.2 as follows:
**Proposition 3**.: _With probability \(1-3\delta\), the bandit learning loss in Algorithm E.2 is bounded by:_
\[\mathrm{BLL}_{T} =\mathbb{E}\left[\sum_{t=1}^{T}\left\langle\phi(S_{t},A_{t}^{*}) -\phi(S_{t},\tilde{A}_{t}^{\prime}),\theta_{*}\right\rangle O_{t}+\sum_{t=1}^ {T}\left\langle\bar{\phi}(A^{*})-\bar{\phi}(\hat{A}_{t}),\theta_{*}\right\rangle (1-O_{t})\right]\] \[\leq\sqrt{8Td\gamma^{2}\log\left(\frac{d+TW^{2}L^{2}}{d}\right)} +W\sum_{t=1}^{T}\beta_{t}+(WL+1)\sum_{t=1}^{T}\sum_{k=1}^{K}\zeta_{t},\]
_where \(\tilde{A}_{t}^{\prime}\) and \(\hat{A}_{t}\) are the respective Algorithm E.2's treatments given the history is revealed or not to the recommender._
The proof of Proposition 3 is included in Appendix E. Algorithm E.2 is computationally expensive since, at each step, it involves optimizing over two convex uncertainty sets when calculating the optimal action. While this step can be solved using an existing bilinear optimization solver, the objective function in our problem is neither convex nor concave, making the problem NP-hard. Instead, we plug in the empirical mean estimate of \(\mathbf{p}^{*}\) in Algorithm E.2, and numerically evaluate its performance in Appendix F.
## 6 Experiments
In this section, we conduct experiments on both synthetic and real-world datasets to demonstrate the effectiveness of the proposed algorithm in minimizing regret. To show the benefit of adding the novel learning constraint, we compare two variants of the proposed algorithm: 1) PD1-UCB (UCB with primal-dual without the learning constraint, i.e., replacing the subroutine in Alg 2 with Alg B.1) and 2) PD2-UCB (Alg 2). We benchmark our algorithms with a naive UCB approach that reveals contexts with a fixed probability of \(B/T\) (naive-UCB). The experiments are repeated \(50\) times, and the cumulative regret, revealing probability, and competitive ratio are averaged and presented.
**Synthetic Experiments Setup.** We consider a linear contextual bandit setting with \(10\) discrete one-dimensional contexts, i.e., \(|\mathcal{S}|=10\). For each context \(S_{k}\), we sample it according to \(\mathbf{p}_{k}\), which is drawn from a uniform distribution \(U(0,1)\) and normalized by \(\sum_{k}\mathbf{p}_{k}\). For each _instance_, every coordinate of the true reward parameter \(\theta_{*}\in\mathbb{R}^{d}\) is sampled from \(U(0,1)\). The reward for a selected action \(A_{t}\) in each instance is then generated by \(X_{t}=\langle\theta_{*},\phi(S_{t},A_{t})\rangle+\eta_{t}\), where \(\phi(S_{t},A_{t})\) includes a one-hot vector (of length \(|\mathcal{A}|-1\)) denoting action \(A_{t}\), a variable denoting context \(S_{t}\), and \(|\mathcal{A}|-1\) interaction terms. The noise \(\eta_{t}\) is sampled from \(N(0,\sigma^{2})\) with \(\sigma=0.1\). We set the number of actions to be \(|\mathcal{A}|=5\), and the length of the time horizon to be \(T=300\). At each step, \(\tilde{u}_{s_{t}}^{t}\)'s and \(\tilde{v}^{t}\)'s are normalized using \(u_{\max}\) and \(u_{\min}\). Throughout this section, we choose \(\beta_{t}=1.2\Delta_{\min}\log(10)\sqrt{10}/(\sqrt{t}\log(B))\), where \(\Delta_{\min}:=\min_{k\in[K],a\in\mathcal{A},a^{\prime}\in\mathcal{A}\setminus \{a\}}\|\phi(k,a)-\phi(k,a^{\prime})\|\).
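The data-generating process just described can be summarized by the following sketch; the helper names are ours, and details such as mapping the discrete contexts to scalar values are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_contexts, n_actions, T, sigma = 10, 5, 300, 0.1

# Context distribution p*: uniform draws normalized to sum to one.
p_star = rng.uniform(0.0, 1.0, n_contexts)
p_star /= p_star.sum()

def features(s, a):
    """phi(s, a): one-hot action (length |A|-1), the context value,
    and |A|-1 context-action interaction terms."""
    one_hot = np.zeros(n_actions - 1)
    if a > 0:
        one_hot[a - 1] = 1.0
    return np.concatenate([one_hot, [s], s * one_hot])

d = features(0.0, 0).size
theta_star = rng.uniform(0.0, 1.0, d)      # one problem instance

# One trajectory of contexts and noisy rewards for an arbitrary policy.
contexts = rng.choice(n_contexts, size=T, p=p_star) / n_contexts
actions = rng.integers(0, n_actions, size=T)
rewards = np.array([features(s, a) @ theta_star + rng.normal(0.0, sigma)
                    for s, a in zip(contexts, actions)])
```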
**Competitive Ratio.** We first numerically inspect the empirical competitive ratio of PD1-UCB and PD2-UCB under one instance in Table 1, calculated by \(\mathbb{E}[V^{\text{auxiliary}}]/V^{\text{Clairvoyant}}\). For PD1-UCB, we replace the sequence of \(\{O_{t}\}_{t\in[T]}\) in \(V^{\text{auxiliary}}\) by that chosen according to Algorithm B.1. The competitive ratios are computed for the ground truth \(\theta_{*}\), averaged over 200 context arrival sequences. We observe that both PD1-UCB and PD2-UCB have a competitive ratio that is higher than \(1-1/e\), as stated in Corollary 1. In addition, PD1-UCB has a slightly higher average competitive ratio than PD2-UCB, as expected, since at each step PD2-UCB tends to increase \(o_{t}\) to satisfy Constraint (1).
**Regret under Known \(\mathbf{p}^{*}\).** In Figure 1, we present the cumulative regret when \(B=10,20,\) and \(30\), respectively. We observe that PD2-UCB outperforms Naive-UCB and PD1-UCB 1) almost instance-wise (the dots above the 90 degree line in Fig 1, top right, are most likely due to noise), and 2) by large margins on many instances (\(\theta_{*}\) values). In addition, the benefit of our algorithm is greatest when the budget is low. We note that the regrets are in general increasing with respect to the budget, since the optimal strategy for the clairvoyant changes with respect to \(B\). We include additional scatter plots for \(B=20\) and \(30\) in Figures F.1 and F.2.
**Additional Experiments.** We include additional experiments where \(\mathbf{p}^{*}\) is unknown in Figures F.3, F.4, and F.5, and observe similar results. In addition, we test the performance of our algorithm on a real-world dataset, ROBAS 3, which involves brushing data from a clinical study (Trella et al., 2022). Due to the space limitation, we include the details of the experiments in Appendix G.
theory that holds promise for advancing digital health interventions. Potential future extensions include: 1) rigorously extending our algorithm to other reinforcement learning algorithms, 2) considering strategic patient behaviors, and 3) incorporating a context predictor. These directions will be addressed in future works. |
2306.01510 | Recipes to compute the algebraic K-theory of Hecke algebras of reductive
p-adic groups | We compute the algebraic K-theory of the Hecke algebra of a reductive p-adic
group G using the fact that the Farrell-Jones Conjecture is known in this
context. The main tool will be the properties of the associated Bruhat-Tits
building and an equivariant Atiyah-Hirzebruch spectral sequence. In particular
the projective class group can be written as the colimit of the projective
class groups of the compact open subgroups of G. | Arthur Bartels, Wolfgang Lueck | 2023-06-02T12:58:14Z | http://arxiv.org/abs/2306.01510v3 | # Recipes to compute the algebraic \(K\)-theory of Hecke algebras of reductive \(p\)-adic groups
###### Abstract.
We compute the algebraic \(K\)-theory of the Hecke algebra of a reductive \(p\)-adic group \(G\) using the fact that the Farrell-Jones Conjecture is known in this context. The main tool will be the properties of the associated Bruhat-Tits building and an equivariant Atiyah-Hirzebruch spectral sequence. In particular the projective class group can be written as the colimit of the projective class groups of the compact open subgroups of \(G\).
Key words and phrases: algebraic \(K\)-theory of Hecke algebras, reductive \(p\)-adic groups, Farrell-Jones Conjecture.

2020 Mathematics Subject Classification: 55P91 (Primary), 20C08, 19D50 (Secondary).
## 1. Introduction
We begin by stating the main theorem of this paper; explanations will follow:
**Theorem 1.1** (Main Theorem).: _Let \(G\) be a td-group which is modulo a normal compact subgroup a subgroup of a reductive \(p\)-adic group. Let \(R\) be a uniformly regular ring with \(\mathbb{Q}\subseteq R\). Choose a model \(E_{\operatorname{\mathrm{Cop}}}(G)\) for the classifying space for proper smooth \(G\)-actions. Let \(\mathcal{I}\subseteq\operatorname{\mathcal{Cop}}\) be the set of isotropy groups of points in \(E_{\operatorname{\mathrm{Cop}}}(G)\)._
_Then_
1. _The map induced by the projection_ \(E_{\operatorname{\mathrm{Cop}}}(G)\to G/G\) _induces for every_ \(n\in\mathbb{Z}\) _an isomorphism_ \[H_{n}^{G}(E_{\operatorname{\mathrm{Cop}}}(G);\mathbf{K}_{R})\to H_{n}^{G}(G/G; \mathbf{K}_{R})=K_{n}(\mathcal{H}(G;R));\]
2. _There is a (strongly convergent) spectral sequence_ \[E_{p,q}^{2}=\operatorname{\mathit{SH}}_{p}^{G,\mathcal{I}}\bigl{(}E_{ \operatorname{\mathrm{Cop}}}(G);\overline{K_{q}(\mathcal{H}(?;R))} \bigr{)}\implies K_{p+q}(\mathcal{H}(G;R)),\] _whose_ \(E^{2}\)_-term is concentrated in the first quadrant;_
3. _The canonical map induced by the various inclusions_ \(K\subseteq G\)__ \[\operatorname*{colim}_{K\in\operatorname{\mathrm{Sub}}_{\mathcal{I}}(G)}K_{0 }(\mathcal{H}(K;R))\to K_{0}(\mathcal{H}(G;R))\] _can be identified with the isomorphism appearing in assertion (i) in degree_ \(n=0\) _and hence is bijective;_
4. _We have_ \(K_{n}(\mathcal{H}(G;R))=0\) _for_ \(n\leq-1\)_._
Note that assertion (i) of Theorem 1.1 is proved in [3, Corollary 1.8]. So this paper deals with implications of it concerning computations of the algebraic \(K\)-groups \(K_{n}(\mathcal{H}(G))\) of the Hecke algebra of \(G\).
A _td-group_\(G\) is a locally compact second countable totally disconnected topological Hausdorff group. It is _modulo a normal compact subgroup a subgroup of a reductive \(p\)-adic group_ if it contains a (not necessarily open) normal compact subgroup \(K\) such that \(G/K\) is isomorphic to a subgroup of some reductive \(p\)-adic group.
A ring is called _uniformly regular_, if it is Noetherian and there exists a natural number \(l\) such that any finitely generated \(R\)-module admits a resolution by finitely generated projective \(R\)-modules of length at most \(l\). We write \(\mathbb{Q}\subseteq R\), if for any non-zero integer \(n\) the element \(n\cdot 1_{R}\) is a unit in \(R\). Examples for uniformly regular rings \(R\) with \(\mathbb{Q}\subseteq R\) are fields of characteristic zero.
We denote by \(\mathcal{H}(G;R)\) the _Hecke algebra_ consisting of locally constant functions \(s\colon G\to R\) with compact support, where the additive structure comes from the additive structure of \(R\) and the multiplicative structure from the convolution product. Note that \(\mathcal{H}(G;R)\) is a ring without unit.
We denote by \(E_{\operatorname{\mathsf{Cop}}}(G)\) a model for the _classifying space for proper smooth \(G\)-actions_, i.e., a \(G\)-\(CW\)-complex, whose isotropy groups are all compact open subgroups of \(G\) and for which \(E_{\operatorname{\mathsf{Cop}}}(G)^{H}\) is weakly contractible for any compact open subgroup \(H\subseteq G\). Two such models are \(G\)-homotopy equivalent. Hence \(H_{n}^{G}(E_{\operatorname{\mathsf{Cop}}}(G);\mathbf{K}_{R})\) is independent of the choice of a model. If \(G\) is a reductive \(p\)-adic group with compact center, then its Bruhat-Tits building is a model for \(E_{\operatorname{\mathsf{Cop}}}(G)\). If the center is not compact, one has to pass to the extended Bruhat-Tits building.
We will construct a _smooth \(G\)-homology theory_\(H_{*}^{G}(-;\mathbf{K}_{R})\) in Section 3. It assigns to a smooth \(G\)-\(CW\)-pair \((X,A)\) a collection of abelian groups \(\mathcal{H}_{n}^{G}(X,A;\mathbf{K}_{R})\) for \(n\in\mathbb{Z}\) that satisfies the expected axioms, i.e., long exact sequence of a pair, \(G\)-homotopy invariance, excision, and the disjoint union axiom. Moreover, for every open subgroup \(U\subseteq G\) and \(n\in\mathbb{Z}\) we have
\[H_{n}^{G}(G/U;\mathbf{K}_{R})\cong K_{n}(\mathcal{H}(U;R)). \tag{1.2}\]
Let \(\mathcal{F}\) be a collection of open subgroups of \(G\) which is closed under conjugation. Examples are the set \(\operatorname{\mathsf{Cop}}\) of compact open subgroups of \(G\) and the set \(\mathcal{I}\) of isotropy groups of points of some model for \(E_{\operatorname{\mathsf{Cop}}}(G)\). The subgroup category \(\operatorname{\mathsf{Sub}}_{\mathcal{F}}(G)\) appearing in Theorem 1.1 (iii) has \(\mathcal{F}\) as set of objects and will be described in detail in Subsection 2.A.
The abelian groups \(\operatorname{\mathit{SH}}_{p}^{G,\mathcal{F}}\bigl{(}E_{\mathcal{F}}(G); \overline{K_{q}(\mathcal{H}(?;R))}\bigr{)}\) appearing in Theorem 1.1 (ii) will be defined for the covariant functor \(\overline{K_{q}(\mathcal{H}(?;R))}\colon\operatorname{\mathsf{Sub}}_{ \mathcal{F}}(G)\to\mathbb{Z}\text{-}\mathsf{Mod}\), whose value at \(U\in\mathcal{F}\) is \(K_{q}(\mathcal{H}(U;R))\), in Subsection 2.B. They are closely related to the _Bredon homology groups_ \(\operatorname{\mathit{BH}}_{p}^{G,\mathcal{F}}\bigl{(}E_{\mathcal{F}}(G);K_{q} (\mathcal{H}(?;R))\bigr{)}\).
The proof of the Main Theorem 1.1 will be given in Section 4.
The relevance of the Hecke algebra \(\mathcal{H}(G;R)\) is that the category of non-degenerate modules over it is isomorphic to the category of smooth \(G\)-representations with coefficients in \(R\), see for instance [5, 13]. Hence in particular its projective class group \(K_{0}(\mathcal{H}(G;R))\) is important. The various inclusions \(K\to G\) for \(K\in\operatorname{\mathsf{Cop}}\) induce a map
\[\bigoplus_{K\in\operatorname{\mathsf{Cop}}}K_{0}(\mathcal{H}(K;R))\to K_{0}( \mathcal{H}(G;R)), \tag{1.3}\]
which factorizes over the isomorphism appearing in Theorem 1.1 (iii) and is hence surjective. Dat [10] has shown that the map (1.3) is rationally surjective for \(G\) a reductive \(p\)-adic group and \(R=\mathbb{C}\). In particular, the cokernel of it is a torsion group. Dat [9, Conj. 1.11] conjectured that this cokernel is \(\widetilde{w}_{G}\)-torsion. Here \(\widetilde{w}_{G}\) is a certain multiple of the order of the Weyl group of \(G\). Dat proved this conjecture for \(G=\operatorname{GL}_{n}(F)\)[9, Prop. 1.13] and asked about the integral version, see the comment following [9, Prop. 1.10], which is now proven by Theorem 1.1 (iii).
The computations simplify considerably in the case of a reductive \(p\)-adic group thanks to the associated (extended) Bruhat-Tits building, see Sections 5 and 7. As an illustration we analyze the projective class groups of the Hecke algebras of \(\operatorname{SL}_{n}(F)\), \(\operatorname{PGL}_{n}(F)\) and \(\operatorname{GL}_{n}(F)\) in Section 6.
One of our main tools will be the _smooth equivariant Atiyah-Hirzebruch spectral sequence_, which we will establish and examine in Section 2.
### Acknowledgments
We thank Eugen Hellmann and Linus Kramer for many helpful comments and discussions.
The paper is funded by the ERC Advanced Grant "KL2MG-interactions" (no. 662400) of the second author granted by the European Research Council, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - GZ 2047/1, Projekt-ID 390685813, Hausdorff Center for Mathematics at Bonn, and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044 - 390685587, Mathematics Munster: Dynamics - Geometry - Structure.
The paper is organized as follows:
###### Contents
* 1 Introduction
* 2 The smooth equivariant Atiyah-Hirzebruch spectral sequence
* 2.A The smooth orbit category and the smooth subgroup category
* 2.B Cellular chain complexes and Bredon homology
* 2.C The construction of the equivariant Atiyah-Hirzebruch spectral sequence
* 2.D Passing to the subgroup category
* 2.E The connective case
* 2.F The first differential
* 3 A brief review of the Farrell Jones Conjecture for the algebraic \(K\)-theory of Hecke algebras
* 4 Proof of the Main Theorem 1.1
* 5 The main recipe for the computation of the projective class group
* 5.a The general case
* 5.b A variation
* 6 The projective class group of the Hecke algebras of \(\operatorname{SL}_{n}(F)\), \(\operatorname{PGL}_{n}(F)\) and \(\operatorname{GL}_{n}(F)\)
* 6.a \(\operatorname{SL}_{n}(F)\)
* 6.b \(\operatorname{PGL}_{n}(F)\)
* 7 Homotopy colimits
* 7.a The Farrell-Jones assembly map as a map of homotopy colimits
* 7.b Simplifying the source of the Farrell Jones assembly map

## 2. The smooth equivariant Atiyah-Hirzebruch spectral sequence
**Theorem 2.1**.: _Consider a pair \((X,A)\) of \(\mathcal{F}\)-\(G\)-\(CW\)-complexes and a smooth \(G\)-homology theory \(\mathcal{H}_{\mathcal{F}}^{G}\). Then there is an equivariant Atiyah-Hirzebruch spectral sequence converging to \(\mathcal{H}_{p+q}^{G}(X,A)\), whose \(E^{2}\)-term is given by_
\[E_{p,q}^{2}=B\!H_{p}^{G,\mathcal{F}}(X,A;\mathcal{H}_{q}^{G})\]
_for the Bredon homology \(B\!H_{p}^{G,\mathcal{F}}(X,A;\mathcal{H}_{q}^{G})\) of \((X,A)\) with coefficients in the covariant \(\mathbb{Z}\mathrm{O}\mathsf{r}_{\mathcal{F}}(G)\)-module \(\mathcal{H}_{q}^{G}\) that sends \(G/H\) to \(\mathcal{H}_{q}^{G}(G/H)\)._
The remainder of this section is devoted to the definition of the Bredon homology, the construction of the equivariant Atiyah-Hirzebruch spectral sequence, and some general calculations concerning the \(E^{2}\)-term. Convergence means that there is an ascending filtration \(F_{l,m-l}\mathcal{H}_{m}^{G}(X,A)\) for \(l=0,1,2,\ldots\) of \(\mathcal{H}_{m}^{G}(X,A)\) such that \(F_{p,q}\mathcal{H}_{p+q}^{G}(X,A)/F_{p-1,q+1}\mathcal{H}_{p+q}^{G}(X,A)\cong E _{p,q}^{\infty}\) holds for \(E_{p,q}^{\infty}=\operatorname{colim}_{r\to\infty}E_{p,q}^{r}\).
### The smooth orbit category and the smooth subgroup category
The \(\mathcal{F}\)_-orbit category_\(\mathsf{O}\mathsf{r}_{\mathcal{F}}(G)\) has as objects homogeneous \(G\)-spaces \(G/H\) with \(H\in\mathcal{F}\). Morphisms from \(G/H\) to \(G/K\) are \(G\)-maps \(G/H\to G/K\). We will put no topology on \(\mathsf{O}\mathsf{r}_{\mathcal{F}}(G)\). For any \(G\)-map \(f\colon G/H\to G/K\) of smooth homogeneous spaces, there is an element \(g\in G\) such that \(gHg^{-1}\subseteq K\) holds and \(f\) is the \(G\)-map \(R_{g^{-1}}\colon G/H\to G/K\) sending \(g^{\prime}H\) to \(g^{\prime}g^{-1}K\). Given two elements \(g_{0},g_{1}\in G\) such that \(g_{i}Hg_{i}^{-1}\subseteq K\) holds for \(i=0,1\), we have \(R_{g_{0}^{-1}}=R_{g_{1}^{-1}}\Longleftrightarrow g_{1}g_{0}^{-1}\in K\). We get a bijection
\[K\backslash\{g\in G\mid gHg^{-1}\subseteq K\}\xrightarrow{\cong}\operatorname {map}_{G}(G/H,G/K),\quad g\mapsto R_{g^{-1}}. \tag{2.2}\]
The \(\mathcal{F}\)_-subgroup category_\(\mathsf{Sub}_{\mathcal{F}}(G)\) has \(\mathcal{F}\) as the set of objects. For \(H,K\in\mathcal{F}\) denote by \(\operatorname{conhom}_{G}(H,K)\) the set of group homomorphisms \(f\colon H\to K\), for which there exists an element \(g\in G\) with \(gHg^{-1}\subset K\) such that \(f\) is given by conjugation with \(g\), i.e., \(f=c(g):H\to K,\;\;h\mapsto ghg^{-1}\). Note that \(c(g)=c(g^{\prime})\) holds for two elements \(g,g^{\prime}\in G\) with \(gHg^{-1}\subset K\) and \(g^{\prime}Hg^{\prime-1}\subset K\), if and only if \(g^{-1}g^{\prime}\) lies in the centralizer \(C_{G}H=\{g\in G\mid gh=hg\text{ for all }h\in H\}\) of \(H\) in \(G\). The group of inner automorphisms \(\operatorname{Inn}(K)\) of \(K\) acts on \(\operatorname{conhom}_{G}(H,K)\) from the left by composition. Define the set of morphisms
\[\operatorname{mor}_{\mathsf{Sub}_{\mathcal{F}}(G)}(H,K):= \operatorname{Inn}(K)\backslash\operatorname{conhom}_{G}(H,K).\]
There is an obvious bijection
\[K\backslash\{g\in G\mid gHg^{-1}\subseteq K\}/C_{G}H\xrightarrow{ \cong}\operatorname{Inn}(K)\backslash\operatorname{conhom}_{G}(H,K),\\ KgC_{G}H\mapsto[c(g)], \tag{2.3}\]
where \([c(g)]\in\operatorname{Inn}(K)\backslash\operatorname{conhom}_{G}(H,K)\) is the class represented by the element \(c(g)\colon H\to K,\;h\mapsto ghg^{-1}\) in \(\operatorname{conhom}_{G}(H,K)\) and \(K\) acts from the left and \(C_{G}H\) from the right on \(\{g\in G\mid gHg^{-1}\subseteq K\}\) by the multiplication in \(G\).
Let
\[P\colon\mathsf{O}\mathsf{r}_{\mathcal{F}}(G)\to\mathsf{Sub}_{\mathcal{F}}(G) \tag{2.4}\]
be the canonical projection which sends an object \(G/H\) to \(H\) and is given on morphisms by the obvious projection under the identifications (2.2) and (2.3).
### Cellular chain complexes and Bredon homology
Given an \(\mathcal{F}\)-\(G\)-\(CW\)-complex \(X\), we obtain a contravariant \(\mathsf{O}\mathsf{r}_{\mathcal{F}}(G)\)-space \(O_{X}\colon\mathsf{O}\mathsf{r}_{\mathcal{F}}(G)\to\mathsf{Spaces}\) by sending \(G/H\) to \(\operatorname{map}_{G}(G/H,X)=X^{H}\). We get a contravariant \(\mathsf{Sub}_{\mathcal{F}}(G)\)-space \(S_{X}\colon\mathsf{Sub}_{\mathcal{F}}(G)\to\mathsf{Spaces}\) by sending \(H\) to \(C_{G}H\backslash\operatorname{map}_{G}(G/H,X)=C_{G}H\backslash X^{H}\). A morphism \(H\to K\) given by an element \(g\in G\) satisfying \(gHg^{-1}\subseteq K\) is sent to the map \(C_{G}K\backslash X^{K}\to C_{G}H\backslash X^{H}\) induced by the map \(X^{K}\to X^{H},\;x\mapsto g^{-1}x\).
Given a pair \((Y,A)\) with a filtration \(A=Y_{-1}\subseteq Y_{0}\subseteq Y_{1}\subseteq Y_{2}\subseteq\cdots\subseteq Y\) with \(Y=\operatorname{colim}_{n\to\infty}Y_{n}\), we associate to it a \(\mathbb{Z}\)-chain complex \(C_{*}^{c}(Y,A)\), whose \(n\)-th chain module is the singular homology \(H_{n}^{\operatorname{sing}}(Y_{n},Y_{n-1})\) of the pair \((Y_{n},Y_{n-1})\) (with coefficients in \(\mathbb{Z}\)) and whose \(n\)th differential is given by the composite
\[H_{n}^{\operatorname{sing}}(Y_{n},Y_{n-1})\xrightarrow{\partial_{n}}H_{n-1}^{ \operatorname{sing}}(Y_{n-1})\xrightarrow{H_{n-1}^{\operatorname{sing}}(i_{n- 1})}H_{n-1}^{\operatorname{sing}}(Y_{n-1},Y_{n-2})\]
for \(\partial_{n}\) the boundary operator of the pair \((Y_{n},Y_{n-1})\) and the inclusion \(i_{n-1}\colon Y_{n-1}=(Y_{n-1},\emptyset)\to(Y_{n-1},Y_{n-2})\).
Given a pair of \(\mathcal{F}\)-\(G\)-\(CW\)-complexes \((X,A)\), the filtration by its skeletons induces filtrations on the spaces \(X^{H}\) and \(C_{G}H\backslash X^{H}\) for every subgroup \(H\) of \(G\). We get a contravariant \(\mathbb{Z}\mathsf{Or}_{\mathcal{F}}(G)\)-chain complex \(C_{*}^{\mathsf{Or}_{\mathcal{F}}(G)}(X,A)\colon\mathsf{Or}_{\mathcal{F}}(G) \to\mathbb{Z}\mathsf{Ch}\) and a contravariant \(\mathbb{Z}\mathsf{Sub}_{\mathcal{F}}(G)\)-chain complex \(C_{*}^{\mathsf{Sub}_{\mathcal{F}}(G)}(X,A)\colon\mathsf{Sub}_{\mathcal{F}}(G )\to\mathbb{Z}\mathsf{Ch}\) by putting
\[C_{*}^{\mathsf{Or}_{\mathcal{F}}(G)}(X,A)(G/H) := C_{*}^{c}(O_{X}(G/H),O_{A}(G/H))=C_{*}^{c}(X^{H},A^{H});\] \[C_{*}^{\mathsf{Sub}_{\mathcal{F}}(G)}(X,A)(H) := C_{*}^{c}(S_{X}(H),S_{A}(H))=C_{*}^{c}(C_{G}H\backslash X^{H},C_{G}H\backslash A^{H}).\]
Choose a \(G\)-pushout
(2.5)
It induces for every closed subgroup \(H\subseteq G\) pushouts
and
Note that \((G/H_{i})^{H}\) agrees with \(\operatorname{mor}_{\mathsf{Or}_{\mathcal{F}}(G)}(G/H,G/H_{i})=\operatorname{ map}_{G}(G/H,G/H_{i})\). In the sequel we denote by \(\mathbb{Z}S\) for a set \(S\) the free \(\mathbb{Z}\)-module with the set \(S\) as basis. Since singular homology satisfies the disjoint union axiom, homotopy invariance and excision, we obtain an isomorphism of contravariant \(\mathbb{Z}\mathsf{Or}_{\mathcal{F}}(G)\)-modules
\[\bigoplus_{i\in I_{n}}\mathbb{Z}\operatorname{mor}_{\mathsf{Or}_{\mathcal{F}} (G)}(?,G/H_{i})\xrightarrow{\cong}C_{n}^{\mathsf{Or}_{\mathcal{F}}(G)}(X,A), \tag{2.6}\]
where \(\mathbb{Z}\operatorname{mor}_{\mathsf{Or}_{\mathcal{F}}(G)}(?,G/H_{i})\) is the free \(\mathbb{Z}\mathsf{Or}(G)\)-module based at the object \(G/H_{i}\), see [14, Example 9.8 on page 164], and analogously an isomorphism of contravariant \(\mathbb{Z}\mathsf{Sub}_{\mathcal{F}}(G)\)-modules
\[\bigoplus_{i\in I_{n}}\mathbb{Z}\operatorname{mor}_{\mathbb{Z}\mathsf{Sub}_{ \mathcal{F}}(G)}(?,H_{i})\xrightarrow{\cong}C_{n}^{\mathsf{Sub}_{\mathcal{F}} (G)}(X,A). \tag{2.7}\]
If \(P_{*}C_{*}^{\mathsf{Or}_{\mathcal{F}}(G)}(X,A)\) is the \(\mathbb{Z}\mathsf{Sub}_{\mathcal{F}}(G)\)-chain complex obtained by induction with \(P\colon\mathsf{Or}_{\mathcal{F}}(G)\to\mathsf{Sub}_{\mathcal{F}}(G)\) from \(C_{*}^{\mathsf{Or}_{\mathcal{F}}(G)}(X,A)\), see [14, Example 9.15 on page 166], we conclude from (2.6) and (2.7) that the canonical map of \(\mathbb{Z}\mathsf{Sub}_{\mathcal{F}}(G)\)-chain complexes
\[P_{*}C_{*}^{\mathsf{Or}_{\mathcal{F}}(G)}(X,A)\xrightarrow{\cong}C_{*}^{ \mathsf{Sub}_{\mathcal{F}}(G)}(X,A) \tag{2.8}\]
is an isomorphism.
For a covariant \(\mathbb{Z}\mathsf{Or}(G)\)-module \(M\), we get from the tensor product over \(\mathsf{Or}_{\mathcal{F}}(G)\), see [14, 9.13 on page 166], a \(\mathbb{Z}\)-chain complex \(C_{*}^{\mathsf{Or}_{\mathcal{F}}(G)}(X,A)\otimes_{\mathbb{Z}\mathsf{Or}_{ \mathcal{F}}(G)}M\).
**Definition 2.9** (Bredon homology).: We define the \(n\)-th _Bredon homology_ to be the \(\mathbb{Z}\)-module
\[\mathit{B\!H}_{n}^{G,\mathcal{F}}(X,A;M)=H_{n}\big{(}C_{*}^{\mathsf{Or}_{ \mathcal{F}}(G)}(X,A)\otimes_{\mathbb{Z}\mathsf{Or}_{\mathcal{F}}(G)}M\big{)}.\]
Given a covariant \(\mathbb{Z}\mathsf{Sub}_{\mathcal{F}}(G)\)-module \(N\), define analogously
\[\mathit{S\!H}_{n}^{G,\mathcal{F}}(X,A;N)=H_{n}\big{(}C_{*}^{\mathsf{Sub}_{ \mathcal{F}}(G)}(X,A)\otimes_{\mathbb{Z}\mathsf{Sub}_{\mathcal{F}}(G)}N\big{)}.\]
Given a covariant \(\mathbb{Z}\mathsf{Sub}_{\mathcal{F}}(G)\)-module \(N\), define the covariant \(\mathbb{Z}\mathsf{Or}_{\mathcal{F}}(G)\)-module \(P^{*}N\) to be \(N\circ P\). We get from the adjunction of [14, 9.22 on page 169] and (2.8) a natural isomorphism of \(\mathbb{Z}\)-chain complexes
\[C_{*}^{\mathsf{Sub}_{\mathcal{F}}(G)}(X,A)\otimes_{\mathbb{Z}\mathsf{Sub}_{ \mathcal{F}}(G)}N\xrightarrow{\cong}C_{*}^{\mathsf{Or}_{\mathcal{F}}(G)}(X,A )\otimes_{\mathbb{Z}\mathsf{Or}_{\mathcal{F}}(G)}P^{*}N \tag{2.10}\]
and hence natural isomorphism of \(\mathbb{Z}\)-modules
\[\mathit{B\!H}_{n}^{G,\mathcal{F}}(X,A;P^{*}N)\xrightarrow{\cong}\mathit{S\!H }_{n}^{G,\mathcal{F}}(X,A;N). \tag{2.11}\]
Let \((X,A)\) be a pair of \(\mathcal{F}\)-\(G\)-\(CW\)-complexes. Denote by \(\mathcal{I}\) the set of isotropy groups of points in \(X\). Let \(M\) be a covariant \(\mathbb{Z}\mathsf{Or}_{\mathcal{F}}(G)\)-module and \(N\) be a covariant \(\mathsf{Sub}_{\mathcal{F}}(G)\)-module. Denote by \(M|_{\mathcal{I}}\) and \(N|_{\mathcal{I}}\) their restrictions to \(\mathsf{Or}_{\mathcal{I}}(G)\) and \(\mathsf{Sub}_{\mathcal{I}}(G)\). Then one easily checks using [11, Lemma 1.9] that there are canonical isomorphisms
\[\mathit{B\!H}_{n}^{G,\mathcal{I}}(X,A;M|_{\mathcal{I}}) \cong \mathit{B\!H}_{n}^{G,\mathcal{F}}(X,A;M); \tag{2.12}\] \[\mathit{S\!H}_{n}^{G,\mathcal{I}}(X,A;N|_{\mathcal{I}}) \cong \mathit{S\!H}_{n}^{G,\mathcal{F}}(X,A;N). \tag{2.13}\]
### The construction of the equivariant Atiyah-Hirzebruch spectral sequence
Proof of Theorem 2.1.: Since \((X,A)\) comes with the skeletal filtration, there is by a general construction a spectral sequence
\[E_{p,q}^{r},\quad d_{p,q}^{r}:E_{p,q}^{r}\to E_{p-r,q+r-1}^{r}\]
converging to \(\mathcal{H}_{p+q}^{G}(X,A)\), whose \(E_{1}\)-term is given by
\[E_{p,q}^{1}=\mathcal{H}_{p+q}^{G}(X_{p},X_{p-1})\]
and the first differential is the composite
\[d_{p,q}^{1}:E_{p,q}^{1}=\mathcal{H}_{p+q}^{G}(X_{p},X_{p-1})\to\mathcal{H}_{p+ q-1}^{G}(X_{p-1})\to\mathcal{H}_{p+q-1}^{G}(X_{p-1},X_{p-2})=E_{p-1,q}^{1},\]
where the first map is the boundary operator of the pair \((X_{p},X_{p-1})\) and the second induced by the inclusion. The elementary construction is explained for trivial \(G\) for instance in [18, 15.6 on page 339]. The construction carries directly over to the equivariant setting.
The straightforward proof of the identification of \(E_{p,q}^{2}\) with \(\mathit{B\!H}_{p}^{G,\mathcal{F}}(X,A;\mathcal{H}_{q})\) is left to the reader.
### Passing to the subgroup category
**Condition 2.14** (Sub\(|_{\mathcal{F}}\)).: _Let \(\mathcal{H}^{G}_{*}(-)\) be a smooth \(G\)-homology theory. Then \(\mathcal{H}^{G}_{*}(-)\) satisfies the Condition (Sub\(|_{\mathcal{F}}\)) if for any \(H\in\mathcal{F}\) and \(g\in C_{G}H\) the \(G\)-map \(R_{g^{-1}}\colon G/H\to G/H\) sending \(g^{\prime}H\) to \(g^{\prime}g^{-1}H\) induces the identity on \(\mathcal{H}^{G}_{q}(G/H)\), i.e., \(\mathcal{H}^{G}_{q}(R_{g^{-1}})=\mathrm{id}_{\mathcal{H}^{G}_{q}(G/H)}\)._
**Remark 2.15**.: Suppose that the \(G\)-homology theory \(\mathcal{H}^{G}_{*}\) satisfies the Condition (Sub\(|_{\mathcal{F}}\)). Then the covariant \(\mathbb{Z}\mathsf{Or}_{\mathcal{F}}(G)\)-module \(\mathcal{H}^{G}_{q}\) sending \(G/H\) with \(H\in\mathcal{F}\) to \(\mathcal{H}^{G}_{q}(G/H)\) defines a covariant \(\mathbb{Z}\mathsf{Sub}_{\mathcal{F}}(G)\)-module \(\overline{\mathcal{H}^{G}_{q}}\colon\mathsf{Sub}_{\mathcal{F}}(G)\to\mathbb{ Z}\text{-}\mathsf{Mod}\) uniquely determined by \(\mathcal{H}^{G}_{q}=\overline{\mathcal{H}^{G}_{q}}\circ P\) for the projection \(P\colon\mathsf{Or}_{\mathcal{F}}(G)\to\mathsf{Sub}_{\mathcal{F}}(G)\). Moreover, we obtain from (2.11) for every pair \((X,A)\) of \(\mathcal{F}\)-\(G\)-\(CW\)-complexes natural isomorphisms
\[B\!H_{n}^{G,\mathcal{F}}(X,A;\mathcal{H}^{G}_{q}(-))\xrightarrow{\cong}S\!H _{n}^{G,\mathcal{F}}(X,A;\overline{\mathcal{H}^{G}_{q}(-)}).\]
Note that the right hand side is often easier to compute than the left hand side. One big advantage of \(\mathsf{Sub}(G)\) in comparison with \(\mathsf{Or}(G)\) is that for a finite subgroup \(H\subseteq G\) the set of automorphisms of \(H\) is the group \(N_{G}H/H\cdot C_{G}H\), which is finite, whereas the set of automorphisms of \(G/H\) in \(\mathsf{Or}(G)\) for a finite group \(H\) is the group \(N_{G}H/H\), which is not necessarily finite. This is a key ingredient in the construction of an equivariant Chern character for discrete groups \(G\) and proper \(G\)-\(CW\)-complexes in [15, 16].
If \(G\) is abelian, \(\mathsf{Sub}_{\mathcal{F}}(G)\) reduces to the poset of open subgroups of \(G\) ordered by inclusion.
### The connective case
**Theorem 2.16**.:
1. _Suppose that_ \(\mathcal{H}^{G}_{q}(G/H)=0\) _for every_ \(H\in\mathcal{F}\) _and_ \(q\in\mathbb{Z}\) _with_ \(q<0\)_. Then we get for every pair_ \((X,A)\) _of_ \(\mathcal{F}\)_-_\(G\)_-_\(CW\)_-complexes and every_ \(q\in\mathbb{Z}\) _with_ \(q<0\)__ \[\mathcal{H}^{G}_{q}(X,A)=0;\]
2. _Choose a model_ \(E_{\mathrm{Cop}}(G)\) _for the classifying space of smooth proper_ \(G\)_-actions. Let_ \(\mathcal{I}\) _be the set of isotropy groups of points in_ \(E_{\mathrm{Cop}}(G)\)_. Suppose that_ \(\mathcal{H}^{G}_{q}(G/H)=0\) _for every open_ \(H\in\mathcal{I}\) _and_ \(q\in\mathbb{Z}\) _with_ \(q<0\)_._ 1. _Then for every_ \(q<0\) _we have_ \(\mathcal{H}^{G}_{q}(E_{\mathrm{Cop}}(G))=0\)_, the edge homomorphism induces an isomorphism_ \[B\!H_{0}^{G,\mathcal{I}}(E_{\mathrm{Cop}}(G);\mathcal{H}^{G}_{0}(-))\xrightarrow{\cong} \mathcal{H}^{G}_{0}(E_{\mathrm{Cop}}(G))\] _and the canonical map_ \[\operatorname*{colim}_{G/H\in\mathsf{Or}_{\mathcal{I}}(G)}\mathcal{H}^{G}_{0} (G/H)\xrightarrow{\cong}\mathcal{H}^{G}_{0}(E_{\mathrm{Cop}}(G))\] _is bijective;_ 2. _Suppose additionally that_ \(\mathcal{H}^{G}_{*}\) _satisfies Condition (Sub_\(\mathcal{I}\)_), see Condition_ 2.14_. Then the edge homomorphism induces an isomorphism_ \[S\!H_{0}^{G,\mathcal{I}}(E_{\mathrm{Cop}}(G);\overline{\mathcal{H}^{G}_{0}(-)}) \xrightarrow{\cong}\mathcal{H}^{G}_{0}(E_{\mathrm{Cop}}(G))\] _and the canonical map_ \[\operatorname*{colim}_{H\in\mathsf{Sub}_{\mathcal{I}}(G)}\overline{\mathcal{H} ^{G}_{0}}(H)\xrightarrow{\cong}\mathcal{H}^{G}_{0}(E_{\mathrm{Cop}}(G))\] _is bijective._
Proof.: (i) This follows directly from the smooth equivariant Atiyah-Hirzebruch spectral sequence of Theorem 2.1.
(ii)a We get \(\mathcal{H}^{G}_{q}(E_{\operatorname{\mathsf{Cop}}}(G))=0\) for \(q<0\) from assertion (i).
We get from the smooth equivariant Atiyah-Hirzebruch spectral sequence of Theorem 2.1 an isomorphism
\[\mathit{B\!H}^{G,\mathcal{I}}_{0}(E_{\operatorname{\mathsf{Cop}}}(G);\mathcal{ H}^{G}_{0})=H_{0}(C_{*}^{\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)}(E_{ \operatorname{\mathsf{Cop}}}(G))\otimes_{\operatorname{\mathbb{Z}}\! \operatorname{\mathsf{Or}}_{\mathcal{I}}(G)}\mathcal{H}^{G}_{0})\xrightarrow{ \cong}\mathcal{H}^{G}_{0}(E_{\operatorname{\mathsf{Cop}}}(G)).\]
since \(E_{p,q}^{2}=\mathit{B\!H}^{G,\mathcal{I}}_{p}(E_{\operatorname{\mathsf{Cop}}} (G);\mathcal{H}^{G}_{q})=0\) holds for \(p,q\in\mathbb{Z}\) with \(q<0\). Since the \(\mathbb{Z}\!\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)\)-module \(C_{n}^{\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)}(E_{\operatorname{\mathsf{Cop }}}(G))\) is free in the sense of [14, 9.16 on page 167] for \(n\geq 0\) by (2.6) and \(E_{\operatorname{\mathsf{Cop}}}(G)^{H}\) is weakly contractible for \(H\in\mathcal{I}\), the \(\mathbb{Z}\!\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)\)-chain complex \(C_{*}^{\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)}(E_{\operatorname{\mathsf{Cop }}}(G))\) is a projective \(\mathbb{Z}\!\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)\)-resolution of the constant contravariant \(\mathbb{Z}\!\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)\)-module \(\underline{\mathbb{Z}}\), whose value is \(\mathbb{Z}\) at each object and which assigns \(\operatorname{id}_{\mathbb{Z}}\) to any morphism. Since \(-\otimes_{\mathbb{Z}\!\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)}\mathcal{H}^{G}_{0}\) is right exact by [14, 9.23 on page 169], we get an isomorphism
\[H_{0}(C_{*}^{\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)}(E_{\operatorname{ \mathsf{Gop}}}(G))\otimes_{\mathbb{Z}\!\operatorname{\mathsf{Or}}_{\mathcal{I }}(G)}\mathcal{H}^{G}_{0})\cong\underline{\mathbb{Z}}\otimes_{\mathbb{Z} \!\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)}\mathcal{H}^{G}_{0}.\]
We conclude from the adjunction appearing in [14, 9.21 on page 169] and the universal property of the colimit that there is a canonical isomorphism
\[\operatorname*{colim}_{G/H\in\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)} \mathcal{H}^{G}_{0}(G/H)\cong\underline{\mathbb{Z}}\otimes_{\mathbb{Z} \!\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)}\mathcal{H}^{G}_{0}.\]
This finishes the proof of assertion (ii)a.
(ii)b This follows from assertion (ii)a, since we get from Condition (Sub\({}_{\mathcal{I}}\)) a canonical isomorphism
\[\operatorname*{colim}_{G/H\in\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)} \mathcal{H}^{G}_{0}(G/H)\xrightarrow{\cong}\operatorname*{colim}_{H\in\operatorname {\mathsf{Sub}}_{\mathcal{I}}(G)}\overline{\mathcal{H}^{G}_{0}}(H)\]
for the covariant \(\mathbb{Z}\!\operatorname{\mathsf{Sub}}_{\mathcal{I}}(G)\)-module \(\overline{\mathcal{H}^{G}_{0}}\) determined by the covariant \(\mathbb{Z}\!\operatorname{\mathsf{Or}}_{\mathcal{I}}(G)\)-module \(\mathcal{H}^{G}_{0}\), see Remark 2.15.
### The first differential
Let \(X\) be an \(\mathcal{F}\)-\(G\)-\(CW\)-complex. Suppose that \(X_{0}=\coprod_{j\in J}G/V_{j}\) and that \(X_{1}\) is given by the \(G\)-pushout
(2.17)
We want to figure out the map of \(\mathbb{Z}\!\operatorname{\mathsf{Or}}_{\mathcal{F}}(G)\)-modules \(\gamma\) making the following diagram commute
where the vertical isomorphisms come from the isomorphisms (2.6). In order to describe \(\gamma\), we have to define for each \(i\in I\) and \(j\in J\) a map of \(\mathbb{Z}\!\operatorname{\mathsf{Or}}(G)\)-modules
\[\gamma_{i,j}\colon\mathbb{Z}\operatorname{mor}_{\operatorname{\mathsf{Or}}_{ \mathcal{F}}(G)}(?,G/H_{i})\to\mathbb{Z}\operatorname{mor}_{\operatorname{ \mathsf{Or}}_{\mathcal{F}}(G)}(?,G/K_{j})\]
such that \(\{j\in J\mid\gamma_{i,j}\neq 0\}\) is finite for every \(i\in I\). Note that \(\gamma_{i,j}\) is determined by the image of \(\operatorname{id}_{G/H_{i}}\). Hence we need to specify for \(i\in I\) and \(j\in J\) an element
\[\overline{\gamma_{i,j}}\in\mathbb{Z}\operatorname{\operatorname{\mathsf{mor}}_{ \operatorname{\mathsf{Or}}_{\mathcal{I}}(G)}}(G/U_{i},G/V_{j})=\mathbb{Z} \operatorname{\operatorname{\mathsf{map}}_{G}}(G/U_{i},G/V_{j}). \tag{2.18}\]
For each \(i\in I\) there are two elements \(j_{-}(i)\) and \(j_{+}(i)\) in \(J\) such that the image of \(G/H_{i}\times\{\pm 1\}\) under the map \(q_{i}\) appearing in (2.17) is the summand \(G/K_{j_{\pm}}(i)\) belonging to \(j_{\pm}(i)\) of \(\coprod_{j\in I_{0}}G/K_{j}\), if we write \(S^{0}=\{-1,1\}\). Denote by \((q_{i}^{1})_{\pm 1}\colon G/H_{i}\to G/K_{j_{\pm}}\) the restriction of \(q_{i}^{1}\) to \(G/H_{i}\times\{\pm 1\}\). We leave the elementary proof of the next lemma to the reader.
**Lemma 2.19**.: _We get in \(\mathbb{Z}\operatorname{map}_{G}(G/H_{i},G/K_{j})\)_
\[\overline{\gamma_{i,j}}=\begin{cases}\pm[(q_{i}^{1})_{\pm 1}]&\text{if $j=j_{\pm}(i)$ and $j_{-}(i)\neq j_{+}(i)$};\\ [(q_{i}^{1})_{+1}]-[(q_{i}^{1})_{-1}]&\text{if $j=j_{-}(i)=j_{+}(i)$};\\ 0&\text{if $j\notin\{j_{-}(i),j_{+}(i)\}$}.\end{cases}\]
**Remark 2.20**.: This implies for the \(\mathbb{Z}\)-chain complex \(C_{*}^{\mathsf{Or}_{\mathcal{F}}(G)}(X,A)\otimes_{\mathbb{Z}\mathsf{Or}_{\mathcal{F}}(G)}M\) for a covariant \(\mathbb{Z}\mathsf{Or}_{\mathcal{F}}(G)\)-module \(M\) that its first differential agrees with the \(\mathbb{Z}\)-homomorphism
\[\alpha=(\alpha_{i,j})_{i\in I,j\in J}\colon\,\bigoplus_{i\in I}M(G/U_{i}) \to\bigoplus_{j\in J}M(G/V_{j}),\]
where the \(\mathbb{Z}\)-homomorphisms \(\alpha_{i,j}\colon M(G/U_{i})\to M(G/V_{j})\) are given as follows. We get in the notation of Lemma 2.19
\[\alpha_{i,j}=\begin{cases}\pm M((q_{i}^{1})_{\pm})&\text{if $j=j_{\pm}(i)$ and $j_{-}(i)\neq j_{+}(i)$};\\ M((q_{i}^{1})_{+1})-M((q_{i}^{1})_{-1})&\text{if $j=j_{-}(i)=j_{+}(i)$};\\ 0&\text{if $j\notin\{j_{-}(i),j_{+}(i)\}$}.\end{cases}\]
Note that the cokernel of \(\alpha\) is \(\mathit{BH}_{0}^{G,\mathcal{F}}(X;M)\).
We get a computation of the first differential of \(C_{*}^{\mathsf{Sub}_{\mathcal{F}}(G)}(X,A)\otimes_{\mathbb{Z}\mathsf{Sub}_{\mathcal{F}}(G)}N\) for a covariant \(\mathbb{Z}\mathsf{Sub}_{\mathcal{F}}(G)\)-module \(N\) from the isomorphism (2.10). Explicitly the first differential is given by
\[\beta=(\beta_{i,j})_{i\in I,j\in J}\colon\,\bigoplus_{i\in I}N(U_{i})\to \bigoplus_{j\in J}N(V_{j}),\]
where the \(\mathbb{Z}\)-homomorphisms \(\beta_{i,j}\colon N(U_{i})\to N(V_{j})\) are given as follows. Choose for the map \((q_{i})_{\pm}\colon G/U_{i}\to G/V_{j}\) an element \((g_{i})_{\pm}\) with \((q_{i})_{\pm}(eU_{i})=(g_{i})_{\pm}^{-1}V_{j}\). Let \([c(g_{i})_{\pm}]\colon U_{i}\to V_{j}\) be the morphism in \(\mathsf{Sub}_{\mathcal{F}}(G)\) represented by \(c(g_{i})_{\pm}\colon U_{i}\to V_{j}\) sending \(u\) to \((g_{i})_{\pm}u(g_{i})_{\pm}^{-1}\). Then
\[\beta_{i,j}=\begin{cases}\pm N([c(g_{i})_{\pm}])&\text{if $j=j_{\pm}(i)$ and $j_{-}(i)\neq j_{+}(i)$};\\ N([c(g_{i})_{+}])-N([c(g_{i})_{-}])&\text{if $j=j_{-}(i)=j_{+}(i)$};\\ 0&\text{if $j\notin\{j_{-}(i),j_{+}(i)\}$}.\end{cases}\]
Note that the cokernel of \(\beta\) is \(\mathit{SH}_{0}^{G,\mathcal{F}}(X;N)\).
## 3. A brief review of the Farrell Jones Conjecture for the algebraic \(K\)-theory of Hecke algebras
In this section we give a review of the Farrell Jones Conjecture for the algebraic \(K\)-theory of Hecke algebras. Further information can be found in [2, 3].
Let \(R\) be a (not necessarily commutative) associative unital ring with \(\mathbb{Q}\subseteq R\). Let \(G\) be a td-group. Let \(\mathcal{H}(G;R)\) be the associated Hecke algebra.
One can construct a covariant functor
\[\mathbf{K}_{R}\colon\mathsf{Or}_{\mathrm{Op}}(G)\to\mathsf{Spectra}\]
such that \(\pi_{n}(\mathbf{K}_{R}(G/U))\cong K_{n}(\mathcal{H}(U;R))\) holds for any \(n\in\mathbb{Z}\) and open subgroup \(U\subseteq G\). Associated to it is a smooth \(G\)-homology theory \(H^{G}_{*}(-;\mathbf{K}_{R})\) such that
\[H^{G}_{n}(G/U;\mathbf{K}_{R})\cong K_{n}(\mathcal{H}(U;R)) \tag{3.1}\]
holds for every \(n\in\mathbb{Z}\) and every open subgroup \(U\subseteq G\).
The next result follows from [3, Corollary 1.8].
**Theorem 3.2**.: _Let \(G\) be a td-group which is modulo a normal compact subgroup a subgroup of a reductive \(p\)-adic group. Let \(R\) be a uniformly regular ring with \(\mathbb{Q}\subseteq R\)._
_Then the map induced by the projection \(E_{\operatorname{\mathrm{Cop}}}(G)\to G/G\) induces for every \(n\in\mathbb{Z}\) an isomorphism_
\[H^{G}_{n}(E_{\operatorname{\mathrm{Cop}}}(G);\mathbf{K}_{R})\xrightarrow{ \cong}H^{G}_{n}(G/G;\mathbf{K}_{R})=K_{n}(\mathcal{H}(G;R)).\]
## 4. Proof of the Main Theorem 1.1
Proof of Theorem 1.1.: (i) This is exactly Theorem 3.2.
(ii) Since an open group homomorphism \(U\to V\) between two td-groups induces a ring homomorphism \(\mathcal{H}(U;R)\to\mathcal{H}(V;R)\) between the Hecke algebras and hence a homomorphism \(K_{n}(\mathcal{H}(U;R))\to K_{n}(\mathcal{H}(V;R))\), and inner automorphisms of a td-group \(U\) induce the identity on \(K_{n}(\mathcal{H}(U;R))\), we get a covariant \(\mathbb{Z}\mathsf{Sub}_{\mathrm{Op}}(G)\)-module \(K_{n}(\mathcal{H}(?;R))\) whose value at \(U\) is \(K_{n}(\mathcal{H}(U;R))\). Since the isomorphism (3.1) is natural, we get an isomorphism of covariant \(\mathbb{Z}\mathsf{Or}_{\mathrm{Op}}(G)\)-modules
\[P^{*}K_{n}(\mathcal{H}(?;R))\xrightarrow{\cong}\pi_{n}(\mathbf{K}_{R})\]
for the projection \(P\colon\mathsf{Or}_{\mathrm{Op}}(G)\to\mathsf{Sub}_{\mathrm{Op}}(G)\) of (2.4). So the smooth equivariant Atiyah-Hirzebruch spectral sequence applied to the smooth homology theory \(H^{G}_{*}(-;\mathbf{K}_{R})\) takes for an \(\mathcal{F}\)-\(G\)-\(CW\)-complex \(X\) the form
\[E^{2}_{p,q}=S\!H^{G,\mathcal{F}}_{p}\big{(}X;K_{q}(\mathcal{H}(?;R))\big{)} \implies H^{G}_{p+q}(X;\mathbf{K}_{R}). \tag{4.1}\]
Now assertion (ii) follows from the special case \(X=E_{\operatorname{\mathrm{Cop}}}(G)\) and assertion (i).
(iii) and (iv) As \(K_{q}(\mathcal{H}(K;R))\) vanishes for every compact td-group \(K\) and every \(q\leq-1\), see [2, Lemma 8.1], assertions (iii) and (iv) follow from Theorem 2.16 applied in the case \(X=E_{\operatorname{\mathrm{Cop}}}(G)\) and from assertion (i). This finishes the proof of the Main Theorem 1.1.
## 5. The main recipe for the computation of the projective class group
Throughout this section \(G\) will be a td-group and \(R\) a uniformly regular ring with \(\mathbb{Q}\subseteq R\), e.g., a field of characteristic zero. We will assume that the assembly map \(H^{G}_{n}(E_{\operatorname{\mathrm{Cop}}}(G);\mathbf{K}_{R})\to H^{G}_{n}(G/G ;\mathbf{K}_{R})=K_{n}(\mathcal{H}(G;R))\) is bijective for all \(n\in\mathbb{Z}\). This is known to be true for subgroups of reductive \(p\)-adic groups by Theorem 3.2.
### The general case
Let \(X\) be an abstract simplicial complex with a simplicial \(G\)-action such that all isotropy groups are compact open, the \(G\)-action is cellular, and \(|X|^{K}\) is non-empty and connected for every compact open subgroup \(K\) of \(G\).
We can choose a subset \(V\) of the set of vertices of \(X\) such that the \(G\)-orbit through any vertex in \(X\) meets \(V\) in precisely one element. Fix a total ordering on \(V\). Let \(E\) be the subset of \(V\times V\) consisting of those pairs \((v,w)\) such that \(v\leq w\) holds and there exists \(g\in G\) for which \(v\) and \(gw\) satisfy \(v\neq gw\) and span an edge \([v,gw]\) in \(X\). For \((v,w)\in E\) define \(\overline{F(v,w)}\) to be the subset of \(G_{v}\backslash G/G_{w}\) consisting of elements \(x\) for which \(v\) and \(gw\) satisfy \(v\neq gw\) and span an edge \([v,gw]\) in \(X\) for some (and hence all) representative \(g\) of \(x\). Choose a subset \(F(v,w)\) of \(G\) such that the projection \(G\to G_{v}\backslash G/G_{w}\) induces a bijection \(F(v,w)\to\overline{F(v,w)}\).
Then for every edge of \(X\) the \(G\)-orbit through it meets the set \(\{[v,gw]\mid(v,w)\in E,g\in F(v,w)\}\) in precisely one element. Moreover, the \(0\)-skeleton of \(|X|\) is given by \(|X|_{0}=\coprod_{u\in V}G/G_{u}\) and \(|X|_{1}\) is given by the \(G\)-pushout
where \(q_{(v,w),g}\colon G/(G_{v}\cap G_{gw})\times S^{0}\to|X|_{0}=\coprod_{u\in V}G /G_{u}\) is defined as follows. Write \(S^{0}=\{-1,1\}\). The restriction of \(q_{(v,w),g}\) to \(G/(G_{v}\cap G_{gw})\times\{-1\}\) lands in the summand \(G/G_{v}\) and is given by the canonical projection. The restriction of \(q_{(v,w),g}\) to \(G/(G_{v}\cap G_{gw})\times\{1\}\) lands in the summand \(G/G_{w}\) and is given by the \(G\)-map \(R_{g^{-1}}\colon G/(G_{v}\cap G_{gw})\to G/G_{w}\) sending \(z(G_{v}\cap G_{gw})\) to \(zgG_{w}\).
Next we define a map
\[\beta=(\beta_{(v,w),g,u})\colon\bigoplus_{(v,w)\in E}\bigoplus_{g\in F(v,w)}K _{0}(\mathcal{H}(G_{v}\cap G_{gw};R))\to\bigoplus_{u\in V}K_{0}(\mathcal{H}(G _{u};R)).\]
If \(u=v\), then \(\beta_{(v,w),g,v}\colon K_{0}(\mathcal{H}(G_{v}\cap G_{gw};R))\to K_{0}( \mathcal{H}(G_{v};R))\) is the map induced by the inclusion \(G_{v}\cap G_{gw}\to G_{v}\) multiplied with \((-1)\). If \(u=w\), then \(\beta_{(v,w),g,w}\colon K_{0}(\mathcal{H}(G_{v}\cap G_{gw};R))\to K_{0}(\mathcal{H}(G_{w};R))\) is the map induced by the group homomorphism \(G_{v}\cap G_{gw}\to G_{w}\) sending \(z\) to \(g^{-1}zg\). If \(u\notin\{v,w\}\), then \(\beta_{(v,w),g,u}=0\).
**Lemma 5.1**.: _The cokernel of \(\beta\) is isomorphic to \(K_{0}(\mathcal{H}(G;R))\)._
Proof.: We conclude from Remark 2.20 that the cokernel of \(\beta\) is \(S\!H_{0}^{G,\operatorname{\mathsf{Cop}}}\bigl(|X|;K_{0}(\mathcal{H}(?;R))\bigr)\). Since \(|X|^{K}\) is non-empty and connected for every compact open subgroup \(K\) of \(G\), this cokernel is isomorphic to \(K_{0}(\mathcal{H}(G;R))\) by Theorem 2.16 (ii)b.
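As a purely illustrative special case of Lemma 5.1, suppose there are exactly two orbits of vertices, represented by \(v\) and \(w\), and one orbit of edges, represented by \([v,w]\), so that \(V=\{v,w\}\), \(E=\{(v,w)\}\) and \(F(v,w)=\{e\}\) for the unit element \(e\in G\). Then the recipe reduces to

\[K_{0}(\mathcal{H}(G;R))\cong\operatorname{cok}\Bigl(K_{0}(\mathcal{H}(G_{v}\cap G_{w};R))\xrightarrow{(-i_{v,*},\,i_{w,*})}K_{0}(\mathcal{H}(G_{v};R))\oplus K_{0}(\mathcal{H}(G_{w};R))\Bigr),\]

where \(i_{v,*}\) and \(i_{w,*}\) denote the maps induced by the inclusions \(G_{v}\cap G_{w}\to G_{v}\) and \(G_{v}\cap G_{w}\to G_{w}\).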
If \(\widetilde{C}\) is compact, then we can consider \(X\) as a \(\widetilde{G}\)-\(CW\)-complex by restricting the \(G\)-action with \(\operatorname{pr}\) and Subsection 5.a applies. Hence we will assume that \(\widetilde{C}\) is not compact, or, equivalently, that \(\widetilde{C}\) is not contained in the kernel \(\widetilde{M}:=\ker(\mu)\). Then the index \(m:=[\mathbb{Z}:\mu(\widetilde{C})]\) is a natural number \(m\geq 1\). We fix an element \(\widetilde{c}\in\widetilde{C}\) with \(\mu(\widetilde{c})=m\). In the sequel we choose for every \(g\in G\) an element \(\widetilde{g}\) in \(\widetilde{G}\) satisfying \(\operatorname{pr}(\widetilde{g})=g\) and denote for an open subgroup \(U\subseteq G\) by \(\widetilde{U}\subseteq\widetilde{G}\) its preimage under \(\operatorname{pr}\colon\widetilde{G}\to G\). Let
\[\gamma\colon\bigoplus_{(v,w)\in E}\bigoplus_{g\in F(v,w)}K_{0}(\mathcal{H}( \widetilde{G_{v}}\cap\widetilde{G_{gw}}\cap\widetilde{M};R))\to\bigoplus_{u \in V}K_{0}(\mathcal{H}(\widetilde{G_{u}}\cap\widetilde{M};R))\]
be the map whose component for \((v,w)\in E\), \(g\in F(v,w)\), and \(u\in V\) is the map
\[\gamma_{(v,w),g,u}\colon K_{0}(\mathcal{H}(\widetilde{G_{v}}\cap\widetilde{G_ {gw}}\cap\widetilde{M};R))\to K_{0}(\mathcal{H}(\widetilde{G_{u}}\cap \widetilde{M};R)) \tag{5.3}\]
defined next. If \(u=v\), it is the map coming from the inclusion \(\widetilde{G_{v}}\cap\widetilde{G_{gw}}\cap\widetilde{M}\to\widetilde{G_{v}} \cap\widetilde{M}\) multiplied with \((-1)\). If \(u=w\), it is the map coming from the group homomorphism \(\widetilde{G_{v}}\cap\widetilde{G_{gw}}\cap\widetilde{M}\to\widetilde{G_{w}} \cap\widetilde{M}\) sending \(x\) to \(\widetilde{g}x\widetilde{g}^{-1}\). If \(u\not\in\{v,w\}\), it is trivial. Note that this definition is independent of the choice of \(\widetilde{g}\in\widetilde{G}\) satisfying \(\operatorname{pr}(\widetilde{g})=g\) for \(g\in F(v,w)\).
**Lemma 5.4**.: _The cokernel of \(\gamma\) is \(K_{0}(\mathcal{H}(\widetilde{G};R))\)._
Proof.: Note that \(|X|\times\mathbb{R}\) carries the \(G\times\mathbb{Z}\)-\(CW\)-complex structure coming from the product of the \(G\)-\(CW\)-complex structure on \(|X|\) and the standard free \(\mathbb{Z}\)-\(CW\)-structure on \(\mathbb{R}\). Since the \(\mathbb{Z}\)-CW-complex \(\mathbb{R}\) has precisely one equivariant \(1\)-cell and one equivariant \(0\)-cell, the set of equivariant \(0\)-cells of the \(G\times\mathbb{Z}\)-\(CW\) complex \(|X|\times\mathbb{R}\) can be identified with the set \(V\) and the set of equivariant \(1\)-cells can be identified with the disjoint union of \(V\) and the set \(\coprod_{(v,w)\in E}F(v,w)\). Now the \(0\)-skeleton of \(|X|\times\mathbb{R}\) is given by the disjoint union \(\coprod_{u\in V}\widetilde{G}/\widetilde{G_{u}}\times\mathbb{Z}\) and the \(1\)-skeleton of \(|X|\times\mathbb{R}\) is given by the \(G\times\mathbb{Z}\)-pushout
(5.5)
where \(\widetilde{q}\) is given as follows. Write \(S^{0}=\{-1,1\}\). Fix \(u\in V\). The restriction of \(\widetilde{q}\) to the summand \(\widetilde{G}/\widetilde{G_{v}}\times\mathbb{Z}\times\{\epsilon\}\) lands in the summand \(\widetilde{G}/\widetilde{G_{v}}\times\mathbb{Z}\) and is given by id for \(\epsilon=-1\) and by id \(\times\)sh\({}_{1}\) for \(\epsilon=1\), where sh\({}_{a}\colon\mathbb{Z}\to\mathbb{Z}\) sends \(b\) to \(a+b\) for \(a,b\in\mathbb{Z}\). Fix \((v,w)\in E\) and \(g\in F(v,w)\). The restriction of \(\widetilde{q}\) to the summand \(\widetilde{G}/(\widetilde{G_{v}}\cap\widetilde{G_{gw}})\times\mathbb{Z}\times \{-1\}\) belonging to \((v,w)\) and \(g\) lands in the summand for \(u=v\) and is the canonical projection \(\widetilde{G}/(\widetilde{G_{v}}\cap\widetilde{G_{gw}})\times\mathbb{Z}\to \widetilde{G}/\widetilde{G_{v}}\times\mathbb{Z}\). The restriction of \(\widetilde{q}\) to the summand \(\widetilde{G}/(\widetilde{G_{v}}\cap\widetilde{G_{gw}})\times\mathbb{Z}\times\{1\}\) belonging to \((v,w)\) and \(g\) lands in the summand for \(u=w\) and is the map \(R_{\widetilde{g}^{-1}}\times\)id\({}_{2}\colon\widetilde{G}/(\widetilde{G_{v}}\cap\widetilde{G_{gw}})\times \mathbb{Z}\to\widetilde{G}/\widetilde{G_{w}}\times\mathbb{Z}\), where \(R_{\widetilde{g}^{-1}}\) sends \(\widetilde{z}(\widetilde{G_{v}}\cap\widetilde{G_{gw}})\) to \(\widetilde{z}\widetilde{g}^{-1}\widetilde{G_{w}}\).
We have the group homomorphism
\[\iota:=\operatorname{pr}\times\mu\colon\widetilde{G}\to G\times\mathbb{Z}.\]
Its kernel is \(\widetilde{C}\cap\widetilde{M}\). Its image has finite index in \(G\times\mathbb{Z}\), which agrees with the index \(m\) of the image of \(\mu\) in \(\mathbb{Z}\).
We are interested in the \(\widetilde{G}\)-\(CW\)-complex \(\iota^{*}(|X|\times\mathbb{R})\) obtained by restriction with \(\iota\) from the \(G\times\mathbb{Z}\)-\(CW\)-complex \(|X|\times\mathbb{R}\). So we have to analyze how the \(G\times\mathbb{Z}\)-cells in \(\iota^{*}(|X|\times\mathbb{R})\) viewed as \(\widetilde{G}\)-spaces decompose as disjoint union of \(\widetilde{G}\)-cells. Consider any open subgroup \(U\subseteq G\). Then we obtain a \(\widetilde{G}\)-homeomorphism
\[\alpha(U)\colon\coprod_{p=0}^{m-1}\widetilde{G}/(\widetilde{U}\cap\widetilde{ M})\xrightarrow{\cong}\iota^{*}\big{(}G/U\times\mathbb{Z}\big{)}\]
by sending the element \(\widetilde{z}(\widetilde{U}\cap\widetilde{M})\) in the \(p\)-th summand to \((\operatorname{pr}(\widetilde{z})U,\mu(\widetilde{z})+p)\). Next we have to analyze the naturality properties of \(\alpha(U)\). The following diagram commutes for \(a\in\mathbb{Z}\)
where \(\widehat{\pi}\) sends the summand for \(p=0,\dots,m-2\) by the identity to the summand for \(p+1\) and sends the summand for \(p=m-1\) to the summand for \(p=0\) by the map \(R_{\widetilde{c}}\colon\widetilde{G}/(\widetilde{U}\cap\widetilde{M})\to \widetilde{G}/(\widetilde{U}\cap\widetilde{M})\) for \(\widetilde{c}\in\widetilde{C}\) satisfying \(\mu(\widetilde{c})=m\). Note for the sequel that the endomorphism \(\pi_{n}(\mathbf{K}_{R}(R_{\widetilde{c}}))\) of \(\pi_{n}(\mathbf{K}_{R}(\widetilde{G}/\widetilde{U}\cap\widetilde{M}))=K_{0}( \mathcal{H}(\widetilde{U}\cap\widetilde{M}))\) is the identity, since conjugation with \(\widetilde{c}\) induces the identity on \(\widetilde{U}\cap\widetilde{M}\).
Consider two open subgroups \(U\) and \(V\) of \(G\) and an element \(g\in G\) with \(gUg^{-1}\subseteq V\). Then we get well-defined \(\widetilde{G}\)-maps \(R_{\widetilde{g}^{-1}}\colon\widetilde{G}/(\widetilde{U}\cap\widetilde{M})\to \widetilde{G}/(\widetilde{V}\cap\widetilde{M})\) sending \(\widetilde{z}(\widetilde{U}\cap\widetilde{M})\) to \(\widetilde{z}\widetilde{g}^{-1}(\widetilde{V}\cap\widetilde{M})\) and \(R_{g^{-1}}\times\operatorname{id}\colon\iota^{*}\big{(}G/U\times\mathbb{Z} \big{)}\to\iota^{*}\big{(}G/V\times\mathbb{Z}\big{)}\) sending \((zU,n)\) to \((zg^{-1}V,n)\) and the following diagram commutes
In particular the following diagram commutes
Now we obtain from the \(G\times\mathbb{Z}\)-pushout (5.5) by applying restriction with \(\iota\) and the maps \(\alpha_{U}\) above a \(\widetilde{G}\)-pushout describing how the \(1\)-skeleton of the \(\widetilde{G}\)-\(CW\)-complex \(\iota^{*}(|X|\times\mathbb{R})\) is obtained from its \(0\)-skeleton and explicite descriptions of the attaching maps.
In the sequel \(A^{m}\) stands for the \(m\)-fold direct sum of copies of \(A\) for an abelian group \(A\) and \(\pi\colon A^{m}\to A^{m}\) denotes the permutation map sending \((a_{1},a_{2},\dots,a_{m})\) to \((a_{m},a_{1},\dots,a_{m-1})\) and \(\operatorname{aug}\colon A^{m}\to A\) denotes the augmentation map sending \((a_{1},\dots,a_{m})\) to \(a_{1}+\dots+a_{m}\).
Let \(\delta\) be the map given by the direct sum
\[\delta=\bigoplus_{v\in V}\delta_{v}\colon\bigoplus_{v\in V}K_{0}(\mathcal{H}( \widetilde{G_{v}}\cap\widetilde{M};R))^{m}\to\bigoplus_{v\in V}K_{0}(\mathcal{H}( \widetilde{G_{v}}\cap\widetilde{M};R))^{m}\]
where \(\delta_{v}\colon K_{0}(\mathcal{H}(\widetilde{G_{v}}\cap\widetilde{M};R))^{m} \to K_{0}(\mathcal{H}(\widetilde{G_{v}}\cap\widetilde{M};R))^{m}\) is \(\pi-\operatorname{id}\). Let
\[\epsilon\colon\bigoplus_{(v,w)\in E}\bigoplus_{g\in F(v,w)}K_{0}(\mathcal{H}( \widetilde{G_{v}}\cap\widetilde{G_{gw}}\cap\widetilde{M};R))^{m}\to\bigoplus_ {u\in V}K_{0}(\mathcal{H}(\widetilde{G_{u}}\cap\widetilde{M};R))^{m}\]
be the map given by the components \(\epsilon_{(v,w),g,u}\) defined as follows. For \(u=v\) the map \(\epsilon_{(v,w),g,v}\) is the \(m\)-fold direct sum \(\gamma^{m}_{(v,w),g,v}\) of the maps \(\gamma_{(v,w),g,v}\) defined in (5.3). For \(u=w\) we put
\[\epsilon_{(v,w),g,w}\colon K_{0}(\mathcal{H}(\widetilde{G_{v}}\cap\widetilde{G_{gw}}\cap\widetilde{M};R))^{m}\xrightarrow{\gamma^{m}_{(v,w),g,w}} K_{0}(\mathcal{H}(\widetilde{G_{w}}\cap\widetilde{M};R))^{m}\xrightarrow{\pi^{\mu(\widetilde{g})}} K_{0}(\mathcal{H}(\widetilde{G_{w}}\cap\widetilde{M};R))^{m}.\]
Since \(\pi^{m}=\operatorname{id}\), the map \(\pi^{\mu(\widetilde{g})}\) depends only on \(\overline{\mu}(g)\), where \(\overline{\mu}\colon G\to\mathbb{Z}/m\) sends \(g\) to the image of \(\mu(\widetilde{g})\) under the projection \(\mathbb{Z}\to\mathbb{Z}/m\) for any choice of an element \(\widetilde{g}\in\widetilde{G}\) with \(\operatorname{pr}(\widetilde{g})=g\).
The map
\[\delta\oplus\epsilon\colon\left(\bigoplus_{v\in V}K_{0}(\mathcal{H}( \widetilde{G_{v}}\cap\widetilde{M};R))^{m}\right)\oplus\left(\bigoplus_{(v,w )\in E}\bigoplus_{g\in F(v,w)} K_{0}(\mathcal{H}(\widetilde{G_{v}}\cap\widetilde{G_{gw}}\cap \widetilde{M};R))^{m}\right)\] \[\to\bigoplus_{u\in V}K_{0}(\mathcal{H}(\widetilde{G_{u}}\cap \widetilde{M};R))^{m}\]
has cokernel isomorphic to \(K_{0}(\mathcal{H}(\widetilde{G};R))\) because of Theorem 2.16 (ii)b and Remark 2.20, by the same argument as in the proof of Lemma 5.1, since \(\left(\iota^{*}(|X|\times\mathbb{R})\right)^{K}\) is connected for every compact open subgroup \(K\) of \(\widetilde{G}\). It does not matter that \(\iota^{*}(|X|\times\mathbb{R})\) is a \(\widetilde{G}\)-\(CW\)-complex but not a simplicial complex, since in the description of \(\beta_{i,j}\) appearing in Remark 2.20 the case \(j_{+}(i)=j_{-}(i)\) never occurs.
We can identify \(\bigoplus_{v\in V}K_{0}(\mathcal{H}(\widetilde{G_{v}}\cap\widetilde{M};R))\) with the cokernel of \(\delta\), since we have the exact sequence \(A^{m}\xrightarrow{\pi-\operatorname{id}}A^{m}\xrightarrow{\operatorname{aug}}A\to 0\) for every abelian group \(A\). The cokernel of \(\delta\oplus\epsilon\) is isomorphic to the cokernel of the composite of \(\epsilon\) with the map
\[\bigoplus_{v\in V}\operatorname{aug}\colon\bigoplus_{v\in V}K_{0}(\mathcal{H} (\widetilde{G_{v}}\cap\widetilde{M};R))^{m}\to\bigoplus_{v\in V}K_{0}( \mathcal{H}(\widetilde{G_{v}}\cap\widetilde{M};R))=\operatorname{cok}(\delta).\]
For every \((v,w)\in E\), \(g\in F(v,w)\), and \(u\in V\) the diagram
commutes, since \(\operatorname{aug}\circ\pi=\operatorname{aug}\) holds. This finishes the proof of Lemma 5.4.
## 6. The projective class group of the Hecke algebras of \(\operatorname{SL}_{n}(F)\), \(\operatorname{PGL}_{n}(F)\) and \(\operatorname{GL}_{n}(F)\)
Next we apply the recipes of Section 5 to some prominent reductive \(p\)-adic groups \(G\) as an illustration. For the remainder of this section \(R\) is a uniformly regular ring with \(\mathbb{Q}\subseteq R\).
Note that for a reductive \(p\)-adic group \(G\) the assembly map \(H^{G}_{n}(E_{\operatorname{\text{\rm Cop}}}(G);\mathbf{K}_{R})\to H^{G}_{n}(G/G; \mathbf{K}_{R})=K_{n}(\mathcal{H}(G;R))\) is bijective for all \(n\in\mathbb{Z}\) by Theorem 3.2. Moreover, the Bruhat-Tits building \(X\) of \(G\) or of \(G/\operatorname{cent}(G)\) can serve as the desired simplicial complex \(X\) appearing in Section 5. The original construction of the Bruhat-Tits building can be found in [8]. For more information about buildings we refer to [1, 6, 7, 17]. The space \(X\) carries a \(\operatorname{CAT}(0)\)-metric, which is invariant under the action of \(G\) or \(G/\operatorname{cent}(G)\), see [6, Theorem 10A.4 on page 344]. Hence \(|X|^{H}\) is contractible for any compact open subgroup \(H\) of \(G\) or \(G/\operatorname{cent}(G)\), since \(X^{H}\) is a convex non-empty subset of \(X\) and hence contractible by [6, Corollary II.2.8 on page 179]. Therefore the geometric realization of the Bruhat-Tits building \(X\) is (after possibly subdividing to achieve a cellular action) a model for \(E_{\operatorname{\text{\rm Cop}}}(G)\) or for \(E_{\operatorname{\text{\rm Cop}}}(G/\operatorname{cent}(G))\).
### \(\operatorname{SL}_{n}(F)\)
We begin by computing \(K_{0}(\mathcal{H}(\operatorname{SL}_{n}(F);R))\), where \(F\) is a non-Archimedean local field with valuation \(v\colon F\to\mathbb{Z}\cup\{\infty\}\). The following claims about the Bruhat-Tits building \(X\) for \(\operatorname{SL}_{n}(F)\) (and later about \(X^{\prime}\)) can all be verified from the description of \(X\) in [1, Sec. 6.9].
For \(l=0,\ldots,n-1\) let \(U^{\operatorname{S}}_{l}\) be the compact open subgroup of \(\operatorname{SL}_{n}(F)\) consisting of all matrices \((a_{ij})\) in \(\operatorname{SL}_{n}(F)\) satisfying \(v(a_{i,j})\geq-1\) for \(1\leq i\leq n-l<j\leq n\), \(v(a_{i,j})\geq 1\) for \(1\leq j\leq n-l<i\leq n\) and \(v(a_{i,j})\geq 0\) for all other \(i,j\). In particular \(U^{\operatorname{S}}_{0}=\operatorname{SL}_{n}(\mathcal{O})\), where \(\mathcal{O}=\{z\in F\mid v(z)\geq 0\}\). The intersection of the \(U^{\operatorname{S}}_{l}\)'s is the Iwahori subgroup \(I^{\operatorname{S}}\) of \(\operatorname{SL}_{n}(F)\). It is given by those matrices \((a_{i,j})\) in \(\operatorname{SL}_{n}(F)\) for which \(v(a_{i,j})\geq 1\) for \(i>j\) and \(v(a_{i,j})\geq 0\) for \(i\leq j\) hold.
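For orientation, in the case \(n=2\) these subgroups can be written out explicitly. Writing \(\mathfrak{p}=\{z\in F\mid v(z)\geq 1\}\) for the maximal ideal of \(\mathcal{O}\) and \(\mathfrak{p}^{-1}=\{z\in F\mid v(z)\geq-1\}\) (notation chosen here only for this illustration), the valuation conditions above amount to

\[U^{\operatorname{S}}_{0}=\operatorname{SL}_{2}(\mathcal{O}),\qquad U^{\operatorname{S}}_{1}=\begin{pmatrix}\mathcal{O}&\mathfrak{p}^{-1}\\ \mathfrak{p}&\mathcal{O}\end{pmatrix}\cap\operatorname{SL}_{2}(F),\qquad I^{\operatorname{S}}=U^{\operatorname{S}}_{0}\cap U^{\operatorname{S}}_{1}=\begin{pmatrix}\mathcal{O}&\mathcal{O}\\ \mathfrak{p}&\mathcal{O}\end{pmatrix}\cap\operatorname{SL}_{2}(F).\]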
The \((n-1)\)-simplex \(\Delta\) can be chosen with an ordering on its vertices such that the isotropy group of its \(l\)-th vertex \(v_{l}\) is \(U^{\operatorname{S}}_{l}\). The isotropy group of a face \(\sigma\) of \(\Delta\) is the intersection of the isotropy groups of the vertices of \(\sigma\). In particular, the isotropy group of \(\Delta\) is the Iwahori subgroup \(I^{S}\) of \(\operatorname{SL}_{n}(F)\). Consider the map
\[d^{\operatorname{SL}_{n}(F)}\colon\bigoplus_{0\leq i<j\leq n-1}K_{0}(\mathcal{ H}(U^{\operatorname{S}}_{i}\cap U^{\operatorname{S}}_{j};R))\to\bigoplus_{0\leq l \leq n-1}K_{0}(\mathcal{H}(U^{\operatorname{S}}_{l};R)),\]
for which the component \(d^{\operatorname{SL}_{n}(F)}_{i<j,l}\colon K_{0}(\mathcal{H}(U^{\operatorname {S}}_{i}\cap U^{\operatorname{S}}_{j};R))\to K_{0}(\mathcal{H}(U^{ \operatorname{S}}_{l};R))\) is given by \(-K_{0}(\mathcal{H}(f^{i}_{i<j};R))\), if \(l=i\), by \(K_{0}(\mathcal{H}(f^{j}_{i<j};R))\), if \(l=j\), and is zero, if \(l\notin\{i,j\}\), where \(f^{k}_{i<j}\colon U^{\operatorname{S}}_{i}\cap U^{\operatorname{S}}_{j}\to U^{ \operatorname{S}}_{k}\) is the inclusion for \(k=i,j\).
Then the cokernel of \(d^{\operatorname{SL}_{n}(F)}\) is \(K_{0}(\mathcal{H}(\operatorname{SL}_{n}(F);R))\) by Lemma 5.1 and Remark 5.2.
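The groups \(K_{0}(\mathcal{H}(U^{\operatorname{S}}_{l};R))\) themselves are not computed here, but the last step of such a computation is pure linear algebra: once \(d^{\operatorname{SL}_{n}(F)}\) is presented by an integer matrix in chosen bases (a hypothetical assumption made only for illustration), its cokernel can be read off from the Smith normal form. A small sketch, assuming a recent SymPy:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Hypothetical integer presentation of a map Z^3 -> Z^2 standing in for d^{SL_n(F)}.
D = Matrix([[2, 0, 4],
            [0, 6, 0]])

S = smith_normal_form(D, domain=ZZ)
print(S)
# The nonzero diagonal entries (2 and 6, up to sign) give the torsion of the
# cokernel; a zero row of the Smith normal form would contribute a free summand Z.
# Here there are no zero rows, so cok(D) = Z/2 + Z/6.
```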
### \(\operatorname{PGL}_{n}(F)\)
Next we compute \(K_{0}(\mathcal{H}(\operatorname{PGL}_{n}(F);R))\). The action of \(\operatorname{SL}_{n}(F)\) on \(X\) extends to an action of \(\operatorname{GL}_{n}(F)\). This action factors through the canonical projection \(\operatorname{pr}\colon\operatorname{GL}_{n}(F)\to\operatorname{PGL}_{n}(F)\) to an action of \(\operatorname{PGL}_{n}(F)\). These actions are still simplicial, but no longer cellular. Let
\[\widehat{h}:=\left(\begin{array}{cccc}&1&&&\\ &&\ddots&&\\ &&&1\\ \zeta&&&\end{array}\right)\in\operatorname{GL}_{n}(F)\]
where we chose a uniformizer \(\zeta\in F\), i.e., an element in \(F\) satisfying \(v(\zeta)=1\). Obviously \(\widehat{h}^{n}\) is the diagonal matrix \(\zeta\cdot I_{n}\), all whose diagonal entries are \(\zeta\), and hence is central in \(\operatorname{GL}_{n}(F)\). Define \(h\in\operatorname{PGL}_{n}(F)\) by \(h=\operatorname{pr}(\widehat{h})\). Then \(hv_{l}=v_{l+1}\) for \(l=0,\ldots,n-2\) and \(hv_{n-1}=v_{0}\) and \(h^{n}\) is the unit in \(\operatorname{PGL}_{n}(F)\). In particular, the action of \(\operatorname{PGL}_{n}(F)\) is transitive on the vertices of \(X\). To obtain a cellular action, \(X\) can be subdivided to \(X^{\prime}\) as follows. The \((n-2)\)-skeleton of \(X\) is unchanged, while the \((n-1)\)-simplices of \(X\) are in \(X^{\prime}\) replaced with cones on their boundary.
More formally, the vertices of \(X^{\prime}\) are the vertices of \(X\) and the barycenters \(b_{\sigma}\) of \((n-1)\)-simplices \(\sigma\) of \(X\). A set \(S\) of vertices of \(X^{\prime}\) is a simplex of \(X^{\prime}\), if and only if \(S\) is a \(k\)-simplex of \(X\) and \(k<n-1\) or if \(S\) contains exactly one barycenter \(b_{\sigma}\) and for all \(v\in S\setminus\{b_{\sigma}\}\) are vertices of \(\sigma\) (in the simplicial structure of \(X\)). The action of \(\operatorname{PGL}_{n}(F)\) on \(X^{\prime}\) is then cellular and is transitive on \((n-1)\)-simplices of \(X^{\prime}\). There are two orbits of vertices, represented by \(v_{0}\) and \(b_{\Delta}\). Let \(k:=\lfloor n/2\rfloor\). There are \(k+1\) orbits of \(1\)-simplices, represented by \(\{v_{0},v_{1}\}\),\(\ldots\),\(\{v_{0},v_{k}\}\) and \(\{v_{0},b_{\Delta}\}\). Next we describe some isotropy groups.
For an open subgroup \(W\subseteq\operatorname{PGL}_{n}(F)\) we denote by \(\widetilde{W}\) its preimage under the projection \(\operatorname{pr}\colon\operatorname{GL}_{n}(F)\to\operatorname{PGL}_{n}(F)\). For \(l=0,\ldots,n-1\) let \(U_{l}^{\operatorname{G}}\) be the compact open subgroup of \(\operatorname{GL}_{n}(F)\) given by \(\widehat{h}^{l}\operatorname{GL}_{n}(\mathcal{O})\widehat{h}^{-l}\); its image in \(\operatorname{PGL}_{n}(F)\) is the isotropy group \(\operatorname{PGL}_{n}(F)_{v_{l}}=\operatorname{PGL}_{n}(F)_{h^{l}v_{0}}\). In particular \(U_{0}^{\operatorname{G}}=\operatorname{GL}_{n}(\mathcal{O})\). Note that
\[U_{l}^{\operatorname{G}}\cap\operatorname{SL}_{n}(F)=(\widehat{h}^{l} \operatorname{GL}_{n}(\mathcal{O})\widehat{h}^{-l})\cap\operatorname{SL}_{n}( F)=\widehat{h}^{l}\operatorname{SL}_{n}(\mathcal{O})\widehat{h}^{-l}=U_{l}^{S}\]
holds. The intersection of the \(U_{l}^{\operatorname{G}}\)-s is the Iwahori subgroup \(I^{\operatorname{G}}\) of \(\operatorname{GL}_{n}(F)\). Let \(U_{l}^{\operatorname{P}}\) be the image of \(U_{l}^{\operatorname{G}}\) in \(\operatorname{PGL}_{n}(F)\). This is the isotropy group of the vertex \(v_{l}\) for the action of \(\operatorname{PGL}_{n}(F)\). The Iwahori subgroup \(I^{\operatorname{P}}\) of \(\operatorname{PGL}_{n}(F)\) is the image of \(I^{\operatorname{G}}\) under \(\operatorname{pr}\). It is the pointwise isotropy subgroup for \(\Delta\). Let \(H\) be the subgroup generated by the image of \(h\) in \(\operatorname{PGL}_{n}(F)\). It is a cyclic subgroup of order \(n\) that cyclically permutes the vertices of \(\Delta\). This subgroup normalizes \(I^{\operatorname{P}}\) and the isotropy group of \(b_{\Delta}\) is the product \(HI^{\operatorname{P}}\). Recall that \(v_{l}=h^{l}v_{0}\) and hence \(U_{l}^{\operatorname{P}}=h^{l}U_{0}^{\operatorname{P}}h^{-l}\).
Write \(i_{H}\colon I^{\operatorname{P}}\to HI^{\operatorname{P}}\), \(i_{0}\colon I^{\operatorname{P}}\to U_{0}^{\operatorname{P}}\), \(c_{0}\colon U_{0}^{\operatorname{P}}\cap U_{l}^{\operatorname{P}}\to U_{0}^{\operatorname{P}}\) for the inclusions and define \(c_{l}\colon U_{0}^{\operatorname{P}}\cap U_{l}^{\operatorname{P}}\to U_{0}^{\operatorname{P}}\) by \(z\mapsto h^{-l}zh^{l}\). Let
\[d^{\operatorname{PGL}_{n}(F)}\colon K_{0}(\mathcal{H}(I^{ \operatorname{P}};R))\oplus\bigoplus_{l=1}^{k}K_{0}(\mathcal{H}(U_{0}^{ \operatorname{P}}\cap U_{l}^{\operatorname{P}};R))\\ \to K_{0}(\mathcal{H}(HI^{\operatorname{P}};R))\oplus K_{0}( \mathcal{H}(U_{0}^{\operatorname{P}};R))\]
be the map that is \(K_{0}(i_{H})\times-K_{0}(i_{0})\) on \(K_{0}(\mathcal{H}(I^{\operatorname{P}};R))\) and \(0\times(K_{0}(c_{l})-K_{0}(c_{0}))\) on \(K_{0}(\mathcal{H}(U_{0}^{\operatorname{P}}\cap U_{l}^{\operatorname{P}};R))\). The cokernel of the homomorphism \(d^{\operatorname{PGL}_{n}(F)}\) agrees with \(\operatorname{\textit{SH}}_{0}^{\operatorname{PGL}_{n}(F)}\bigl{(}X^{\prime};K_{0}(\mathcal{H}(?;R))\bigr{)}\) by Lemma 5.1, if, using the notation of Section 5.A, we put \(E=\{v_{0},b_{\Delta}\}\) with \(v_{0}<b_{\Delta}\), \(F(v_{0},v_{0})=\{h,h^{2},\ldots,h^{k}\}\), and \(F(v_{0},b_{\Delta})=\{e\}\).
### \(\operatorname{GL}_{n}(F)\)
Next we compute \(K_{0}(\mathcal{H}(\operatorname{GL}_{n}(F);R))\). Note that \(\operatorname{GL}_{n}(F)\) has a non-compact center. Hence Subsection 5.a does not apply and we have to pass to the setting of Subsection 5.b using the short exact sequence \(1\to C=\operatorname{cent}(\operatorname{GL}_{n}(F))\to\operatorname{GL}_{n}(F) \xrightarrow{\operatorname{pr}}\operatorname{PGL}_{n}(F)\to 1\), the discussion in Subsection 6.b and Lemma 5.4.
Let \(\widetilde{M}\) be the kernel of the composite \(\mu\colon\operatorname{GL}_{n}(F)\xrightarrow{\det}F^{\times}\xrightarrow{ \nu}\mathbb{Z}\). Let \(\widehat{H}\subseteq\operatorname{GL}_{n}(F)\) be the infinite cyclic subgroup generated by the element \(\widehat{h}\). Note that \(\widetilde{M}\cap C\) consists of those diagonal matrices whose entries on the diagonal are all the same and are sent to \(0\) under \(\nu\). We conclude \((\operatorname{GL}_{n}(\mathcal{O})\cdot C)\cap\widetilde{M}=\operatorname{ GL}_{n}(\mathcal{O})\) from \(C\cap\widetilde{M}\subseteq\operatorname{GL}_{n}(\mathcal{O})\subseteq \widetilde{M}\). Recall that for \(W\subseteq\operatorname{PGL}_{n}(F)\) we denote by \(\widetilde{W}\) its preimage under \(\operatorname{pr}\colon\operatorname{GL}_{n}(F)\to\operatorname{PGL}_{n}(F)\). Since \(\operatorname{pr}(U_{l}^{\operatorname{G}})=U_{l}^{\operatorname{P}}\), we get for \(l=0,\ldots,n-1\)
\[\widetilde{U_{l}^{\operatorname{P}}}\cap\widetilde{M}=(U_{l}^{ \operatorname{G}}\cdot C)\cap\widetilde{M}=(\widehat{h}^{l}\operatorname{GL}_{n}( \mathcal{O})\widehat{h}^{-l}\cdot C)\cap\widetilde{M}\\ =\widehat{h}^{l}\bigl{(}(\operatorname{GL}_{n}(\mathcal{O}) \cdot C)\cap\widetilde{M}\bigr{)}\widehat{h}^{-l}=\widehat{h}^{l} \operatorname{GL}_{n}(\mathcal{O})\widehat{h}^{-l}=U_{l}^{\operatorname{G}}.\]
Now one easily checks \(\widetilde{I^{\mathrm{P}}}\cap\widetilde{M}=I^{\mathrm{G}}\). Finally we show \(\widetilde{HI^{\mathrm{P}}}\cap\widetilde{M}=I^{\mathrm{G}}\). We get \(I^{\mathrm{G}}\subseteq\widetilde{HI^{\mathrm{P}}}\cap\widetilde{M}\) from \(\widetilde{I^{\mathrm{P}}}\cap\widetilde{M}=I^{\mathrm{G}}\). Consider an element \(A\in\widetilde{HI^{\mathrm{P}}}\cap\widetilde{M}\). We can find an integer \(b\), an element \(B\in I^{\mathrm{G}}\), and an element \(D\in C\) such that \(A=\widehat{h}^{b}BD\) and \(\mu(A)=0\) holds. From \(I^{\mathrm{G}}\subseteq\widetilde{M}\) we conclude \(\widehat{h}^{b}D\in\widetilde{M}\). Since \(\mu(D)\) is divisible by \(n\) and \(\mu(\widehat{h})=1\) holds, \(b\) is divisible by \(n\). This implies \(\widehat{h}^{b}D\in C\cap\widetilde{M}\). As \((C\cap\widetilde{M})I^{\mathrm{G}}=I^{\mathrm{G}}\) holds, we conclude \(A\in I^{\mathrm{G}}\). Hence \(\widetilde{HI^{\mathrm{P}}}\cap\widetilde{M}=I^{\mathrm{G}}\) holds.
Let \(\widetilde{i}_{0}\colon I^{\mathrm{G}}\to U_{0}^{\mathrm{G}}\) and \(\widetilde{c}_{0}\colon U_{0}^{\mathrm{G}}\cap U_{l}^{\mathrm{G}}\to U_{0}^{\mathrm{G}}\) be the inclusions and let \(\widetilde{c}_{l}\colon U_{0}^{\mathrm{G}}\cap U_{l}^{\mathrm{G}}\to U_{0}^{\mathrm{G}}\) be the map sending \(z\) to \(\widehat{h}^{-l}z\widehat{h}^{l}\). Let
\[\overline{d}^{\mathrm{GL}_{n}(F)}\colon K_{0}(\mathcal{H}(I^{ \mathrm{G}};R))\oplus\bigoplus_{l=1}^{k}K_{0}(\mathcal{H}(U_{0}^{\mathrm{G}} \cap U_{l}^{\mathrm{G}};R))\\ \to K_{0}(\mathcal{H}(I^{\mathrm{G}};R))\oplus K_{0}(\mathcal{H}(U _{0}^{\mathrm{G}};R))\]
be the map that is \(\mathrm{id}_{K_{0}(\mathcal{H}(I^{\mathrm{G}};R))}\times-K_{0}(\widetilde{i}_{0})\) on \(K_{0}(\mathcal{H}(I^{\mathrm{G}};R))\) and \(0\times(K_{0}(\widetilde{c}_{l})-K_{0}(\widetilde{c}_{0}))\) on \(K_{0}(\mathcal{H}(U_{0}^{\mathrm{G}}\cap U_{l}^{\mathrm{G}};R))\). The cokernel of the map \(\overline{d}^{\mathrm{GL}_{n}(F)}\) is \(K_{0}(\mathcal{H}(\mathrm{GL}_{n}(F);R))\) by Lemma 5.4. Let
\[\widetilde{d}^{\mathrm{GL}_{n}(F)}\colon\bigoplus_{l=1}^{k}K_{0}(\mathcal{H}( U_{0}^{\mathrm{G}}\cap U_{l}^{\mathrm{G}};R))\to K_{0}(\mathcal{H}(U_{0}^{ \mathrm{G}};R))\]
be the map which is given by \(K_{0}(\widetilde{c}_{l})-K_{0}(\widetilde{c}_{0})\) on \(K_{0}(\mathcal{H}(U_{0}^{\mathrm{G}}\cap U_{l}^{\mathrm{G}};R))\). Since \(\widetilde{d}^{\mathrm{GL}_{n}(F)}\) has the same cokernel as \(\overline{d}^{\mathrm{GL}_{n}(F)}\), the cokernel of \(\widetilde{d}^{\mathrm{GL}_{n}(F)}\) is \(K_{0}(\mathcal{H}(\mathrm{GL}_{n}(F);R))\).
## 7. Homotopy colimits
### The Farrell-Jones assembly map as a map of homotopy colimits
Next we want to extend the considerations of Section 6 to the higher \(K\)-groups. For this purpose and the proofs appearing in [3] it is worthwhile to write down the assembly map in terms of homotopy colimits. The projections \(G/U\to G/G\) for \(U\) compact open in \(G\) induce a map
\[\operatorname*{hocolim}_{G/U\in\mathrm{Or}_{\mathcal{C}\mathrm{op}}(G)}\mathbf{K}_{R}(G/U)\to\mathbf{K}_{R}(G/G)\simeq\mathbf{K}(\mathcal{H}(G;R)). \tag{7.1}\]
This map can be identified after applying \(\pi_{n}\) with the assembly map appearing in Theorem 1.1 (i) and Theorem 3.2. This follows from [11, Section 5].
### Simplifying the source of the Farrell-Jones assembly map
Let \(X\) be an abstract simplicial complex with simplicial \(G\)-action such that the isotropy group of each vertex is compact open and the \(G\)-action is cellular. Furthermore we assume that \(|X|^{K}\) is weakly contractible for any compact open subgroup \(K\) of \(G\). Then \(|X|\) is a model for \(E_{\operatorname{\text{\rm Cop}}}(G)\).
Let \(C\) be a collection of simplices of \(X\) that contains at least one simplex from each orbit of the action of \(G\) on the set of simplices of \(X\). Define a category \(\mathcal{C}(C)\) as follows. Its objects are the simplices from \(C\). A morphism \(gG_{\sigma}\colon\sigma\to\tau\) is an element \(gG_{\sigma}\in G/G_{\sigma}\) satisfying \(g\sigma\subseteq\tau\). The composite of \(gG_{\sigma}\colon\sigma\to\tau\) with \(hG_{\tau}\colon\tau\to\rho\) is \(hgG_{\sigma}\colon\sigma\to\rho\). Define a functor
\[\iota_{C}\colon\mathcal{C}(C)^{\mathrm{op}}\to\mathrm{Or}_{\mathcal{C} \mathrm{op}}(G) \tag{7.2}\]
by sending an object \(\sigma\) to \(G/G_{\sigma}\) and a morphism \(gG_{\sigma}\colon\sigma\to\tau\) to \(R_{g}\colon G/G_{\tau}\to G/G_{\sigma},\ g^{\prime}G_{\tau}\mapsto g^{\prime}gG _{\sigma}\).
**Lemma 7.3**.: _Under the assumptions above the map induced by the functor \(\iota_{C}\)_
\[\operatorname*{hocolim}_{\sigma\in\mathcal{C}(C)^{\mathrm{op}}}\mathbf{K}_{R}(G/G_{\sigma})\to\operatorname*{hocolim}_{G/U\in\mathrm{Or}_{\mathcal{C}\mathrm{op}}(G)}\mathbf{K}_{R}(G/U)\]
_is a weak homotopy equivalence._
Proof.: We want to apply the criterion [12, 9.4]. So we have to show that the geometric realization of the nerve of the category \(G/K\downarrow\iota_{C}\) is a contractible space for every object \(G/K\) in \(\operatorname{\mathsf{O}\mathsf{r}_{\operatorname{\mathsf{Cop}}}}(G)\). An object in \(G/K\downarrow\iota_{C}\) is a pair \((\sigma,u)\) consisting of an element \(\sigma\in C\) and a \(G\)-map \(u\colon G/K\to G/G_{\sigma}\). A morphism \((\sigma,u)\to(\tau,v)\) in \(G/K\downarrow\iota_{C}\) is given by a morphism \(gG_{\tau}\colon\tau\to\sigma\) in \(\mathcal{C}(C)\) such that the \(G\)-map \(R_{g}\colon G/G_{\sigma}\to G/G_{\tau}\) sending \(zG_{\sigma}\) to \(zgG_{\tau}\) satisfies \(v\circ R_{g}=u\).
Let \(\mathcal{P}(X^{K})\) be the poset given by the simplices of \(X^{K}\) ordered by inclusion. Then we get an equivalence of categories
\[F\colon\mathcal{P}(X^{K})^{\operatorname{op}}\xrightarrow{\cong}G/K \downarrow\iota_{C}\]
as follows. It sends a simplex \(\sigma\) to the object \((\sigma,\operatorname{pr}_{\sigma}\colon G/K\to G/G_{\sigma})\) for the canonical projection \(\operatorname{pr}_{\sigma}\). A morphism \(\sigma\to\tau\) in \(\mathcal{P}(X^{K})^{\operatorname{op}}\) is sent to the morphism \((\sigma,\operatorname{pr}_{\sigma})\to(\tau,\operatorname{pr}_{\tau})\) in \(G/K\downarrow\iota_{C}\) which is given by the morphism \(eG_{\tau}\colon\tau\to\sigma\) in \(\mathcal{C}(C)\).
Consider an object \((\sigma,u)\) in \(G/K\downarrow\iota_{C}\). We want to show that it is isomorphic to an object in the image of \(F\). Choose \(g\in G\) such that \(g^{-1}Kg\subseteq G_{\sigma}\) holds and \(u\) is the \(G\)-map \(R_{g}\colon G/K\to G/G_{\sigma}\) sending \(zK\) to \(zgG_{\sigma}\). Then \(K\subseteq G_{g\sigma}\) and we can consider the object \(F(g\sigma)=(g\sigma,\operatorname{pr}_{g\sigma})\) for the projection \(\operatorname{pr}_{g\sigma}\colon G/K\to G/G_{g\sigma}\). Now the isomorphism \(gG_{\sigma}\colon\sigma\to g\sigma\) in \(\mathcal{C}(C)\) induces an isomorphism \(F(g\sigma)\xrightarrow{\cong}(\sigma,u)\) in \(G/K\downarrow\iota_{C}\).
Obviously \(F\) is faithful. It remains to show that \(F\) is full. Fix two objects \(\sigma\) and \(\tau\) in \(\mathcal{P}(X^{K})\). Consider a morphism \(f\colon F(\sigma)=(\sigma,\operatorname{pr}_{\sigma})\to F(\tau)=(\tau, \operatorname{pr}_{\tau})\) in \(G/K\downarrow\iota_{C}\). It is given by a morphism \(gG_{\tau}\colon\tau\to\sigma\) in \(\mathcal{C}(C)\) such that the composite of \(R_{g}\colon G/G_{\sigma}\to G/G_{\tau}\) with \(\operatorname{pr}_{\sigma}\) is \(\operatorname{pr}_{\tau}\). This implies \(gG_{\tau}=G_{\tau}\) and hence \(g\in G_{\tau}\). Since \(g\tau\subseteq\sigma\) holds by the definition of a morphism in \(\mathcal{C}(C)\), we get \(\tau\subseteq\sigma\). Hence \(f\) is the image of the morphism \(\sigma\to\tau\) under \(F\). This shows that \(F\) is full.
Hence it remains to show that the geometric realization of the nerve of \(\mathcal{P}(X^{K})^{\operatorname{op}}\) is contractible. Since this is \(|X|^{K}\), this follows from the assumptions.
Suppose additionally that \(X\) admits a strict fundamental domain \(\Delta\), i.e., a simplicial subcomplex \(\Delta\) that contains exactly one simplex from each orbit for the \(G\)-action on the set of simplices of \(X\). Then we can take for \(C\) the simplices from \(\Delta\). In this case \(\mathcal{C}(C)\) can be identified with the poset \(\mathcal{P}(\Delta)\) of simplices of \(\Delta\). Recall that for any open subgroup \(U\) of \(G\), there is an explicit weak homotopy equivalence \(\mathbf{K}(\mathcal{H}(U;R))\xrightarrow{\cong}\mathbf{K}_{R}(G/U)\), where the source is the \(K\)-theory spectrum \(\mathbf{K}(\mathcal{H}(U;R))\) of the Hecke algebra \(\mathcal{H}(U;R)\), see [4, 5.6 and Remark 6.7]. Lemma 7.3 implies
**Theorem 7.4**.: _Let \(X\) be an abstract simplicial complex with a simplicial \(G\)-action such that the isotropy group of each vertex is compact open, the \(G\)-action is cellular, and \(|X|^{K}\) is weakly contractible for every compact open subgroup \(K\) of \(G\). Let \(\Delta\) be a strict fundamental domain._
_Then the assembly map_
\[\operatorname{hocolim}_{\sigma\in\mathcal{P}(\Delta)^{\operatorname{op}}} \mathbf{K}(\mathcal{H}(G_{\sigma};R))\to\operatorname{hocolim}_{G/U\in \operatorname{\mathsf{O}\mathsf{r}_{\operatorname{\mathsf{Cop}}}}(G)}\mathbf{K} _{R}(G/U) \tag{7.5}\]
_that is induced by the functor \(\mathcal{P}(\Delta)^{\operatorname{op}}\to\operatorname{\mathsf{O}\mathsf{r}_{\operatorname{\mathsf{Cop}}}}(G)\) sending a simplex \(\sigma\) to \(G/G_{\sigma}\), is a weak homotopy equivalence._
**Example 7.6** (\(\operatorname{SL}_{n}(F)\)).: Let \(X\) be the Bruhat-Tits building for \(\operatorname{SL}_{n}(F)\). Then the canonical \(\operatorname{SL}_{n}(F)\) action on \(X\) is cellular. We will use again the notation introduced in Section 6. The \((n-1)\)-simplex \(\Delta\), viewed as a subcomplex of \(X\), is a strict fundamental domain. Applying this in the case \(n=2\) yields the homotopy
pushout diagram
For the K-groups this yields a Mayer-Vietoris sequence, infinite to the left,
\[\cdots \to K_{n}(\mathcal{H}(I^{\mathrm{S}};R))\to K_{n}(\mathcal{H}(U^{ \mathrm{S}}_{1};R))\oplus K_{n}(\mathcal{H}(U^{\mathrm{S}}_{0};R))\to K_{n}( \mathcal{H}(\mathrm{SL}_{2}(F);R))\] \[\to K_{n-1}(\mathcal{H}(I^{\mathrm{S}};R))\to K_{n-1}(\mathcal{H}(U^{ \mathrm{S}}_{1};R))\oplus K_{n-1}(\mathcal{H}(U^{\mathrm{S}}_{0};R))\to\cdots\] \[\cdots \to K_{0}(\mathcal{H}(I^{\mathrm{S}};R))\to K_{0}(\mathcal{H}(U^{ \mathrm{S}}_{1};R))\oplus K_{0}(\mathcal{H}(U^{\mathrm{S}}_{0};R))\to K_{0}( \mathcal{H}(\mathrm{SL}_{2}(F);R))\to 0 \tag{7.7}\]
and \(K_{n}(\mathcal{H}(\mathrm{SL}_{2}(F);R))=0\) for \(n\leq-1\).
For \(n=3\) we obtain the homotopy push-out diagram
where we abbreviated \(U^{\mathrm{S}}_{ij}:=U^{\mathrm{S}}_{i}\cap U^{\mathrm{S}}_{j}\). In general, for \(\mathrm{SL}_{n}(F)\) we obtain a homotopy push-out diagram whose shape is an \(n\)-cube.
To such an \(n\)-cube there is assigned a spectral sequence concentrated in the region \(p\geq 0\), \(0\leq q\leq n-1\), which corresponds to the spectral sequence appearing in Theorem 1.1 (ii).
## 8. Allowing central characters and actions on the coefficients
So far we have only considered the standard Hecke algebra \(\mathcal{H}(G;R)\). There are more general Hecke algebras \(\mathcal{H}(G;R,\rho,\omega)\), see [2], and all the discussions of this paper carry over to them in the obvious way.
|
2307.09265 | PGL orbits in tree varieties | In this paper, we introduce tree varieties as a natural generalization of
products of partial flag varieties. We study orbits of the PGL action on tree
varieties. We characterize tree varieties with finitely many PGL orbits,
generalizing a celebrated theorem of Magyar, Weyman and Zelevinsky. We give
criteria that guarantee that a tree variety has a dense PGL orbit and provide
many examples of tree varieties that do not have dense PGL orbits. We show that
a triple of two-step flag varieties $F(k_1, k_2; n)^3$ has a dense PGL orbit if
and only if $k_1 + k_2 \not= n$. | Izzet Coskun, Demir Eken, Chris Yun | 2023-07-18T13:49:11Z | http://arxiv.org/abs/2307.09265v1 | # \(\mathbb{PGL}\) orbits in tree varieties
###### Abstract.
In this paper, we introduce tree varieties as a natural generalization of products of partial flag varieties. We study orbits of the \(\mathbb{PGL}\) action on tree varieties. We characterize tree varieties with finitely many \(\mathbb{PGL}\) orbits, generalizing a celebrated theorem of Magyar, Weyman and Zelevinsky. We give criteria that guarantee that a tree variety has a dense \(\mathbb{PGL}\)-orbit and provide many examples of tree varieties that do not have dense \(\mathbb{PGL}\) orbits. We show that a triple of two-step flag varieties \(F(k_{1},k_{2};n)^{3}\) has a dense \(\mathbb{PGL}(n)\) orbit if and only if \(k_{1}+k_{2}\neq n\).
Key words and phrases: Flag varieties, \(\mathbb{PGL}(n)\)-actions, dense orbits. 2010 Mathematics Subject Classification: Primary: 14L30, 14M15, 14M17. Secondary: 14L35, 51N30. During the preparation of this article the first author was partially supported by the NSF FRG grant DMS 1664296 and NSF grant DMS-2200684.
## 1. Introduction
In this paper, we introduce tree varieties and study \(\mathbb{PGL}\) orbits in tree varieties. These varieties arise naturally when studying \(\mathbb{PGL}\) orbits on products of flag varieties via inductive constructions. We characterize tree varieties with finitely many \(\mathbb{PGL}\) orbits, generalizing a celebrated theorem of Magyar, Weyman and Zelevinsky [13].
We also study tree varieties with dense \(\mathbb{PGL}\) orbits. The study of the product of flag varieties with dense \(\mathbb{PGL}\) orbits was initiated by Popov [14, 15] based on a question of M. Burger and further studied in [11, 12]. We refer the reader to [16] for a recent survey. We give some criteria that guarantee that the tree varieties have dense \(\mathbb{PGL}\) orbits. Unfortunately, at present a complete characterization of tree varieties with dense \(\mathbb{PGL}\) orbits seems out of reach, even in the special case of products of three flag varieties. We do, however, settle the first non-trivial case by showing that \(F(k_{1},k_{2};n)^{3}\) has a dense \(\mathbb{PGL}(n)\) orbit if and only if \(k_{1}+k_{2}\neq n\).
We now introduce tree varieties. Throughout the paper we work over an algebraically closed field of arbitrary characteristic.
### Tree varieties
Let \(T\) be a directed tree. We will denote the vertices of \(T\) by \(V(T)\) and the edges of \(T\) by \(E(T)\). Each directed edge is determined by specifying a pair of vertices \((s,t)\), where the edge points from the _source_\(s\) to the _target_\(t\).
_Definition 1.1_ (Labeled tree).: A _labeled tree_\((T,\phi)\) is a pair such that
* \(T\) is a rooted, directed tree where all the edges point towards the root; and
* \(\phi:V(T)\to\mathbb{Z}_{>0}\) is a function that assigns to each vertex of \(T\) a positive integer such that if \((s,t)\in E(T)\), then \(\phi(s)<\phi(t)\).
Let \(r\) denote the root of the tree \(T\). The vertex \(r\) is the only vertex which is not the source of an edge. We say \(n=\phi(r)\) is the _ambient dimension_ of \((T,\phi)\). A _leaf_ of \(T\) is a vertex \(s\) which is not the target of any edge. A _branch_ of a tree is a maximal directed chain containing a leaf such
that each vertex in the chain is the target of at most one edge. Note that each branch contains a unique leaf. We will depict labeled trees by drawing a tree where each vertex \(v\) is labeled by \(\phi(v)\).
_Example 1.2_.: In the tree below, the leaves are the vertices marked by \(d_{1}\) and \(d_{5}\). The tree has two branches, one with three vertices labeled \(d_{1},d_{2},d_{3}\) and one with one vertex labeled \(d_{5}\).
_Definition 1.3_ (Tree variety).: Let \((T,\phi)\) be a labeled tree with ambient dimension \(n\). Let \(W\) be an \(n\)-dimensional vector space. The _tree variety_\(F(T,\phi)\) associated to \((T,\phi)\) is the variety which parameterizes a \(\phi(v)\)-dimensional subspace \(U_{v}\) of \(W\) for each vertex \(v\in V(T)\) such that \(U_{s}\subset U_{t}\) whenever \((s,t)\in E(T)\).
Tree varieties are smooth, irreducible, projective varieties and their dimensions are readily computed (see Theorem 2.1).
_Example 1.4_.: Let \((T,\phi)\) be the labeled tree where \(T\) is a chain with \(m+1\) vertices and \(\phi\) associates the positive integers \(k_{1}<k_{2}<\cdots<k_{m}<n\) to these vertices
In this case, the tree variety \(F(T,\phi)\) is the \(m\)-step partial flag variety \(F(k_{1},\ldots,k_{m};n)\) parameterizing partial flags \(U_{1}\subset U_{2}\subset\cdots\subset U_{m}\subset W\), where \(U_{i}\) has dimension \(k_{i}\).
_Example 1.5_.: Consider the labeled tree \((T,\phi)\), where \(T\) is a union of \(j\) chains joined at the root.
In this case, the tree variety is \(F(T,\phi)=\prod_{i=1}^{j}F(k_{i,1},\ldots,k_{i,m_{i}};n)\), the product of \(j\) partial flag varieties. Hence, tree varieties generalize products of partial flag varieties. They are also closely related to quiver varieties associated to a rooted, directed tree. However, unlike in quiver varieties, in tree varieties we do not take any quotients.
If the ambient dimension of \((T,\phi)\) is \(n\), the group \(\mathbb{PGL}(n)\) acts on \(F(T,\phi)\). In this paper, we are interested in the orbits of this action. We address the following two main questions.
1. When does the action of \(\mathbb{PGL}(n)\) on \(F(T,\phi)\) have finitely many orbits?
2. When does the action of \(\mathbb{PGL}(n)\) on \(F(T,\phi)\) have a dense orbit?
We resolve the first of these questions completely. The second question is much harder, nevertheless, we obtain many new partial results. In fact, the main motivation for introducing tree varieties came from studying the second question for products of Grassmannians.
### Results
We now describe our results in detail.
**Definition 1.6**.: Given a branch \(B\) of a labeled tree \((T,\phi)\), let \(s_{B}\) denote the leaf of \(B\). Then the _minimum width_\(\operatorname{mw}(B)\) of \(B\) is defined by
\[\operatorname{mw}(B):=\min\{\phi(s_{B}),\min\{\phi(t)-\phi(s)|(s,t)\in E(T)\text { and }s\in B\}\}.\]
**Example 1.7**.: In the tree in Example 1.2, the minimum widths of the branches are
\[\min\{d_{1},d_{i+1}-d_{i}\text{ for }1\leq i\leq 3\}\quad\text{and}\quad\min\{d_{ 5},d_{4}-d_{5}\},\]
respectively. In Example 1.4, the minimum width of the branch is
\[\min\{k_{1},n-k_{m},k_{i+1}-k_{i}\text{ for }1\leq i\leq m-1\}.\]
#### 1.2.1. Results on finiteness of orbits
Our first theorem classifies the tree varieties that are homogeneous or have two orbits.
**Theorem 1.8**.: _Let \(F(T,\phi)\) be a tree variety._
1. _The following are equivalent:_ 1. _The variety_ \(F(T,\phi)\) _is homogeneous._ 2. _The tree_ \(T\) _is a chain._ 3. _The variety_ \(F(T,\phi)\) _is a partial flag variety._
2. _The variety_ \(F(T,\phi)\) _has two_ \(\mathbb{PGL}(n)\) _orbits if and only if_ \(T\) _has exactly two branches each of length_ \(1\) _and one of the branches has minimum width equal to_ \(1\)_._
This theorem is closely related to the following result of Knop [11] in Type A and generalizes Case (1).
**Proposition 1.9**.: _[_11_]_ _Let \(X\) be a projective rational homogeneous variety. Then the complement of the diagonals in \(X^{m}\) is homogeneous if and only if_
1. _either_ \(m=2\) _and_ \(X\cong\mathbb{P}^{n}\)__
2. _or_ \(m=3\) _and_ \(X\cong\mathbb{P}^{1}\)_._
We next classify tree varieties that have finitely many orbits under the \(\mathbb{PGL}(n)\) action, completely answering Question (1).
**Theorem 1.10**.: _The tree variety \(F(T,\phi)\) has finitely many \(\mathbb{PGL}(n)\) orbits if and only if \((T,\phi)\) has at most 3 leaves and satisfies one of the following._
1. \(T\) _has at most 2 leaves._
2. \(T\) _has 3 leaves with the following possible branch lengths._ 1. \((1,1,\ell)\) _with_ \(1\leq\ell\)_,_ 2. \((1,2,\ell)\) _with_ \(2\leq\ell\leq 4\)_,_ 3. \((1,2,\ell)\) _with_ \(5\leq\ell\) _provided that the minimum width of the branch of length 1 is 2 or the minimum width of the branch of length 2 is 1._ 4. \((1,\ell_{1},\ell_{2})\) _with_ \(1\leq\ell_{1}\leq\ell_{2}\) _provided that the minimum width of the branch of length 1 is 1._
As a special case, this theorem contains Magyar, Weyman and Zelevinsky's theorem classifying products of flag varieties with finitely many \(\mathbb{PGL}(n)\) orbits [12, Theorem 2.2]. One can also enumerate all the orbits in the cases described in Theorem 1.10 using [12, Theorem 2.9].
#### 1.2.2. Results on density of orbits
The original motivation for introducing tree varieties was to classify products of partial flag varieties that have a dense \(\mathbb{PGL}(n)\) orbit.
_Example 1.11_.: Any tree variety with finitely many \(\mathbb{PGL}(n)\) orbits has a dense orbit. Hence, Theorem 1.10 provides many examples of dense tree varieties. However, a tree variety may have infinitely many orbits, but still have a dense orbit. For instance, consider the tree consisting of \(k\leq n+1\) vertices labeled \(1\) each connecting to the root labeled \(n\).
The corresponding variety is \(k\) ordered points in \(\mathbb{P}^{n-1}\). This tree variety has a dense orbit since any linearly general \(k\) points are equivalent under the \(\mathbb{PGL}(n)\) action [10, Exercise 1.6], but has finitely many orbits only if \(k\leq 3\).
If \(\mathbb{PGL}(n)\) acts with dense orbit on a variety \(X\), then we must have that
\[\dim(\mathbb{PGL}(n))=n^{2}-1\geq\dim(X).\]
This imposes strong dimension restrictions on tree varieties that can have a dense orbit. More generally, given any vertex \(v\in T\), let \(T^{v}\) be the subtree of \(T\) consisting of the vertices that have a directed path terminating at \(v\). Letting \(\phi^{v}\) be the restriction of \(\phi\) to \(T^{v}\), we obtain a new labeled tree \((T^{v},\phi^{v})\) with root \(v\).
**Lemma 1.12**.: _If \(F(T,\phi)\) is dense, then for any vertex \(v\in T\)_
\[\sum_{(s,t)\in E(T^{v})}\phi(s)(\phi(t)-\phi(s))\leq\phi(v)^{2}-1.\]
Proof.: By comparing dimensions of stabilizers of general points and using Lemma 2.4, it follows that if \(F(T,\phi)\) is dense, then \(F(T^{v},\phi^{v})\) is dense. Hence, the dimension of the tree variety \(F(T^{v},\phi^{v})\) has to be less than or equal to the dimension of \(\mathbb{PGL}(\phi(v))\). By Theorem 2.1, the dimension of the tree variety is given in the left-hand side of the inequality and the dimension of \(\mathbb{PGL}(\phi(v))\) is given in the right-hand side of the inequality. This proves the lemma.
This motivates the following definition.
_Definition 1.13_.: We call a tree variety \(F(T,\phi)\)_dense_ if \(F(T,\phi)\) has a dense \(\mathbb{PGL}(n)\) orbit. Otherwise, we say \(F(T,\phi)\) is _sparse_. The tree variety \(F(T,\phi)\) is _trivially sparse_ if any vertex \(v\in T\) violates the inequality in Lemma 1.12.
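The trivially sparse condition can be tested mechanically by running the inequality of Lemma 1.12 over every vertex. The sketch below encodes a labeled tree as a dictionary of vertex labels together with a list of root-ward edges; this encoding is our own ad hoc convention, not notation from the paper.

```python
def is_trivially_sparse(phi, edges):
    """True if some vertex v violates dim F(T^v, phi^v) <= phi(v)^2 - 1."""
    children = {}
    for s, t in edges:
        children.setdefault(t, []).append(s)

    def subtree_dim(v):
        # dimension of the tree variety of the subtree rooted at v
        return sum(phi[s] * (phi[v] - phi[s]) + subtree_dim(s)
                   for s in children.get(v, []))

    return any(subtree_dim(v) > phi[v] ** 2 - 1 for v in phi)

# Nine points in P^2: nine leaves labeled 1 attached to a root labeled 3.
# The root violates the inequality (9 * 1 * 2 = 18 > 3^2 - 1 = 8).
phi = {f"p{i}": 1 for i in range(9)}
phi["r"] = 3
edges = [(f"p{i}", "r") for i in range(9)]
print(is_trivially_sparse(phi, edges))  # True
```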
Trivially sparse tree varieties are sparse for an easy-to-check reason. A tree variety may be sparse without being trivially sparse. The following is a generalization of [1, Example 1.2].
_Example 1.14_.: Consider the labeled tree
The tree variety associated to this tree is not trivially sparse when \(m>2\), but it is sparse. Let \(W\) be the span of the two \(1\)-dimensional subspaces. The vector space \(W\) generically intersects each of the \((m-1)\)-dimensional subspaces in a \(1\)-dimensional subspace. The cross-ratio of the four \(1\)-dimensional subspaces in \(W\) is an invariant of the orbits. This tree variety has dimension \(4m-4+m(n-m)\), which can be arbitrarily smaller than \(n^{2}-1\) as \(n\) tends to infinity.
This example raises the following problem.
**Problem 1.15**.: _Classify dense tree varieties._
Already the following special case seems to be challenging.
**Problem 1.16**.: _Classify dense tree varieties with three leaves._
In Proposition 5.4, we will show that classifying dense tree varieties with at most three leaves reduces to classifying dense products of three partial flag varieties. Popov [14, 15] classified dense \((G/P)^{n}\) when \(P\) is a maximal parabolic subgroup. Devyatov [13] has extended the classification to non-maximal parabolic subgroups, except in type \(A\). When \(G/P\) is a type A partial flag variety, even the classification of dense \((G/P)^{3}\) is unknown. We will give several partial results towards this classification. Some of our results can be summarized in the following theorem.
**Theorem 1.17**.:
1. _If there exists two indices_ \(i\neq j\) _such that_ \(k_{i}+k_{j}=n\)_, then_ \(F(k_{1},\ldots,k_{r};n)^{3}\) _is sparse (Corollary_ 5.2_)._
2. _If_ \(3k_{r}\leq n\)_, then_ \(F(k_{1},\ldots,k_{r};n)^{3}\) _is dense (Lemma_ 5.5_)._
3. _If_ \(2k_{r}\leq n\) _and_ \(2k_{i}\leq k_{i+1}\) _for_ \(2\leq i\leq r-1\)_, then_ \(F(k_{1},\ldots,k_{r};n)^{3}\) _is dense (Proposition_ 5.8_)._
Finally, we will classify the density of the triple self-product of two-step flag varieties.
**Theorem 1.18** (Proposition 5.1 and Theorem 5.10).: _The product \(F(k_{1},k_{2};n)^{3}\) is sparse if and only if \(k_{1}+k_{2}=n\). The product \(F(k_{1},k_{2};n)^{3}\) is trivially sparse if and only if \(n\) is divisible by \(3\), \(k_{1}=\frac{n}{3}\) and \(k_{2}=\frac{2n}{3}\)._
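The trivially sparse part of Theorem 1.18 can be spot-checked numerically: \(\dim F(k_{1},k_{2};n)^{3}=3\bigl(k_{1}(k_{2}-k_{1})+k_{2}(n-k_{2})\bigr)\) exceeds \(n^{2}-1\) exactly at \(k_{1}=n/3\), \(k_{2}=2n/3\). A short sketch verifying this for small \(n\):

```python
def dim_triple(k1, k2, n):
    """dim F(k1, k2; n)^3 = 3 * (k1*(k2 - k1) + k2*(n - k2)) by Theorem 2.1."""
    return 3 * (k1 * (k2 - k1) + k2 * (n - k2))

for n in range(3, 16):
    violations = [(k1, k2) for k1 in range(1, n) for k2 in range(k1 + 1, n)
                  if dim_triple(k1, k2, n) > n * n - 1]
    expected = [(n // 3, 2 * n // 3)] if n % 3 == 0 else []
    assert violations == expected, (n, violations)
print("dimension exceeds n^2 - 1 only at (n/3, 2n/3), for n = 3, ..., 15")
```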
The density of the \(\mathbb{PGL}(n)\) action on a product of flag varieties has many applications. For the applications, we specialize our base field to \(\mathbb{C}\). Let \(\lambda_{1},\ldots,\lambda_{d}\) be nonzero dominant characters of the maximal torus \(T\) in the semi-simple group \(G\). Then \((\lambda_{1},\ldots,\lambda_{d})\) is called _primitive_ if for every non-negative \(d\)-tuple of integers \((n_{1},\ldots,n_{d})\), the Littlewood-Richardson coefficient \(c_{n_{1}\lambda_{1},\ldots,n_{d}\lambda_{d}}^{0}\leq 1\). Popov in [14, Theorem 1] proves that if \(G\) has an open orbit on \(G/P_{\lambda_{1}}\times\cdots\times G/P_{\lambda_{d}}\), then \((\lambda_{1},\ldots,\lambda_{d})\) is primitive. Hence, for vectors for which \(\mathbb{PGL}(n)\) acts with dense orbit, we get a strong bound on the Littlewood-Richardson coefficients. Similarly, the density has geometric applications to enumerative geometry and genus zero Gromov-Witten invariants. We refer the reader to [10] for more details.
Our approach to Theorems 1.10 and 1.18 is elementary. In order to show that a tree variety does not have a dense orbit, we explicitly construct a cross-ratio which has to be preserved by the \(\mathbb{PGL}(n)\) action. For applications in other contexts, knowing the explicit cross-ratio which obstructs density is often useful.
### Organization of the paper
In §2, we recall the necessary background. In §3, we prove Theorem 1.8. In §4, we classify tree varieties with finitely many orbits and prove Theorem 1.10. In §5, we study tree varieties with dense orbit.
**Acknowledgements**.: We would like to thank Dave Anderson, James Freitag, Majid Hadian, Janos Kollar, Howard Nuer, Sybille Rosset, Geoffrey Smith and Dmitry Zakharov for helpful discussions regarding actions of \(\mathbb{P}\mathrm{GL}(n)\) on products of varieties.
## 2. Preliminaries
In this section, we collect basic facts concerning tree-varieties and group actions.
### Tree-varieties
Let \((T,\phi)\) be a labeled tree with the root \(r\). Recall that \(V(T)\) and \(E(T)\) denote the vertices and edges of \(T\), respectively. Let \(n=\phi(r)\) be the ambient dimension of the tree. The tree variety \(F(T,\phi)\) parameterizes subspaces \((U_{v})_{v\in V(T)}\) such that \(\dim(U_{v})=\phi(v)\) and \(U_{s}\subset U_{t}\) whenever \((s,t)\in E(T)\). The tree variety \(F(T,\phi)\) is naturally a closed algebraic subset of \(\prod_{v\in V(T)\setminus\{r\}}G(\phi(v),n)\) given by imposing the incidence relations \(U_{s}\subset U_{t}\) for every edge \((s,t)\in E(T)\).
**Theorem 2.1**.: _The tree variety \(F(T,\phi)\) is a smooth, projective, irreducible variety of dimension_
\[\sum_{(s,t)\in E(T)}\phi(s)(\phi(t)-\phi(s)).\]
Proof.: We prove the theorem by induction on the number of vertices in \(T\). If \(T\) has only two vertices and one edge \((s,t)\), then \(F(T,\phi)\) parameterizes \(\phi(s)\)-dimensional subspaces of the \(n\)-dimensional vector space \(W\). In this case, \(F(T,\phi)\) is the Grassmannian \(G(\phi(s),n)\) which is a smooth, irreducible, projective variety of dimension \(\phi(s)(n-\phi(s))\). Since \(n=\phi(t)\), the theorem is true in this case.
By induction, assume that the theorem holds for trees with \(m\) or fewer vertices. Let \(T\) be a tree with \(m+1\) vertices. Then \(T\) has at least one leaf. Let \(s_{0}\) be a leaf. Removing the leaf \(s_{0}\) and the edge \((s_{0},t_{0})\) with source \(s_{0}\), we obtain a tree \(T^{\prime}\) with \(m\) vertices. The restriction of \(\phi\) to \(T^{\prime}\) defines a function \(\phi^{\prime}\). Since \(T^{\prime}\) has \(m\) vertices, by induction, the tree variety \(F(T^{\prime},\phi^{\prime})\) is a smooth, irreducible, projective variety of dimension \(\sum_{(s,t)\in E(T^{\prime})}\phi(s)(\phi(t)-\phi(s))\). The variety \(F(T,\phi)\) is obtained from \(F(T^{\prime},\phi^{\prime})\) by choosing a \(\phi(s_{0})\)-dimensional linear space in \(U_{t_{0}}\). Hence, \(F(T,\phi)\) is naturally a \(G(\phi(s_{0}),\phi(t_{0}))\)-bundle over \(F(T^{\prime},\phi^{\prime})\). Consequently, \(F(T,\phi)\) is a smooth, irreducible, projective variety with dimension \(\phi(s_{0})(\phi(t_{0})-\phi(s_{0}))+\dim(F(T^{\prime},\phi^{\prime}))\). The latter expression is precisely \(\sum_{(s,t)\in E(T)}\phi(s)(\phi(t)-\phi(s)).\) The theorem follows by induction.
#### 2.1.1. Forgetful morphisms between tree varieties
Let \(v\) be a vertex of \(T\) different from the root. The vertex \(v\) may be the target of more than one edge of \(T\); however, \(v\) is the source of a single edge \((v,t)\). Let \(T_{v}\) be the tree obtained from \(T\) by deleting \(v\) and replacing every edge \((s,v)\) whose target is \(v\) by \((s,t)\). Given a labeled tree \((T,\phi)\), we obtain a new labeled tree \((T_{v},\phi_{v})\), where \(\phi_{v}\) is the restriction of \(\phi\) to \(V(T)\setminus v\). Then there is a natural forgetful morphism
\[\pi_{v}:F(T,\phi)\to F(T_{v},\phi_{v})\]
that forgets the linear space \(U_{v}\). This map is induced by the natural projection
\[\prod_{w\in V(T)\setminus\{r\}}G(\phi(w),n)\to\prod_{w\in V(T)\setminus\{v,r \}}G(\phi(w),n).\]
Given any vertex \(v\) in \(T\), there is a unique chain connecting \(v\) to the root. Given a set of vertices \(v_{1},\ldots,v_{\ell}\), let \(T_{v_{1},\ldots,v_{\ell}}\) denote the tree obtained from \(T\) by deleting the vertices \(v_{1},\ldots,v_{\ell}\)
and replacing any edge \((s,v_{i})\) with \((s,t_{i})\), where \(t_{i}\) is the first vertex in the chain connecting \(v_{i}\) to the root which is not among \(v_{1},\ldots,v_{\ell}\). Let \(\phi_{v_{1},\ldots,v_{\ell}}\) be the restriction of \(\phi\) to \(V(T)\setminus\{v_{1},\ldots,v_{\ell}\}\). Then there is a natural forgetful morphism
\[\pi_{v_{1},\ldots,v_{\ell}}:F(T,\phi)\to F(T_{v_{1},\ldots,v_{\ell}},\phi_{v_{ 1},\ldots,v_{\ell}})\]
that forgets the linear spaces \(U_{v_{1}},\ldots,U_{v_{\ell}}\). This morphism is also induced by the corresponding natural projection
\[\prod_{w\in V(T)\setminus\{r\}}G(\phi(w),n)\to\prod_{w\in V(T)\setminus\{v_{1 },\ldots,v_{\ell},r\}}G(\phi(w),n).\]
**Proposition 2.2**.: _Let \(v\) be a vertex of \(T\) different from the root. Let \(s_{1},\ldots,s_{j}\) be the vertices of \(T\) such that \((s_{i},v)\) are edges in \(T\). Then the forgetful morphism_
\[\pi_{v}:F(T,\phi)\to F(T_{v},\phi_{v})\]
_is surjective if and only if_
\[\sum_{i=1}^{j}\phi(s_{i})\leq\phi(v).\]
_In particular, if \(v\) is the target of a unique edge, then \(\pi_{v}\) is surjective and the fibers of \(\pi_{v}\) are isomorphic to Grassmannians \(G(\phi(v)-\phi(s_{1}),\phi(t)-\phi(s_{1}))\)._
Proof.: A point \(\{U_{w}\}_{w\in V(T_{v})}\) of \(F(T_{v},\phi_{v})\) is in the image of \(\pi_{v}\) if and only if there is a linear space \(U_{v}\) of dimension \(\phi(v)\) contained in \(U_{t}\) and containing \(U_{s_{i}}\) for \(1\leq i\leq j\). If \(\sum_{i=1}^{j}\phi(s_{i})\leq\phi(v)\), one can always choose such a linear space \(U_{v}\). Conversely, if \(\sum_{i=1}^{j}\phi(s_{i})>\phi(v)\), then \(F(T_{v},\phi_{v})\) will contain points where the linear spaces \(U_{s_{i}}\) span a vector space of dimension greater than \(\phi(v)\). Hence, such a point cannot be in the image of \(\pi_{v}\).
#### 2.1.2. Constructing tree varieties inductively
Given a labeled tree \((T,\phi)\) and a vertex \(s\in V(T)\) different from the root \(r\), there is a unique chain connecting \(s\) to \(r\). Define the distance function \(d:V(T)\to\mathbb{N}\) by setting \(d(s)\) to be the length of this chain for \(s\neq r\) and set \(d(r)=0\).
Given a positive integer \(m\), we can define _the truncation \((T_{\leq m},\phi_{\leq m})\) of \((T,\phi)\) at distance \(m\)_ as follows. Let \(T_{\leq m}\) be the tree obtained by deleting all the vertices \(s\) of \(T\) with \(d(s)>m\) and deleting the edges that have these vertices as sources. Define \(\phi_{\leq m}\) by restricting \(\phi\) to vertices \(v\) with \(d(v)\leq m\). If we delete the vertices of \(T\) with \(d(v)<m\), we obtain a set of labeled trees \((T^{1},\phi^{1}),\ldots,(T^{j},\phi^{j})\) one for each vertex \(v_{i}\) with \(d(v_{i})=m\). The vertex \(v_{i}\) forms the root of the tree \(T^{i}\) and \(\phi^{i}\) is the restriction of \(\phi\) to vertices at a distance at least \(m\) that have \(v_{i}\) in the chain connecting them to the root of \(T\).
**Proposition 2.3**.: _For every positive integer \(m\), there is a surjective forgetful morphism_
\[\pi_{\leq m}:F(T,\phi)\to F(T_{\leq m},\phi_{\leq m})\]
_and the fibers are isomorphic to_
\[F(T^{1},\phi^{1})\times\cdots\times F(T^{j},\phi^{j}).\]
_In particular, a tree variety can be constructed inductively according to the distance function._
Proof.: The forgetful morphism \(\pi_{\leq m}\) forgets all the vector spaces associated to vertices \(v\) with \(d(v)>m\). Let \(v_{1},\ldots,v_{j}\) be the vertices with \(d(v_{i})=m\). Given a point in \(F(T_{\leq m},\phi_{\leq m})\), the fiber of \(\pi_{\leq m}\) corresponds to choosing linear subspaces in \(U_{v_{i}}\) according to the labeled tree \((T^{i},\phi^{i})\). The proposition follows.
### Group actions
**Lemma 2.4**.: _Let \(X\) be an irreducible projective variety with a \(\mathbb{PGL}(n)\) action. Let \(x\in X\) be a closed point and let \(\operatorname{Stab}(x)\) denote the stabilizer of \(x\). Then the orbit of \(x\) is dense in \(X\) if and only if_
\[\dim(\operatorname{Stab}(x))=n^{2}-1-\dim(X).\]
Proof.: Let \(G\) be an algebraic group acting on an irreducible projective variety \(X\). Then the orbit \(Gx\) of \(x\) under \(G\) is open in its Zariski closure \(\overline{Gx}\) by [1, I.1.8]. On the other hand, \(Gx\) is isomorphic to \(G/\operatorname{Stab}(x)\). Hence
\[\dim(\overline{Gx})=\dim(Gx)=\dim(G)-\dim(\operatorname{Stab}(x)).\]
Since \(X\) is irreducible, the orbit \(Gx\) is dense in \(X\) if and only if \(\dim(Gx)=\dim(X)\). Hence, the orbit of \(x\) is dense if and only if
\[\dim(\operatorname{Stab}(x))=\dim(G)-\dim(X).\]
The lemma follows by letting \(G=\mathbb{PGL}(n)\) and noting that \(\dim(\mathbb{PGL}(n))=n^{2}-1\).
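Lemma 2.4 also suggests a quick numerical experiment for a given configuration: the Lie algebra of its stabilizer in \(\mathfrak{gl}_{n}\) is \(\{A\mid AU_{v}\subseteq U_{v}\text{ for all }v\}\), which is cut out by linear equations on \(A\), so its dimension is a rank computation; subtracting \(1\) for the scalar matrices gives the stabilizer dimension in \(\mathbb{PGL}(n)\). The sketch below (NumPy, random real test data) is only an experiment on generic data, not a proof of density.

```python
import numpy as np

rng = np.random.default_rng(0)

def stabilizer_dim_pgl(subspaces, n):
    """Dimension of the PGL(n)-stabilizer of a configuration of subspaces,
    each subspace given by an n x k matrix of basis column vectors."""
    rows = []
    for B in subspaces:
        q, _ = np.linalg.qr(B, mode="complete")
        C = q[:, B.shape[1]:].T              # rows span the annihilator of col(B)
        # A*col(B) <= col(B)  iff  C A B = 0; vec(C A B) = (B^T kron C) vec(A)
        rows.append(np.kron(B.T, C))
    M = np.vstack(rows)
    nullity = n * n - np.linalg.matrix_rank(M)
    return nullity - 1                       # scalar matrices stabilize everything

# Three general 2-planes in a 6-dimensional space, i.e. a point of G(2, 6)^3.
# A dense PGL(6) orbit would force the stabilizer to have dimension
# 6^2 - 1 - 3 * 2 * (6 - 2) = 11, which the computation reproduces here.
n, k = 6, 2
point = [rng.standard_normal((n, k)) for _ in range(3)]
print(stabilizer_dim_pgl(point, n), n * n - 1 - 3 * k * (n - k))  # 11 11
```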
**Proposition 2.5**.: _Let \(\pi_{v}:F(T,\phi)\to F(T_{v},\phi_{v})\) be a surjective forgetful morphism._
1. _If_ \(F(T,\phi)\) _has a dense_ \(\mathbb{PGL}(n)\) _orbit, then_ \(F(T_{v},\phi_{v})\) _has a dense_ \(\mathbb{PGL}(n)\) _orbit._
2. _If_ \(F(T,\phi)\) _has finitely many_ \(\mathbb{PGL}(n)\) _orbits, then_ \(F(T_{v},\phi_{v})\) _has finitely many_ \(\mathbb{PGL}(n)\) _orbits._
Proof.: The forgetful morphism \(\pi_{v}\) is equivariant for the \(\mathbb{PGL}(n)\) action. Hence, the image of an orbit is contained in an orbit. Let \(O_{T}\subset F(T,\phi)\) be the dense orbit. Let \(O\subset F(T_{v},\phi_{v})\) be the orbit containing \(\pi_{v}(O_{T})\). We then have
\[\overline{O}\supset\pi_{v}(\overline{O_{T}})=\pi_{v}(F(T,\phi))=F(T_{v},\phi_ {v}).\]
This proves part (1).
Suppose \(F(T,\phi)=\sqcup_{i=1}^{j}O_{i}\) is a union of finitely many \(\mathbb{PGL}(n)\) orbits. The image \(\pi_{v}(O_{i})\) is contained in an orbit \(O_{i}^{\prime}\). Then
\[F(T_{v},\phi_{v})=\pi_{v}(F(T,\phi))=\sqcup_{i=1}^{j}\pi_{v}(O_{i})\subset \cup_{i=1}^{j}O_{i}^{\prime}.\]
Hence, \(F(T_{v},\phi_{v})\) has finitely many orbits. Of course, some of the orbits \(O_{i}^{\prime}\) may coincide. This concludes the proof of the proposition.
**Proposition 2.6**.: _The action of \(\mathbb{PGL}(n)\) on \(\prod_{i=1}^{\ell}F(k_{i,1},\ldots,k_{i,j_{i}},n)\) has finitely many orbits (respectively, a dense orbit) if and only if the action of \(\mathbb{PGL}(n)\) on \(\prod_{i=1}^{\ell}F(n-k_{i,j_{i}},\ldots,n-k_{i,1},n)\) has finitely many orbits (respectively, a dense orbit)._
Proof.: Let \(W^{*}\) be the dual of the ambient vector space \(W\) with the dual \(\mathbb{PGL}(n)\) action. Taking quotient spaces and passing to the dual defines an isomorphism between \(\prod_{i=1}^{\ell}F(k_{i,1},\ldots,k_{i,j_{i}},n)\) and \(\prod_{i=1}^{\ell}F(n-k_{i,j_{i}},\ldots,n-k_{i,1},n)\) which respects the \(\mathbb{PGL}(n)\) action. The proposition follows.
The action of \(\mathbb{P}GL(n)\) on products of Grassmannians has been studied in detail in [10]. We recall the following theorem for the reader's convenience (see also [11]).
**Theorem 2.7**.: _[_1_, Theorem 5.1]_ _Let \(m\leq 4\) and let \(X=\prod_{i=1}^{m}G(k_{i},n)\) be a product of \(m\) Grassmannians. Then the \(\mathbb{P}GL(n)\) action on \(X\) is sparse if and only if \(m=4\) and \(\sum_{i=1}^{4}k_{i}=2n\)._
The first case of the theorem is the action of \(\mathbb{P}GL(2)\) on an ordered set of \(4\) distinct points \((z_{1},z_{2},z_{3},z_{4})\). In this case, the action is trivially sparse and the invariants are fully understood. There is a unique element \(g\) of \(\mathbb{P}GL(2)\) taking the first three to \(0,\infty\) and \(1\), respectively. The image of the fourth point \(g(z_{4})\) is called the cross-ratio of the four points. Four distinct ordered points are projectively equivalent if and only if their cross-ratio is the same.
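The cross-ratio itself is elementary to compute. A short sketch in affine coordinates (the assumption that all four points and their images are finite is made only for the example data, not by the theory):

```python
from fractions import Fraction

def cross_ratio(z1, z2, z3, z4):
    """Image of z4 under the Moebius map sending z1, z2, z3 to 0, infinity, 1."""
    return ((z4 - z1) * (z3 - z2)) / ((z4 - z2) * (z3 - z1))

def moebius(a, b, c, d, z):
    """The fractional linear map z -> (a z + b)/(c z + d), with ad - bc != 0."""
    return (a * z + b) / (c * z + d)

pts = [Fraction(0), Fraction(1), Fraction(3), Fraction(7)]
a, b, c, d = Fraction(2), Fraction(1), Fraction(1), Fraction(3)
moved = [moebius(a, b, c, d, z) for z in pts]
print(cross_ratio(*pts), cross_ratio(*moved))  # 7/9 7/9, invariance under PGL(2)
```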
## 3. Tree varieties with few \(\mathbb{P}\mathrm{GL}(n)\) orbits
In this section, we prove Theorem 1.8 and Proposition 1.9.
Proof of Theorem 1.8.: We first prove (1). If the tree \(T\) is a chain, then, by Example 1.4, \(F(T,\phi)\) is a partial flag variety. Hence, \(F(T,\phi)\) is homogeneous under the \(\mathbb{P}\mathrm{GL}(n)\) action. This shows (ii) implies (iii) implies (i). To conclude the proof of (1), we need to show that (i) implies (ii).
Suppose that the tree \(T\) is not a chain. Then there exists a vertex \(v\) such that there are at least two vertices \(v_{1}\) and \(v_{2}\) so that \((v_{1},v)\) and \((v_{2},v)\) are edges in \(T\). Let \(t\) be such a vertex with the smallest distance from the root. Let
\[\phi(v_{1})=d_{1},\quad\phi(v_{2})=d_{2}\quad\text{and}\quad\phi(t)=d_{t}.\]
Without loss of generality, we may assume that \(d_{1}\leq d_{2}<d_{t}\). Among the linear spaces parameterized by the tree variety \(F(T,\phi)\), there are three linear spaces \(U_{1}\), \(U_{2}\), \(U_{t}\) of dimensions \(d_{1}\), \(d_{2}\) and \(d_{t}\), respectively, corresponding to these three vertices. By Proposition 2.3, we may construct \(F(T,\phi)\) inductively starting at the root. Once we have chosen \(U_{t}\), \(U_{1}\) and \(U_{2}\) are arbitrary linear subspaces of \(U_{t}\) of dimensions \(d_{1}\) and \(d_{2}\), respectively. Hence,
\[\max(0,d_{1}+d_{2}-d_{t})\leq\dim(U_{1}\cap U_{2})\leq d_{1}\]
and every possible value in this range can occur. Since \(d_{1}>0\) and \(d_{2}<d_{t}\), there are at least two possible values. Since the dimension \(\dim(U_{1}\cap U_{2})\) is an invariant of the \(\mathbb{P}\mathrm{GL}(n)\) action, \(F(T,\phi)\) cannot be homogeneous. We conclude that (i) implies (ii).
Next suppose \(F(T,\phi)\) has two orbits under the \(\mathbb{P}\mathrm{GL}(n)\) action. Then \(T\) is not a chain and we do have vertices \(v_{1},v_{2}\) and \(t\) as in the previous paragraph. In this case, the range
\[\max(0,d_{1}+d_{2}-d_{t})\leq\dim(U_{1}\cap U_{2})\leq d_{1}\]
must have \(2\) possible values. This can only happen if \(d_{1}=1\) or \(d_{2}=d_{t}-1\). To conclude the proof of (2), we need to show that \(v_{1}\) and \(v_{2}\) are leaves of \(T\) and there are no other edges with \(t\) as the target.
Suppose there is a third vertex \(v_{3}\) such that \((v_{3},t)\in E(T)\). Without loss of generality, we may assume that \(\phi(v_{3})=d_{3}\geq d_{2}\). The corresponding vector space \(U_{3}\subset U_{t}\) may be chosen freely in \(U_{t}\). If \(U_{1}\subset U_{2}\), then \(U_{3}\) may or may not contain \(U_{2}\). Further, if \(U_{3}\) does not contain \(U_{2}\), it may or may not contain \(U_{1}\). If \(U_{1}\not\subset U_{2}\), then \(U_{3}\) may or may not contain either \(U_{1}\) or \(U_{2}\). We conclude that there are at least \(6\) orbits. Since \(F(T,\phi)\) has only two \(\mathbb{P}\mathrm{GL}(n)\) orbits, there cannot be a third vertex \(v_{3}\) such that \((v_{3},t)\in E(T)\).
If there is an edge \((v_{3},v_{1})\in E(T)\), then \(U_{3}\subset U_{1}\) can be chosen freely. Then there are at least three possibilities:
1. \(U_{3}\subset U_{1}\subset U_{2}\),
2. \(U_{3}\subset U_{1}\) and neither are subsets of \(U_{2}\), or
3. \(U_{3}\subset U_{1}\cap U_{2}\), but \(U_{1}\) is not a subset of \(U_{2}\).
Hence, there are at least \(3\) orbits. A similar argument applies if there is an edge \((v_{3},v_{2})\). We conclude that if \(F(T,\phi)\) has two orbits under the \(\mathbb{PGL}(n)\) action, then the tree looks like
and assuming that \(d_{1}\leq d_{2}\leq d_{3}\), either \(d_{1}=1\) or \(d_{2}=d_{3}-1\). Hence, the tree has exactly two branches each of length one and one of the branches has minimum width equal to \(1\).
Conversely, the \(\mathbb{PGL}(n)\) action has two orbits on such a tree variety. The orbits are determined by whether \(U_{1}\subset U_{2}\) or \(U_{1}\not\subset U_{2}\). If \(U_{1}\subset U_{2}\), then the corresponding orbit is a partial flag variety, hence homogeneous. If \(U_{1}\not\subset U_{2}\), then \(U_{2}\subset U_{3}\subset\cdots\subset W\) is a partial flag and we may choose a basis for \(W\) so that \(U_{i}\) is the span of \(e_{i}\) for \(1\leq i\leq d_{i}\). If \(\dim(U_{1})=d_{1}=1\), then we may further require that \(e_{d_{3}}\) is a basis for \(U_{1}\). If \(d_{1}>1\) and \(d_{3}-d_{2}=1\), we may require that \(e_{i}\) for \(d_{2}-d_{1}+2\leq i\leq d_{2}\) is a basis for \(U_{1}\cap U_{2}\) and \(U_{1}\) is spanned by \(e_{i}\) for \(d_{2}-d_{1}+2\leq i\leq d_{3}\). Hence, this locus also forms a single orbit. We conclude that these tree varieties have exactly two \(\mathbb{PGL}(n)\) orbits. This proves part (2) of Theorem 1.8.
For completeness, we sketch a simple proof of Knop's result [11].
Proof of Proposition 1.9.: By Poincare duality, a one-dimensional Schubert variety in a rational homogeneous variety \(X=G/P\) is a line in the minimal embedding of \(X\). Suppose that \(m>1\) and that the complement of the diagonals in \(X^{m}\) is homogeneous. Observe that this implies that the complement of the diagonals in \(X^{l}\) is homogeneous for all \(l\leq m\). In particular, the complement of the diagonals in \(X^{2}\) is homogeneous. Pick a line \(L\) on \(X\) and let \(p\) and \(q\) be distinct points on \(L\). Since \(G\) is acting transitively on pairs of points, there must be a line between any two distinct points on \(X\). We conclude that \(X=\mathbb{P}^{n}\) for some \(n\). If \(n>1\), we can take three distinct collinear points and three distinct non-collinear points on \(\mathbb{P}^{n}\) to see that the complement of the diagonals in \((\mathbb{P}^{n})^{3}\) is not homogeneous. When \(n=1\), \(\mathbb{PGL}(2)\) acts transitively on triples of ordered, distinct points on \(\mathbb{P}^{1}\). Since \(3=\dim\mathbb{PGL}(2)<\dim((\mathbb{P}^{1})^{4})=4\), \(\mathbb{PGL}(2)\) does not have a dense orbit on \((\mathbb{P}^{1})^{4}\). This concludes the proof of the proposition.
## 4. Tree varieties with finitely many \(\mathbb{PGL}(n)\) orbits
In this section, we classify tree varieties with finitely many \(\mathbb{PGL}(n)\) orbits and prove Theorem 1.10. It is possible to reduce the proof of this theorem to the classification of flag varieties of finite type by Magyar, Weyman and Zelevinsky [13] and obtain a relatively short proof. However, we prefer to give an elementary proof.
Proof of Theorem 1.10.: We will classify \(F(T,\phi)\) that have finitely many \(\mathbb{PGL}(n)\) orbits. The classification is somewhat involved, so we will break it into smaller steps. From now on suppose that \(F(T,\phi)\) has finitely many \(\mathbb{PGL}(n)\) orbits. We begin by showing that \(T\) must have at most \(3\) leaves. We will then show that if \(T\) has at most \(2\) leaves, then \(F(T,\phi)\) has finitely many orbits. The hardest part is to classify trees with \(3\) leaves for which \(F(T,\phi)\) has finitely many orbits. To
show that \(F(T,\phi)\) does not have finitely many \(\mathbb{PGL}(n)\) orbits, we construct a cross-ratio that needs to be preserved but can take arbitrary values.
### Step 1: \(T\) has at most 3 leaves
We first show that if \(F(T,\phi)\) has finitely many \(\mathbb{PGL}(n)\) orbits, then \(T\) has at most 3 leaves. Suppose that \(T\) has 4 leaves \(s_{1},\ldots,s_{4}\). For each leaf \(s_{i}\), let \((s_{i},t_{i})\) be the edge with source \(s_{i}\). Some of the vertices \(t_{i}\) may coincide. Fix a full flag
\[F_{1}\subset\cdots\subset F_{n}=W,\]
where \(F_{k}\) has dimension \(k\). For each vertex \(v\in V(T)\) different from the leaves \(s_{1},\ldots,s_{4}\), let \(U_{v}=F_{\phi(v)}\). Then the 4 linear spaces \(U_{i}\) corresponding to the leaves \(s_{i}\) can be chosen freely subject to the condition that \(U_{i}\subset F_{\phi(t_{i})}\).
Let \(d_{i}\) denote the dimension of \(U_{i}\) and without loss of generality assume that \(d_{1}\leq d_{2}\leq d_{3}\leq d_{4}\). Choose \(U_{2}\subset\bigcap_{i=2}^{4}F_{\phi(t_{i})}\) so that \(\dim(U_{2}\cap F_{\phi(t_{1})})\geq d_{1}-1\). Since \(\min(\phi(t_{3}),\phi(t_{4}))>d_{3}\) and \(\min(\phi(t_{1}),\phi(t_{2}))>d_{1}\), this is possible. Choose \(U_{1}\subset\bigcap_{i=1}^{4}F_{\phi(t_{i})}\) such that \(U_{1}\cap U_{2}=\Lambda_{1}\) with \(\dim(\Lambda_{1})=d_{1}-1\) and the span of \(U_{1}\) and \(U_{2}\) is \(\Lambda_{2}\) with \(\dim(\Lambda_{2})=d_{2}+1\). Let \(U_{3}\subset F_{\phi(t_{3})}\) be a linear space of dimension \(d_{3}\) containing \(\Lambda_{1}\) and intersecting \(\Lambda_{2}\) in a linear space of dimension \(d_{2}\) not containing \(U_{1}\) or \(U_{2}\). Set \(\Lambda_{3}=U_{2}\cap U_{3}\). Then \(\dim(\Lambda_{3})=d_{2}-1\) and \(\Lambda_{1}\subset\Lambda_{3}\) by construction. Finally, choose \(U_{4}\subset F_{\phi(t_{4})}\) of dimension \(d_{4}\) containing \(\Lambda_{3}\), intersecting \(\Lambda_{2}\) in a linear space of dimension \(d_{2}\) and not containing \(U_{1},U_{2},U_{3}\). For each \(i\), let \(Z_{i}\) denote the span of \(\Lambda_{3}\) and \(U_{i}\cap\Lambda_{2}\). Observe that \(\dim(Z_{i})=d_{2}\) and \(\Lambda_{3}\subset Z_{i}\).
The action of \(\mathbb{PGL}(n)\) respects spans and intersections, consequently \(\mathbb{PGL}(n)\) acts on the vector spaces \(Z_{i}\). The linear spaces \(Z_{i}\) of dimension \(d_{2}\) are all contained in \(\Lambda_{2}\) of dimension \(d_{2}+1\) and contain \(\Lambda_{3}\) of dimension \(d_{2}-1\). Hence, they determine 4 points in a pencil of linear spaces. (Equivalently, \(Z_{i}/\Lambda_{3}\) are 4 two-dimensional linear subspaces of \(\Lambda_{2}/\Lambda_{3}\).) The cross-ratio of these 4 points on \(\mathbb{P}^{1}\) is an invariant of the \(\mathbb{PGL}(n)\) action. Since we can choose \(Z_{2}\) arbitrarily subject to the condition that it contains \(\Lambda_{3}\), is contained in \(\Lambda_{2}\) and does not contain \(U_{1},Z_{3},Z_{4}\), any cross-ratio is possible. Hence, \(\mathbb{PGL}(n)\) cannot have finitely many orbits as soon as \(T\) contains at least 4 leaves and the base field is infinite.
### Step 2: If \(T\) has at most 2 leaves, then \(F(T,\phi)\) has finitely many orbits
If \(T\) has only one leaf, then \(T\) is a chain. In this case \(F(T,\phi)\) is a partial flag variety and is homogeneous under the \(\mathbb{PGL}(n)\) action (see Example 1.4 and Theorem 1.8 (1)).
Next, suppose \(T\) has two leaves, then \(T\) has to be of the form
\[d_{1}\to d_{2}\to\cdots\to d_{j}\to\cdots\to n,\qquad e_{1}\to\cdots\to e_{l}\to d_{j}.\]
Let \((T^{\prime},\phi^{\prime})\) be the following labeled tree
\[1\to 2\to\cdots\to d_{j}-1\to d_{j}\to d_{j}+1\to\cdots\to n-1\to n,\qquad e_{1}\to\cdots\to e_{l}\to d_{j}.\]
By repeated applications of Proposition 2.2, there is a surjective morphism \(\pi:F(T^{\prime},\phi^{\prime})\to F(T,\phi)\). By Proposition 2.5, it suffices to show that \(F(T^{\prime},\phi^{\prime})\) has finitely many \(\mathbb{PGL}(n)\) orbits. Fix a basis \(e_{1},\ldots,e_{n}\) for \(W\). Since \(\mathbb{PGL}(n)\) acts transitively on full flags, after applying an element of \(\mathbb{PGL}(n)\)
we may assume that the vector spaces \(U_{i}\) corresponding to the top chain in \(T^{\prime}\) are given by the span of \(e_{j}\) for \(1\leq j\leq i\). The stabilizer of the full flag is the Borel subgroup of upper triangular matrices. Next, we need to choose a partial flag \(W_{e_{1}}\subset\cdots\subset W_{e_{l}}\subset U_{d_{j}}\). The orbits of the Borel group are precisely the Schubert cells of the partial flag variety \(F(e_{1},\ldots,e_{l},d_{j})\). A Schubert cell is determined by specifying an element of the symmetric group \(\mathfrak{S}_{d_{j}}\) with at most \(l\)-descents at \(e_{1},\ldots,e_{l}\). There are finitely many Schubert cells, hence \(\mathbb{PGL}(n)\) acts with finitely many orbits on \(F(T^{\prime},\phi^{\prime})\). More generally, the orbits in the original tree variety \(F(T,\phi)\) are determined by the dimensions of intersections of \(W_{e_{\alpha}}\) and \(U_{d_{\beta}}\) for every \(1\leq\alpha\leq l\) and \(1\leq\beta\leq j-1\).
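The number of Borel orbits appearing in this argument is the number of Schubert cells of \(F(e_{1},\ldots,e_{l};d_{j})\), i.e. the number of permutations in \(\mathfrak{S}_{d_{j}}\) with descents only at \(e_{1},\ldots,e_{l}\), which is a multinomial coefficient. A small Python helper (ours, for illustration) computes this count:

```python
from math import comb

def schubert_cell_count(es, d):
    """Number of Schubert cells of F(e_1,...,e_l; d): the multinomial
    coefficient d! / (e_1! (e_2-e_1)! ... (d-e_l)!)."""
    steps = [es[0]] + [b - a for a, b in zip(es, es[1:])] + [d - es[-1]]
    count, remaining = 1, d
    for s in steps:
        count *= comb(remaining, s)
        remaining -= s
    return count

print(schubert_cell_count([1, 2], 4))   # F(1,2;4) has 4!/(1!1!2!) = 12 cells
```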
This concludes the discussion of trees with at least \(4\) or at most \(2\) leaves. From now on, we assume that \(T\) has three leaves.
### Step 3: One of the branches has length at most 1
We next show that if \(F(T,\phi)\) has finitely many \(\mathbb{PGL}(n)\) orbits, then \(T\) cannot have three branches where each branch has at least 2 vertices. By repeatedly applying Proposition 2.2, we may forget all but two of the vector spaces comprising any branch of \(T\) to obtain a tree \(T^{\prime}\) and a surjective morphism \(\pi:F(T,\phi)\to F(T^{\prime},\phi^{\prime})\). By Proposition 2.5, it suffices to show that \(F(T^{\prime},\phi^{\prime})\) does not have finitely many \(\mathbb{PGL}(n)\) orbits. Hence we may assume that \(T\) looks like
where the vertices labeled \(v\) and \(v^{\prime}\) may coincide.
Fix a flag \(F_{\bullet}\) and choose all the vector spaces except for the 6 labeled \(s_{i},t_{i}\) for \(1\leq i\leq 3\) from this fixed flag. Now we can choose the 6 vector spaces \(U_{s_{i}}\subset U_{t_{i}}\) for \(1\leq i\leq 3\) only subject to the condition that \(U_{t_{i}}\) is contained in the flag element \(F_{\phi(v)}\) for \(i=1,2\) and \(U_{t_{3}}\) is contained in the flag element \(F_{\phi(v^{\prime})}\). In particular, we may assume that \(U_{s_{3}},U_{t_{3}}\) intersect \(F_{\phi(v)}\) in distinct and proper subspaces. The proof of this step follows from the following claim.
**Claim 4.1**.: _Let \(U_{i}\subset V_{i}\) for \(1\leq i\leq 3\), be \(6\) proper subspaces of a vector space \(W\). Then \(\mathbb{PGL}(W)\) does not have finitely many orbits on such 6-tuples of vector spaces._
Proof of Claim 4.1.: Let the small letter denote the dimension of the vector space denoted by the capital letter. The prototypical case is when \(u_{i}=1\) and \(v_{i}=2\) for all \(i\). In that case, take \(U_{i}\subset V_{i}\) to be general flags contained in a three dimensional subspace \(Y\). Then \(V_{2}\) and \(V_{3}\) intersect \(V_{1}\) in one-dimensional subspaces \(Q_{2}\) and \(Q_{3}\). Similarly, the span of \(U_{2}\) and \(U_{3}\) intersects \(V_{1}\) in a one-dimensional subspace \(Q_{4}\), distinct from \(Q_{2}\) and \(Q_{3}\). Then the cross-ratio of \(U_{1},Q_{2},Q_{3},Q_{4}\) is an invariant of the \(\mathbb{PGL}(n)\) action. By varying \(U_{1}\), we see that the cross-ratio takes arbitrary values. Hence, there are infinitely many \(\mathbb{PGL}(n)\) orbits if the base field is infinite.
The general case is similar. Without loss of generality, assume that \(v_{1}\leq v_{2}\leq v_{3}\). Fix a linear space \(V_{1}\) of dimension \(v_{1}\) and choose \(V_{2}\) and \(V_{3}\) to be general among linear spaces of dimensions \(v_{2}\) and \(v_{3}\), respectively, such that they intersect \(V_{1}\) in subspaces \(Q_{2}\) and \(Q_{3}\) of dimension \(v_{1}-1\) and span a linear space \(Y\) of dimension \(v_{3}+1\). Observe that \(Y\) contains \(V_{1}\). Let \(\Lambda=\cap_{i=1}^{3}V_{i}\), which is a linear space of dimension \(v_{1}-2\). Let \(U_{1}\) be a general linear subspace of \(V_{1}\) which intersects \(\Lambda\) in a subspace of dimension \(\min(u_{1}-1,v_{1}-2)\). Let \(Q_{1}\subset V_{1}\) be the \((v_{1}-1)\)-dimensional space spanned by \(U_{1}\) and \(\Lambda\). Let \(Q_{4}\) be a \((v_{1}-1)\)-dimensional subspace of \(V_{1}\) containing \(\Lambda\) and distinct from \(Q_{1}\), \(Q_{2}\) and \(Q_{3}\). Let \(\gamma\in Q_{4}\) be a vector not contained in \(\Lambda\). Let \(\alpha_{2}\) be a vector in \(V_{2}\) not contained
in \(V_{1}\) or \(V_{3}\). Then there is a unique vector \(\alpha_{3}\in V_{3}\) such that \(\gamma\) is a linear combination of \(\alpha_{2}\) and \(\alpha_{3}\). Let \(Y^{\prime}\) be a general codimension one linear subspace of \(Y\) containing \(Q_{4}\) and \(\alpha_{2}\) (and hence also \(\alpha_{3}\)). For \(i=2,3\), let \(U_{i}\) be general linear subspaces of \(Y^{\prime}\cap V_{i}\) of dimension \(u_{i}\) containing \(\alpha_{i}\). Then the span of \(U_{2}\), \(U_{3}\) and \(\Lambda\) intersects \(V_{1}\) in \(Q_{4}\). The linear spaces \(Q_{1},Q_{2},Q_{3},Q_{4}\) are \((v_{1}-1)\)-dimensional subspaces of \(V_{1}\) and contain \(\Lambda\). Hence, they form a pencil. The cross-ratio of these four vector spaces in the pencil is an invariant of the \(\mathbb{PGL}(n)\) action and can take arbitrary values. We conclude that \(\mathbb{PGL}(n)\) cannot have finitely many orbits.
We thus conclude that if \(F(T,\phi)\) has finitely many \(\mathbb{PGL}(n)\) orbits, then \(T\) has at most three leaves and one of the branches has length at most one.
### Step 4: Bounding the length of the other branches
Assume that \(T\) has three leaves. We now show that if the minimum width of the branch of length one is not one, then the length of another branch has to be at most \(2\). Suppose that \(T\) has two branches \(u_{1}\to\cdots\to u_{p}\) and \(v_{1}\to\cdots\to v_{q}\) with \(p,q\geq 3\) and another branch of length one with leaf \(s_{1}\) and minimum width greater than \(1\). We would like to show that \(F(T,\phi)\) does not have finitely many \(\mathbb{PGL}(n)\) orbits. By repeated applications of Propositions 2.2 and 2.5, it suffices to assume that \(p=q=3\). Then the tree \(T\) has one of the following forms:
or
Let all the linear spaces corresponding to vertices other than those labeled by \(u_{i}\), \(v_{i}\) or \(y\) be from a fixed flag. In the first case, let the vector space \(Y\) of dimension \(y\) intersect the vector space corresponding to the vertex \(v\) in a subspace of dimension between \(2\) and \(v-2\). This is possible since the minimum width of that branch is greater than \(1\). In the second case, let the flag \(V_{1}\subset V_{2}\subset V_{3}\) intersect the vector space corresponding to \(v\) in a nontrivial three-step flag. Thus, the proof of this step reduces to the following claim.
**Claim 4.2**.: _Let \(U_{1}\subset U_{2}\subset U_{3}\) and \(V_{1}\subset V_{2}\subset V_{3}\) be proper linear spaces of a vector space \(W\) of dimension \(n\). Let \(Y\) be a subspace of \(W\) of dimension not equal to \(1\) or \(n-1\). Then \(\mathbb{PGL}(W)\) cannot have finitely many orbits on the configurations of the \(7\) vector spaces \(U_{i},V_{i},Y\)._
Proof of Claim 4.2.: The prototypical case is when \(u_{1}=v_{1}=1\), \(u_{2}=v_{2}=2\), \(u_{3}=v_{3}=3\) and \(y=2\). We may assume that these are general vector spaces in a \(4\)-dimensional vector space. The span of \(U_{1}\) and \(V_{2}\) intersects \(Y\) in a one-dimensional linear space \(Q_{1}\). Similarly, the span of \(U_{2}\) and \(V_{1}\) intersects \(Y\) in a one-dimensional linear space \(Q_{2}\). Finally, \(U_{3}\) and \(V_{3}\) intersect \(Y\) in one-dimensional linear spaces \(Q_{3}\) and \(Q_{4}\), respectively. The \(\mathbb{PGL}(n)\) action needs to preserve the cross-ratio of the subspaces \(Q_{i}\) in \(Y\). Since the cross-ratio can take an arbitrary value, \(\mathbb{PGL}(n)\) cannot act with finitely many orbits.
The general case is similar. First, we make two initial reductions to simplify the argument. Choose \(V_{2}\) so that it intersects \(U_{2}\) in a subspace of dimension \(u_{2}-2\). Let \(\Omega\) be the span of \(V_{2}\) and \(U_{2}\), which has dimension \(v_{2}+2\). We can then choose \(U_{3}\), \(V_{3}\) and \(Y\) to intersect \(\Omega\) in dimensions \(\max(u_{2}+1,u_{3}+v_{2}+2-n)\), \(v_{2}+1\) and \(\max(2,y+v_{2}+2-n)\), respectively. Since \(\Omega\) is canonically determined as the span of \(U_{2}\) and \(V_{2}\), it suffices to show that \(\mathbb{P}\mathrm{GL}(n)\) has infinitely many orbits in the corresponding configuration in \(\Omega\). We may thus assume that \(v_{2}=n-2\) and \(v_{3}=n-1\).
Next, assume that \(V_{1}\) and \(U_{3}\) intersect in a linear space of dimension \(\min(v_{1}-1,u_{3}-3)\). Let \(\Sigma\) be the span of \(V_{1}\) and \(U_{3}\), which is a linear space of dimension \(\sigma=\max(v_{1}+3,u_{3}+1)\). We can choose \(V_{2},V_{3}\) and \(Y\) to be linear spaces that intersect \(\Sigma\) in dimension \(\sigma-2\), \(\sigma-1\) and \(\max(2,y+\sigma-n)\), respectively. Since \(\Sigma\) is canonically determined as the span of \(U_{3}\) and \(V_{1}\), it suffices to show that \(\mathbb{P}\mathrm{GL}(n)\) has infinitely many orbits in the corresponding configuration in \(\Sigma\). We thus reduce to the case when either \((v_{1},v_{2},v_{3})=(n-3,n-2,n-1)\) or \(v_{2}=n-2,v_{3}=n-1\) and \(u_{3}=n-1\).
First, assume \((v_{1},v_{2},v_{3})=(n-3,n-2,n-1)\) and fix \(V_{1}\subset V_{2}\) of dimension \(v_{1}\) and \(v_{2}\). Let \(U_{1}\) be a linear space of dimension \(u_{1}\) general among linear spaces which intersect \(V_{2}\) in a subspace of dimension \(u_{1}-1\). Observe that the span of \(U_{1}\) and \(V_{2}\) is a linear space \(Q^{\prime}_{1}\) of dimension \(n-1\). Let \(U_{2}\) be a linear space of dimension \(u_{2}\) general among linear spaces that contain \(U_{1}\) and intersect \(V_{1}\) in a subspace of dimension \(u_{2}-2\). Observe that the span of \(U_{2}\) and \(V_{1}\) is a linear space \(Q^{\prime}_{2}\) of dimension \(n-1\) distinct from \(Q^{\prime}_{1}\). Choose \(V_{3}\) to be a general vector space of dimension \(v_{3}\) containing \(V_{2}\). Observe that it is distinct from \(Q^{\prime}_{1}\) and \(Q^{\prime}_{2}\). Let \(\Lambda^{\prime}=Q^{\prime}_{1}\cap Q^{\prime}_{2}\cap V_{3}\), which is a linear space of dimension \(n-3\). Let \(Y\) be a linear space of dimension \(y\) general among those that intersect \(\Lambda^{\prime}\) in a subspace \(\Lambda\) of dimension \(y-2\). Set \(Q_{1}=Q^{\prime}_{1}\cap Y\), \(Q_{2}=Q^{\prime}_{2}\cap Y\) and \(Q_{3}=V_{3}\cap Y\). Note that \(Q_{1},Q_{2},Q_{3}\) are subspaces of \(Y\) of dimension \(y-1\) that contain \(\Lambda\). Finally, pick a vector \(v\) in \(Y\) not contained in \(Q_{1},Q_{2}\) or \(Q_{3}\). Let \(U_{3}\) be a linear space of dimension \(u_{3}\) general among those that contain \(v\), \(U_{2}\) and intersect \(Y\) along the span of \(v\) and \(\Lambda\). Let \(Q_{4}\) be the span of \(v\) and \(\Lambda\), which is the span of \(U_{3}\cap Y\) with \(\Lambda\). Then \(Q_{1},\ldots,Q_{4}\) are four points of a pencil of hyperplanes in \(Y\) and their cross-ratio is an invariant of the \(\mathbb{P}\mathrm{GL}(n)\) action. By varying \(v\), we can get any cross-ratio. We conclude that \(\mathbb{P}\mathrm{GL}(n)\) cannot act with finitely many orbits.
Finally, assume that \(v_{2}=n-2,v_{3}=n-1\) and \(u_{3}=n-1\). Fix \(V_{2}\subset V_{3}\) to be linear spaces of dimension \(n-2\) and \(n-1\), respectively. Let \(U_{1}\) be a linear space of dimension \(u_{1}\) general among those that intersect \(V_{2}\) in a subspace of dimension \(u_{1}-1\). Then \(U_{1}\) and \(V_{2}\) span a linear space \(Q^{\prime}_{1}\) of dimension \(n-1\), distinct from \(V_{3}\). Let \(U_{3}\) be a linear space of dimension \(n-1\) general among those that contain \(U_{1}\). Let \(\Lambda^{\prime}=Q^{\prime}_{1}\cap V_{3}\cap U_{3}\), which is a linear space of \(n-3\) contained in \(V_{2}\). Let \(Y\) be a linear space of dimension \(y\) general among those which intersect \(\Lambda^{\prime}\) in a subspace \(\Lambda\) of dimension \(y-2\). Set \(Q_{1}=Q^{\prime}_{1}\cap Y\), \(Q_{2}=V_{3}\cap Y\) and \(Q_{3}=U_{3}\cap Y\). Take a hyperplane general \(H\) among those containing \(U_{1}\) and \(\Lambda\). Observe that this is possible since \(Q^{\prime}_{1}\) and \(U_{3}\) both contain \(U_{1}\). Let \(V_{1}\) be \(H\cap V_{2}\) and let \(U_{2}\) be a general linear space containing \(U_{1}\) and contained in \(U_{3}\cap H\). Then the span of \(V_{1}\) and \(U_{2}\) is \(H\). Setting \(Q_{4}=H\cap Y\), the cross-ratio of \(Q_{1},\ldots,Q_{4}\) in the pencil they span in \(Y\) is an invariant of the \(\mathbb{P}\mathrm{GL}(n)\) action. We conclude that \(\mathbb{P}\mathrm{GL}(n)\) cannot act with finitely many orbits.
### Step 5: Bounding the length of the third branch
Finally, we show that if \(T\) is a tree with three leaves with branch lengths \(1,2\) and \(\ell\) and the minimum width of the branch of length \(1\) is at least \(3\) or the minimum width of the branch of length \(2\) is at least \(2\), then \(\ell\leq 4\). By repeated applications of Propositions 2.2 and 2.5, it suffices to study the case when \(\ell=5\). By reductions similar to the previous cases, this step follows from the following claim.
**Claim 4.3**.: _Let \(A\), \(B_{1}\subset B_{2}\) and \(C_{1}\subset\cdots\subset C_{5}\) be distinct, proper subspaces of a vector space \(W\) of dimension \(n\). Assume that \(\dim(A)\neq 1,2,n-2\) or \(n-1\). Assume that \(\dim(B_{1})\neq 1\), \(\dim(B_{2})\neq n-1\) and \(\dim(B_{2})\neq\dim(B_{1})+1\). Then \(\mathbb{PGL}(n)\) does not act on configurations of \(8\) subspaces \(A,B_{i},C_{j}\) with finitely many orbits._
Proof of Claim 4.3.: The prototypical example is when \(n=6\), \(\dim(A)=3\), \(\dim(B_{i})=2i\) and \(\dim(C_{i})=i\). Let all these linear spaces be general. For \(i=1,2\), let \(D_{i}\) be the span of \(A\) and \(C_{i}\). Let \(U_{1}=A\cap B_{2}\), and for \(i=2,3\), let \(U_{i}=D_{i-1}\cap B_{2}\). We thus get three linear spaces \(U_{1}\subset U_{2}\subset U_{3}\), where \(\dim(U_{i})=i\) in the \(4\)-dimensional linear space \(B_{2}\). For \(i\geq 3\), let \(V_{i-2}=B_{2}\cap C_{i}\). We get another three linear spaces \(V_{1}\subset V_{2}\subset V_{3}\) where \(\dim(V_{i})=i\) in \(B_{2}\). Finally, let \(Y=B_{1}\). It is easy to see that these are general linear spaces. We thus reduce to the configuration in Claim 4.2, hence \(\mathbb{PGL}(n)\) cannot act with finitely many orbits.
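The dimension counts in this prototypical reduction are easy to confirm numerically for random subspaces; the following Python sketch (with an ad hoc helper `dim_meet`, working over the reals for convenience) reproduces the dimensions \(1,2,3\) for the \(U_{i}\) and the \(V_{i}\):

```python
import numpy as np

rng = np.random.default_rng(0)

def dim_meet(U, V):
    """Dimension of the intersection of the column spans of U and V."""
    return (np.linalg.matrix_rank(U) + np.linalg.matrix_rank(V)
            - np.linalg.matrix_rank(np.hstack([U, V])))

n = 6
A  = rng.standard_normal((n, 3))
B2 = rng.standard_normal((n, 4))
C  = rng.standard_normal((n, 5))           # C_i = span of the first i columns
D1 = np.hstack([A, C[:, :1]])              # span of A and C_1
D2 = np.hstack([A, C[:, :2]])              # span of A and C_2

print([dim_meet(A, B2), dim_meet(D1, B2), dim_meet(D2, B2)])   # U_1,U_2,U_3: [1, 2, 3]
print([dim_meet(C[:, :i], B2) for i in (3, 4, 5)])             # V_1,V_2,V_3: [1, 2, 3]
```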
The general case is similar. Fix \(B_{1}\subset B_{2}\) to be general linear spaces of dimensions \(b_{1}\) and \(b_{2}\), respectively. Choose \(C_{3}\subset C_{4}\subset C_{5}\) such that \(\dim(C_{i}\cap B_{2})=\max(i-2,c_{i}+b_{2}-n)\). Set \(V_{i}=C_{i+2}\cap B_{2}\). Choose \(C_{1}\subset C_{2}\) subsets of \(C_{3}\) and \(A\) so that they satisfy the following conditions:
1. We have \(\dim(A\cap B_{2})=\max(1,a+b_{2}-n)\),
2. Let \(D_{i}\) denote the span of \(A\) and \(C_{i}\). Then for \(i=1,2\), \(\max(i+1,a+b_{2}-n+i)\leq\dim(D_{i}\cap B_{2})\)
Set \(U_{1}=A\cap B_{2}\) and for \(i=2,3\) set \(U_{i}=D_{i-1}\cap B_{2}\). Finally, set \(Y=B_{1}\). We obtain the configuration in Claim 4.2 and such configurations cannot have finitely many \(\mathbb{PGL}(n)\) orbits.
Now we are ready to complete the proof that if \(F(T,\phi)\) has finitely many \(\mathbb{PGL}(n)\) orbits, then \((T,\phi)\) must be one of the labeled trees listed in Theorem 1.10. If \(T\) has at most two leaves, then, by Step 2, \(F(T,\phi)\) has finitely many orbits. This corresponds to Case (1). If \(T\) has at least \(4\) leaves, then, by Step 1, \(F(T,\phi)\) cannot have finitely many orbits.
We now consider the case when \(T\) has exactly three leaves. By Step 3, one of the branches must have length \(1\). If the minimum width of the branch with length \(1\) is not \(1\), then Step 4 guarantees that another branch has length at most \(2\). Hence, if \(F(T,\phi)\) has finitely many orbits, the possible branch lengths are \((1,\ell_{1},\ell_{2})\) provided that the minimum width of the branch with length \(1\) is \(1\). This is Case 2(d). Otherwise, the branch lengths must be \((1,1,\ell)\), which is Case 2(a) or \((1,2,\ell)\). Furthermore, by Steps 4 and 5, if the minimum width of the branch with length \(1\) is bigger than \(2\) and the minimum width of the branch with length \(2\) is bigger than \(1\), we must have \(\ell\leq 4\). We conclude that either the branch lengths are \((1,2,\ell)\) with \(\ell\leq 4\) (which is Case 2(b)); or the branch lengths are \((1,2,\ell)\) with \(\ell\geq 5\) provided that either the minimum width of the branch with length \(1\) is at most \(2\) or the minimum width of the branch with length \(2\) is at most \(1\) (which is case 2(c)).
This completes the argument that any tree variety with finitely many \(\mathbb{PGL}(n)\) orbits must be among those listed in Theorem 1.10. Conversely, we need to show that the tree varieties listed have finitely many orbits. In fact, using the results of Magyar, Weyman and Zelevinsky, one can enumerate all the orbits. We will now sketch the argument.
### Step 6: The other cases have finitely many orbits
We have already enumerated the orbits when the tree has at most \(2\) branches in Step 2. We may therefore assume that the tree has \(3\) branches, hence it looks as follows.
This quantity is strictly positive unless \(x=y=0\). Since it is an integer, if not zero, it has to be at least \(1\). We conclude that the action of \(\mathbb{PGL}(n)\) on \(F(k_{1},k_{2},n)^{3}\) is trivially sparse if and only if \(k_{1}=\frac{n}{3}\) and \(k_{2}=\frac{2n}{3}\). This proves part (1).
Now assume that \(k_{1}+k_{2}=n\). Consider the three partial flags \(F_{i,k_{1}}\subset F_{i,k_{2}}\). For general choices, \(F_{2,k_{2}}\) and \(F_{3,k_{2}}\) intersect \(F_{1,k_{2}}\) in two general subspaces \(U_{2}\) and \(U_{3}\) of dimension \(2k_{2}-n\). The span of \(F_{2,k_{1}}\) and \(F_{3,k_{1}}\) intersect \(F_{1,k_{2}}\) in a general subspace \(U_{4}\) of dimension \(k_{1}\). If \(F(k_{1},k_{2},n)^{3}\) is dense, then \(\mathbb{PGL}(n)\) has to act on configurations of \(F_{1,k_{1}},U_{2},U_{3},U_{4}\) with a dense orbit. Since
\[k_{1}+\sum_{i=2}^{4}\dim(U_{i})=2k_{1}+4k_{2}-2n=2k_{2},\]
by [1, Theorem 5.1], this action does not have a dense orbit. We conclude that \(F(k_{1},k_{2},n)^{3}\) is sparse if \(k_{1}+k_{2}=n\). This proves part (2).
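Assuming that "trivially sparse" refers to the dimension count \(3\dim F(k_{1},k_{2};n)\geq n^{2}\), part (1) can be checked by brute force for small \(n\); the following Python sketch (helper names ours) confirms that the gap \(n^{2}-3\dim F\) is non-negative and vanishes only for \((k_{1},k_{2})=(n/3,2n/3)\):

```python
def dim_flag(ks, n):
    """dim F(k_1,...,k_r; n) = sum_i k_i (k_{i+1} - k_i), with k_{r+1} = n."""
    ks = list(ks) + [n]
    return sum(a * (b - a) for a, b in zip(ks, ks[1:]))

for n in range(3, 13):
    for k1 in range(1, n):
        for k2 in range(k1 + 1, n):
            gap = n * n - 3 * dim_flag([k1, k2], n)
            assert gap >= 0
            if gap == 0:
                print(n, k1, k2)    # only (n, n/3, 2n/3) for n divisible by 3
```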
By Proposition 2.5, we deduce the following corollary.
**Corollary 5.2**.: _Assume that there are two indices \(i\neq j\) such that \(k_{i}+k_{j}=n\). Then \(F(k_{1},\ldots,k_{r};n)^{3}\) is sparse._
We will next show that classifying dense tree varieties with three leaves reduces to classifying dense products of three partial flag varieties. We first introduce some notation.
_Notation 5.3_.: Let \(0<k_{1}<\cdots<k_{r}<n\) be an increasing sequence of positive integers less than \(n\). We will denote the sequence by \(k_{\bullet}\). For ease of notation, we set \(k_{0}=0\) and \(k_{r+1}=n\). Let \(n^{\prime}\leq n\) and set \(d=n-n^{\prime}\). Let \(j\) be the index such that \(k_{j-1}\leq d<k_{j}\). Then for \(1\leq i\leq r-j+1\), set \(k^{\prime}_{i}=k_{j+i-1}-d\). Given a sequence \(k_{\bullet}\) and an integer \(n^{\prime}<n\), we will call the sequence \(k^{\prime}_{\bullet}\)_the sequence derived from \(k_{\bullet}\) with respect to \(n\) and \(n^{\prime}\)_. Notice that this only depends on \(n-n^{\prime}\), so if we do not wish to emphasize \(n\) and \(n^{\prime}\), we will sometimes say _the sequence derived from \(k_{\bullet}\) with respect to \(d\)_. Given a vector space \(W\) of dimension \(n^{\prime}\) and a general partial flag \(F_{\bullet}\) in an \(n\)-dimensional vector space with dimensions \(k_{\bullet}\), the sequence \(k^{\prime}_{\bullet}\) denotes the dimension vector of the partial flag in \(W\) obtained by \(F_{\bullet}\cap W\).
**Proposition 5.4**.: _Let \(T\) be the following tree:_
_Set \(k_{1,r_{1}+1}=m^{\prime}\) and \(k_{1,r_{1}+s_{1}}=m\). Let \(k^{\prime}_{3,\bullet}\) be the sequence derived from \(k_{3,\bullet}\) with respect to \(m\) and \(m^{\prime}\). Then the tree variety \(F(T,\phi)\) is dense if and only if_
\[\prod_{i=1}^{2}F(k_{i,1},\ldots,k_{i,r_{i}};m^{\prime})\times F(k^{\prime}_{3,1},\ldots,k^{\prime}_{3,r_{3}-j+1},m^{\prime})\]
_is dense._
Proof.: Let \(T^{\prime}\) be the tree obtained from \(T\) by deleting all the vertices that have a directed path to \(k_{1,r_{1}+1}\). Then \(T^{\prime}\) has two leaves, hence by Theorem 1.10 it has finitely many orbits. In particular, the orbit where the two partial flags emanating from the vertex marked \(k_{1,r_{1}+s_{1}}\) are transverse is dense. Note that any other orbit has strictly smaller dimension and cannot contain a
point of this locus in its closure by the semi-continuity of the dimension of intersections. In this orbit, the intersection of the flag \(F_{3,k_{\bullet}}\) with the vector space \(U_{1,k_{r_{1}+1}}\) has dimension vector \(k^{\prime}_{3,\bullet}\), the sequence derived from \(k_{3,\bullet}\) with respect to \(m\) and \(m^{\prime}\). Hence, if \(F(T,\phi)\) is dense, then \(\prod_{i=1}^{2}F(k_{i,1},\ldots,k_{i,r_{i}};m^{\prime})\times F(k^{\prime}_{3, 1},\ldots,k^{\prime}_{3,r_{3}-j+1},m^{\prime})\) is dense.
Conversely, suppose that \(\prod_{i=1}^{2}F(k_{i,1},\ldots,k_{i,r_{i}};m^{\prime})\times F(k^{\prime}_{3, 1},\ldots,k^{\prime}_{3,r_{3}-j+1},m^{\prime})\) is dense. Observe that for a point in the dense orbit the flags must be pairwise transverse. Moreover, for a transverse pair of flags, there exists a third flag so that the triple is in the dense orbit. Make the convention that \(k_{2,r_{2}+1}=k_{1,r_{1}+1}\) and \(k_{r_{3}+1}=k_{1,r_{1}+s_{1}}\). Fix indices so that \(k_{1,r_{1}+s_{1}+t_{1}}\to n\) is an edge in the tree. If we omit the branch consisting of vertices \(k_{2,i}\), then the resulting tree has \(2\) branches and has finitely many orbits. The orbit where the linear spaces are as transverse as possible is dense and has stabilizer of dimension
\[n^{2}-1-\sum_{i=1}^{r_{1}+s_{1}+t_{1}}k_{1,i}(k_{1,i+1}-k_{1,i})-\sum_{i=1}^{r _{3}}k_{3,i}(k_{3,i+1}-k_{3,i}).\]
By assumption, a general choice of partial flag \(U_{2,1}\subset\cdots\subset U_{2,r_{2}}\) in \(U_{k_{1,r_{1}+1}}\) imposes the expected number of conditions
\[\sum_{i=1}^{r_{2}}k_{2,i}(k_{2,i+1}-k_{2,i}).\]
Hence, the codimension of the stabilizer of such a point in \(F(T,\phi)\) in \(\mathbb{P}GL(n)\) is the same as the dimension of \(F(T,\phi)\). We conclude that \(F(T,\phi)\) has a dense \(\mathbb{P}GL(n)\) orbit by Lemma 2.4.
Hence, for studying the density of tree varieties with three branches, it suffices to study the density of products of three partial flag varieties. We will concentrate on this problem for most of this section. Corollary 5.2 and Proposition 5.4 give a large collection of sparse tree varieties. It is also possible to give many examples of dense tree varieties.
**Lemma 5.5**.: _For a vertex \(v\in T\), let \(S(v)\) denote the set of vertices of \(T\) which are sources of edges in \(T\) with target \(v\). If_
\[\sum_{s_{i}\in S(v)}\phi(s_{i})\leq\phi(v)\]
_for every vertex \(v\in T\), then \(F(T,\phi)\) is dense. In particular, \(\prod_{i=1}^{N}F(k_{i,1},\ldots,k_{i,m_{i}};n)\) is dense if \(\sum_{i=1}^{N}k_{i,m_{i}}\leq n\)._
Proof.: For each vertex \(v\) with \(\sum_{s_{i}\in S(v)}\phi(s_{i})<\phi(v)\), form a new labeled tree \((T^{\prime},\phi^{\prime})\) by adding a new vertex \(v^{\prime}\) labeled \(\phi^{\prime}(v^{\prime})=\phi(v)-\sum_{s_{i}\in S(v)}\phi(s_{i})\) and a new edge \((v^{\prime},v)\). By Proposition 2.5, if \(F(T^{\prime},\phi^{\prime})\) is dense, so is \(F(T,\phi)\). We may therefore assume that \(T\) satisfies \(\sum_{s_{i}\in S(v)}\phi(s_{i})=\phi(v)\) for every vertex \(v\). Let \(\ell_{1},\ldots,\ell_{j}\) be the leaves of the tree \(T\). We have \(\sum_{i=1}^{j}\phi(\ell_{i})=n\). Fix a basis \(e_{1},\ldots,e_{n}\) for the vector space \(V\). Let \(U_{\ell_{i}}\) be disjoint coordinate subspaces. The stabilizer is a block diagonal matrix with block sizes \(\phi(\ell_{i})\) for \(1\leq i\leq j\). Hence, the dimension of the stabilizer is \(\sum_{i=1}^{j}\phi(\ell_{i})^{2}-1\). Since
\[\sum_{(s,t)\in E(T)}\phi(s)(\phi(t)-\phi(s))=\sum_{v\in T}\sum_{s\in S(v)}\phi(s)(\phi(v)-\phi(s))=\sum_{S(v)\neq\emptyset}(\phi(v)^{2}-\sum_{s\in S(v)}\phi(s)^{2})=n^{2}-\sum_{i=1}^{j}\phi(\ell_{i})^{2},\]
we conclude that \(\dim(F(T,\phi))+\dim(\operatorname{Stab})=n^{2}-1\) and \(F(T,\phi)\) is dense by Lemma 2.4.
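For a concrete (hypothetical) labeled tree satisfying the hypothesis of the lemma, the telescoping identity above can be checked directly; here the leaves are labeled \(2,3,4\), an internal vertex is labeled \(5\) and the root is labeled \(n=9\):

```python
# Edges (source, target): the sources into each vertex have labels summing to that vertex.
edges  = [(2, 5), (3, 5), (5, 9), (4, 9)]
leaves = [2, 3, 4]
n = 9

lhs = sum(s * (t - s) for s, t in edges)      # sum over edges of phi(s)(phi(t) - phi(s))
rhs = n * n - sum(l * l for l in leaves)      # n^2 - sum of squared leaf labels
print(lhs, rhs)                               # 52 52
```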
The following lemma gives a useful dimension reduction.
**Lemma 5.6**.: _Suppose \(k_{i,\bullet}\) are \(m\) flag vectors. Assume that \(n^{\prime}=\sum_{i=1}^{m-1}k_{i,r_{i}}\leq n<\sum_{i=1}^{m}k_{i,r_{i}}\). Let \(k^{\prime}_{m,\bullet}\) be the sequence derived from \(k_{m,\bullet}\) with respect to \(n\) and \(n^{\prime}\). Then \(\prod_{i=1}^{m}F(k_{i,\bullet};n)\) is dense if and only if \(\prod_{i=1}^{m-1}F(k_{i,\bullet};n^{\prime})\times F(k^{\prime}_{m,\bullet};n^{ \prime})\) is dense._
Proof.: Let \(U_{i,\bullet}\) be general flags with dimension vectors \(k_{i,\bullet}\) in \(V\). For \(1\leq i\leq m-1\), the vector spaces \(U_{i,k_{r_{i}}}\) span a vector space \(W\) of dimension \(n^{\prime}\). The intersection \(T_{m,\bullet}\) of \(U_{m,\bullet}\) with \(W\) has dimension vector \(k^{\prime}_{m,\bullet}\) and consists of general linear spaces. Hence, if \(\prod_{i=1}^{m}F(k_{i,\bullet},n)\) is dense, then \(\prod_{i=1}^{m-1}F(k_{i,\bullet};n^{\prime})\times F(k^{\prime}_{m,\bullet};n^{\prime})\) is dense.
The stabilizer of the flags \(U_{i,\bullet}\) for \(1\leq i\leq m\), stabilizes \(U_{i,\bullet}\) for \(1\leq i\leq m-1\), \(W\) and \(T_{m,\bullet}\). Hence, we get a map
\[f:\operatorname{Stab}(\{U_{i,\bullet}\}_{i=1}^{m},V)\to\operatorname{Stab}( \{U_{i,\bullet}\}_{i=1}^{m-1},T_{m,\bullet},W).\]
Pick a basis for \(U_{m,\bullet}\) so that \(U_{m,k_{im}}\) has basis \(e_{j}\) for \(n-k_{im}+1\leq j\leq n\). Let \(e_{1},\ldots,e_{n^{\prime}}\) be a basis for \(W\). Then the matrices that map to the identity in \(\operatorname{Stab}(\{U_{i,\bullet}\}_{i=1}^{m-1},T_{m,\bullet},W)\) have the form
\[\begin{pmatrix}I_{n-k_{r_{m}}}&0&0\\ 0&I_{n^{\prime}-n+k_{r_{m}}}&A\\ 0&0&B\end{pmatrix}\]
where \(I_{j}\) denotes the \(j\times j\) identity matrix and \(A,B\) are obtained from the truncation of the first \(n^{\prime}+k_{r_{m}}-n\) columns of a block lower triangular matrix of sizes \(k_{m,1},\ldots,k_{m,r_{m}}\). We conclude that
\[\dim(\operatorname{Ker}(f))\leq\sum_{i=1}^{j-1}k_{m,i}(k_{m,i}-k_{m,i-1})+k_{m,j}(n-n^{\prime}-k_{m,j-1}). \tag{1}\]
Since \(\prod_{i=1}^{m-1}F(k_{i,\bullet};n^{\prime})\times F(k^{\prime}_{m,\bullet};n ^{\prime})\) is dense, we have that
\[\dim(\operatorname{Stab}(\{U_{i,\bullet}\}_{i=1}^{m-1},T_{m,\bullet},W))=n^{ \prime 2}-1-\sum_{i=1}^{m-1}\left(\left(\sum_{l=1}^{r_{i}-1}k_{i,l}(k_{i,l+1} -k_{i,l})\right)+k_{i,r_{i}}(n^{\prime}-k_{i,r_{i}})\right)\\ -\sum_{l=j}^{r_{m}-1}(k_{m,l}-n+n^{\prime})(k_{m,l+1}-k_{m,l})-(k_ {m,r_{m}}-n+n^{\prime})(n-k_{m,r_{m}})\\ =n^{2}-1-\sum_{i=1}^{m}\sum_{l=1}^{r_{i}-1}k_{i,l}(k_{i,l+1}-k_{i, l})-\sum_{i=1}^{m}k_{i,r_{i}}(n-k_{i,r_{i}})+\sum_{l=1}^{j-1}k_{m,l}(k_{m,l+1} -k_{m,l})-(n-n^{\prime})k_{m,j} \tag{2}\]
By the theorem on the dimension of fibers, we have
\[\dim(\operatorname{Stab}(\{U_{i,\bullet}\}_{i=1}^{m},V))\leq\dim(\operatorname {Stab}(\{U_{i,\bullet}\}_{i=1}^{m-1},T_{m,\bullet},W))+\dim(\operatorname{ Ker}(f)).\]
Combining this with Equations (1) and (2) and some arithmetic, we see that
\[\dim(\operatorname{Stab}(\{U_{i,\bullet}\}_{i=1}^{m},V))\leq n^{2}-1-\sum_{i= 1}^{m}\sum_{l=1}^{r_{i}-1}k_{i,l}(k_{i,l+1}-k_{i,l})-\sum_{i=1}^{m}k_{i,r_{i}}( n-k_{i,r_{i}}).\]
Hence, \(\prod_{i=1}^{m}F(k_{i,\bullet},n)\) is dense by Lemma 2.4.
**Lemma 5.7**.: _Let \(n=2k_{r}\). Then \(F(k_{1},\ldots,k_{r};n)^{3}\) is dense if and only if \(F(k_{1},\ldots,k_{r-1};k_{r})^{3}\) is dense._
Proof.: Fix three general partial flags \(W^{i}_{\bullet}\in F(k_{1},\ldots,k_{r};n)\) for \(1\leq i\leq 3\). Let \(Y^{\prime}_{j}\) be the span of \(W^{2}_{j}\) and \(W^{3}_{k_{r}}\). Let \(Z^{\prime}_{j}\) be the span of \(W^{3}_{j}\) and \(W^{2}_{k_{r}}\). Set \(Y_{j}=Y^{\prime}_{j}\cap W^{1}_{k_{r}}\) and \(Z_{j}=Z^{\prime}_{j}\cap W^{1}_{k_{r}}\). We have that \(\dim(Y_{j})=\dim(Z_{j})=j\). Observe that a general pair of partial flags \((Y_{\bullet},Z_{\bullet})\in F(k_{1},\ldots,k_{r-1};k_{r})^{2}\) occurs this way. To see this, fix two general \(k_{r}\)-dimensional subspaces \(W^{2}_{k_{r}}\) and \(W^{3}_{k_{r}}\). We can recover the partial flag \(W^{2}_{\bullet}\) by taking the span of \(Y_{\bullet}\) with \(W^{3}_{k_{r}}\) and intersecting with \(W^{2}_{k_{r}}\). We can recover the partial flag \(W^{3}_{\bullet}\) by taking the span of \(Z_{\bullet}\) with \(W^{2}_{k_{r}}\) and intersecting with \(W^{3}_{k_{r}}\). Now the construction yields back the partial flags \(Y_{\bullet}\) and \(Z_{\bullet}\). Hence, if \(F(k_{1},\ldots,k_{r};n)^{3}\) is dense, then \(F(k_{1},\ldots,k_{r-1};k_{r})^{3}\) is dense.
Conversely, suppose \(F(k_{1},\ldots,k_{r-1};k_{r})^{3}\) is dense. We get a homomorphism
\[f:\operatorname{Stab}(W^{1}_{\bullet},W^{2}_{\bullet},W^{3}_{\bullet}, \mathbb{C}^{n})\to\operatorname{Stab}(W^{1}_{\bullet},Y_{\bullet},Z_{\bullet},W^{1}_{k_{r}}).\]
Choose a basis for \(\mathbb{C}^{n}\). Set \(W^{1}_{k_{r}}\) to be the span of \(e_{i}\) with \(1\leq i\leq k_{r}\). Set \(W^{2}_{k_{r}}\) to be the span of \(e_{i}\) with \(k_{r}+1\leq i\leq n\). Finally, set \(W^{3}_{k_{r}}\) to be the span of \(e_{i}+e_{i+k_{r}}\) for \(1\leq i\leq k_{r}\). Then the stabilizer of the three subspaces has the form
\[\begin{pmatrix}A&0\\ 0&A\end{pmatrix},\]
where \(A\) is a \(k_{r}\times k_{r}\) invertible matrix. Hence, the kernel of the map \(f\) is trivial. We conclude that
\[\dim(\operatorname{Stab}(W^{1}_{\bullet},W^{2}_{\bullet},W^{3}_{\bullet}, \mathbb{C}^{n}))\leq\dim(\operatorname{Stab}(W^{1}_{\bullet},Y_{\bullet},Z_{ \bullet},W^{1}_{k_{r}}))=k_{r}^{2}-1-3\sum_{i=1}^{r-1}k_{i}(k_{i+1}-k_{i}).\]
On the other hand,
\[\dim(\operatorname{Stab}(W^{1}_{\bullet},W^{2}_{\bullet},W^{3}_{\bullet}, \mathbb{C}^{n}))\geq n^{2}-1-3\sum_{i=1}^{r}k_{i}(k_{i+1}-k_{i}).\]
Since \(n=2k_{r}\),
\[n^{2}-1-3\sum_{i=1}^{r}k_{i}(k_{i+1}-k_{i})=k_{r}^{2}-1-3\sum_{i=1}^{r-1}k_{i} (k_{i+1}-k_{i})\]
and we have equality everywhere. We conclude that \(F(k_{1},\ldots,k_{r};n)^{3}\) is dense.
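The final dimension identity is a one-line computation once \(n=2k_{r}\) is substituted; a quick numerical check for an arbitrary dimension vector (chosen here only for illustration):

```python
def dim_flag(ks, n):
    """dim F(k_1,...,k_r; n) = sum_i k_i (k_{i+1} - k_i), with k_{r+1} = n."""
    ks = list(ks) + [n]
    return sum(a * (b - a) for a, b in zip(ks, ks[1:]))

ks = [2, 5, 11]                # any increasing vector, with n = 2 * k_r
n = 2 * ks[-1]
print(n**2 - 1 - 3 * dim_flag(ks, n),
      ks[-1]**2 - 1 - 3 * dim_flag(ks[:-1], ks[-1]))   # both sides agree (here 12 12)
```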
**Proposition 5.8**.: _Assume that \(2k_{r}\leq n\) and \(2k_{i}\leq k_{i+1}\) for \(2\leq i\leq r-1\). Then \(F(k_{1},\ldots,k_{r};n)^{3}\) is dense._
Proof.: We will prove the proposition by induction on \(r\). If \(r=1\), the proposition is true by [1, Theorem 5.1]. Now suppose the proposition is true up to \(r-1\). If \(3k_{r}\leq n\), then Lemma 5.5 implies that \(F(k_{1},\ldots,k_{r};n)^{3}\) is dense. We may therefore assume that \(2k_{r}\leq n<3k_{r}\).
If \(2k_{r}=n-m\) with \(m>0\), then let \(k^{\prime}_{\bullet}\) be the sequence derived from \(k_{\bullet}\) with respect to \(m\). By applying Lemma 5.6 three times, the density of \(F(k_{\bullet},n)^{3}\) is equivalent to the density of \(F(k^{\prime}_{\bullet};n-3m)^{3}\). Observe that \(2(k_{r}-m)=n-3m\) and \(2(k_{i}-m)=2k_{i}-2m<k_{i+1}-m\). Hence, \(k^{\prime}_{\bullet}\) still satisfies the assumptions of the proposition. We therefore reduce to the case when \(2k_{r}=n\).
By Lemma 5.7, \(F(k_{1},\ldots,k_{r};2k_{r})^{3}\) is dense if and only if \(F(k_{1},\ldots,k_{r-1};k_{r})^{3}\) is dense. The latter satisfies the assumptions of the proposition and has one fewer step. Hence, by induction, it is dense. This concludes the proof of the proposition.
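The induction can be traced mechanically. The sketch below (helper names ours, and assuming the hypotheses of the proposition so that each reduction step applies) prints the chain of reductions for an example:

```python
def derived(ks, d):
    """Sequence derived from ks with respect to d (Notation 5.3)."""
    return [k - d for k in ks if k > d]

def trace_density(ks, n, depth=0):
    """Trace the reductions used in the proof of Proposition 5.8 for F(ks; n)^3."""
    pad = "  " * depth
    print(f"{pad}F({ks}; {n})^3")
    if not ks or 3 * ks[-1] <= n:
        print(f"{pad}-> dense by Lemma 5.5")
        return
    m = n - 2 * ks[-1]
    if m > 0:                              # Lemma 5.6, applied three times
        trace_density(derived(ks, m), n - 3 * m, depth + 1)
    else:                                  # n = 2 k_r: Lemma 5.7
        trace_density(ks[:-1], ks[-1], depth + 1)

trace_density([1, 2, 4], 9)
# reduces to F([1, 3]; 6)^3, then to F([1]; 3)^3, which is dense by Lemma 5.5
```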
For our inductive arguments, we will need a technical lemma. Let \(V\) be an \(n\)-dimensional vector space. For \(1\leq i\leq 3\), let \(U_{i}\subset T_{i}\) be three two-step flags of dimensions \(u_{i}<t_{i}\) in \(V\). Let \(W\subset V\) be a subspace of dimension \(w\) containing \(U_{2}\) and \(T_{3}\). Let \(X\) denote the variety which parameterizes such configuration of subspaces of \(V\). Let \(m=n-w\) and assume that \(m\leq u_{1}\) and \(m\leq t_{2}-u_{2}\). Let \(U_{1}^{\prime}\subset T_{1}^{\prime}\) be a two-step flag in \(W\) of dimension \(u_{1}-m,t_{1}-m\). Let \(U_{2}^{\prime}\subset T_{2}^{\prime}\) be a two-step in \(W\) of dimension \(u_{2},t_{2}-m\) and let \(U_{3}^{\prime}\subset T_{3}^{\prime}\) be a two-step flag in \(W\) of dimension \(u_{3},t_{3}\). Finally, let \(W^{\prime}\) be a linear subspace of dimension \(u_{1}+t_{2}-m\) containing \(U_{1}^{\prime}\) and \(T_{2}^{\prime}\). Let \(Y\) be the variety which parameterizes such configurations of subspaces of \(W\).
**Lemma 5.9**.: _With this notation, assume that \(w\geq u_{2}+t_{3},\ t_{1}+t_{2}>n,\ u_{1}+t_{2}<n.\) Then:_
1. _The variety_ \(X\) _is irreducible of dimension_ \[\sum_{i=1}^{3}(u_{i}(t_{i}-u_{i})+t_{i}(n-t_{i}))+(w-u_{2}-t_{3})m.\]
2. _The variety_ \(Y\) _is irreducible of dimension_ \[\sum_{i=1}^{3}(u_{i}(t_{i}-u_{i})+t_{i}(n-t_{i}))+(w-u_{2}-t_{3})m-2mw-m^{2}.\]
3. _The_ \(\mathbb{PGL}(n)\) _action on_ \(X\) _has a dense orbit if and only if the_ \(\mathbb{PGL}(w)\) _action on_ \(Y\) _has a dense orbit._
Proof.: We first observe that the varieties \(X\) and \(Y\) parameterizing the specified configurations are irreducible varieties. If we omit \(T_{2}\), then we obtain the tree variety associated to the tree
The choice of \(T_{2}\) containing \(U_{2}\) realizes \(X\) as a Grassmannian bundle \(G(t_{2}-u_{2},V/U_{2})\) over this tree variety. By Theorem 2.1, \(X\) is irreducible and its dimension is as claimed. A similar argument shows that \(Y\) is irreducible of the claimed dimension. Omitting \(T_{3}^{\prime}\) gives rise to the tree variety associated to the tree
The choice of \(T_{3}^{\prime}\) containing \(U_{3}^{\prime}\) realizes \(Y\) as a Grassmannian bundle \(G(t_{3}-u_{3},W/U_{2}^{\prime})\) over this tree variety. By Theorem 2.1, \(Y\) is irreducible of the claimed dimension.
There is a rational map \(X\dashrightarrow Y\) given by setting
\[U_{i}^{\prime}=U_{i}\cap W,\quad T_{i}^{\prime}=T_{i}\cap W,\quad W^{\prime}= \overline{U_{1}T_{2}}\cap W.\]
A general configuration in \(W\) occurs as the intersection of a configuration in \(V\) with \(W\). Hence, if the configuration in \(V\) has a dense orbit, then the configuration in \(W\) has a dense orbit. We need to prove the converse. Given a general configuration, we obtain a homomorphism
\[f:\operatorname{Stab}(U_{i},T_{i},W;V)\to\operatorname{Stab}(U^{\prime}_{i},T^ {\prime}_{i},W^{\prime};W).\]
First observe that the kernel of \(f\) is trivial. To see this, we may choose a basis \(e_{i}\), \(1\leq i\leq n\), for \(V\) so that \(W\) is spanned by \(e_{i}\) with \(1\leq i\leq w\) and \(T_{2}\) is spanned by \(e_{i}\) with \(n-t_{2}+1\leq i\leq n\). Finally, we may choose \(T_{1}\) to be spanned by \(e_{i}+e_{w+i}\) for \(1\leq i\leq n\)
Hence,
\[\dim(\operatorname{Stab}(U_{i},T_{i},W;V))\leq\dim(\operatorname{Stab}(U^{ \prime}_{i},T^{\prime}_{i},W^{\prime};W))\]
Since \(Y\) has a dense orbit, the dimension of \(\operatorname{Stab}(U^{\prime}_{i},T^{\prime}_{i},W^{\prime};W)\) is
\[w^{2}-1+2mw+m^{2}-\sum_{i=1}^{3}(u_{i}(t_{i}-u_{i})+t_{i}(n-t_{i})).\]
Since \(n=w+m\), we have
\[n^{2}-1-\sum_{i=1}^{3}u_{i}(t_{i}-u_{i})-\sum_{i=1}^{3}t_{i}(n-t_{i})=w^{2}-1- \sum_{i=1}^{3}u_{i}(t_{i}-u_{i})-\sum_{i=1}^{3}t_{i}(n-t_{i})+2mw+m^{2}.\]
Since the latter is the dimension of the stabilizer of a general point in \(Y\) and bounds the dimension of the stabilizer of a general point in \(X\), we conclude that if \(Y\) has a dense orbit, then so does \(X\).
**Theorem 5.10**.: _Let \(F(k_{1},k_{2};n)\) be a two-step partial flag variety. Then \(F(k_{1},k_{2};n)^{3}\) is sparse if and only if \(k_{1}+k_{2}=n\)._
Proof.: By Proposition 5.1, we know that if \(k_{1}+k_{2}=n\), then \(F(k_{1},k_{2};n)^{3}\) is sparse. We need to show that if \(k_{1}+k_{2}\neq n\), then \(F(k_{1},k_{2};n)^{3}\) is dense.
By replacing \(F(k_{1},k_{2};n)\) with \(F(n-k_{2},n-k_{1};n)\) if necessary, we may assume that \(k_{1}+k_{2}<n\). If \(3k_{2}\leq n\), then \(F(k_{1},k_{2};n)^{3}\) is dense by Lemma 5.5.
If \(2k_{2}\leq n<3k_{2}\), then let \(m=n-2k_{2}\). If \(m\geq k_{1}\), then we apply Lemma 5.6 three times. First, the density of \(F(k_{1},k_{2};n)^{3}\) is equivalent to the density of \(F(k_{1},k_{2};2k_{2})^{2}\times G(k_{2}-m;2k_{2})\). The density of the latter is in turn equivalent to the density of \(F(k_{1},k_{2};2k_{2}-m)\times G(k_{2}-m;2k_{2}-m)^{2}\), which is equivalent to the density of \(G(k_{2}-m;2k_{2}-2m)^{3}\). Since the latter is dense by [1, Theorem 5.1], we conclude that \(F(k_{1},k_{2};n)^{3}\) is dense. If \(m<k_{1}\), then by applying Lemma 5.6 three times, the density of \(F(k_{1},k_{2};n)^{3}\) is equivalent to the density of \(F(k_{1}-m,k_{2}-m;2k_{2}-2m)^{3}\). By Lemma 5.7, the density of the latter is equivalent to the density of \(G(k_{1}-m;k_{2}-m)^{3}\). Since the latter is dense, we conclude that \(F(k_{1},k_{2};n)^{3}\) is dense in this case.
Finally, we may assume that \(k_{2}<n<2k_{2}\). If \(2k_{1}+k_{2}<n\), then let \(m=n-2k_{1}-k_{2}\). By applying Lemma 5.6 three times, the density of \(F(k_{1},k_{2};n)^{3}\) is equivalent to the density of \(F(k_{1},k_{2}-2m;n-3m)^{3}\). Hence, we reduce to the case \(2k_{1}+k_{2}\geq n\).
If \(2k_{1}+k_{2}\geq n\), let \(U_{i}\subset T_{i}\) for \(1\leq i\leq 3\) be three general partial flags of type \(k_{1},k_{2}\). Let \(m=n-k_{1}-k_{2}\). Let \(W_{i,j}\) denote the span of \(U_{i}\) and \(T_{j}\). We will apply Lemma 5.9 three times to reduce the density of \(F(k_{1},k_{2};n)^{3}\) to that of \(F(k_{1}-m,k_{2}-2m;n-3m)^{3}\). First apply Lemma 5.9, setting \(W=W_{2,3}\). Denote the intersection of a linear space with \(W\) with a prime. Then \(F(k_{1},k_{2};n)^{3}\) is dense if and only if the configuration \((U^{\prime}_{i},T^{\prime}_{i},W^{\prime}_{1,2})\) is dense in \(W\). Now apply Lemma 5.9 setting \(W=W^{\prime}_{1,2}\). Denote the intersections of the vector spaces with \(W^{\prime}_{1,2}\) with double
primes. Set \(W^{\prime\prime}_{3,1}:=U^{\prime}_{3}T^{\prime}_{1}\cap W^{\prime}_{1,2}\). Then the configuration \((U^{\prime}_{i},T^{\prime}_{i},W^{\prime}_{1,2})\) in \(W\) is dense if and only if the configuration \((U^{\prime\prime}_{i},T^{\prime\prime}_{i},W^{\prime\prime}_{3,1})\) is dense in \(W^{\prime}_{1,2}\). Finally, we apply Lemma 5.9 by setting \(W=W^{\prime\prime}_{3,1}\). Denote the intersections of the vector spaces with \(W^{\prime\prime}_{3,1}\) by triple primes. Notice that \(U_{2}\) and \(T^{\prime\prime}_{3}\) span \(W^{\prime\prime}_{3,1}\). We conclude that the configuration \((U^{\prime\prime}_{i},T^{\prime\prime}_{i},W^{\prime\prime}_{3,1})\) is dense in \(W^{\prime}_{1,2}\) if and only if the configuration \((U^{\prime\prime}_{i},T^{\prime\prime}_{i})\) is dense in \(W^{\prime\prime}_{3,1}\). We have thus reduced the density of \(F(k_{1},k_{2};n)^{3}\) to that of \(F(k_{1}-m,k_{2}-2m;n-3m)^{3}\). Notice that \(k_{1}-m+k_{2}-2m<n-3m\) by assumption. If \(2k_{2}-4m\leq n-3m\), we are done by the previous cases. Otherwise, we can continue reducing the size of \(k_{1}\) and \(k_{2}\) by \(m\) and \(2m\), respectively. Since this cannot continue indefinitely, we conclude that \(F(k_{1},k_{2};n)^{3}\) is dense. This concludes the proof of the theorem.
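The dichotomy of the theorem can be probed numerically: the Lie algebra of the stabilizer of a configuration of subspaces is cut out by linear conditions, and its dimension can be compared with the expected value \(n^{2}-3\dim F(k_{1},k_{2};n)\). The following Python sketch (our own helper, using random real flags as a stand-in for general ones) illustrates the two cases in \(n=4\):

```python
import numpy as np

rng = np.random.default_rng(1)

def stabilizer_dim(flag_dims_list, n):
    """Dimension of {X in gl(n) : X(U) is contained in U for every subspace U
    of every flag}, for randomly chosen flags with the given dimension vectors."""
    rows = []
    for dims in flag_dims_list:
        M = rng.standard_normal((n, n))            # a generic full flag
        for k in dims:
            B = M[:, :k]                           # basis of the k-dimensional step
            Q = np.linalg.qr(B, mode='complete')[0]
            C = Q[:, k:]                           # basis of a complement
            for c in C.T:
                for b in B.T:
                    rows.append(np.outer(c, b).ravel())   # the condition c^T X b = 0
    return n * n - np.linalg.matrix_rank(np.array(rows))

print(stabilizer_dim([[1, 2]] * 3, 4))   # k1+k2 = 3 != 4: 1 (scalars only, dense case)
print(stabilizer_dim([[1, 3]] * 3, 4))   # k1+k2 = 4 = n: 2 (> 1); the cross-ratio obstructs density
```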
We conclude with a few remarks about the action of \(\mathbb{P}\mathrm{GL}(n)\) on products of Grassmannians. Classifying the products of at least 5 Grassmannians with dense orbit is a hard problem. However, one can say a little more about certain families of such products.
**Lemma 5.11**.: _The action of \(\mathbb{P}\mathrm{GL}(n)\) on \(\prod_{i=1}^{m}G(k_{i},n)\) has a dense orbit if_
\[\sum_{i=1}^{m-1}k_{i}\leq n\quad\text{and}\quad k_{m}\leq n-\sum_{i=1}^{m-1}k_ {i}+\min_{1\leq i\leq m-1}k_{i}.\]
Proof.: For simplicity set \(s=n-\sum_{i=1}^{m-1}k_{i}\). Fix a basis \(e_{j}\), \(1\leq j\leq n\), of \(V\). For \(1\leq i\leq m-1\), let \(W_{i}\) be the vector space spanned by \(e_{j}\) with \(1+\sum_{l=1}^{i-1}k_{l}\leq j\leq\sum_{l=1}^{i}k_{l}\). Let \(W_{m}\) be the vector space spanned by \(e_{j}\) for \(1+\sum_{l=1}^{m-1}k_{l}\leq j\leq n\) and \(e_{j}+e_{j+k_{1}}+e_{j+k_{1}+k_{2}}+\cdots+e_{j+\sum_{i=1}^{m-1}k_{i}}\) for \(1\leq j\leq k_{m}-s\). Then the stabilizer of these vector spaces has the form
\[\begin{bmatrix}A&B_{1}&0&0&\cdots&D\\ 0&C_{1}&0&0&\cdots&0\\ 0&0&A&B_{2}&\cdots&D\\ 0&0&0&C_{2}&\cdots&0\\ &\cdots&&&\\ 0&0&0&0&\cdots&E\end{bmatrix}\]
The stabilizer of this configuration has dimension
\[(k_{m}-s)^{2}+\sum_{i=1}^{m-1}k_{i}(k_{i}-k_{m}+s)+s^{2}+s(k_{m}-s)-1.\]
Since \(n=\sum_{i=1}^{m-1}k_{i}+s\), we conclude that this quantity is equal to
\[\sum_{i=1}^{m}k_{i}^{2}+n(s-k_{m})-1.\]
Observe that this quantity is also equal to
\[n^{2}-1-\sum_{i=1}^{m}k_{i}(n-k_{i})=n^{2}-1+\sum_{i=1}^{m}k_{i}^{2}-n(n-s+k_{ m})=\sum_{i=1}^{m}k_{i}^{2}+n(s-k_{m})-1.\]
Since \(\dim(\mathrm{Stab}(W_{i};V))=\dim(\mathbb{P}\mathrm{GL}(n))-\dim(\prod_{i=1}^{m}G(k_{i},n))\), we conclude that \(\mathbb{P}\mathrm{GL}(n)\) has a dense orbit on \(\prod_{i=1}^{m}G(k_{i},n)\).
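The arithmetic identification of the two expressions for the stabilizer dimension can be verified symbolically; for instance, for \(m=4\) (three free dimensions \(k_{1},k_{2},k_{3}\) together with \(k_{m}\) and \(s=n-k_{1}-k_{2}-k_{3}\)), a short sympy check:

```python
from sympy import symbols, expand

k1, k2, k3, km, s = symbols('k1 k2 k3 km s')
ks = [k1, k2, k3]
n = sum(ks) + s

lhs = (km - s)**2 + sum(k * (k - km + s) for k in ks) + s**2 + s * (km - s) - 1
rhs = sum(k**2 for k in ks) + km**2 + n * (s - km) - 1
print(expand(lhs - rhs))   # 0
```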
_Example 5.12_.: The assumption \(k_{m}\leq n-\sum_{i=1}^{m-1}k_{i}+\min_{1\leq i\leq m-1}k_{i}\) cannot be weakened in general. For example, set \(n=6\) and \(k_{1}=k_{2}=1,k_{3}=k_{4}=2\) and \(k_{5}=3\). Then \(\sum_{i=1}^{4}k_{i}=6=n\). However, \(3>n-\sum_{i=1}^{m-1}k_{i}+\min_{1\leq i\leq m-1}k_{i}=1\). For \(1\leq i\leq 5\), let \(W_{i}\) be a general linear space of dimension \(k_{i}\). Let \(I\) be any three element subset of \(\{1,2,3,4\}\) and let \(j\) be the element in the complement of \(I\). Then the span of \(W_{i}\) for \(i\in I\) intersects \(W_{5}\) in a subspace of dimension \(3-k_{j}\). In this way, we get 4 general subspaces of dimensions \(1,1,2,2\) in \(W_{5}\). Since \(\mathbb{PGL}(3)\) does not have a dense orbit on this configuration, the original configuration does not have a dense orbit.
**Theorem 5.13**.: _Let \(\underline{\textbf{d}}=(d_{1},d_{2},d_{3},d_{4},n-d_{5};n)\) be a dimension vector such that \(n\geq d_{1}+d_{2}+d_{3}+d_{4}\) and \(d_{5}\geq d_{4}\geq d_{3}\geq d_{2}\geq d_{1}\). Then \(\underline{\textbf{d}}\) is dense if and only if \(d_{1}+d_{2}+d_{3}+d_{4}\neq 2d_{5}\)_
Proof.: Let \(V\) be an \(n\) dimensional vector space. By Proposition 2.6 we can consider the vector spaces \(V_{d_{1}},V_{d_{2}},V_{d_{3}},V_{d_{4}},W\) with the corresponding dimensions \(n-d_{1},n-d_{2},n-d_{3},n-d_{4},d_{5}\) respectively. Now consider the following group homomorphism that is constructed by the restriction map:
\[f:\operatorname{Stab}(V_{d_{1}},V_{d_{2}},V_{d_{3}},V_{d_{4}},W;V)\to \operatorname{Stab}(V_{d_{1}}\cap W,V_{d_{2}}\cap W,V_{d_{3}}\cap W,V_{d_{4}} \cap W;W)\]
Hence we have
\[\dim\operatorname{Stab}(V_{d_{1}},V_{d_{2}},V_{d_{3}},V_{d_{4}},W;V)=\dim \operatorname{Stab}(V_{d_{1}}\cap W,V_{d_{2}}\cap W,V_{d_{3}}\cap W,V_{d_{4}} \cap W;W)+\dim\ker f\]
Now we show that the kernel of \(f\) is trivial. For this purpose we consider an element in the preimage of the identity in \(\operatorname{Stab}(V_{d_{1}}\cap W,V_{d_{2}}\cap W,V_{d_{3}}\cap W,V_{d_{4}}\cap W;W)\). Choose coordinates such that \(V_{d_{1}}\) is given by \(x_{1}=x_{2}=\cdots=x_{d_{1}}=0\), \(V_{d_{2}}\) is given by \(x_{d_{1}+1}=\cdots=x_{d_{1}+d_{2}}=0\), \(V_{d_{3}}\) is given by \(x_{d_{1}+d_{2}+1}=\cdots=x_{d_{1}+d_{2}+d_{3}}=0\) and \(V_{d_{4}}\) is given by \(x_{d_{1}+d_{2}+d_{3}+1}=\cdots=x_{d_{1}+d_{2}+d_{3}+d_{4}}=0\). Then \(\operatorname{Stab}(V_{d_{1}},V_{d_{2}},V_{d_{3}},V_{d_{4}};V)\) is given by the \(n\times n\) block matrix:
\[\begin{bmatrix}A&0&0&0\\ 0&B&0&0\\ 0&0&C&0\\ 0&0&0&D\end{bmatrix}\]
where \(A,B,C,D\) are \(d_{1}\times d_{1},d_{2}\times d_{2},d_{3}\times d_{3},d_{4}\times d_{4}\) matrices, respectively. Finally, we show that if this matrix acts as the identity on \(W\), then it is in fact the identity. Now let \(W\) be spanned by vectors chosen so that it has a basis \(\{e_{1}+e_{d_{1}+1}+e_{d_{1}+d_{2}+1}+e_{d_{1}+d_{2}+d_{3}+1},e_{d_{1}+d_{2}+d_{3}+d_{4}+1},e_{d_{1}+2}+\cdots+e_{d_{1}+d_{2}+d_{3}+d_{4}+2},\ldots\}\), where for each basis element one entry from each of the blocks \(A,B,C,D\) is taken and summed over all blocks, and if one such entry does not exist, we omit it. As we have \(d_{5}\geq d_{4}\), all the blocks of the matrix are involved, and hence if such a matrix acts as the identity on \(W\), it has to be the identity matrix. Hence the configuration \((V_{d_{1}},V_{d_{2}},V_{d_{3}},V_{d_{4}},W)\) in \(V\) has a dense orbit if and only if the configuration \((V_{d_{1}}\cap W,V_{d_{2}}\cap W,V_{d_{3}}\cap W,V_{d_{4}}\cap W)\) in \(W\) has a dense orbit.
Now, from [1, Theorem 5.1], the configuration \((V_{d_{1}}\cap W,V_{d_{2}}\cap W,V_{d_{3}}\cap W,V_{d_{4}}\cap W)\) in \(W\) does not have a dense orbit if and only if \(d_{1}+d_{2}+d_{3}+d_{4}=2d_{5}\), and the theorem follows.
|
2304.10264 | The effects of the widths on the one-loop electroweak corrections to the
$pp \to WW$ process | In this paper, we study the effects of the widths of unstable particles on
the one-loop electroweak corrections for the $pp \to WW$ process at the TeV
scale within the framework of the complex mass scheme. We also investigate, for
this same process, the unitarity of the theory at high energies. | N. Bekheddouma Abdi, R. Bouamrane, K. Khelifa-Kerfa | 2023-04-20T12:41:13Z | http://arxiv.org/abs/2304.10264v3 | # The effects of the widths on the one-loop electroweak corrections to the \(pp\to WW\) process
###### Abstract
In this paper, we study the effects of the widths of unstable particles on the one-loop electroweak corrections for the \(pp\to WW\) process at the TeV scale within the framework of the complex mass scheme. We also investigate, for this same process, the unitarity of the theory at high energies.
Keywords:CMS, Electroweak, Renormalisation, Cross-section, One-loop Corrections, Unitarity, High-energy
## 1 Introduction
After the discovery of the Higgs boson in 2012 at CERN, the particle responsible, via the Brout-Englert-Higgs mechanism, for generating the masses of the particles of the Standard Model (SM), we entered the era of Higgs-precision tests of the SM. Since most known fundamental particles are unstable, it becomes important to consider the width, \(\Gamma\), of these particles when evaluating physical observables [1; 2; 3]. The electroweak sector being the most sensitive to unstable particles, such as the weak gauge bosons, the Higgs boson and the top quark, we study, within the framework of the SM, the effects of the widths of the latter on the \(pp\to WW\) process.
The inclusion of a finite width is, however, not trivial, since it can induce, through a mixing of perturbative orders, a breaking of gauge invariance in the calculation of one-loop radiative corrections [4; 5]. Many approaches have been proposed in the past to incorporate the said width into perturbative computations, such as the fixed-width scheme [5], the narrow-width approximation [6], the pole scheme [7; 8] and the effective field theory [9; 10]. But the complex mass scheme [11; 12; 13; 14] is the only one which, by construction, preserves all of the algebraic relations underlying gauge invariance in both the resonant and non-resonant regions. Its main idea is an analytic continuation, into the complex plane, of those parameters of the SM Lagrangian that are related to the masses of the unstable particles. Therefore, renormalizability and unitarity are also preserved. This scheme has been implemented in various high-energy software such as MadGraph5 aMC@NLO [14; 15], OpenLoops 2 [16] and Recola [17]. The aim of this paper is to study the effect of the widths of the weak gauge bosons, the Higgs boson and the top quark on the process \(pp\to WW\) at the one-loop level within the complex mass scheme.
The paper is organised as follows. In section 2 we summarise the main ideas of the CMS. Then, in section 3, we compute the cross section of the process considered herein, i.e., \(pp\to WW\), up to one loop, within both the usual on-shell scheme (which we shall be referring to as the "real scheme") and the complex mass scheme. We analyse the results obtained in the said two schemes and discuss the effects of the width on unitarity. Finally, we draw our conclusions in section 4.
## 2 Complex mass scheme in a nutshell
The complex mass scheme is a renormalisable scheme which deals properly with unstable particles in the whole phase space. It was first used in tree-level calculations with W/Z resonances, then generalised to \(\mathcal{O}(\alpha)\) [11; 12]. In this scheme, at the tree level, the real masses of the weak gauge bosons \(W/Z\) and the Higgs boson are replaced by complex quantities, defined as the positions of the poles of the corresponding propagators in the complex \(k^{2}\) plane. To preserve gauge invariance, the complex masses have to be introduced everywhere in the Feynman rules, in such a way that the bare Lagrangian remains invariant, in particular in the weak mixing angle;
\[\hat{\mu}_{{}_{W}}^{2}=M_{{}_{W}}^{2}-\imath M_{{}_{W}}\Gamma_{{}_{W}},\quad \hat{\mu}_{{}_{Z}}^{2}=M_{{}_{Z}}^{2}-\imath M_{{}_{Z}}\Gamma_{{}_{Z}},\quad \hat{\mu}_{{}_{H}}^{2}=M_{{}_{H}}^{2}-\imath M_{{}_{H}}\Gamma_{{}_{H}}, \tag{1}\]
and
\[\cos^{2}\theta_{{}_{W}}\equiv\hat{c}_{{}_{W}}^{2}=1-\hat{s}_{{}_{W}}^{2}= \frac{\hat{\mu}_{{}_{W}}^{2}}{\hat{\mu}_{{}_{Z}}^{2}}\simeq\frac{M_{W}^{2}}{M _{Z}^{2}}\left[1-\imath\left(\frac{\Gamma_{W}}{M_{W}}-\frac{\Gamma_{Z}}{M_{Z} }\right)\right]. \tag{2}\]
The hat over the masses and the mixing angles denotes the fact that they are complex-valued. Since the gauge invariance is not affected by this analytic continuation of the mass into the complex \(k^{2}\) plane, \(m\longrightarrow\hat{\mu}=m-\imath\Gamma/2\longrightarrow\hat{\mu}(k)=m(k)-\imath\Gamma(k)/2\), the Ward and Slavnov-Taylor identities are preserved. This signifies that the elements of the S-matrix are independent of the gauge parameters.
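As a numerical illustration of Eqs. (1) and (2) (using approximate values of the masses and widths, quoted here only for the sake of the example), the complex weak mixing angle can be evaluated directly:

```python
# Approximate on-shell masses and widths in GeV (illustrative values)
MW, GW = 80.379, 2.085
MZ, GZ = 91.1876, 2.4952

mu2_W = MW**2 - 1j * MW * GW        # complex squared mass of the W, Eq. (1)
mu2_Z = MZ**2 - 1j * MZ * GZ        # complex squared mass of the Z, Eq. (1)

cw2_exact  = mu2_W / mu2_Z                                      # Eq. (2), exact ratio
cw2_approx = (MW**2 / MZ**2) * (1 - 1j * (GW / MW - GZ / MZ))   # Eq. (2), first order in Gamma/M

print(cw2_exact)     # ~ 0.7770 + 0.0011j
print(cw2_approx)    # agrees with the exact ratio up to O(Gamma^2/M^2)
```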
Although the introduction of the complex mass in the resonant propagators is trivial,
\[\frac{1}{k^{2}-M_{B}^{2}} \longrightarrow\frac{1}{k^{2}-\hat{\mu}_{B}^{2}}\simeq\frac{1}{k^{2}- M_{B}^{2}\left(1-\imath\frac{\Gamma_{B}}{M_{B}}\right)}, \tag{3}\] \[\frac{1}{\not{p}-m_{F}} \longrightarrow\frac{1}{\not{p}-\hat{\mu}_{F}}=\frac{1}{\not{p}-m_ {F}\left(1-\imath\frac{\Gamma_{F}}{2m_{F}}\right)}, \tag{4}\]
where the subscripts \(B\) and \(F\) stand for bosons and fermions, respectively, it induces in other places (such as the weak mixing angle), when passing to the complex mass scheme, spurious terms of order \({\cal O}(\alpha)\) in the tree-level amplitude, which only become relevant in loop-level calculations.
To generalise the CMS to the one-loop level while keeping the bare Lagrangian invariant, we split the real masses of the unstable particles in the latter into renormalised complex masses and complex counter-terms. The resultant Feynman rules enable one to perform perturbative calculations exactly as in the usual on-shell renormalisation scheme. To this end, we add and subtract the same imaginary part for each mass of the unstable particles. One of these imaginary parts is incorporated into the free propagator to define the complex mass of the corresponding unstable particle. The other part is introduced into the vertex counter-term. The first term is thus resummed but the second one is not. This prescription does not affect the gauge invariance but may invoke a violation of unitarity of order \({\cal O}(\alpha^{2})\) in calculations at order \({\cal O}(\alpha)\). This is due to the fact that the modified renormalised Lagrangian is not hermitian [18]. Apart from this problem, the complex mass scheme is coherent and gauge invariant in next-to-leading order (NLO) calculations. Its implementation in a numerical code at one loop is feasible, since it suffices to redefine the counter-terms by including imaginary parts for the two-point functions. The complex masses are introduced not only for the gauge bosons but for all unstable particles relevant to the electroweak sector, such as the top quark.
### Complex renormalisation
In this section we summarise the procedure of the generalised renormalisation which takes into account the complex masses in the 't Hooft-Feynman gauge in a straightforward way [11].
Since the bare Lagrangian is unaffected, the complex masses of the gauge bosons \(W\) and \(Z\) are introduced in it by decomposing the bare real masses into renormalised complex masses and complex counter-terms:
\[M_{W,0}^{2}=\hat{\mu}_{W}^{2}+\delta\hat{\mu}_{W}^{2},\qquad\quad M_{Z,0}^{2}= \hat{\mu}_{Z}^{2}+\delta\hat{\mu}_{Z}^{2}, \tag{5}\]
where we note that the following consistency condition should be respected:
\[\text{Im}(\hat{\mu}_{V}^{2})=-\,\text{Im}(\delta\hat{\mu}_{V}^{2}). \tag{6}\]
In the above equation the subscript \(0\) labels the bare quantities, and \(V\) stands for \(W\) or \(Z\) bosons. In a similar fashion, we observe that the renormalised gauge fields are related to the bare ones via the following relations:
\[W_{0}^{\pm}=\left(1+\frac{1}{2}\delta\mathcal{Z}_{W}\right)W^{\pm}, \tag{7}\]
\[\left(\begin{array}{c}Z_{0}\\ A_{0}\end{array}\right)=\left(\begin{array}{cc}1+\frac{1}{2}\delta \mathcal{Z}_{ZZ}&\frac{1}{2}\delta\mathcal{Z}_{ZA}\\ \frac{1}{2}\delta\mathcal{Z}_{AZ}&1+\frac{1}{2}\delta\mathcal{Z}_{AA}\end{array} \right)\left(\begin{array}{c}Z\\ A\end{array}\right). \tag{8}\]
Since the renormalisation conditions are the same for stable and unstable particles, that is, the position of the poles of the propagator equals the square of the physical mass and the residue of the propagator equals \(1\), they assume similar forms in CMS as those in the usual scheme but without taking the real part of the renormalised transverse self energy (T):
\[\bar{\Sigma}_{T}^{W}(\hat{\mu}_{W}^{2})=0,\qquad\bar{\Sigma}_{T}^ {ZZ}(\hat{\mu}_{Z}^{2})=0,\] \[\bar{\Sigma}_{T}^{AZ}(0)=0,\qquad\quad\bar{\Sigma}_{T}^{AZ}(\hat {\mu}_{Z}^{2})=0,\] \[\bar{\Sigma}_{T}^{\prime W}(\hat{\mu}_{W}^{2})=0,\qquad\quad\bar{ \Sigma}_{T}^{\prime ZZ}(\hat{\mu}_{Z}^{2})=0,\qquad\bar{\Sigma}_{T}^{\prime AA }(0)=0, \tag{9}\]
where the prime means differentiation with respect to the argument and the bar on the \(\Sigma\)'s indicates that they are renormalised. The first two terms in Eq. (9) fix the counter-terms of the masses of the \(W\) and \(Z\) bosons, while the last five terms fix the counter-terms of their fields. The generalised renormalised transverse self energies are the same as in the usual on-shell scheme, with the real masses and counter-terms replaced by their complex counterparts. The solutions of the said conditions (9) are as follows:
\[\delta\hat{\mu}_{W}^{2}=\Sigma_{T}^{W}(\hat{\mu}_{W}^{2}),\qquad\delta\hat{\mu}_{Z}^{2}=\Sigma_{T}^{ZZ}(\hat{\mu}_{Z}^{2}),\] \[\delta\mathcal{Z}_{ZA}=\frac{2}{\hat{\mu}_{Z}^{2}}\Sigma_{T}^{AZ}(0),\quad\delta\mathcal{Z}_{AZ}=-\frac{2}{\hat{\mu}_{Z}^{2}}\Sigma_{T}^{AZ}(\hat{\mu}_{Z}^{2}),\] \[\delta\mathcal{Z}_{W}=-\Sigma_{T}^{{}^{\prime}W}(\hat{\mu}_{W}^{2}),\quad\delta\mathcal{Z}_{ZZ}=-\Sigma_{T}^{{}^{\prime}ZZ}(\hat{\mu}_{Z}^{2}),\quad\delta\mathcal{Z}_{AA}=-\Sigma_{T}^{{}^{\prime}AA}(0). \tag{10}\]
It requires analytical continuation to compute the above two-point functions with complex arguments. To avoid this complication we expand the self energies around real arguments. To see how to transform the renormalised self energies and the solutions of the renormalisation conditions, we concentrate on the case of the W gauge boson. We have
\[\bar{\Sigma}_{T}^{W}(k^{2})=\Sigma_{T}^{W}(k^{2})-\delta\hat{\mu} _{W}^{2}+(k^{2}-\hat{\mu}_{W}^{2})\delta\mathcal{Z}_{W}, \tag{11}\]
with
\[\Sigma_{T}^{W}(\hat{\mu}_{W}^{2})=\Sigma_{T}^{W}(M_{W}^{2})+\left( \hat{\mu}_{W}^{2}-M_{W}^{2}\right)\Sigma_{T}^{{}^{\prime}W}(M_{W}^{2})+ \mathcal{O}(\alpha^{3}). \tag{12}\]
Neglecting the terms at \(\mathcal{O}(\alpha^{3})\) and beyond we obtain the modified solutions, which, when inserted into (11), result in a form of the renormalised transverse self energy that resembles that of the usual on-shell scheme but without taking the real part of the solutions:

\[\bar{\Sigma}_{T}^{W}(k^{2})=\Sigma_{T}^{W}(k^{2})-\delta M_{W}^{2}+(k^{2}-M_{W}^{2})\delta\mathcal{Z}_{W},\] \[\delta M_{W}^{2}=\Sigma_{T}^{W}(M_{W}^{2}),\qquad\delta\mathcal{Z}_{W}=-\Sigma_{T}^{\prime W}(M_{W}^{2}). \tag{13}\]
While in the on-shell scheme the self-energies are calculated with real renormalised masses, in the CMS self-energies, Eq. (13), ought to be calculated with complex masses, but with real squared momenta. This enables us to avoid analytic continuation in the momentum space.
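A toy numerical sketch of the expansion (12) is given below; the self-energy used there is an arbitrary smooth stand-in (an assumption made purely for illustration), not the actual \(W\) self-energy.

```python
# Toy check of the expansion (12): a smooth self-energy evaluated at the
# complex pole is well approximated by a first-order Taylor expansion around
# the real mass squared.  Sigma below is an arbitrary analytic stand-in,
# *not* the actual W self-energy.
import numpy as np

MW, GW = 80.401, 2.092698
mu2 = MW**2 - 1j * MW * GW            # complex pole position

def sigma(k2):
    # arbitrary smooth stand-in for Sigma_T^W(k^2)
    return 0.01 * k2 * np.log(k2 / (100.0 + 0j)) - 0.5 * k2

eps = 1e-3
dsigma = (sigma(MW**2 + eps) - sigma(MW**2 - eps)) / (2 * eps)   # Sigma'(M_W^2)

exact  = sigma(mu2)
taylor = sigma(MW**2) + (mu2 - MW**2) * dsigma
print("exact :", exact)
print("taylor:", taylor)
print("relative difference:", abs(exact - taylor) / abs(exact))
```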
In order to correctly address resonances at order \(\mathcal{O}(\alpha)\) one ought to take into account the \(W\) boson width, \(\Gamma_{W}\), including \(\mathcal{O}(\alpha)\) corrections. This may be obtained in an iterative way from the following equation:
\[M_{W}\Gamma_{W}=\mathrm{Im}\left\{\Sigma_{T}^{W}(M_{W}^{2})\right\}-M_{W} \Gamma_{W}\,\mathrm{Re}\left\{\Sigma_{T}^{\prime W}(M_{W}^{2})\right\}+ \mathcal{O}(\alpha^{3}). \tag{14}\]
This latter equation can be easily deduced from the imaginary part of (12). Furthermore, the complex weak mixing angle is renormalised as follows:
\[\frac{\delta\hat{c}_{{}_{W}}}{\hat{c}_{{}_{W}}}=\frac{1}{2}\left( \frac{\delta\hat{\mu}_{W}^{2}}{\hat{\mu}_{W}^{2}}-\frac{\delta\hat{\mu}_{Z}^{2 }}{\hat{\mu}_{Z}^{2}}\right)=\frac{1}{2}\left[\frac{\Sigma_{T}^{W}(\hat{\mu}_ {W}^{2})}{\hat{\mu}_{W}^{2}}-\frac{\Sigma_{T}^{Z}(\hat{\mu}_{Z}^{2})}{\hat{ \mu}_{Z}^{2}}\right]. \tag{15}\]
For the Higgs boson, the renormalisation constants can be approached in the same manner as before;
\[M_{H,0}^{2}=\hat{\mu}_{H}^{2}+\delta\hat{\mu}_{H}^{2}, \tag{16}\]
with
\[\delta\hat{\mu}_{H}^{2} =\Sigma^{H}(\hat{\mu}_{H}^{2}),\] \[=\Sigma^{H}(M_{H}^{2})+(\hat{\mu}_{H}^{2}-M_{H}^{2})\Sigma^{\prime H }(M_{H}^{2})+\mathcal{O}(\alpha^{3}).\] \[\delta\mathcal{Z}_{H} =-\Sigma^{{}^{\prime}H}(\hat{\mu}_{H}^{2}),\] \[=-\Sigma^{\prime H}(M_{H}^{2})+\mathcal{O}(\alpha^{2}). \tag{17}\]
Hence the renormalised self energy for the Higgs boson up to the \(\mathcal{O}(\alpha^{2})\) can be written as:
\[\bar{\Sigma}^{H}(k^{2})=\Sigma^{H}(k^{2})-\delta M_{H}^{2}+(k^{2}-M_{H}^{2}) \,\delta\mathcal{Z}_{H} \tag{18}\]
where
\[\delta M_{H}^{2}=\Sigma^{H}(M_{H}^{2}),\qquad\delta\mathcal{Z}_{H}=-\Sigma^{ \prime H}(M_{H}^{2}), \tag{19}\]
In the CMS the complex masses are introduced not solely for the gauge bosons and the Higgs boson but for all unstable particles, such as the top quark. The renormalisation of the latter may be treated in a similar manner as before, with the introduction of its complex mass and counter-term:
\[\hat{\mu}_{t}^{2} =m_{t}^{2}-\imath\,m_{t}\,\Gamma_{t},\] \[m_{t,0} =\hat{\mu}_{t}+\delta\hat{\mu}_{t}. \tag{20}\]
The generalised renormalisation constants are determined by:
\[\delta\hat{\mu}_{t} =\frac{\hat{\mu}_{t}}{2}\left[\Sigma^{t,R}(\hat{\mu}_{t}^{2})+ \Sigma^{t,L}(\hat{\mu}_{t}^{2})+2\Sigma^{t,s}(\hat{\mu}_{t}^{2})\right],\] \[\delta\mathcal{Z}_{t,\sigma} =-\Sigma^{t,\sigma}(\hat{\mu}_{t}^{2})-\hat{\mu}_{t}^{2}\left[ \Sigma^{{}^{\prime}t,R}(\hat{\mu}_{t}^{2})+\Sigma^{{}^{\prime}t,L}(\hat{\mu}_ {t}^{2})+2\Sigma^{{}^{\prime}t,s}(\hat{\mu}_{t}^{2})\right], \tag{21}\]
where \(\sigma=R,L\) indicates the left- and right-handed components of the top self-energy, \(\Sigma^{t}(p)\), following the convention of Ref. [19]. The generalised renormalised self-energy of the top quark is determined by:
\[\bar{\Sigma}^{t}(p)=\left(\Sigma^{t,R}(p^{2})+\delta\mathcal{Z}_{ t,R}\right)/\!\!\!p\,P_{R}+\left(\Sigma^{t,L}(p^{2})+\delta\mathcal{Z}_{t,L} \right)/\!\!\!p\,P_{L}+\\ +\hat{\mu}_{t}\left[\Sigma^{t,s}-\frac{1}{2}\left(\delta\mathcal{ Z}_{t,R}+\delta\mathcal{Z}_{t,L}\right)-\frac{\delta\hat{\mu}_{t}}{\hat{\mu}_{t}} \right], \tag{22}\]
where the factors \(P_{R,L}\) are defined below. This becomes, after the expansion of the self-energies of (21) around the real mass \(m_{t}^{2}\) and negligence of higher order terms:
\[\bar{\Sigma}^{t}(p)=\left(\Sigma^{t,R}(p^{2})+\delta\mathcal{Z}_ {t,R}\right)/\!\!\!p\,P_{R}+\left(\Sigma^{t,L}(p^{2})+\delta\mathcal{Z}_{t,L} \right)/\!\!\!p\,P_{L}+\\ +\hat{\mu}_{t}\left[\Sigma^{t,s}-\frac{1}{2}\left(\delta\mathcal{ Z}_{t,R}+\delta\mathcal{Z}_{t,L}\right)-\frac{\delta m_{t}}{m_{t}}\right], \tag{23}\]
with
\[\delta m_{t} =\frac{m_{t}}{2}\left[\Sigma^{t,R}(m_{t}^{2})+\Sigma^{t,L}(m_{t}^{2} )+2\Sigma^{t,s}(m_{t}^{2})\right],\] \[\delta\mathcal{Z}_{t,\sigma} =-\Sigma^{t,\sigma}(m_{t}^{2})-m_{t}^{2}\left[\Sigma^{{}^{\prime} t,R}(m_{t}^{2})+\Sigma^{{}^{\prime}t,L}(m_{t}^{2})+2\Sigma^{{}^{\prime}t,s}(m_{t}^{2 })\right]. \tag{24}\]
Before ending this section it is important to recall that the masses of the external particles for a given process must be real, i.e., they are considered as stable particles as had been shown by Veltman [20]. Moreover, the same particles should not be taken in the same process as internally unstable (resonances) and externally stable, because they cannot be treated simultaneously by two different schemes (usual on-shell and CMS) [21].
## 3 Width effects on \(pp\to WW\) at one loop
### Cross-section at lowest order
We shall be investigating the process
\[P(p_{1})+P(p_{2})\to W^{+}(k_{1},\lambda_{1})+W^{-}(k_{2},\lambda_{2}), \tag{3.1}\]
where \(p_{i},k_{i}\) are the momenta of the protons and the \(W\) bosons respectively, and \(\lambda_{i}\) are the polarisations (\(\lambda_{i}=0\) for longitudinal polarisations, referred to as \(L\); \(\lambda_{i}=\pm 1\) for transverse polarisations, referred to as \(T\); the unpolarised case is referred to as \(U\)). The tree-level Born Feynman diagrams corresponding to our process are shown in Fig. 1.
Figure 1: Feynman diagrams for the Born process \(q\bar{q}\to W^{+}W^{-}\).
The four-momenta of the protons and bosons are given by:
\[p_{1}=\frac{\sqrt{s}}{2}(1,0,0,\beta_{q}),\qquad k_{1}=\frac{\sqrt{s}}{2}(1,\beta\sin\theta,0,\beta\cos\theta),\] \[p_{2}=\frac{\sqrt{s}}{2}(1,0,0,-\beta_{q}),\qquad k_{2}=\frac{\sqrt{s}}{2}(1,-\beta\sin\theta,0,-\beta\cos\theta), \tag{3.2}\]
where \(\beta=\sqrt{1-4M_{W}^{2}/s}\), \(\beta_{q}=\sqrt{1-4m_{q}^{2}/s}\), \(M_{W}\) is the \(W\) boson's mass, \(m_{q}\) is the quark's mass and \(\theta\) is the scattering angle in the centre-of-mass of the system, with \(p_{1}^{2}=p_{2}^{2}=m_{q}^{2}\) and \(k_{1}^{2}=k_{2}^{2}=M_{W}^{2}\). The _longitudinal_ and _transverse_ polarisation vectors of the final bosons read:
\[\epsilon_{L}^{1}(0)=\frac{\sqrt{s}}{2M_{W}}(\beta,\sin\theta,0,\cos\theta),\qquad\epsilon_{T}^{1}(\pm)=(0,\cos\theta,\mp\imath,-\sin\theta)/\sqrt{2},\] \[\epsilon_{L}^{2}(0)=\frac{\sqrt{s}}{2M_{W}}(\beta,-\sin\theta,0,-\cos\theta),\qquad\epsilon_{T}^{2}(\pm)=(0,-\cos\theta,\mp\imath,\sin\theta)/\sqrt{2}. \tag{3.3}\]
It is worth mentioning that only the longitudinal polarisation vectors depend on the \(W\) mass. In the CMS, the widths of the unstable particles are introduced, at tree level, through the propagators of \(Z,W,H,t\) and the weak mixing angle. In this section we shall only consider the mode \(pp\to W_{L}^{+}W_{L}^{-}\) in order to study analytically the effect of the widths on the tree-level amplitude in the CMS framework, since this mode, unlike the transverse one, is affected by the width \(\Gamma_{W}\). We then compute, using MadGraph5_aMC@NLO, the Born cross-sections for different polarisations of the \(W\) gauge boson.
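A minimal numerical check of the kinematics defined above is sketched below: the polarisation vectors are orthogonal to their momenta, normalised to \(-1\), and \(k_{i}^{2}=M_{W}^{2}\); the centre-of-mass energy and scattering angle are illustrative choices.

```python
# Numerical check of the kinematics above: k_i^2 = M_W^2, k_i . eps = 0,
# eps_L . eps_L = eps_T . eps_T* = -1 (metric signature +,-,-,-).
import numpy as np

MW, s, theta = 80.401, 1000.0**2, 0.7        # illustrative energy (GeV^2) and angle
beta = np.sqrt(1.0 - 4.0 * MW**2 / s)
g = np.diag([1.0, -1.0, -1.0, -1.0])         # Minkowski metric

k1   = 0.5 * np.sqrt(s) * np.array([1.0, beta * np.sin(theta), 0.0, beta * np.cos(theta)])
epsL = 0.5 * np.sqrt(s) / MW * np.array([beta, np.sin(theta), 0.0, np.cos(theta)])
epsT = np.array([0.0, np.cos(theta), -1j, -np.sin(theta)]) / np.sqrt(2.0)   # lambda = +1

dot = lambda a, b: a @ g @ b
print("k1.k1 - MW^2:", dot(k1, k1) - MW**2)        # ~ 0
print("k1.epsL     :", dot(k1, epsL))              # ~ 0
print("k1.epsT     :", dot(k1, epsT))              # ~ 0
print("epsL.epsL   :", dot(epsL, epsL))            # -1
print("epsT.epsT*  :", dot(epsT, np.conj(epsT)))   # -1
```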
To see that unitarity is preserved at high energies in the case where the masses of the unstable particles are taken to be real, that is, in the usual _on-shell_ (OS) scheme, we introduce in the calculation of the amplitudes of the Feynman diagrams (Fig. 1) the variable \(x=s/4M_{W}^{2}\) with \(x\gg 1\). We find, with the aid of FeynCalc [7], the following expression for the total amplitude at high energy:
\[\mathcal{M}_{\rm tot}^{\rm OS} =\mathcal{M}_{\gamma}^{s}+\mathcal{M}_{Z}^{s}+\mathcal{M}_{H}^{s}+\mathcal{M}_{q}^{t}\] \[=\frac{e^{2}(1+2T_{3}^{f})M_{Z}^{2}}{4\left(M_{Z}^{2}-M_{W}^{2}\right)M_{W}^{2}}\,\bar{v}(p_{2})\left[m_{q}(P_{R}-P_{L})-2\not{q}_{1}P_{L}\right]u(p_{1})+\mathcal{O}(1/x), \tag{3.4}\]
where \(M_{W}=c_{W}\,M_{Z}\), \(P_{R,L}=(1\pm\gamma_{5})/2\) and \(T_{3}^{f}=+1/2\,(-1/2)\) for \(u,c,t\) \((d,s,b)\) quarks.
To study the effect of the decay widths of the unstable massive particles (\(W,Z,H\) and \(t\)) of figure 1 on the unitarity of the amplitudes at tree level, and to analyse the results obtained at high energies with respect to the real mass scheme (OS), we implement the CMS in the process of longitudinal \(W\) boson pair production \(pp\to W_{L}W_{L}\) (since its polarisation vectors depend on the mass). The expressions of the various amplitudes of the Feynman diagrams of Fig. 1, following the prescriptions of Denner [21], are reported in appendix A (Eqs. (A.1)-(A.4)). The resultant total amplitude is as follows:
\[\mathcal{M}_{\rm tot}^{\rm CMS}=\mathrm{Re}\{\mathcal{M}_{\rm tot }^{\rm CMS}\}+\imath\,\mathrm{Im}\{\mathcal{M}_{\rm tot}^{\rm CMS}\},\] (3.5a) where \[\mathrm{Re}\{\mathcal{M}_{\rm tot}^{\rm CMS}\} =\frac{e^{2}(1+2T_{3}^{f})M_{Z}^{2}}{4(M_{Z}^{2}-M_{W}^{2})M_{W} ^{2}}\,\bar{v}(p_{2})\Big{[}m_{q}(P_{R}-P_{L})-2\not{q}_{1}P_{L}\Big{]}u(p_{1} )+\mathcal{O}(1/x),\] \[\mathrm{Im}\{\mathcal{M}_{\rm tot}^{\rm CMS}\} =\frac{e^{2}(1+2T_{3}^{f})\,\Gamma_{Z}M_{Z}}{4(M_{Z}^{2}-M_{W}^{ 2})^{2}}\,\bar{v}(p_{2})\,\Big{[}m_{q}(P_{R}-P_{L})-2\not{q}_{1}P_{L}\Big{]}\, u(p_{1})+\mathcal{O}(1/x).\] (3.5b) The following points are to be noticed:
* If we set all widths of all internal particles introduced in our process, \(\Gamma_{W}=\Gamma_{Z}=\Gamma_{H}=\Gamma_{t}=0\), then the real parts of the amplitudes (A.1)-(A.4) reduce to those of the real scheme (OS), while the imaginary parts vanish.
* The real part of the total amplitude \(\mathcal{M}_{\rm tot}^{\rm CMS}\) is not affected by the widths of the internal particles, \(Z,H\) and \(t\), which cancel out in the final expression. The said real part equals the total amplitude in the real scheme, \(\mathrm{Re}\{\mathcal{M}_{\rm tot}^{\rm CMS}\}=\mathcal{M}_{\rm tot}^{\rm OS}\), whilst the imaginary part of \(\mathcal{M}_{\rm tot}^{\rm CMS}\) is affected by the width \(\Gamma_{Z}\) of the internal boson \(Z\). It is proportional to \(\mathcal{M}_{\rm tot}^{\rm OS}\) (a short symbolic check of this factor is given after this list): \[\mathrm{Im}\{\mathcal{M}_{\rm tot}^{\rm CMS}\}=\frac{M_{W}^{2}}{M_{Z}^{2}-M_{W}^{2}}\,\frac{\Gamma_{Z}}{M_{Z}}\,\mathcal{M}_{\rm tot}^{\rm OS}.\] (3.6)
* The total amplitude depends on the mass and width of the internal boson \(Z\) and the masses of the external particles \(M_{W}\) and \(m_{q}\). The other masses, \(M_{H}\) and \(m_{t}\), and their respective widths, \(\Gamma_{H}\) and \(\Gamma_{t}\), cancel out.
* The effect of the width \(\Gamma_{Z}\) of the internal gauge boson \(Z\) on the amplitude at tree-level at high energy is around 2%. In effect, the ratio of the amplitudes is defined by \[\frac{\delta\mathcal{M}_{\rm tot}}{\mathcal{M}_{\rm tot}^{\rm OS} }=\frac{\mathcal{M}_{\rm tot}^{\rm OS}-\mathcal{M}_{\rm tot}^{\rm CMS}}{ \mathcal{M}_{\rm tot}^{\rm OS}}=-\imath\,\frac{M_{W}^{2}}{M_{Z}^{2}-M_{W}^{2} }\,\frac{\Gamma_{Z}}{M_{Z}}.\] (3.7)
* Following the instructions of Denner [21] we have taken, in all amplitudes of Feynman diagrams, the width of the external gauge boson \(\Gamma_{W}=0\). We may,
however, see the effect of \(\Gamma_{W}\) on the tree-level amplitude at high energy by introducing it at the level of longitudinal polarisation vectors and the weak mixing angle. We find, as before, that only the imaginary part of \({\cal M}_{\rm tot}^{\rm CMS}\) is affected by the widths \(\Gamma_{Z}\) and \(\Gamma_{W}\) (Eqs. (A.5) - (A.8)). It is not affected, however, by the widths of the Higgs boson and the top quark. \[{\rm Re}\{{\cal M}_{\rm tot}^{\rm CMS}\} = {\cal M}_{\rm tot}^{\rm OS},\] \[{\rm Im}\{{\cal M}_{\rm tot}^{\rm CMS}\} = \frac{e^{2}(1+2T_{3}^{f})\,M_{Z}}{2(M_{Z}^{2}-M_{W}^{2})^{2}}\ \bar{v}(p_{2})\left\{\,\frac{\Gamma_{Z}}{M_{Z}}\left[m_{q}\ (1-2P_{L})-2q_{\! \!1}P_{L}\right]+\right.\] (3.8) \[+ \left.\frac{\Gamma_{W}}{M_{W}}\,\frac{1}{M_{Z}}\Bigg{[}m_{q} \left(\frac{T_{3}^{f}}{1+2T_{3}^{f}}-P_{L}\right)-q_{\!\!1}\Bigg{(}2\frac{Q_{ f}}{M_{Z}}\left(M_{Z}^{2}-M_{W}^{2}\right)+\right.\] \[+ \left.P_{L}\Bigg{)}\Bigg{]}\right\}\times u(p_{1})+{\cal O}(1/x).\] The above equations (3.8) reduce to those in Eq. (3.5) if \(\Gamma_{W}=0\).
* The total tree-level amplitude at high energy is finite in the case where the external bosons are considered either as stable or unstable, thus respecting the unitarity condition.
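As announced above, a short symbolic check that the prefactors of Eq. (3.5) reproduce the proportionality factor of Eqs. (3.6)-(3.7) reads:

```python
# Symbolic check: the ratio of the Im and Re prefactors of Eq. (3.5) equals
# the factor (Gamma_Z / M_Z) * M_W^2 / (M_Z^2 - M_W^2) of Eqs. (3.6)-(3.7).
import sympy as sp

MW, MZ, GZ = sp.symbols('M_W M_Z Gamma_Z', positive=True)

re_pref = MZ**2 / (4 * (MZ**2 - MW**2) * MW**2)   # prefactor of the real part
im_pref = GZ * MZ / (4 * (MZ**2 - MW**2)**2)      # prefactor of the imaginary part

ratio = sp.simplify(im_pref / re_pref)
print(ratio)
print(sp.simplify(ratio - GZ * MW**2 / (MZ * (MZ**2 - MW**2))))   # 0
```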
### Numerical results
To perform the numerical calculations of the cross sections at tree-level both in CMS and OS schemes, we use the fixed-order Monte Carlo program MadGraph5_aMC@NLO. The following input parameters have been implemented:
\[M_{H} = 125\,{\rm GeV},\qquad M_{Z}=91.188\,{\rm GeV},\qquad M_{W}=80.4 01\,{\rm GeV},\] \[\Gamma_{H} = 0.008\,{\rm GeV},\qquad\Gamma_{Z}=2.4952\,{\rm GeV},\qquad\Gamma_ {W}=2.092698\,{\rm GeV},\] \[\alpha^{-1} = 137.0359895,\ \ m_{t}=173.2{\rm GeV},\qquad\qquad\Gamma_{t}=1.3{\rm GeV}. \tag{3.9}\]
In Figure 2, we present, for different polarisations of the external gauge bosons, the effect of the widths \(\Gamma_{Z},\Gamma_{W},\Gamma_{H}\) and \(\Gamma_{t}\) on the Born cross sections in the CMS framework in comparison with the OS one. We have added here the effect of the width of the \(W\) boson since it does not appear in the internal lines of the Feynman diagrams at tree level (Fig. 1). Hence it does not pose the problem highlighted by Denner [21] at one loop, namely that a given particle cannot be treated by two different schemes when it is taken to be stable externally and unstable internally.
For different combinations of polarisations of the external bosons, LL, TT and LT + TL, the cross sections behave as \(1/s\) at high energy in the real scheme and preserve
such behaviour in CMS. These results confirm that the instability of internal and external particles does not affect unitarity at tree level. We note the following points for the cross sections in both schemes: they are of the same order at all energies considered; the maximum is around \(1000\,\mathrm{GeV}\); in the range \(4000-6000\,\mathrm{GeV}\), the contributions of the LL and LT+TL modes are almost vanishing, whilst the TT and UU modes have non-vanishing values. The TT mode is dominant and makes the principal contribution to the UU mode.
In figures 3 and 4, we see that relative to Fig. 2 the widths \(\Gamma_{H}\) and \(\Gamma_{t}\) have almost no effect. This is due to the small ratios \(\Gamma_{H}/M_{H}\sim 6\times 10^{-3}\%\) and \(\Gamma_{t}/m_{t}\sim 0.7\%\), although the latter is relatively larger.
Fig. 5 shows that the introduction of \(\Gamma_{Z}\) affects, both at low and high energies, the Born cross section at around \(2\%\). This
Figure 3: Cross sections in OS and CMS at tree-level for \(pp\to W_{U}W_{U}\) as a function of \(\sqrt{s}\) for the case \(\Gamma_{H}=0\) (left) and their ratio (right).
Figure 2: (Left) plots for real (OS) and CMS cross sections as a function of the scattering energy \(\sqrt{s}\) for various polarisations of the \(W^{+}W^{-}\) bosons. (Right) ratio of the real (OS) and CMS cross sections.
effect has its origin in the relatively high value of the ratio \(\Gamma_{Z}/M_{Z}\sim 2.7\%\).
Moreover, if we take into consideration the gauge boson \(W\) as an unstable particle with a ratio \(\Gamma_{W}/M_{W}\sim 2.6\%\), Fig. 6 shows that \(\Gamma_{W}\) has an effect comparable to that of \(\Gamma_{Z}\) if taken separately but opposite if they are combined together (see Fig. 2). In the next subsection we consider the CMS effect on the one-loop correction of the \(pp\to W^{+}W^{-}\) process.
### One-loop corrections (NLO)
In Fig. 7, we show a selected sample of self-energy, vertex and box diagrams, which are most sensitive to the widths of the unstable particles, among about 1000 diagrams contributing to the \(pp\to WW\) process that we have generated using the fixed-order Monte Carlo program MadGraph5_aMC@NLO. In the following, we present the results of NLO cross sections (Born + corrections) in both CMS and OS schemes for different
Figure 4: Cross sections in OS and CMS at tree-level for \(pp\to W_{U}W_{U}\) as a function of \(\sqrt{s}\) for the case \(\Gamma_{t}=0\) (left) and their ratio (right).
Figure 5: Cross sections in OS and CMS at tree-level for \(pp\to W_{U}W_{U}\) as a function of \(\sqrt{s}\) for the case \(\Gamma_{W}=0\) (left) and their ratio (right).
widths of unstable particles. In what follows we consider only the effect of the internal particle widths \(\Gamma_{H}\), \(\Gamma_{Z}\) and \(\Gamma_{t}\) on the one-loop corrections in the case where the external bosons are unpolarised and stable. This is for two reasons: first, the Monte Carlo program MadGraph5_aMC@NLO has not yet implemented one-loop polarisations for this type of process; second, following Denner's remark, the external particles must be considered as stable, in accordance with the work of Veltman [20; 22]. We therefore take the width of the \(W\) boson, \(\Gamma_{W}\), to be zero everywhere in the one-loop Feynman diagrams, since the same particle cannot be treated by two different renormalisation schemes simultaneously for the same process.
It is worthwhile mentioning that our study does not involve the one-loop non-abelian QCD corrections. We only deal with the one-loop renormalisation of the electroweak contributions in both CMS and the usual OS schemes.
#### Input parameters:
For the purpose of our numerical investigations, we have used the following input values, in addition to those used for the tree-level case, Eq. (3.9):
\[m_{e}=0.51099906\,\mathrm{MeV}, m_{u}=47.0\,\mathrm{MeV}, m_{d}=47.0\,\mathrm{MeV},\] \[m_{\mu}=105.658389\,\mathrm{MeV}, m_{c}=1.55\,\mathrm{GeV}, m_{s}=150\,\mathrm{MeV},\] \[m_{\tau}=1777.1\,\mathrm{MeV}, m_{b}=4.5\,\mathrm{GeV}. \tag{3.10}\]
For the sake of deriving our results we have replaced the one-loop squared amplitude by the following formula [23]:
\[|\mathcal{M}|^{2}\to|\mathcal{M}_{Born}|^{2}(1+\delta_{soft})+2\,\mathcal{R}e\left(\mathcal{M}_{Born}^{*}\delta\mathcal{M}\right), \tag{3.11}\]
Figure 6: Cross sections in OS and CMS at tree-level for \(pp\to W_{U}W_{U}\) as a function of \(\sqrt{s}\) for the case \(\Gamma_{Z}=0\) (left) and their ratio (right).
where \(\delta_{soft}\) takes into account soft bremsstrahlung and is required to deal with the infrared divergences. Moreover, \(\delta{\cal M}\) contains all one-loop Feynman diagrams as well as their corresponding counter-terms.
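A minimal sketch of how this formula is assembled is given below; the numerical values are purely illustrative placeholders, not outputs of the actual calculation.

```python
# Sketch of the one-loop weight: |M|^2 -> |M_Born|^2 (1 + delta_soft)
#                                        + 2 Re(M_Born^* delta_M).
# All numbers below are illustrative placeholders.
def nlo_weight(m_born: complex, delta_m: complex, delta_soft: float) -> float:
    born = abs(m_born)**2
    return born * (1.0 + delta_soft) + 2.0 * (m_born.conjugate() * delta_m).real

m_born  = 0.8 - 0.1j      # placeholder Born amplitude
delta_m = -0.05 + 0.02j   # placeholder sum of loop diagrams and counter-terms
print(nlo_weight(m_born, delta_m, delta_soft=0.03))
```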
#### Results and discussion:
In figures 8 and 9, we present, for stable and unpolarized external bosons, the full cross sections at one-loop in OS and CMS schemes respectively. At high energies, they behave like \(1/s\), which confirms that the two schemes do not break the unitarity of the one-loop results. At low energies, the corrections are positive and of the order of \(5-10\,\%\) in both schemes. Once the full NLO cross sections reach their maxima, around \(1000\,\)GeV, the corrections become negative and large. They maintain this behaviour throughout the remaining energy range. In the range \(2000-6000\,\)GeV the corrections are around \(10-30\,\%\) in OS and \(10-40\,\%\) in CMS. Furthermore, they reach about \(55\,\%\) in OS and \(65\,\%\) in CMS around \(14\,\)TeV. Thus at high energies, we notice that the corrections are of the order of the Born cross section in both schemes. Hence the \({\cal O}(\alpha^{2})\) corrections should be taken into consideration in the calculation of the corrected cross section, in order to obtain more reliable and precise results.
Figures 10 and 11 show the corresponding CMS cross sections at one loop for the cases \(\Gamma_{H}=0\) and \(\Gamma_{t}=0\). The widths \(\Gamma_{H}\) and \(\Gamma_{t}\) present insignificant effects, even if the Higgs boson preserves its ability to restore unitarity as in the usual OS scheme.
Figure 12 shows a comparison between the full cross sections in the two aforementioned schemes. At low energy, the effect of the width \(\Gamma_{Z}\) on the full cross section is around \(2.5\,\%\), which vanishes around \(1000\,\mathrm{GeV}\) and then increases with energy until it reaches around \(15\,\%\) at \(14\,\) TeV. This behaviour is expected to hold for further high energies. That is, the effect of \(\Gamma_{Z}\) increases with energy. This, however, remains to be verified, especially if this process is considered as an internal part of another process with stable external particles. This is to be investigated in our future work.
Figure 8: Plots for OS cross sections as a function of the scattering energy \(\sqrt{s}\), comparing Born and NLO corrections.
Figure 9: Plots for CMS cross sections as a function of the scattering energy \(\sqrt{s}\), comparing Born and NLO corrections.
Figure 11: Plots for CMS cross sections as a function of the scattering energy \(\sqrt{s}\), comparing Born and NLO corrections, for the case \(\Gamma_{t}=0\).
Figure 12: Plots for SM (real) and SMc (CMS) cross sections as a function of the scattering energy \(\sqrt{s}\) at NLO.
Figure 10: Plots for CMS cross sections as a function of the scattering energy \(\sqrt{s}\), comparing Born and NLO corrections, for the case \(\Gamma_{H}=0\).
## 4 Conclusion
In the present work, we have extended the calculation of the one-loop electroweak radiative corrections to the process \(pp\to WW\) to the complex mass scheme, in which the widths of the unstable particles are introduced while respecting the Ward and Slavnov-Taylor identities.
At tree level, we obtained for the longitudinal mode of the external gauge bosons an imaginary part of the total amplitude proportional to the width \(\Gamma_{Z}\) of the internal gauge boson \(Z\). We have shown that upon the inclusion of the width \(\Gamma_{W}\) of the external \(W\) gauge boson, the total amplitude depends on both \(\Gamma_{W}\) and \(\Gamma_{Z}\). The other widths \(\Gamma_{H}\) and \(\Gamma_{t}\) of the Higgs boson and the top quark have practically no effect on either the amplitude or the Born cross section. The effect of \(\Gamma_{Z}\) on the Born cross section is about 2 % for unpolarised and stable \(W\) bosons. When the width \(\Gamma_{W}\) of the external \(W\) gauge boson is included, it produces an effect on the Born cross section comparable to that of \(\Gamma_{Z}\) if they are introduced separately, and opposite to that of \(\Gamma_{Z}\) if they are introduced together.
The Born and full one-loop cross sections behave as \(1/s\) at high energies in both CMS and OS schemes, thus preserving the unitarity of the theory. The one-loop corrections are affected by \(\Gamma_{Z}\) up to 65% in the CMS scheme at energy of about 14 TeV. They reach for the same case about 55% in the real OS scheme. They are therefore of the order of the Born cross section (at high energy), and hence higher orders, \({\cal O}(\alpha^{2})\) and above, have to be included for any meaningful predictions.
Finally, comparisons of the full cross sections in the OS and CMS schemes revealed that the width \(\Gamma_{Z}\) affects the NLO cross section by about 5% around 2 TeV and 15% around 14 TeV. This effect therefore increases with increasing energy, and its behaviour beyond 14 TeV remains to be verified. For a full study of this type of problem, where the effect of the width \(\Gamma_{W}\), as well as of the widths of the other unstable particles \(\Gamma_{Z},\Gamma_{H}\) and \(\Gamma_{t}\), on the one-loop corrections is not neglected, it suffices to relate this process, i.e., \(pp\to WW\), to another one where the external particles are stable. Another important point to consider for future work is the effect of the various widths in the case where polarisation is not neglected in tree-level and one-loop calculations.
## Acknowledgments
We would like to thank Dr. Noureddine Bouayed for support and encouragement to carry out this work, valuable discussions particularly on complex renormalisation as well as collaboration on the usage of FeynCalc.
## Appendix A Amplitudes in CMS
The different high-energy tree-level amplitudes in the CMS are given by the following expressions, first for stable longitudinal external gauge bosons (Eqs. (A.1)-(A.4)) and then for unstable ones (Eqs. (A.5)-(A.8)):
\[\mathrm{Re}\{{\cal M}_{\gamma}^{s}\} =-Q_{f}\,e^{2}\,\frac{\bar{v}(p_{2})\;\not\!q_{1}u(p_{1})}{M_{W}^{2}}+{\cal O}(1/x),\] \[\mathrm{Im}\{{\cal M}_{\gamma}^{s}\} =0+{\cal O}(1/x). \tag{A.1}\]
\[\mathrm{Re}\{{\cal M}_{Z}^{s}\} =e^{2}\,\frac{\bar{v}(p_{2})}{M_{W}^{2}}\Big{[}Q_{f}\;\not\!q_{1}+\frac{M_{Z}^{2}\;T_{3}^{f}}{2(M_{Z}^{2}-M_{W}^{2})}\Big{(}m_{q}(1-2P_{L})-2\not\!q_{1}P_{L}\Big{)}\Big{]}u(p_{1})+{\cal O}(1/x),\] \[\mathrm{Im}\{{\cal M}_{Z}^{s}\} =e^{2}\,\frac{\Gamma_{Z}\;M_{Z}\;T_{3}^{f}}{2(M_{Z}^{2}-M_{W}^{2})^{2}}\;\bar{v}(p_{2})\Big{(}m_{q}(1-2P_{L})-2\not\!q_{1}P_{L}\Big{)}u(p_{1})+{\cal O}(1/x). \tag{A.2}\]
\[\mathrm{Re}\{{\cal M}_{q}^{t}\} =e^{2}\,\frac{-M_{Z}^{2}}{2M_{W}^{2}(M_{Z}^{2}-M_{W}^{2})}\;\bar{v}(p_{2})\Big{(}m_{q}+\not\!q_{1}\Big{)}P_{L}u(p_{1})+{\cal O}(1/x),\] \[\mathrm{Im}\{{\cal M}_{q}^{t}\} =e^{2}\,\frac{-\Gamma_{Z}M_{Z}}{2(M_{Z}^{2}-M_{W}^{2})^{2}}\;\bar{v}(p_{2})\Big{(}m_{q}+\not\!q_{1}\Big{)}P_{L}u(p_{1})+{\cal O}(1/x). \tag{A.3}\]
\[\mathrm{Re}\{{\cal M}_{H}^{s}\} =e^{2}\,\frac{m_{q}M_{Z}^{2}}{4M_{W}^{2}(M_{Z}^{2}-M_{W}^{2})}\;\bar{v}(p_{2})u(p_{1})+{\cal O}(1/x),\] \[\mathrm{Im}\{{\cal M}_{H}^{s}\} =e^{2}\,\frac{\Gamma_{Z}m_{q}M_{Z}}{4(M_{Z}^{2}-M_{W}^{2})^{2}}\;\bar{v}(p_{2})u(p_{1})+{\cal O}(1/x). \tag{A.4}\]
\[\mathrm{Re}\{{\cal M}_{\gamma}^{s}\} =-Q_{f}\,e^{2}\,\frac{\bar{v}(p_{2})\;\not\!q_{1}u(p_{1})}{M_{W}^{2}}+{\cal O}(1/x),\] \[\mathrm{Im}\{{\cal M}_{\gamma}^{s}\} =-Q_{f}\,e^{2}\,\frac{\Gamma_{W}}{M_{W}}\,\frac{\bar{v}(p_{2})\not\!q_{1}u(p_{1})}{M_{W}^{2}}+{\cal O}(1/x). \tag{A.5}\]
\[\mathrm{Re}\{{\cal M}_{Z}^{s}\} =e^{2}\,\frac{\bar{v}(p_{2})}{M_{W}^{2}}\Big{[}Q_{f}\;\not\!q_{1}+\frac{M_{Z}^{2}\;T_{3}^{f}}{2(M_{Z}^{2}-M_{W}^{2})}\Big{(}m_{q}(1-2P_{L})-2\not\!q_{1}P_{L}\Big{)}\Big{]}u(p_{1})+{\cal O}(1/x),\] \[\mathrm{Im}\{{\cal M}_{Z}^{s}\} =e^{2}\,\frac{M_{Z}\,T_{3}^{f}}{2(M_{Z}^{2}-M_{W}^{2})^{2}}\,\bar{v}(p_{2})\Bigg{\{}\Gamma_{Z}+\frac{\Gamma_{W}}{M_{W}}\,\frac{M_{Z}\,(M_{Z}^{2}-2M_{W}^{2})}{M_{W}^{2}}\Bigg{\}}\Bigg{\{}m_{q}\,(1-2P_{L})-2\not\!q_{1}P_{L}\Bigg{\}}\,u(p_{1})+Q_{f}\frac{\Gamma_{W}}{M_{W}}\,\frac{(M_{Z}^{2}-2M_{W}^{2})}{M_{Z}^{2}-M_{W}^{2}}\,\not\!q_{1}\,\bar{v}(p_{2})\,u(p_{1})+{\cal O}(1/x). \tag{A.6}\]
\[\mathrm{Re}\{{\cal M}_{q}^{t}\} =e^{2}\,\frac{-M_{Z}^{2}}{2M_{W}^{2}(M_{Z}^{2}-M_{W}^{2})}\;\bar{v}(p_{2})\Big{(}m_{q}+\not\!q_{1}\Big{)}P_{L}u(p_{1})+{\cal O}(1/x). \tag{A.7}\]

The corresponding imaginary part, together with the Higgs-exchange amplitude (A.8), involves the widths \(\Gamma_{Z}\) and \(\Gamma_{W}\); combined with Eqs. (A.5) and (A.6), these contributions reproduce the total amplitude quoted in Eq. (3.8).
###### Abstract.
In 1996 Erdos showed that among planar domains of fixed area, the smallest principal eigenvalue of the Dirichlet Laplacian with a constant magnetic field is uniquely achieved on the disk. We establish a quantitative version of this inequality, with an explicit remainder term depending on the field strength that measures how much the domain deviates from the disk.
## 1. Introduction
To solve a problem in Probability and Mathematical Physics [11],[12], Erdos developed the magnetic isoperimetric inequality [10]. It generalizes the Faber-Krahn inequality to the magnetic Laplacian. Starting with Polya and Szego [19], Faber-Krahn-type results have been established by proving rearrangement inequalities. The inclusion of a magnetic field, however, makes it notoriously difficult to implement the standard symmetrization methods. Erdos met the challenge head on: he managed to prove a magnetic rearrangement inequality, which is reminiscent of the celebrated Polya-Szego inequality but with an interesting caveat. Such symmetry results with a magnetic field are, alas, very few and far between [1],[5].
Still another compelling feature is that rearrangements alone are not sufficient for arguing the magnetic isoperimetric inequality. This stands in sharp contrast to the classical Faber-Krahn setting. To complete the proof Erdos introduced a new inequality, tailored specifically for a magnetic Schrodinger operator on a disk and for which there exists no analog in the absence of a magnetic field.
We improve Erdos' result. He showed that if a planar domain is not a disk, then the principal eigenvalue of the Dirichlet magnetic Laplacian is strictly larger on that domain than on the disk of same area. We take the next step and establish stability: if the principal eigenvalue of the magnetic Laplacian is just slightly larger on a planar domain than on the disk of same area, then that domain is only slightly different from the disk. Faint perturbations of the smallest principal eigenvalue will not induce a dramatic change in the underlying geometry, and this dynamic is very sensitive to the field strength. We prove our stability estimate with a remainder term that quantifies the difference between the domain and the disk.
Quantitative Faber-Krahn-type inequalities have been developed almost exclusively around the classical theory of rearrangements. Fueled in large part by the seminal work of Fusco et al. [13], the last decade has given rise to an entire industry now devoted to the stability of a remarkable range of geometric and functional inequalities. Our paper provides the first stability result with a magnetic field. And here, the well-established rearrangement framework is no longer sufficient.
## 2. Statement of Problem and Main Result
Let \(\Omega\subset\mathbb{R}^{2}\) be a bounded, connected open set with a smooth boundary. The principal eigenvalue of the Dirichlet magnetic Laplacian on the planar domain \(\Omega\) is
\[\lambda(B,\Omega):=\inf_{f\in H_{0}^{1}(\Omega)}\frac{\int_{\Omega}|(-i\nabla- \alpha)f|^{2}dx}{\int_{\Omega}|f|^{2}dx}, \tag{2.1}\]
where \(\alpha=\frac{B}{2}\left(-x_{2},x_{1}\right)\) is a magnetic vector potential generating a homogeneous magnetic field of strength \(B\geq 0\), i.e. \(\operatorname{rot}(\alpha)=B\). We denote by \(D_{R}\) a disk of radius \(R\), centered at the origin, with the same area as \(\Omega\), i.e. \(|\Omega|=|D_{R}|=\pi R^{2}\).
In 1996 Erdos [10] proved the magnetic isoperimetric inequality
\[\lambda(B,\Omega)\geq\lambda(B,D_{R}), \tag{2.2}\]
with equality if and only if \(\Omega\) is a disk. In the absence of a magnetic field, i.e. \(B=0\), his result reduces to the usual Faber-Krahn inequality.
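To make the quantity \(\lambda(B,\Omega)\) concrete, the following rough numerical sketch discretises the magnetic Dirichlet Laplacian with Peierls link phases on a uniform grid and compares its lowest eigenvalue on a disk and on a square of equal area; the grid resolution, the field strength and the discretisation scheme are illustrative assumptions, not part of the proof.

```python
# Rough sketch of the Rayleigh quotient (2.1): discretise (-i*grad - alpha)^2,
# alpha = B/2 * (-y, x), with Dirichlet conditions using Peierls phases
# exp(-i * alpha(midpoint) . dl) on each nearest-neighbour link.
# Erdos' inequality (2.2) says the disk value should be the smaller one.
import numpy as np
import scipy.sparse as sps
import scipy.sparse.linalg as spla

def lambda_min(inside, B, L, n):
    """Lowest eigenvalue on the domain {inside(x, y)} contained in [-L, L]^2."""
    h = 2 * L / (n + 1)
    xs = np.linspace(-L + h, L - h, n)
    X, Y = np.meshgrid(xs, xs, indexing='ij')
    mask = inside(X, Y)
    idx = -np.ones((n, n), dtype=int)
    idx[mask] = np.arange(mask.sum())
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in range(n):
            if not mask[i, j]:
                continue
            a = idx[i, j]
            rows.append(a); cols.append(a); vals.append(4.0 / h**2)   # diagonal
            for di, dj in ((1, 0), (0, 1)):                           # +x and +y links
                ii, jj = i + di, j + dj
                if ii < n and jj < n and mask[ii, jj]:
                    b = idx[ii, jj]
                    xm, ym = 0.5 * (X[i, j] + X[ii, jj]), 0.5 * (Y[i, j] + Y[ii, jj])
                    theta = 0.5 * B * (-ym * di * h + xm * dj * h)    # int alpha . dl
                    t = -np.exp(-1j * theta) / h**2                   # Peierls hopping
                    rows += [a, b]; cols += [b, a]; vals += [t, np.conj(t)]
    N = int(mask.sum())
    H = sps.csr_matrix((vals, (rows, cols)), shape=(N, N))
    return spla.eigsh(H, k=1, which='SA')[0][0]

B, R = 2.0, 1.0
disk = lambda x, y: x**2 + y**2 < R**2
half = 0.5 * np.sqrt(np.pi) * R                      # half-side of the equal-area square
square = lambda x, y: (np.abs(x) < half) & (np.abs(y) < half)
print("lambda(B, disk)   ~", lambda_min(disk, B, L=1.5, n=80))
print("lambda(B, square) ~", lambda_min(square, B, L=1.5, n=80))
```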
In this paper, we want to add to the right-hand side of (2.2) a remainder term that measures how much the planar domain \(\Omega\) deviates from being a disk. This would make it possible to understand the shape of \(\Omega\) now in terms of how close it is to achieving equality in (2.2). Cf. [7] & references therein.
We measure the difference between \(\Omega\) and the disk in the usual way in terms of the interior deficiency and the Fraenkel asymmetry of the domain.
**Definition**.: The interior deficiency (asymmetry) of a set is defined as
\[\mathcal{A}_{I}\left(\Omega\right):=\frac{R-\rho_{-}(\Omega)}{R},\]
where \(\rho_{-}(\Omega)\) denotes the radius of the largest ball contained in \(\Omega\), and \(R\) as above is the radius of \(D_{R}\).
**Definition**.: The Fraenkel asymmetry of a set is defined as
\[\mathcal{A}_{F}\left(\Omega\right):=\inf_{x_{0}\in\mathbb{R}^{2}}\frac{| \Omega\Delta(x_{0}+D_{R})|}{2|\Omega|}.\]
Both asymmetries are bounded by one and vanish if and only if the set is a disk.
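For illustration, the Fraenkel asymmetry of an ellipse with the same area as the disk can be estimated on a grid as sketched below, assuming that the optimal centre \(x_{0}\) is the centre of the ellipse (a plausible simplification by symmetry, not something we prove).

```python
# Grid estimate of the Fraenkel asymmetry of an ellipse of area pi R^2,
# taking x_0 = 0 (centred disk) -- an assumption made for simplicity.
import numpy as np

def fraenkel_asymmetry_ellipse(axis_ratio, n=1500):
    R = 1.0                                                   # disk radius
    a, b = R * np.sqrt(axis_ratio), R / np.sqrt(axis_ratio)   # a*b = R^2, same area
    L = max(a, b, R)
    xs = np.linspace(-L, L, n)
    X, Y = np.meshgrid(xs, xs)
    cell = (2 * L / (n - 1))**2
    in_ellipse = (X / a)**2 + (Y / b)**2 < 1.0
    in_disk = X**2 + Y**2 < R**2
    sym_diff = np.logical_xor(in_ellipse, in_disk).sum() * cell
    return sym_diff / (2 * np.pi * R**2)

for axis_ratio in (1.0, 1.2, 1.5, 2.0):
    print(f"a/b = {axis_ratio:>4}:  A_F ~ {fraenkel_asymmetry_ellipse(axis_ratio):.4f}")
```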
Our main result is a quantitative version of the magnetic isoperimetric inequality.
**Theorem 2.1**.: _Let \(\mathcal{A}\left(\Omega\right)\) denote either the interior asymmetry or the Fraenkel asymmetry. In the case of the interior asymmetry we also assume \(\Omega\) is simply connected. Then there is a universal constant \(c>0\), independent of \(\Omega\) and \(B\), such that_
\[\lambda(B,\Omega)\geq\lambda(B,D_{R})(1+ce^{-\frac{5}{6}BR^{2}}\mathcal{A}( \Omega)^{\frac{10}{3}})\,. \tag{2.3}\]
_Moreover, if \(0\leq BR^{2}\leq\frac{1}{\pi}\), then_
\[\lambda(B,\Omega)\geq\lambda(B,D_{R})(1+c\mathcal{A}(\Omega)^{3})\,. \tag{2.4}\]
_Remark 2.2_.: The quantity \(\mathcal{A}\left(\Omega\right)\) is scale invariant. Furthermore \(\lambda\) scales like \(t^{2}\lambda(B,t\Omega)=\lambda(t^{2}B,\Omega)\) for \(t>0\), so the factor \(BR^{2}\) appearing in our constant is the natural parameter for this problem.
In the absence of a magnetic field, i.e. \(B=0\), the estimate in (2.4) reduces to Hansen and Nadirashvili's quantitative Faber-Krahn inequality with the asymmetry cubed [15],[3]. More recently, Brasco et al. [8] proved it with the square power: this is the sharp form, because the exponent cannot be any smaller [4],[18]. Our
magnetic version in (2.3) should likewise instead have the square of the asymmetry and, in principle, one could adapt Brasco et al.'s argument to achieve this. Their state-of-the-art methods, however, are nonconstructive and will not yield an explicit constant. This would make it impossible to understand the pertinent role of the magnetic field strength \(B\) in the stability of Erdos' inequality.
Our methods, on the other hand, yield an explicit constant with a natural dependence on the field strength. Physical intuition suggests that as \(B\to\infty\) the principal eigenfunctions start to localize on a length scale proportional to \(1/\sqrt{B}\), away from the boundary, and therefore \(\lambda(B,\cdot)\) becomes less sensitive to the shape of the domain: it can but faintly distinguish between even very dissimilar shapes, and the little sensitivity that remains comes from the fact that these eigenfunctions can still feel about near the boundary with their exponentially small tails. Now \(\Omega\) can look rather different from \(D_{R}\) and yet \(\lambda(B,\Omega)\approx\lambda(B,D_{R})\): a strong magnetic field compromises stability. We manage to capture this picture in (2.3) with our constant which vanishes, exponentially, as \(B\to\infty\).
To prove his Faber-Krahn-type inequality in (2.2), Erdos started out in the usual way by establishing a rearrangement inequality. See Lemma 3.1. While there are certainly nontrivial magnetic aspects to the argument, Erdos essentially mimicked the standard proof [20] of the analogous Polya-Szego inequality using the coarea formula and the isoperimetric inequality. But in imposing the Polya-Szego scheme on his problem, he was forced to change the magnetic field on the disk. The vector potential on the right-hand side of (3.1) is no longer the same: and thus his magnetic rearrangement inequality cannot readily imply (2.2) in the same way that the Polya-Szego inequality yields Faber-Krahn.
To deal with this mis-match between the magnetic fields on \(\Omega\) and \(D_{R}\), Erdos developed the _comparison lemma_ on the disk. See Remark 4.2. It compares the ground-state energies of the operator on the right-hand side of (3.1) corresponding to different magnetic fields. This in turn allowed him to recover the original magnetic field on \(D_{R}\) and finish proving (2.2). His comparison lemma is built on the variational principle and has nothing to do with rearrangements. And unlike his rearrangement inequality, it has no analog in the absence of a magnetic field.
To prove our stability estimate in Theorem 2.1, we also start out in the usual way by establishing a quantitative version of Erdos' rearrangement inequality. See Proposition 3.2. This is nothing new: in the absence of a magnetic field, i.e. \(B=0\), it just reduces to the quantitative version of the Polya-Szego inequality that was used in proving stability of Faber-Krahn [7]. Here we mimic Erdos' proof but instead apply the _quantitative isoperimetric inequality_ on the level sets.
**Theorem 2.3**.: _Let \(U\subset\mathbb{R}^{2}\) be a bounded set with smooth boundary, and let \(\mathcal{P}(U)\) denote the perimeter of \(U\). Let \(\mathcal{A}(U)\) denote either the interior asymmetry or the Fraenkel asymmetry. In the case of the interior asymmetry we also assume \(U\) is simply connected. Then there is a universal constant \(c>0\) such that_
\[\mathcal{P}(U)\geq 2\sqrt{\pi}\left|U\right|^{\frac{1}{2}}\left(1+c\mathcal{ A}(U)^{2}\right).\]
This was first proved by Bonnesen in 1924 for simply connected planar sets using the interior asymmetry [6],[18]. In 2008 Fusco et al. proved a more general version using the Fraenkel asymmetry [13]. Theorem 2.3 forms the backbone of the first part of the paper.
In Lemma 4.1 we establish a quantitative version of Erdos' comparison lemma. Now this is really a new estimate, which stands completely outside of the rearrangement framework, and it only enters the scene when \(B\) is large.
In Corollary 4.3 we present two very different lower bounds on the quantity \(\lambda(B,\Omega)-\lambda(B,D_{R})\), both involving the asymmetry of the level sets of the principal eigenfunction corresponding to \(\lambda(B,\Omega)\). The first bound, (4.7), is based on our quantitative version of the rearrangement inequality. The second bound, (4.8), is based on our quantitative version of the comparison lemma.
As usual, the main difficulty lies in going from the asymmetry of these level sets in Corollary 4.3 to the asymmetry of the whole domain. We deal with this in the second part of the paper. When \(B\) is small, we operate entirely within the rearrangement framework just as in the classical Faber-Krahn setting. Here our argument is a direct perturbation of Hansen and Nadirashvili's proof of their quantitative Faber-Krahn inequality [15]. We only use the first bound, given in (4.7), of Corollary 4.3 which is based on the quantitative version of the rearrangement inequality. This is enough to prove the estimate in (2.4) of Theorem 2.1.
But as \(B\) increases, our weak-field adaptation of Hansen and Nadirashvili's technique breaks down: with a strong magnetic field, the rearrangement framework alone is no longer sufficient for establishing stability. Here we make full use of both the quantitative version of the rearrangement inequality _and now_ our quantitative version of the comparison lemma. A distinctive feature of our argument is the necessary interplay between the traditional bound in (4.7), rooted firmly within the paradigmatic framework of rearrangement inequalities, and our _magnetic bound_ in (4.8), which is unique to our problem and irreducible to any other estimate used in establishing stability of a Faber-Krahn-type inequality.
### Part \(1\). The Magnetic Isoperimetric Inequality
Here we re-prove Erdos' magnetic isoperimetric inequality but with a remainder term involving the asymmetry of the level sets of the principal eigenfunction corresponding to \(\lambda\left(B,\Omega\right)\). This is given as Corollary 4.3. The quantitative isoperimetric inequality plays an essential role.
## 3. The Magnetic Rearrangement Inequality
Standard elliptic theory tells us that the principal eigenfunction corresponding to \(\lambda(B,\Omega)\) is a complex-valued analytic function. The first ingredient in Erdos' proof is a rearrangement inequality. He proved the following.
**Lemma 3.1**.: _Let \(f\), \(\left\|f\right\|_{2}=1\) be a complex-valued analytic function on \(\Omega\) that vanishes on the boundary, and let \(\left|f\right|^{*}\) denote the symmetric decreasing rearrangement of \(\left|f\right|\). Then there exists a vector potential \(\tilde{\alpha}(x)=\frac{a\left(\left|x\right|\right)}{\left|x\right|}\left(- x_{2},x_{1}\right)\), where \(a\left(\left|x\right|\right)\) is a function satisfying \(0\leq a\left(\left|x\right|\right)\leq\frac{B\left|x\right|}{2}\), such that_
\[\int_{\Omega}\left|\left(-i\nabla-\alpha\right)f\right|^{2}dx\geq\int_{D_{R}} \left|\left(-i\nabla-\tilde{\alpha}\right)\left|f\right|^{*}\right|^{2}dx+B- \int_{D_{R}}\text{rot}\left(\tilde{\alpha}\right)\left|f\right|^{*2}dx. \tag{3.1}\]
This is analogous to the celebrated Polya-Szego inequality but with some caveats:
1. The magnetic field on the disk is no longer the same. Our vector potential \(\alpha=\frac{B}{2}\left(-x_{2},x_{1}\right)\) corresponds to a homogeneous field of strength \(B\). Now \(\tilde{\alpha}\) corresponds to a radially symmetric but _inhomogeneous_ field.
2. The potential \(\tilde{\alpha}\) depends on \(f\), because Erdos constructed \(a\left(\left|x\right|\right)\) from the level sets of \(\left|f\right|\);
3. in particular, if \(a\left(\left|x\right|\right)=\frac{B\left|x\right|}{2}\), then the level set \(\left\{\left|f\right|>\left|f\right|^{\ast}\left(x\right)\right\}\) is a disk.
Lemma 3.1 yields a lower bound on \(\lambda(B,\Omega)\). Had the vector potential remained unchanged, (3.1) would have readily implied \(\lambda(B,\Omega)\geq\lambda(B,D_{R})\).
In this section we prove a quantitative version of his rearrangement inequality, and we write the right-hand side more conveniently in terms of polar coordinates.
**Proposition 3.2**.: _Let \(f\), \(\|f\|_{2}=1\) be as in the statement of Lemma 3.1, and \(q\left(\left|x\right|\right):=\left|f\right|^{\ast}\left(x\right)\). Then there exists a bounded function \(a\left(\left|x\right|\right)\), depending on \(f\) and \(B\), such that1_
Footnote 1: We use here the following convention for the interior asymmetry. If the open set \(U\) is not simply connected, we define \(\mathcal{A}_{I}(U)\) to be the asymmetry of the smallest simply connected set containing \(U\). Since \(\Omega\) is simply connected, this will not change the final value of \(\mathcal{A}_{I}(\Omega)\). This convention allows us to use Theorem 2.3 for the level sets of \(\left|f\right|\).
\[\int_{\Omega}\left|\left(-i\nabla-\alpha\right)f\right|^{2}dx\geq B+2\pi\int_{ 0}^{R}\left(q^{\prime}(r)+a(r)q(r)\right)^{2}\left(1+c\mathcal{A}^{2}\left( \left\{\left|f\right|>q(r)\right\}\right)\right)^{2}rdr,\]
_and_
\[0\leq a(r)\leq\frac{Br}{2}\left(1+c\mathcal{A}^{2}\left(\left\{\left|f \right|>q(r)\right\}\right)\right)^{-2}\leq\frac{Br}{2}\,, \tag{3.2}\]
_where \(c>0\) is a universal constant independent of \(B\) and \(\Omega\)._
In the absence of the asymmetry term, the expression on the right-hand side indeed coincides with that of (3.1). See Proof of Lemma A.2 in the appendix.
### The Proof of Proposition 3.2
Erdos proved his rearrangement inequality within the standard Polya-Szego scheme [20] using the coarea formula and the isoperimetric inequality, which we replace with its quantitative version.
To use the coarea formula, first we need a real-valued function. By modifying the magnetic vector potential, we can work with \(\left|f\right|\) instead.
**Lemma 3.3**.: _Let \(f\) be as in the statement of Lemma 3.1, and \(\Omega_{0}:=\Omega\setminus\left\{f=0\right\}\). Let \(\theta:\Omega_{0}\mapsto\left[0,2\pi\right)\) be such that \(f=\left|f\right|e^{i\theta}\). Since \(\Omega_{0}\) has full measure, \(w:=\alpha-\nabla\theta\) is defined almost everywhere and \(rot(w)=B\). Then, with \(w^{\perp}:=\left(-w_{2},w_{1}\right)\),_
\[\int_{\Omega}\left|\left(-i\nabla-\alpha\right)f\right|^{2}dx=B+\int_{\Omega} \left|\nabla\left|f\right|+w^{\perp}\left|f\right|\right|^{2}dx.\]
Proof.: Since \(\left|f\right|\in H_{0}^{1}\left(\Omega\right)\) and \(w\) is real-valued,
\[\int_{\Omega}\left|\left(-i\nabla-\alpha\right)f\right|^{2}dx=\int_{\Omega} \left|\left(-i\nabla-w\right)\left|f\right|\right|^{2}dx=\int_{\Omega}\left( \left|\nabla\left|f\right|\right|^{2}+\left|w^{\perp}\right|^{2}\left|f\right| ^{2}\right)dx.\]
Note \(w\) is smooth a.e. By completing the square and integrating by parts,
\[\int_{\Omega}\left|\left(-i\nabla-\alpha\right)f\right|^{2}dx =\int_{\Omega}\left(\left|\nabla\left|f\right|+w^{\perp}\left|f \right|\right|^{2}-2\left|f\right|w^{\perp}\cdot\nabla\left|f\right|\right)dx\] \[=\int_{\Omega}\left(\left|\nabla\left|f\right|+w^{\perp}\left|f \right|\right|^{2}+\left|f\right|^{2}\operatorname{div}(w^{\perp})\right)dx.\]
Since \(\operatorname{rot}\left(w\right)=B\), the lemma follows.
Then we use the coarea formula and arrive at an expression involving an integral over the level sets of \(\left|f\right|\).
**Lemma 3.4**.: _Let \(f,w^{\perp}\) be as in the statement of Lemma 3.3. Then,_
\[\int_{\Omega}\left|\nabla\left|f\right|+w^{\perp}\left|f\right|\right|^{2}dx\geq \int_{0}^{\infty}dz\,\left(1-B\Phi(z)z\right)^{2}\int_{\left\{\left|f\right|=z \right\}}\left|\nabla\left|f\right|\right|, \tag{3.3}\]
_with_
\[\Phi(z):=\frac{\left|\left\{\left|f\right|>z\right\}\right|}{\int_{\left\{ \left|f\right|=z\right\}}\left|\nabla\left|f\right|\right|}. \tag{3.4}\]
If there is no magnetic field, i.e. \(B=0\), and \(f\) is a positive function, then the relation in (3.3) reduces to the usual coarea formula used in the proof of the Polya-Szego inequality [20].
Proof of Lemma 3.4.: There exists \(w^{\prime}\) orthogonal to \(\nabla\left|f\right|\) and \(\varphi:\Omega\mapsto\mathbb{R}\) such that \(w^{\perp}=-\varphi\nabla\left|f\right|+w^{\prime}\). By the Pythagorean theorem,
\[\int_{\Omega}\left|\nabla\left|f\right|+w^{\perp}\left|f\right| \right|^{2}dx =\int_{\Omega}\left(\left|\left(1-\varphi\left|f\right|\right) \nabla\left|f\right|\right|^{2}+\left|w^{\prime}\left|f\right|\right|^{2} \right)dx\] \[\geq\int_{\Omega}\left|\left(1-\varphi\left|f\right|\right) \nabla\left|f\right|\right|^{2}dx.\]
Now we are in a position to use the coarea formula:
\[\int_{\Omega}\left|\left(1-\varphi\left|f\right|\right)\nabla \left|f\right|\right|^{2}dx =\int_{0}^{\infty}dz\int_{\left\{\left|f\right|=z\right\}}\left( 1-\varphi z\right)^{2}\left|\nabla\left|f\right|\right|\] \[\geq\int_{0}^{\infty}dz\,\frac{\left(\int_{\left\{\left|f \right|=z\right\}}\left(1-\varphi z\right)\left|\nabla\left|f\right|\right| \right)^{2}}{\int_{\left\{\left|f\right|=z\right\}}\left|\nabla\left|f\right| \right|}.\]
We use Stokes' theorem on the level sets. For almost all \(z>0\), the level set \(\left\{\left|f\right|=z\right\}\) is regular by Sard's theorem. Thus
\[B|\{\left|f\right|>z\}|=\int_{\left\{\left|f\right|>z\right\}}\operatorname{ rot}\left(w\right)=\int_{\left\{\left|f\right|=z\right\}}w\cdot\tau,\]
where \(\tau=\frac{\left(\nabla\left|f\right|\right)^{\perp}}{\left|\left(\nabla \left|f\right|\right)^{\perp}\right|}\). Since \(w\cdot\tau=\varphi\left|\nabla\left|f\right|\right|\), we conclude
\[\int_{\Omega}\left|\nabla\left|f\right|+w^{\perp}\left|f\right|\right|^{2}dx \geq\int_{0}^{\infty}dz\,\frac{\left(\int_{\left\{\left|f\right|=z\right\}} \left|\nabla\left|f\right|\right|-Bz\left|\left\{\left|f\right|>z\right\} \right|\right)^{2}}{\int_{\left\{\left|f\right|=z\right\}}\left|\nabla\left|f \right|\right|}.\]
The lemma follows from the definition of \(\Phi\) in (3.4).
With the coarea-type estimate in (3.3), Erdos applied the isoperimetric inequality on the level sets of \(\left|f\right|\) to prove his rearrangement inequality; and when \(B=0\), his argument reduces to the standard proof of the Polya-Szego inequality [20]. Below we instead apply the quantitative isoperimetric inequality on these level sets.
Proof of Proposition 3.2.: From Lemma 3.3, Lemma 3.4 and Holder's inequality
\[\int_{\Omega}\left|\left(-i\nabla-\alpha\right)f\right|^{2}dx\geq B+\int_{0}^{ \infty}dz\,\left(1-B\Phi(z)z\right)^{2}\!\frac{\left|\left\{\left|f\right|=z \right\}\right|^{2}}{\int_{\left\{\left|f\right|=z\right\}}\left|\nabla\left|f \right|\right|^{-1}}\,.\]
By Sard's theorem, the denominator is non-vanishing for almost all \(z>0\). And since \(q\) is the rearrangement of \(\left|f\right|\),
\[q(r)=F^{-1}\left(\pi r^{2}\right)\text{ where }F(z):=\left|\left\{\left|f \right|>z\right\}\right|. \tag{3.5}\]
By the coarea formula, again for almost all \(z>0\)
\[F(z)=\int_{z}^{\infty}d\xi\int_{\{\left|f\right|=\xi\}}\left|\nabla\left|f\right| \right|^{-1}\text{ and }F^{\prime}(z)=-\int_{\{\left|f\right|=z\}}\left|\nabla\left|f\right| \right|^{-1}. \tag{3.6}\]
Then,
\[\int_{\Omega}\left|\left(-i\nabla-\alpha\right)f\right|^{2}dx\geq B-\int_{0}^{ \infty}\left(1-B\Phi(z)z\right)^{2}\left|\{\left|f\right|=z\}\right|^{2}F^{ \prime}(z)^{-1}dz. \tag{3.7}\]
Now we do a change of variables \(z=q(r)\) and apply the isoperimetric inequality, Theorem 2.3, on the level sets: \(\left|\{\left|f\right|=q(r)\}\right|\geq 2\pi r\left(1+c\mathcal{A}^{2}\left( \{\left|f\right|>q(r)\}\right)\right)\). Below we abbreviate \(\mathcal{A}^{2}:=\mathcal{A}^{2}\left(\{\left|f\right|>q(r)\}\right)\). Then,
\[\int_{\Omega}\left|\left(-i\nabla-\alpha\right)f\right|^{2}dx\geq B+\int_{0}^ {R}\left(1-B\Phi\left(q(r)\right)q(r)\right)^{2}\frac{\left(2\pi r\right)^{2} q^{\prime}(r)}{F^{\prime}(q(r))}\left(1+c\mathcal{A}^{2}\right)^{2}dr.\]
Since \(q^{\prime}(r)=2\pi rF^{\prime}(q(r))^{-1}\),
\[\int_{\Omega}\left|\left(-i\nabla-\alpha\right)f\right|^{2}dx\geq B+2\pi\int_{0 }^{R}\left[q^{\prime}(r)-\frac{2\pi rB\Phi(q(r))}{F^{\prime}(q(r))}q(r)\right] ^{2}(1+c\mathcal{A}^{2})^{2}rdr.\]
Writing \(a(r):=-2\pi rBF^{\prime}(q(r))^{-1}\Phi(q(r))\), we deduce our rearrangement inequality.
It remains to prove the upper bound in (3.2). By Holder's inequality
\[-F^{\prime}(q(r))=\int_{\{\left|f\right|=q(r)\}}\left|\nabla\left|f\right| \right|^{-1}\geq\left|\left\{\left|f\right|=q(r)\right\}\right|^{2}\left(\int _{\{\left|f\right|=q(r)\}}\left|\nabla\left|f\right|\right|\right)^{-1},\]
and by the isoperimetric inequality, Theorem 2.3,
\[a(r)\leq 2\pi rB\frac{\left|\{\left|f\right|>q(r)\}\right|}{\left|\{\left|f \right|=q(r)\}\right|^{2}}\leq\frac{Br}{2}\left(1+c\mathcal{A}^{2}\left(\{ \left|f\right|>q(r)\}\right)\right)^{-2}.\]
This concludes the proof of Proposition 3.2.
## 4. The Comparison Lemma
The second ingredient in Erdos' proof is a comparison lemma, which makes it possible to recover the original potential \(\alpha\) on the disk from the right-hand side of (3.1). In this section we prove a quantitative version of his comparison lemma.
For a potential \(\tilde{\alpha}=\frac{a(\left|x\right|)}{\left|x\right|}\left(-x_{2},x_{1}\right)\), with \(a\in L^{\infty}\left((0,R)\right)\), we consider the ground-state energy of the operator \(\left(-i\nabla-\tilde{\alpha}\right)^{2}-\operatorname{rot}\left(\tilde{ \alpha}\right)\) restricted to radial functions on the disk, again written more conveniently in terms of polar coordinates
\[\mathfrak{e}\left(a(r)\right):=\inf_{q\in H^{1,\text{rad}}_{0}(D_{R})}\frac{2 \pi\int_{0}^{R}\left(q^{\prime}(r)+a(r)q(r)\right)^{2}rdr}{2\pi\int_{0}^{R}q( r)^{2}rdr}, \tag{4.1}\]
where \(H^{1,\text{rad}}_{0}(D_{R}):=\left\{q:[0,R]\rightarrow\mathbb{R}\text{ such that }x\mapsto q(\left|x\right|)\text{ belongs to }H^{1}_{0}(D_{R})\right\}.\)
The function \(a(r)=\frac{Br}{2}\) corresponds to the original potential \(\alpha=\frac{B}{2}\left(-x_{2},x_{1}\right)\), and since \(\operatorname{rot}\left(\alpha\right)=B\),
\[B+\mathfrak{e}\left(Br/2\right)=\inf_{q\in H^{1,\text{rad}}_{0}(D_{R})}\frac{\int_{D_{R}}\left|\left(-i\nabla-\alpha\right)q\left(\left|x\right|\right)\right|^{2}dx}{\int_{D_{R}}q\left(\left|x\right|\right)^{2}dx}\geq\lambda(B,D_{R}). \tag{4.2}\]
We compare the ground-state energies for different potentials on the disk.
**Lemma 4.1**.: _Let \(q_{a}\) be a normalized minimizer for the energy \(\mathfrak{e}\left(a(r)\right)\) in (4.1). Let_
\[u_{a}(r):=\exp\left(-2\int_{0}^{r}a(s)\,ds\right)\ \ \text{and}\ \ p_{a}(r):=q_{a}(r)u_{a}(r)^{-\frac{1}{2}}. \tag{4.3}\]
_Then for \(a,\tilde{a}\in L^{\infty}\left(\left(0,R\right)\right)\),_
\[\mathfrak{e}\left(a(r)\right)\geq\mathfrak{e}\left(\tilde{a}(r)\right)+\frac{ 2\int_{0}^{R}\left(\tilde{a}-a\right)p_{a}\left|p_{a}^{\prime}\right|u_{\tilde {a}}rdr}{\int_{0}^{R}p_{a}^{2}u_{\tilde{a}}rdr}. \tag{4.4}\]
_Remark 4.2_.: Our bound in (4.4) implies Erdos' **comparison lemma**: if \(a\leq\tilde{a}\), then \(\mathfrak{e}\left(a(r)\right)\geq\mathfrak{e}\left(\tilde{a}\left(r\right)\right)\). See Lemma 3.1 in [10].
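We also record, for later use, the value of (4.3) for the original potential \(a(r)=Br/2\):

\[u_{Br/2}(r)=\exp\left(-2\int_{0}^{r}\frac{Bs}{2}\,ds\right)=e^{-\frac{Br^{2}}{2}};\]

this is the weight appearing in (4.8) and in Lemma 6.1 below.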
Proof.: We write
\[\mathfrak{e}\left(a(r)\right)=\inf_{p\in H_{0}^{1,\text{rad}}\left(D_{R} \right)}\frac{\int_{0}^{R}\left(p^{\prime}\right)^{2}u_{a}rdr}{\int_{0}^{R}p^ {2}u_{a}rdr}=\frac{\int_{0}^{R}\left(p_{a}^{\prime}\right)^{2}u_{a}rdr}{\int_ {0}^{R}p_{a}^{2}u_{a}rdr}. \tag{4.5}\]
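The first equality in (4.5) is simply the substitution \(q=pu_{a}^{1/2}\) in (4.1): since \(u_{a}^{\prime}=-2au_{a}\),

\[q^{\prime}+aq=p^{\prime}u_{a}^{1/2}+\tfrac{1}{2}pu_{a}^{-1/2}u_{a}^{\prime}+apu_{a}^{1/2}=p^{\prime}u_{a}^{1/2},\quad\text{so that}\quad\left(q^{\prime}+aq\right)^{2}=\left(p^{\prime}\right)^{2}u_{a}\ \text{ and }\ q^{2}=p^{2}u_{a}.\]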
Since \(p_{a}\) is the minimizer in (4.5), it solves the Euler-Lagrange equation
\[-p_{a}^{\prime\prime}u_{a}r-p_{a}^{\prime}u_{a}^{\prime}r-p_{a}^{\prime}u_{a} =\mathfrak{e}\left(a(r)\right)p_{a}u_{a}r. \tag{4.6}\]
Now we consider \(\mathfrak{e}\left(\tilde{a}(r)\right)\). It follows from the variational principle and (4.6) that
\[\mathfrak{e}\left(\tilde{a}(r)\right) \leq\frac{\int_{0}^{R}(p_{a}^{\prime})^{2}u_{\tilde{a}}rdr}{\int_ {0}^{R}p_{a}^{2}u_{\tilde{a}}rdr}=\frac{\int_{0}^{R}(-p_{a}^{\prime\prime}u_{a }r-p_{a}^{\prime}u_{a}^{\prime}r-p_{a}^{\prime}u_{a})\frac{u_{\tilde{a}}}{u_{ a}}p_{a}-p_{a}^{\prime}p_{a}u_{a}r(\frac{u_{\tilde{a}}}{u_{a}})^{\prime}dr}{ \int_{0}^{R}p_{a}^{2}u_{\tilde{a}}rdr}\] \[=\mathfrak{e}\left(a(r)\right)+\frac{2\int_{0}^{R}p_{a}^{\prime} p_{a}(\tilde{a}-a)u_{\tilde{a}}rdr}{\int_{0}^{R}p_{a}^{2}u_{\tilde{a}}rdr}.\]
Note that \(p_{a}^{\prime}<0\) by Hopf's Lemma, so that \(2\int_{0}^{R}p_{a}^{\prime}p_{a}(\tilde{a}-a)u_{\tilde{a}}rdr=-2\int_{0}^{R}(\tilde{a}-a)p_{a}\left|p_{a}^{\prime}\right|u_{\tilde{a}}rdr\); rearranging the last display yields (4.4).
Proposition 3.2, Lemma 4.1 and the observation in (4.2) allow us to conclude with the following corollary.
**Corollary 4.3**.: _Now let \(f\) be a principal eigenfunction corresponding to \(\lambda(B,\Omega)\) and \(q\left(\left|x\right|\right):=\left|f\right|^{*}\left(x\right)\). Let \(a(r)\) be as in Proposition 3.2 above, and let \(q_{a}\) be a normalized minimizer for the energy \(\mathfrak{e}\left(a(r)\right)\) in (4.1). Then there is a universal constant \(c>0\), independent of \(B\) and \(\Omega\), such that_
\[\lambda(B,\Omega)\geq\lambda(B,D_{R})+c\int_{0}^{R}\left(q^{\prime}(r)+a(r)q( r)\right)^{2}\mathcal{A}^{2}\left(\left\{\left|f\right|>q(r)\right\}\right)rdr, \tag{4.7}\]
_and_
\[\lambda(B,\Omega)\geq\lambda(B,D_{R})+cB\frac{\int_{0}^{R}p_{a}\left|p_{a}^{ \prime}\right|e^{-\frac{Br^{2}}{2}}\mathcal{A}^{2}\left(\left\{\left|f\right|> q(r)\right\}\right)r^{2}dr}{\int_{0}^{R}p_{a}^{2}e^{-\frac{Br^{2}}{2}}rdr}, \tag{4.8}\]
_where \(p_{a}\) is as given in Lemma 4.1 above._
Corollary 4.3 implies \(\lambda(B,\Omega)\geq\lambda(B,D_{R})\). Furthermore if \(\lambda(B,\Omega)=\lambda(B,D_{R})\), then either (4.7) or (4.8) can be used to deduce that almost all of the level sets of \(\left|f\right|\) are disks; and since \(f\) is an analytic function, this implies \(\Omega\) is a disk.
The first bound, given in (4.7), is established with _our quantitative version of the rearrangement inequality_ and with Erdos' comparison lemma. In the absence of a magnetic field, i.e. \(B=0\), this bound reduces to the usual estimate used in all the proofs of the quantitative Faber-Krahn inequality, e.g., [3], [14] and [15].
Our second bound, given in (4.8), is established with Erdos' rearrangement inequality, _our quantitative version of the comparison lemma_ and our estimate in (3.2), which follows from the quantitative isoperimetric inequality. This bound, on the other hand, has no such analog in the absence of a magnetic field.
### Part \(2\). The Quantitative Version
Here we prove Theorem 2.1 from Corollary 4.3 by extracting the asymmetry of the whole domain from the asymmetry of the level sets in (4.7) and (4.8). Let \(s\) be such that
\[|\{q\left(|x|\right)>s\}|=|\Omega|\left(1-\frac{1}{2}\mathcal{A}\left(\Omega \right)\right). \tag{4.9}\]
Following Hansen and Nadirashvili [15] we split the proof into two cases, depending on whether \(s\) is small or large. Lemma B.1 in the appendix will be useful.
## 5. The First Case: \(s\lesssim e^{-BR^{2}}\mathcal{A}\left(\Omega\right)\)
We assume
\[s\leq\frac{1}{8}\left|\Omega\right|^{-\frac{1}{2}}e^{-\frac{BR^{2}}{4}} \mathcal{A}\left(\Omega\right). \tag{5.1}\]
We use the representation in (4.5), which allows us to adapt the usual strategy for dealing with the Dirichlet Laplacian; and when \(B=0\), the argument reduces to Hansen and Nadirashvili's proof of their quantitative Faber-Krahn inequality [15].
We write \(E(B,\Omega):=\lambda(B,\Omega)-B\). Let \(p:=qu_{a}^{-\frac{1}{2}}\) with \(q,a\) as in Corollary 4.3 and \(u_{a}\) as in (4.3), and let \(\tilde{p}(r):=p(r)-se^{\int_{0}^{q^{-1}\left(s\right)}a(\tau)d\tau}\). Since \(\tilde{p}^{\prime}=p^{\prime}\), it follows from the rearrangement inequality that
\[E(B,\Omega)\geq 2\pi\int_{0}^{R}\left(q^{\prime}+aq\right)^{2}rdr=2\pi\int_{0 }^{R}\left(\tilde{p}^{\prime}\right)^{2}u_{a}rdr\geq 2\pi\int_{0}^{q^{-1} \left(s\right)}\left(\tilde{p}^{\prime}\right)^{2}u_{a}rdr.\]
Since \(\tilde{p}\) vanishes at \(q^{-1}(s)\), it is admissible in the variational problem in (4.5) but on the disk \(\{q>s\}\), and
\[\frac{E(B,\Omega)}{2\pi\int_{0}^{q^{-1}\left(s\right)}\tilde{p}^{2}u_{a}rdr}\geq\inf_{p\in H_{0}^{1,\text{rad}}\left(\{q>s\}\right)}\frac{\int_{0}^{q^{-1}\left(s\right)}\left(p^{\prime}\right)^{2}u_{a}rdr}{\int_{0}^{q^{-1}\left(s\right)}p^{2}u_{a}rdr}\geq E\left(B,\{q>s\}\right),\]
where the last inequality follows from the comparison lemma and the observation in (4.2). Using the scaling property in Remark 2.2 we further estimate
\[\frac{E(B,\Omega)}{2\pi\int_{0}^{q^{-1}\left(s\right)}\tilde{p}^{2}u_{a}rdr} \geq\frac{|\Omega|}{|\{q>s\}|}E\left(B\frac{|\{q>s\}|}{|\Omega|},D_{R}\right) \geq\frac{|\Omega|}{|\{q>s\}|}E(B,D_{R}), \tag{5.2}\]
where the last inequality follows from Lemma A.2 in the appendix and again the comparison lemma. Finally, we estimate the denominator
\[2\pi\int_{0}^{q^{-1}\left(s\right)}\tilde{p}^{2}u_{a}rdr =1-2\pi\int_{q^{-1}\left(s\right)}^{R}q^{2}rdr\] \[\geq 1-s^{2}|\{q<s\}|+s^{2}|\{q>s\}|-2se^{\frac{BR^{2}}{4}}| \Omega|^{\frac{1}{2}}\]
\[\geq cR^{2}\mathcal{A}^{2}\left(\Omega\right)\left(\sqrt{\int_{q^{-1}(s)}^{R}q^{ \prime}(r)^{2}\,r^{-1}dr}-\sqrt{\int_{q^{-1}(s)}^{R}\left(\frac{B}{2}q\right)^{2 }rdr}\right)^{2}\]
\[\geq cR^{2}\mathcal{A}^{2}\left(\Omega\right)\left(\frac{s}{\sqrt{\left|\{q \left(\left|x\right|\right)\leq s\}\right|}}-\frac{B}{2}s\sqrt{\left|\{q\left( \left|x\right|\right)\leq s\}\right|}\right)^{2}\]
\[\geq cR^{-2}\mathcal{A}^{3}\left(\Omega\right)\left(2-B\left|\Omega\right| \right)^{2}\]
\[\geq cR^{-2}\mathcal{A}^{3}\left(\Omega\right),\]
since \(B\leq\frac{1}{|\Omega|}=\frac{1}{\pi R^{2}}\). At the penultimate inequality we also used the assumption in (6.1). Using Lemma A.3, we conclude \(\lambda(B,\Omega)\geq\lambda(B,D_{R})(1+c\mathcal{A}(\Omega)^{3})\).
### Strong Magnetic Fields
We consider \(BR^{2}>\frac{1}{\pi}\) and prove our stability estimate in (2.3); instead of integrating as above on \(\left\{q\left(\left|x\right|\right)\leq s\right\}\), we choose to work closer to the boundary on a smaller annulus whose area is now proportional to the _spectral deficit_ of the domain
\[\mathcal{D}(B,\Omega):=\frac{\lambda(B,\Omega)}{\lambda(B,D_{R})}-1.\]
We treat two cases, depending on whether \(q\) is large or small near the boundary: \(q(R(1-\mathcal{D}\left(B,\Omega\right)^{\alpha}))>R^{-1}\mathcal{D}(B,\Omega) ^{\beta}\) and \(q(R(1-\mathcal{D}\left(B,\Omega\right)^{\alpha}))\leq R^{-1}\mathcal{D}(B, \Omega)^{\beta}\), where \(\alpha=\frac{1}{5}\) and \(\beta=\frac{3}{10}\) are chosen to optimize our result. For proving our estimate in (2.3), we can assume that the spectral deficit is very small
\[\mathcal{D}(B,\Omega)^{\alpha}<\min\left\{\frac{1}{2BR^{2}},\frac{1}{2}\right\}. \tag{6.2}\]
6.2.1. Suppose \(q(R(1-\mathcal{D}\left(B,\Omega\right)^{\alpha}))>R^{-1}\mathcal{D}(B,\Omega)^{\beta}\). Then by continuity of \(q\),
\[q(R(1-\mathcal{D}\left(B,\Omega\right)^{\tilde{\alpha}}))=R^{-1}\mathcal{D}(B,\Omega)^{\beta}\text{ for some }\tilde{\alpha}>\alpha. \tag{6.3}\]
If \(q(R(1-\mathcal{D}(B,\Omega)^{\tilde{\alpha}}))\geq s\), our assumption in (6.1) readily yields
\[cR^{-1}e^{-\frac{BR^{2}}{4}}\mathcal{A}\left(\Omega\right)\leq s\leq q(R(1- \mathcal{D}\left(B,\Omega\right)^{\tilde{\alpha}}))=R^{-1}\mathcal{D}(B, \Omega)^{\beta},\]
and therefore
\[\mathcal{D}(B,\Omega)\geq ce^{-\frac{BR^{2}}{4\beta}}\mathcal{A}\left(\Omega \right)^{\frac{1}{\beta}}. \tag{6.4}\]
If \(q(R(1-\mathcal{D}(B,\Omega)^{\tilde{\alpha}}))<s\), then the weak-field argument from Section 6.1 applies mutatis mutandis. From the first bound, given in (4.7), of Corollary 4.3, the relation in (6.3), and Lemma B.1 we have
\[\lambda(B,D_{R})\mathcal{D}\left(B,\Omega\right)\] \[\geq cR^{-2}\mathcal{A}(\Omega)^{2}\mathcal{D}\left(B,\Omega \right)^{2\beta-\tilde{\alpha}}\left(1-BR^{2}\mathcal{D}(B,\Omega)^{\tilde{ \alpha}}\right)^{2}.\]
However, \(\tilde{\alpha}\) depends on \(B\) and \(\Omega\). Fortunately, since \(\tilde{\alpha}>\alpha\) and \(\mathcal{D}\left(B,\Omega\right)<1\), we have \(\mathcal{D}\left(B,\Omega\right)^{\tilde{\alpha}}<\mathcal{D}\left(B,\Omega\right)^{\alpha}\); this allows us to replace \(\mathcal{D}\left(B,\Omega\right)^{\tilde{\alpha}}\) in the above with \(\mathcal{D}\left(B,\Omega\right)^{\alpha}\). Furthermore, the bound in (6.2) offsets the large \(BR^{2}\) in the parenthetical expression, which thereby remains positive. Using Lemma A.3,
\[\mathcal{D}\left(B,\Omega\right)\geq c\frac{\mathcal{A}(\Omega)^{2}}{R^{2} \lambda(B,D_{R})}\mathcal{D}\left(B,\Omega\right)^{2\beta-\alpha}\geq c\frac {\mathcal{A}(\Omega)^{2}}{1+BR^{2}}\mathcal{D}\left(B,\Omega\right)^{2\beta- \alpha},\]
and therefore
\[\mathcal{D}\left(B,\Omega\right)^{1-2\beta+\alpha}\geq c\frac{\mathcal{A}( \Omega)^{2}}{1+BR^{2}}. \tag{6.5}\]
With our above choice of \(\alpha\) and \(\beta\), the inequalities in (6.4) and (6.5) both yield the same desired estimate in (2.3).
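Indeed, the exponents of \(\mathcal{A}\left(\Omega\right)\) produced by (6.4) and (6.5) (and by (6.6) and (6.7) below) coincide for this choice:

\[\frac{1}{\beta}=\frac{2}{1-2\beta+\alpha}=\frac{2}{1-2\alpha}=\frac{10}{3}.\]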
Thus far, we have only used the first bound, given in (4.7), of Corollary 4.3 which is based on the quantitative version of the rearrangement inequality.
6.2.2. Suppose \(q(R(1-\mathcal{D}\left(B,\Omega\right)^{\alpha}))\leq R^{-1}\mathcal{D}(B,\Omega)^{\beta}\).
If \(q(R(1-\mathcal{D}(B,\Omega)^{\alpha}))\geq s\), again our assumption in (6.1) readily yields
\[cR^{-1}e^{-\frac{BR^{2}}{4}}\mathcal{A}\left(\Omega\right)\leq s\leq q(R(1- \mathcal{D}\left(B,\Omega\right)^{\alpha}))\leq R^{-1}\mathcal{D}(B,\Omega)^ {\beta}\]
and therefore, as above,
\[\mathcal{D}(B,\Omega)\geq ce^{-\frac{BR^{2}}{4\beta}}\mathcal{A}\left(\Omega \right)^{\frac{1}{\beta}}. \tag{6.6}\]
But when \(q(R(1-\mathcal{D}(B,\Omega)^{\alpha}))<s\), the weak-field argument from Section 6.1 is no longer useful: it requires a _lower bound_ on \(q(R(1-\mathcal{D}(B,\Omega)^{\alpha}))\), as above in Section 6.2.1, to be effective. That argument, however, is based wholly on the first bound, given in (4.7), of Corollary 4.3.
Now we instead turn to our second bound, given in (4.8), which is based on our quantitative version of the comparison lemma. Here there is hope: it is possible to bound the remainder term in (4.8) from below _independently of_ \(q\).
**Lemma 6.1**.: _Let \(p_{a}\) be as in Corollary 4.3. Then there exists a universal constant \(c>0\), independent of \(B\) and \(\Omega\), such that for any \(\,0<\varepsilon<\frac{1}{2}\)_
\[\frac{\int_{R(1-\varepsilon)}^{R}\,p_{a}\left|p_{a}^{\prime}\right|e^{-\frac{Br ^{2}}{2}}\mathcal{A}^{2}\left(\left\{\left|f\right|>q(r)\right\}\right)r^{2} dr}{\int_{0}^{R}p_{a}^{2}e^{-\frac{Br^{2}}{2}}rdr}\geq ce^{-\frac{BR^{2}}{2}} \mathcal{M}_{\varepsilon}\varepsilon^{2},\]
_where \(\mathcal{M}_{\varepsilon}:=\inf\left\{\mathcal{A}^{2}\left(\left\{\left|f \right|>q(r)\right\}\right):\,R\left(1-\varepsilon\right)<r<R\right\}\)._
Proof.: Since \(p_{a}^{\prime}<0\),
\[\int_{R(1-\varepsilon)}^{R}\,p_{a}\left|p_{a}^{\prime}\right|e^{-\frac{Br^{2}}{2}}\mathcal{A}^{2}\left(\left\{\left|f\right|>q(r)\right\}\right)r^{2}dr\geq\mathcal{M}_{\varepsilon}R^{2}(1-\varepsilon)^{2}e^{-\frac{BR^{2}}{2}}\int_{R(1-\varepsilon)}^{R}p_{a}\left|p_{a}^{\prime}\right|dr=\mathcal{M}_{\varepsilon}R^{2}(1-\varepsilon)^{2}e^{-\frac{BR^{2}}{2}}\,\frac{p_{a}(R(1-\varepsilon))^{2}}{2},\]

since \(r^{2}e^{-\frac{Br^{2}}{2}}\geq R^{2}(1-\varepsilon)^{2}e^{-\frac{BR^{2}}{2}}\) and \(\mathcal{A}^{2}\left(\left\{\left|f\right|>q(r)\right\}\right)\geq\mathcal{M}_{\varepsilon}\) on \((R(1-\varepsilon),R)\), and \(p_{a}(R)=0\).
Furthermore,
\[p_{a}(R(1-\varepsilon))=\int_{R(1-\varepsilon)}^{R}-p_{a}^{\prime}(r)dr\geq \frac{1}{R}\int_{R(1-\varepsilon)}^{R}-p_{a}^{\prime}(r)rdr\geq\frac{ \varepsilon}{R}\int_{0}^{R}-p_{a}^{\prime}(r)rdr,\]
where in the last inequality we used that \(r\mapsto-p_{a}^{\prime}(r)r\) is increasing (see (4.6)). The lemma follows from the Sobolev inequality
\[\int_{0}^{R}-p_{a}^{\prime}(r)rdr\geq c\left(\int_{0}^{R}p_{a}^{2}(r)rdr \right)^{\frac{1}{2}}\geq c\left(\int_{0}^{R}p_{a}^{2}(r)\,e^{-\frac{Br^{2}}{ 2}}rdr\right)^{\frac{1}{2}}.\qed\]
Before proceeding with our argument, we remark that Lemma 6.1 would not have been useful for dealing with the previous situation in Section 6.2.1.
If \(q(R(1-\mathcal{D}(B,\Omega)^{\alpha}))<s\), then we use the above lemma with \(\varepsilon=\mathcal{D}\left(B,\Omega\right)^{\alpha}\). From our second bound, given in (4.8), of Corollary 4.3, Lemma 6.1, and Lemma B.1 we have
\[\lambda(B,D_{R})\mathcal{D}\left(B,\Omega\right)\geq cBe^{-\frac{BR^{2}}{2}} \mathcal{A}\left(\Omega\right)^{2}\mathcal{D}\left(B,\Omega\right)^{2\alpha}.\]
Again using Lemma A.3 and now that \(BR^{2}>\frac{1}{\pi}\),
\[\mathcal{D}\left(B,\Omega\right)\geq c\frac{e^{-\frac{BR^{2}}{2}}}{1+\left( BR^{2}\right)^{-1}}\mathcal{A}\left(\Omega\right)^{2}\mathcal{D}\left(B,\Omega \right)^{2\alpha}\geq ce^{-\frac{BR^{2}}{2}}\mathcal{A}\left(\Omega\right)^{2} \mathcal{D}\left(B,\Omega\right)^{2\alpha},\]
and therefore
\[\mathcal{D}\left(B,\Omega\right)^{1-2\alpha}\geq ce^{-\frac{BR^{2}}{2}}\mathcal{A} \left(\Omega\right)^{2}. \tag{6.7}\]
With our above choice of \(\alpha\) and \(\beta\), the inequalities in (6.6) and (6.7) both yield the same desired estimate in (2.3). This concludes the proof of Theorem 2.1.
## Acknowledgements
We are most grateful to Soren Fournais for encouraging our collaboration. R.G. first suggested this problem in October 2017 to Michael Loss, whom he thanks for the initial encouragement. This paper is based on work partially supported by the Independent Research Fund Denmark via the project grant "Mathematics of the dilute Bose gas" No. 0135-00166B (L.J. & L.M.).
## Appendix A The Magnetic Laplacian on the Disk
It follows from Erdos' rearrangement inequality and comparison lemma, and from the observation in (4.2) that the principal eigenfunction of the magnetic Laplacian on the disk is radially symmetric.
**Theorem A.1**.: _As above, let \(D_{R}\) be a disk of radius \(R\) centered at the origin. Then_
\[\lambda(B,D_{R})=\inf_{q\in H_{0}^{1,\text{rad}}(D_{R})}\frac{\int_{D_{R}} \left|\left(-i\nabla-\alpha\right)q(\left|x\right|)\right|^{2}dx}{\int_{D_{R}} q\left(\left|x\right|\right)^{2}dx},\]
_where \(H_{0}^{1,\text{rad}}(D_{R}):=\left\{q:[0,R]\to\mathbb{R}\text{ such that }x\mapsto q(\left|x\right|)\text{ belongs to }H_{0}^{1}(D_{R})\right\}.\)_
Thus we write \(\lambda(B,D_{R})\) more conveniently in terms of polar coordinates.
**Lemma A.2**.: _Let \(H_{0}^{1,\text{rad}}(D_{R})\) be as in Theorem A.1. Then_
\[\lambda(B,D_{R})=B+\inf_{q\in H_{0}^{1,\text{rad}}(D_{R})}\frac{2\pi\int_{0}^{R}\left(q^{\prime}(r)+\frac{Br}{2}q(r)\right)^{2}rdr}{2\pi\int_{0}^{R}q(r)^{2}rdr}=:B+\mathfrak{e}\left(Br/2\right).\]
Proof.: First we consider a broader class of vector potentials \(\tilde{\alpha}(x):=\frac{a(\left|x\right|)}{\left|x\right|}\left(-x_{2},x_{1}\right)\) on the disk, with \(a\left(\left|x\right|\right)\) bounded. These correspond to radially symmetric but possibly inhomogeneous magnetic fields that show up in the rearrangement inequality. Written in polar coordinates, \(\tilde{\alpha}(r,\theta)=a(r)\left(-\sin\theta,\cos\theta\right)\) and for \(f\in H_{0}^{1}(D_{R})\)
\[\int_{D_{R}}\left|\left(-i\nabla-\tilde{\alpha}\right)f\right|^{2}dx=\int_{0} ^{R}\int_{0}^{2\pi}\left(\left|\partial_{r}f\right|^{2}+\left|\frac{i}{r} \partial_{\theta}f+af\right|^{2}\right)rd\theta dr.\]
Thus for any \(q\in H_{0}^{1,\text{rad}}(D_{R})\),
\[\int_{D_{R}}\left|\left(-i\nabla-\tilde{\alpha}\right)q(\left|x\right|)\right|^{2}dx =2\pi\int_{0}^{R}\left(q^{\prime}(r)^{2}+\left(a(r)q(r)\right)^{2}\right)rdr\] \[=2\pi\int_{0}^{R}\left(q^{\prime}(r)+a(r)q(r)\right)^{2}rdr-2\pi \int_{0}^{R}\left(q^{2}\right)^{\prime}a(r)rdr,\]
and after integrating by parts
\[\int_{D_{R}}\left|\left(-i\nabla-\tilde{\alpha}\right)q(\left|x \right|)\right|^{2}dx =2\pi\int_{0}^{R}\left(q^{\prime}(r)+a(r)q(r)\right)^{2}rdr+2\pi \int_{0}^{R}q^{2}\left(a(r)r\right)^{\prime}dr\] \[=2\pi\int_{0}^{R}\left(q^{\prime}(r)+a(r)q(r)\right)^{2}rdr+\int_ {D_{R}}\operatorname{rot}\left(\tilde{\alpha}\right)q(\left|x\right|)^{2}dx.\]
Returning to the original potential \(\alpha=\frac{B}{2}\left(-x_{2},x_{1}\right)\), the lemma follows from Theorem A.1, the above calculation and that \(\operatorname{rot}\left(\alpha\right)=B\).
Moreover, Erdos proved the following estimates. See Proposition A.1 in [10].
**Lemma A.3**.: _There are universal constants \(C_{1},C_{2}\) such that_
\[B+\frac{C_{1}}{R^{2}}e^{-\frac{3}{4}BR^{2}}\leq\lambda\left(B,D_{R}\right)\leq B +C_{2}B\left(\frac{1}{BR^{2}}+BR^{2}\right)e^{-\frac{1}{8}BR^{2}}.\]
Improving these estimates is an ongoing area of research; see [2], [9], [16] and references therein. In the absence of a magnetic field, \(\lambda(0,D_{R})=j_{0,1}^{2}R^{-2}\), where \(j_{0,1}\approx 2.4048\) is the first zero of the Bessel function of order zero.
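As a simple consistency check, letting \(B\to 0\) in Lemma A.3 gives

\[\frac{C_{1}}{R^{2}}\leq\lambda\left(0,D_{R}\right)\leq\frac{C_{2}}{R^{2}},\]

in agreement, up to the values of the constants, with the exact value \(j_{0,1}^{2}R^{-2}\).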
## Appendix B Asymmetry of Large Subsets
If a subset is large enough, its asymmetry is comparable to the asymmetry of the whole domain [7],[15].
**Lemma B.1**.: _Let \(U\subseteq\Omega\) with \(\left|U\right|=\pi r^{2}\) and \(\left|\Omega\right|=\pi R^{2}\). If \(\left|U\right|\geq\left|\Omega\right|\left(1-\frac{1}{2}\mathcal{A}\left( \Omega\right)\right)\), then \(r\mathcal{A}\left(U\right)\geq\frac{1}{2}R\mathcal{A}\left(\Omega\right)\)._
Proof.: First we consider the interior asymmetry. From our assumption on the area of \(U\), we have \(\left|U\right|\geq\left|\Omega\right|\left(1-\frac{1}{2}\mathcal{A}_{I}\left( \Omega\right)\right)^{2}\) and thus \(r\geq R\left(1-\frac{1}{2}\mathcal{A}_{I}\left(\Omega\right)\right)\). We then deduce that \(r-\rho_{-}(U)\geq r-\rho_{-}(\Omega)\geq\frac{1}{2}\left(R-\rho_{-}(\Omega)\right)\), which yields the lemma.
Now we turn to the Fraenkel asymmetry. Let \(D_{U}\) and \(D_{\Omega}\) denote two concentric balls such that \(\left|D_{U}\right|=\left|U\right|\) and \(\left|D_{\Omega}\right|=\left|\Omega\right|\). Then, \(\left|D_{\Omega}\triangle\Omega\right|\leq\left|D_{U}\triangle U\right|+2 \left(\left|\Omega\right|-\left|U\right|\right).\) Using this inequality and our assumption on the area of \(U\), we deduce
\[\frac{\left|D_{U}\Delta U\right|}{2\left|U\right|}\geq\frac{\left|D_{\Omega} \Delta\Omega\right|}{2\left|U\right|}-\frac{\left|\Omega\right|-\left|U\right| }{\left|U\right|}\geq\frac{1}{2}\mathcal{A}_{F}(\Omega)\frac{\left|\Omega \right|}{\left|U\right|}\geq\frac{1}{2}\frac{R}{r}\mathcal{A}_{F}(\Omega).\]
Taking the infimum over all translations of \(D_{U}\) concludes the proof.
|
2306.11703 | A Gaussian free field approach to the natural parametrisation of SLE$_4$ | We construct the natural parametrisation of SLE$_4$ using the Gaussian free
field, complementing the corresponding results for SLE$_\kappa$ for $\kappa \in
(0,4)$ by Benoist and for $\kappa \in (4,8)$ by Miller and Schoug. | Vlad Margarint, Lukas Schoug | 2023-06-20T17:33:07Z | http://arxiv.org/abs/2306.11703v2 | # A Gaussian free field approach to the natural parametrisation of \(\mathrm{SLE}_{4}\)
###### Abstract
We construct the natural parametrisation of \(\mathrm{SLE}_{4}\) using the Gaussian free field, complementing the corresponding results for \(\mathrm{SLE}_{\kappa}\) for \(\kappa\in(0,4)\) by Benoist [2] and for \(\kappa\in(4,8)\) by Miller and Schoug [14].
## 1 Introduction
The Schramm-Loewner evolution, \(\mathrm{SLE}_{\kappa}\) (\(\kappa>0\)), introduced in [19] is a one-parameter family of random conformally invariant curves which arise as the scaling limits of statistical mechanics models in two dimensions. When constructing \(\eta\sim\mathrm{SLE}_{\kappa}\) via the Loewner differential equation (see Section 2.2), the curve is parametrised by capacity, that is, if \(K_{t}\) is the set of points in \(\mathbb{H}\) which are separated from \(\infty\) by \(\eta([0,t])\), then \(\mathrm{hcap}(K_{t})=2t\) for each \(t\geq 0\). However, there are other parametrisations which may be equally natural for \(\mathrm{SLE}\). One example is the so-called _natural parametrisation_ of \(\mathrm{SLE}\), constructed in [11] for \(\kappa<4(7-\sqrt{33})\) and then in [13] for all \(\kappa<8\). This is the parametrisation which is conjectured to arise when considering \(\mathrm{SLE}_{\kappa}\) as a scaling limit of a discrete model where the discrete interface is parametrised by the number of vertices it traverses (so far this has only been proved to be the case when considering the scaling limit of loop-erased random walk [12]). It turns out that the natural parametrisation of \(\mathrm{SLE}_{\kappa}\) is in fact (a deterministic constant times) the \(d_{\kappa}\)-dimensional Minkowski content of \(\mathrm{SLE}_{\kappa}\), where \(d_{\kappa}=1+\kappa/8\) is the almost sure Hausdorff dimension of \(\mathrm{SLE}_{\kappa}\)[1].
In [2] the natural parametrisation of \(\mathrm{SLE}_{\kappa}\) for \(\kappa\in(0,4)\) was constructed using the Gaussian free field (GFF). More precisely, it was constructed as a conditional expectation of a certain quantum length measure on \(\mathrm{SLE}_{\kappa}\). The same was done in [14] in the case of \(\kappa\in(4,8)\), when the curves are non-simple. The goal of this paper is to complete this picture by providing the corresponding construction in the case \(\kappa=4\). The difference in this case compared to \(\kappa\neq 4\), is that in order to cut and weld Liouville quantum gravity (LQG) surfaces with \(\eta\sim\mathrm{SLE}_{4}\), one must choose the LQG parameter \(\gamma=2\), which is the critical value.
For a simply connected domain \(D\), denote by \(r_{D}(z)\) the conformal radius of \(D\) as seen from \(z\in D\).
**Theorem 1.1**.: _Let \(\eta\sim\mathrm{SLE}_{4}\) and \((f_{t})\) be its centred Loewner chain. Let \(h^{0}\) be a zero-boundary GFF on \(\mathbb{H}\) independent of \(\eta\) and for each \(t>0\), define \(h^{t}=h^{0}\circ f_{t}^{-1}+2\log|(f_{t}^{-1})^{\prime}|\). We define the measure \(\mu^{0}\) on \(\eta\) by_
\[\mu^{0}|_{\eta([0,t])}(dz)=F(z)\mathbb{E}[\nu_{h^{t}}\,|\,\eta]\circ f_{t}(dz),\]
_where \(F(z)=r_{\mathbb{H}}(z)^{-1/2}\). Then, there exists a (deterministic) constant \(C>0\) such that a.s. \(C\mu^{0}\) is the natural parametrisation of \(\eta\)._
### Related work
There have been many results concerning natural measures on random fractals. We already mentioned the results on the natural parametrisation of SLE in [11, 13, 9, 2, 14]. Moreover, in [14], a natural conformally covariant measure on \(\mathrm{CLE}_{\kappa}\), \(\kappa\in(8/3,8)\), was constructed and proved to be unique (up to multiplicative constant). Moreover, in [3], natural conformally covariant measures on several fractals (cut points and boundary touching points of \(\mathrm{SLE}_{\kappa}\), \(\kappa>4\), CLE pivotal points and carpet/gasket) were constructed and in [8] the natural measure on cut points of \(\mathrm{SLE}_{\kappa}\) for \(\kappa>4\) was studied further and proved to have bounded moments.
Another related result is the construction of a family of random measures on the so-called two-valued sets (TVS) of the GFF, carried out in [18]. This is done similarly to the measures above, using the imaginary multiplicative chaos (that is, a version of LQG where the real parameter \(\gamma\) is replaced by \(i\sigma\) for some \(\sigma\in(0,\sqrt{2})\)) and it was shown that if the conformal Minkowski contents of the TVS exist, then they are equal (up to deterministic constants) to the constructed measures.
### Outline
Section 2 contains the necessary preliminaries and in Section 3 we prove the main theorem. The latter section begins with the proof of the consistency of the definition of \(\mu^{0}\) for different \(t>0\), after which the rest of the section is divided into two subsections. In Section 3.1 we prove that \(\mu^{0}\) is almost surely locally finite as a measure on \(\mathbb{H}\) and in Section 3.2 we prove that \(\mu^{0}\) is conformally covariant and use this together with the local finiteness of \(\mu^{0}\) on \(\mathbb{H}\) to deduce the local finiteness of \(\mu^{0}\) as a measure on \(\mathrm{SLE}_{4}\). The main result then follows.
### Notation
For any quantities \(a,b\), we write \(a\lesssim b\) to mean \(a\leq Cb\) for some constant \(C>0\) which does not depend on any of the parameters of interest. Moreover, we write \(a\gtrsim b\) if \(b\lesssim a\). Finally, we write \(a\asymp b\) if \(a\lesssim b\) and \(a\gtrsim b\).
We denote by \(\mathbb{R}\) the real line and \(\mathbb{C}\) the complex plane. Moreover, we write \(\mathbb{H}\) for the upper half-plane \(\{z\in\mathbb{C}\colon\mathrm{Im}(z)>0\}\) and \(\mathbb{D}\) for the unit disk \(\{z\in\mathbb{C}\colon|z|<1\}\).
We denote two-dimensional Lebesgue measure by \(dz\).
### Acknowledgements
We thank Hao Wu for valuable input. LS was supported by the Finnish Academy Centre of Excellence FiRST and the ERC starting grant 804166 (SPRS). VM acknowledges the support of University of Colorado Boulder.
## 2 Preliminaries
### Random measures
A random measure is a random element in a space of measures on some Borel space, in our case \(\mathbb{C}\). For a random measure \(\xi\), we define the intensity \(\mathbb{E}[\xi]\) of \(\xi\) by \(\mathbb{E}[\xi](A)=\mathbb{E}[\xi(A)]\) for all Borel sets \(A\subset\mathbb{C}\).
Moreover, for a \(\sigma\)-algebra \(\mathscr{G}\), we define the conditional intensity of \(\xi\) given \(\mathscr{G}\) to be the random measure given by \(\mathbb{E}[\xi\,|\,\mathscr{G}](A)=\mathbb{E}[\xi(A)\,|\,\mathscr{G}]\) for each Borel set \(A\subset\mathbb{C}\). For more on random measures, see [6] (note that while they require local finiteness as a part of the definition, this property will be shown to hold for the measure we construct).
### Schramm-Loewner evolution
Consider the Loewner differential equation
\[\partial_{t}g_{t}(z)=\frac{2}{g_{t}(z)-W_{t}},\quad g_{0}(z)=z, \tag{1}\]
with \(W_{t}=\sqrt{\kappa}B_{t}\), where \(\kappa>0\) and \(B_{t}\) is a standard Brownian motion. There exists a solution \(g_{t}(z)\) to (1) for each time \(t\in[0,T_{z})\), where \(T_{z}=\inf\{t\geq 0\colon g_{t}(z)-W_{t}=0\}\). We let \(K_{t}=\{z\in\mathbb{H}\colon T_{z}\leq t\}\) and note that \((g_{t})_{t\geq 0}\) is a family of conformal maps \(g_{t}\colon\mathbb{H}\setminus K_{t}\to\mathbb{H}\) such that \(\lim_{z\to\infty}g_{t}(z)-z=0\), called the SLE\({}_{\kappa}\) Loewner chain. By [16] (for \(\kappa\neq 8\)) and [10] (for \(\kappa=8\)) there exists a continuous curve \(\eta\) such that \(K_{t}\) is the set of points separated from infinity by \(\eta([0,t])\). This curve \(\eta\) is an SLE\({}_{\kappa}\) in \(\mathbb{H}\) from \(0\) to \(\infty\). The behaviour of \(\eta\) is heavily influenced by the value of \(\kappa\): if \(\kappa\leq 4\), then \(\eta\) is a.s. simple; if \(\kappa\in(4,8)\), then \(\eta\) a.s. intersects its past as well as the boundary; and if \(\kappa\geq 8\), then \(\eta\) is a.s. space-filling. It turns out that \(\eta\) a.s. has Hausdorff dimension \(d_{\kappa}=1+\kappa/8\) for \(\kappa\in(0,8)\), see [1].
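As a simple illustration of (1), not needed in what follows: for the constant driving function \(W\equiv 0\), the solution (with the branch of the square root behaving like \(z\) at infinity) is

\[g_{t}(z)=\sqrt{z^{2}+4t}=z+\frac{2t}{z}+O(|z|^{-3}),\]

so that \(K_{t}\) is the vertical segment \((0,2i\sqrt{t}\,]\) and indeed \(\mathrm{hcap}(K_{t})=2t\).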
Denote by \((f_{t})_{t\geq 0}\) the centred Loewner chain of \(\eta\sim\text{SLE}_{\kappa}\), that is, for each \(t\geq 0\) and all \(z\in\mathbb{H}\), \(f_{t}(z)=g_{t}(z)-W_{t}\). An important property of SLE is that the law of \(\eta\) is scale invariant and satisfies the domain Markov property, that is, for any a.s. finite stopping time \(\tau\), the law of \(\eta^{\tau}(u)=f_{\tau}(\eta(\tau+u))\) is SLE\({}_{\kappa}\).
#### 2.2.1 Natural parametrisation of SLE
We now recall some basic facts about the natural parametrisation of SLE. The natural parametrisation of SLE was constructed in [11, 13] and in [9] shown to be equal to (a deterministic constant times) the Minkowski content of SLE which they also proved exists (we shall discuss briefly what this means below). We recall that the \(d\)-dimensional Minkowski content of a set \(A\subset\mathbb{H}\) is (if it exists) defined as the limit
\[\text{Cont}_{d}(A)=\lim_{r\to 0}r^{d-2}\text{Area}(\{z\in\mathbb{H}\colon \text{dist}(z,A)\leq r\}).\]
For \(\eta\sim\text{SLE}_{\kappa}\) we let \(\mathcal{M}\) denote the \(d_{\kappa}\)-dimensional Minkowski content of \(\eta\) (where \(d_{\kappa}=1+\kappa/8\)), that is, \(\mathcal{M}\) is the measure \(\mathcal{M}(D)=\text{Cont}_{d_{\kappa}}(\eta\cap D)\). Typically, the Minkowski content of SLE is viewed as a parametrisation of \(\eta\), that is, one can parametrise \(\eta\) so that \(\mathcal{M}(\eta([s,t]))=t-s\) for all \(0<s<t\).
Furthermore, it is proved that if we denote by \((f_{t})\) the centred Loewner chain for \(\eta\), then
\[\mathbb{E}[\mathcal{M}(D)]=\int_{D}G_{\kappa}(z)dz,\quad\mathbb{E }[\mathcal{M}(D)^{2}]=\iint_{D^{2}}G_{\kappa}(z,w)dzdw,\] \[\mathbb{E}[\mathcal{M}(D)\,|\,\eta([0,t])]=\mathcal{M}|_{\eta([0,t])}(D)+\int_{D}|f^{\prime}_{t}(z)|^{2-d}G_{\kappa}(f_{t}(z))dz,\]
where \(G_{\kappa}(z)=c_{\kappa}\sin^{\frac{8}{\kappa}-1}(\arg z)\text{Im}(z)^{d-2}\) and \(c_{\kappa}>0\) and
\[G_{\kappa}(z,w)=\lim_{r\to 0}r^{2(d-2)}\mathbb{P}(\text{dist}(z,\eta)\leq r,\ \text{ dist}(w,\eta)\leq r).\]
Next, we recall a characterisation of the natural parametrisation which we will use to identify the measure that we will construct with the natural parametrisation. Again, let \((f_{t})\) be the centred Loewner chain of \(\eta\sim\mathrm{SLE}_{\kappa}\) and for each \(t>0\), let \(\psi_{t}=f_{t}^{-1}\). For each \(t>0\), we define the "unzipped" curve \(\eta^{t}\) by
\[\eta^{t}(u)=f_{t}(\eta(t+u)),\quad u>0, \tag{2}\]
write \(\eta^{0}=\eta\) and recall that \(\eta^{t}\sim\mathrm{SLE}_{\kappa}\). Moreover, for a \(d_{\kappa}\)-dimensional volume measure \(\mu\) on \(\eta\), we define
\[\mu^{t}(dz)=|\psi_{t}^{\prime}(z)|^{-d_{\kappa}}\mu\circ\psi_{t}(dz), \tag{3}\]
and note that \(\mu^{0}=\mu\). The following uniqueness result was proved in [11].
**Theorem 2.1**.: _Fix \(\kappa\in(0,8)\), set \(d_{\kappa}=1+\kappa/8\), let \(\eta\sim\mathrm{SLE}_{\kappa}\) and let \(\mu\) be an a.s. locally finite measure on \(\eta\). If \((\mu^{t},\eta^{t})\stackrel{{ d}}{{=}}(\mu,\eta)\) for all \(t\geq 0\), then there is a deterministic constant \(C>0\) such that \(C\mu\) is the natural parametrisation of \(\eta\)._
### Gaussian free field
Let \(D\) be a Jordan domain and let \(H_{0}(D)\) be the Hilbert space closure of the space \(C_{0}^{\infty}(D)\) of smooth functions with support compactly contained in \(D\), under the Dirichlet inner product
\[(f,g)_{\nabla}=\frac{1}{2\pi}\int_{D}\nabla f(z)\cdot\nabla g(z)dz.\]
Let \((\phi_{n})_{n\geq 1}\) be an orthonormal basis of \(H_{0}(D)\) and consider a sequence \((\alpha_{n})_{n\geq 1}\) of i.i.d. \(N(0,1)\) random variables. The zero-boundary GFF \(h\) on \(D\) is defined by the sum \(h=\sum_{n\geq 1}\alpha_{n}\phi_{n}\). The law of \(h\) does not depend on the choice of orthonormal basis and it is conformally invariant in the sense that if \(\varphi\colon\widetilde{D}\to D\) is a conformal map, then \(\widetilde{h}=h\circ\varphi\) is a zero-boundary GFF in \(\widetilde{D}\).
The zero-boundary GFF satisfies a domain Markov property. Indeed, if \(U\subset D\) is open, then we have the orthogonal decomposition \(H_{0}(D)=H_{0}(U)\oplus H_{0}^{\perp}(U)\), where \(H_{0}^{\perp}(U)\) is the space of functions \(f\in H_{0}(D)\) which are harmonic on \(U\). It follows that we can decompose \(h\) as \(h=h_{U}+h_{U}^{\perp}\), where \(h_{U}\) is a zero-boundary GFF on \(U\) and \(h_{U}^{\perp}\) is a distribution which agrees with \(h\) on \(D\setminus U\), is harmonic on \(U\) and is independent of \(h_{U}\). We think of \(h_{U}^{\perp}|_{U}\) as the harmonic extension of the values of \(h\) on \(\partial U\) to \(U\). With this in mind, one can also define a GFF with boundary data \(F\) as \(h+f\), where \(f\) is the harmonic extension of \(F\) to \(D\).
One can, equivalently, define the zero-boundary GFF as a centred Gaussian process \(h:H_{0}(D)\to L^{2}(D)\) with correlation kernel given by the Dirichlet Green's function for \(D\). This means that \((h,f)\), \(f\in H_{0}(D)\), is a collection of centred Gaussians with covariance given by \(\mathbb{E}[(h,f)(h,g)]=\int_{D\times D}f(z)G_{D}(z,w)g(w)dzdw\), where \(G_{D}\) is the Green's function on \(D\) for Dirichlet boundary data.
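With this normalisation of the Dirichlet inner product, the Green's function of the upper half-plane is

\[G_{\mathbb{H}}(z,w)=\log\left|\frac{z-\overline{w}}{z-w}\right|,\]

so that \(G_{\mathbb{H}}(z,w)+\log|z-w|\to\log\left(2\operatorname{Im}(z)\right)=\log r_{\mathbb{H}}(z)\) as \(w\to z\); this is the form in which the Green's function and the conformal radius \(r_{\mathbb{H}}\) from Theorem 1.1 enter the computations in Section 3.2.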
We let \(H(D)\) denote the Hilbert space closure of the set of functions \(f\in C^{\infty}(D)\) with \((f,f)_{\nabla}<\infty\) such that \(\int_{D}fdz=0\) (we do not require that the functions have compact support), with respect to \((\cdot,\cdot)_{\nabla}\). A free boundary GFF is defined in the same way as a zero-boundary GFF, with an orthonormal basis of \(H(D)\) replacing that of \(H_{0}(D)\). Similarly, the law of the free boundary GFF is independent of the choice of orthonormal basis and it is conformally invariant.
In the interior of the domain, the laws of a zero-boundary GFF and a free boundary GFF are mutually absolutely continuous. In fact, we may decompose a free boundary GFF \(h^{f}\) as \(h^{f}=h^{0}+\mathfrak{h}\), where \(h^{0}\) is a zero-boundary GFF and \(\mathfrak{h}\) is a harmonic function, independent of \(h^{0}\).
**Remark 2.2** (Radial/lateral decomposition in \(\mathbb{H}\)).: We have the orthogonal decomposition \(H(\mathbb{H})=H_{\mathrm{R}}(\mathbb{H})\oplus H_{\mathrm{L}}(\mathbb{H})\), where \(H_{\mathrm{R}}(\mathbb{H})\) (resp. \(H_{\mathrm{L}}(\mathbb{H})\)) is the space of functions in \(H(\mathbb{H})\) which are constant (resp. has mean zero) on each semicircle \(\partial B(0,s)\).
If \(h\) is a free boundary GFF on \(\mathbb{H}\), then the projection of \(h\) onto \(H_{\mathrm{R}}(\mathbb{H})\) is called the radial part of \(h\) and if \(h_{\mathrm{R}}(z)=h_{|z|}(0)\) denotes the average value of \(h\) on \(\partial B(0,|z|)\), then \((h_{\mathrm{R}}(e^{-t}))_{t\in\mathbb{R}}\) has the same law as \((B_{2t})_{t\in\mathbb{R}}\) where \(B\) is a two-sided Brownian motion with \(B_{0}=0\). The projection of \(h\) onto \(H_{\mathrm{L}}(\mathbb{H})\) is called the lateral part of \(h\).
### Liouville quantum gravity
Fix \(\gamma\in(0,2]\). A \(\gamma\)-Liouville quantum gravity (\(\gamma\)-LQG) surface is a law on equivalence classes of pairs \((D,h)\) where \(D\) is a simply connected domain and \(h\) a distribution on \(D\), such that \((D,h)\) and \((\widetilde{D},\widetilde{h})\) are equivalent if there exist a conformal map \(\psi\colon\widetilde{D}\to D\) such that
\[\widetilde{h}=h\circ\psi+Q_{\gamma}\log|\psi^{\prime}|,\quad Q_{\gamma}=\frac{ 2}{\gamma}+\frac{\gamma}{2}. \tag{4}\]
Typically, \(h\) is a random distribution which looks locally like a GFF. One can also define quantum surfaces with marked points. We say that \((D,h,z_{1},\ldots,z_{n})\) and \((\widetilde{D},\widetilde{h},\widetilde{z}_{1},\ldots,\widetilde{z}_{n})\) are equivalent if \((D,h)\) and \((\widetilde{D},\widetilde{h})\) are equivalent as quantum surfaces and \(\psi(\widetilde{z}_{j})=z_{j}\) for all \(1\leq j\leq n\).
An LQG surface comes naturally equipped with an area measure and a length measure, but we shall only need the latter. Consider a 2-LQG surface embedded into \(\mathbb{H}\) and for \(\varepsilon>0\) and \(x\in\mathbb{R}\), we denote by \(h_{\varepsilon}(x)\) the average value of \(h\) on \(\partial B(x,\varepsilon)\cap\mathbb{H}\) (this makes sense for any \(h\) which is locally absolutely continuous with respect to a GFF). Then we define the boundary length measure to be
\[\nu_{h}(dx)=\lim_{\varepsilon\to 0}\varepsilon\bigg{(}\log(1/\varepsilon)- \frac{h_{\varepsilon}(x)}{2}\bigg{)}e^{h_{\varepsilon}(x)}dx. \tag{5}\]
By the conformal coordinate change (4) one can then define the quantum length of boundaries in arbitrary simply connected domains. We note that while \(\nu_{h}\) is a measure on \(\partial D\), it can actually be used to measure the length of curves inside \(D\) as well. Indeed, if \(\eta\) is a curve in \(D\) and \(f\colon D\setminus\eta\to D\) is a conformal map which extends to the boundary (in the sense of prime ends), then letting \(\eta^{L}\) (resp. \(\eta^{R}\)) denote the left (resp. right) side of \(\eta\), we define the length of \(\eta^{q}\) as \(\nu_{\widetilde{h}}(f(\eta^{q}))\), where \(\widetilde{h}=h\circ f^{-1}+2\log|(f^{-1})^{\prime}|\) and \(q\in\{L,R\}\) (note here that \(Q_{2}=2\)).
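As a simple instance of (4) with \(\gamma=2\) (so that \(Q_{2}=2\)), which is essentially the form in which it is applied in the proof of Lemma 3.2 below: for the dilation \(\psi(z)=Rz\), \(R>0\), of \(\mathbb{H}\), the fields \(h\) and

\[\widetilde{h}=h\circ\psi+2\log R\]

describe the same quantum surface, and corresponding boundary arcs have the same quantum length, that is, \(\nu_{\widetilde{h}}(I)=\nu_{h}(\psi(I))\) for boundary intervals \(I\subset\mathbb{R}\).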
Next, we recall the definition of a quantum wedge.
**Definition 2.3**.: A \((\gamma,\alpha)\)-quantum wedge, \(\alpha\in(-\infty,Q_{\gamma})\) is a doubly marked \(\gamma\)-LQG surface \(\mathcal{W}=(\mathbb{H},h,0,\infty)\) such that the projection \(h_{\mathrm{L}}\) of \(h\) on \(H_{\mathrm{L}}(\mathbb{H})\) has the law of the lateral part of a free boundary GFF on \(\mathbb{H}\) and such that if \(X_{t}\) denotes the average value of \(h\) on \(\partial B(0,e^{-t})\), then \(X\) is as follows.
* \((X_{t})_{t\geq 0}\) has the law of \((B_{2t}+\alpha t)_{t\geq 0}\), where \(B\) is a standard Brownian motion with \(B_{0}=0\), conditioned so that \(B_{2t}+\alpha t\leq Q_{\gamma}t\) for all \(t\geq 0\).
* \((X_{t})_{t\leq 0}\) has the law of \((\widehat{B}_{-2t}+\alpha t)_{t\leq 0}\), where \(\widehat{B}\) is a standard Brownian motion with \(\widehat{B}_{0}=0\).
* \(h_{\mathrm{L}}\), \((X_{t})_{t\leq 0}\) and \((X_{t})_{t\geq 0}\) are independent.
We then write \((\mathbb{H},h,0,\infty)\sim\mathsf{QWedge}^{\boldsymbol{\alpha}=\alpha}_{ \boldsymbol{\gamma}=\gamma}\). The particular embedding in this definition is called the last exit parametrisation.
**Remark 2.4**.: Sometimes it is useful to parametrise a quantum wedge by the strip instead. This is done using the conformal coordinate change (4) with \(\psi(z)=e^{-z}\). Then, a last exit parametrisation embedding of \((h,\mathscr{S},-\infty,+\infty)\) can be sampled by letting \(h=X+h_{\mathrm{L}}\) where \(h_{\mathrm{L}}\), the projection of \(h\) onto \(H_{\mathrm{L}}(\mathscr{S})\), has the law of the lateral part of a free boundary GFF in \(\mathscr{S}\) and \(X_{t}\) is the average value of \(h\) on \(\{t\}\times[0,\pi]\), defined as follows.
* \((X_{t})_{t\geq 0}\) has the law of \((B_{2t}-(Q_{\gamma}-\alpha)t)_{t\geq 0}\), where \(B\) is a standard Brownian motion with \(B_{0}=0\), conditioned so that \(B_{2t}-(Q_{\gamma}-\alpha)t\leq 0\) for all \(t\geq 0\).
* \((X_{t})_{t\leq 0}\) has the law of \((\widetilde{B}_{-2t}-(Q_{\gamma}-\alpha)t)_{t\leq 0}\), where \(\widetilde{B}\) is a standard Brownian motion with \(\widetilde{B}_{0}=0\).
* \(h_{\mathrm{L}}\), \((X_{t})_{t\leq 0}\) and \((X_{t})_{t\geq 0}\) are independent.
**Remark 2.5**.: Some care has to be taken when conditioning on a zero probability event. Fix \(a>0\) and let \((U_{t})_{t\geq 0}\) be \((B_{t}-at)_{t\geq 0}\) "conditioned to stay negative". By this we mean the weak limit as \(\varepsilon\to 0\) of the processes \((B_{t}-at)_{t\geq 0}\) conditioned to stay below \(\varepsilon\). A sample from this law can be drawn by sampling a standard Brownian motion \(\widehat{B}_{t}\), letting \(\tau=\sup\{t\geq 0:\widehat{B}_{t}-at\geq 0\}\) (which is a.s. finite) and setting \(U_{t}=\widehat{B}_{\tau+t}-a(\tau+t)\).
## 3 Natural measure
For any field \(h\) we let \(\nu_{h}\) denote the 2-LQG boundary measure associated with \(h\), defined in (5). Moreover, we let \(\eta\) be an SLE\({}_{4}\) in \(\mathbb{H}\) from \(0\) to \(\infty\) and \((f_{t})\) its centred Loewner chain. For \(t\geq 0\) we denote by \(\eta^{t}\) the curve \(\eta^{t}(u)=f_{t}(\eta(t+u))\) and note that \(\eta^{0}=\eta\). Moreover, we write \(\psi_{t}=f_{t}^{-1}\) and let \(f_{s,t}=f_{t}\circ\psi_{s}\) and \(\psi_{s,t}=f_{s,t}^{-1}=f_{s}\circ\psi_{t}\).
We let \(h^{0}\) be a zero-boundary GFF independent of \(\eta\) and for each \(t>0\) let
\[h^{t}=h^{0}\circ\psi_{t}+2\log|\psi_{t}^{\prime}| \tag{6}\]
be the field on \(\mathbb{H}\) formed by unzipping \(h^{0}\) along \(\eta^{0}\). Then, we define the measure \(\mu^{0}\) on \(\eta^{0}\) by
\[\mu^{0}|_{\eta^{0}([0,t])}(dz)=F(z)\mathbb{E}[\nu_{h^{t}}\,|\,\eta^{0}]\circ f _{t}(dz), \tag{7}\]
where \(F(z)=r_{\mathbb{H}}(z)^{-1/2}\), and \(r_{\mathbb{H}}(z)\) denotes the conformal radius of \(\mathbb{H}\) seen from \(z\). An identity that we will use several times is the following: if \(\varphi\) is a conformal map, then (differentiating \(\varphi(\varphi^{-1}(z))=z\))
\[\varphi^{\prime}(\varphi^{-1}(z))=\frac{1}{(\varphi^{-1})^{\prime}(z)}. \tag{8}\]
We begin by proving that this definition of \(\mu^{0}\) is consistent for different values of \(t\).
**Lemma 3.1**.: _The definition of \(\mu^{0}\) given in (7) is consistent for different values of \(t\). That is, if \(0<s<t\), then for each \(A\subset[0,s]\), we a.s. have that_
\[\int_{\eta^{0}(A)}F(z)\mathbb{E}[\nu_{h^{s}}\,|\,\eta^{0}]\circ f_{s}(dz)=\int _{\eta^{0}(A)}F(z)\mathbb{E}[\nu_{h^{t}}\,|\,\eta^{0}]\circ f_{t}(dz).\]
Proof.: If \(0<s<t\), then \(h^{t}=h^{s}\circ\psi_{s,t}+2\log|\psi^{\prime}_{s,t}|\). Thus,
\[\nu_{h^{t}}\circ f_{t}(dz) =\nu_{h^{s}\circ\psi_{s,t}+2\log|\psi^{\prime}_{s,t}|}\circ f_{t}(dz)\] \[=\left[\lim_{\varepsilon\to 0}\varepsilon\left(\log(1/ \varepsilon)-\frac{(h^{s}\circ\psi_{s,t})_{\varepsilon}+2\log|\psi^{\prime}_{s,t}|}{2}\right)e^{(h^{s}\circ\psi_{s,t})_{\varepsilon}+2\log|\psi^{\prime}_{s,t}|}\right]\circ f_{s,t}\circ f_{s}(dz)\] \[=\left[\lim_{\varepsilon\to 0}\varepsilon\Bigg{(}\log(1/ \varepsilon)-\frac{h^{s}_{\varepsilon/|f^{\prime}_{s,t}|}-2\log|f^{\prime}_{s,t}|}{2}\Bigg{)}e^{h^{s}_{\varepsilon/|f^{\prime}_{s,t}|}-2\log|f^{\prime}_{s,t}|}\right]\circ f_{s}(dz)\] \[=\left[\lim_{\varepsilon\to 0}\frac{\varepsilon}{|f^{\prime}_{s,t}|} \Bigg{(}\log(|f^{\prime}_{s,t}|/\varepsilon)-\frac{h^{s}_{\varepsilon/|f^{ \prime}_{s,t}|}}{2}\Bigg{)}e^{h^{s}_{\varepsilon/|f^{\prime}_{s,t}|}}\right] \circ f_{s}(dz)\] \[=\nu_{h^{s}}\circ f_{s}(dz),\]
where in the third equality we used (8) and the fact that as \(\varepsilon\to 0\), the preimages of circles of radius \(\varepsilon\) around \(f_{t}(z)\) under \(f_{s,t}\) are (roughly) circles of radius \(\varepsilon/|f^{\prime}_{s,t}(f_{s}(z))|\) around \(f_{s}(z)\). This concludes the proof.
The rest of this section is divided into two subsections, the first of which is focused on establishing that \(\mu^{0}\) is locally finite as a measure on \(\mathbb{H}\), that is, for each compact \(K\subset\mathbb{H}\), we have that \(\mu^{0}(K)<\infty\) a.s. The second subsection is devoted to proving the conformal covariance of \(\mu^{0}\) and then to deducing the local finiteness of \(\mu^{0}\) as a measure on \(\eta\), that is, for each compact \(I\subset(0,\infty)\), we have that \(\mu^{0}(\eta^{0}(I))<\infty\) a.s.
### Local finiteness on \(\mathbb{H}\)
**Lemma 3.2**.: _Almost surely, \(\mu^{0}\) is locally finite as a measure on \(\mathbb{H}\). That is, \(\mathbb{P}[\mu^{0}(K)<\infty]=1\) for each \(K\subset\mathbb{H}\) compact._
We begin by noting that we can decompose a \((2,1)\)-quantum wedge as the sum of a zero-boundary GFF and a harmonic function. For the proof, we refer to [8] (the parametrisation is slightly different but the proof is the same).
**Lemma 3.3**.: _Let \((\mathbb{H},h^{w},0,\infty)\sim\mathsf{QWedge}_{\gamma=2}^{\boldsymbol{\alpha} =1}\) have the last exit parametrisation. Then, we can write \(h^{w}=h^{0}+\mathfrak{h}\), where \(h^{0}\) is a zero-boundary GFF on \(\mathbb{H}\) and \(\mathfrak{h}\) is a harmonic function on \(\mathbb{H}\) which is independent of \(h^{0}\)._
The following moment bound is the key ingredient in the proof of Lemma 3.2. The reason for the choice of quantum surface is that \(\mathsf{QWedge}_{\gamma=2}^{\boldsymbol{\alpha}=1}\) is natural in the context of \(\mathrm{SLE}_{4}\), as if we unzip such a surface along an \(\mathrm{SLE}_{4}\) curve for a finite time, then the resulting law is still that of a \(\mathsf{QWedge}_{\gamma=2}^{\boldsymbol{\alpha}=1}\), see [5, Theorem 1.5].
**Lemma 3.4**.: _Let \(\eta^{0}\sim\mathrm{SLE}_{4}\) and let \((\mathbb{H},h^{w},0,\infty)\sim\mathsf{QWedge}_{\gamma=2}^{\boldsymbol{\alpha}=1}\) have the last exit parametrisation and be independent of \(\eta^{0}\). Set \(h^{w,t}=h^{w}\circ\psi_{t}+2\log|\psi^{\prime}_{t}|\). Then there exists \(p\in(0,1)\) such that for each \(t\geq 0\),_
\[\mathbb{E}[\nu_{h^{w,t}}(f_{t}(\mathbb{D}\cap\mathbb{H}))^{p}]<\infty.\]
Before proving Lemma 3.4, we begin by providing some moment bounds.
**Lemma 3.5**.: _Let \((\mathscr{S},h,-\infty,+\infty)\sim\mathsf{QWedge}_{\gamma=2}^{\boldsymbol{\alpha}=1}\) have the last exit parametrisation and, for each \(k\in\mathbb{Z}\), let \(I_{k}=[k-1,k]\times\{0,\pi\}\). For each \(p\in(0,1)\) there exists a constant \(C_{p}>0\) such that if \(k\geq 1\), then \(\mathbb{E}[\nu_{h}(I_{k})^{p}]\leq C_{p}\), and if \(k\leq 0\), then_
\[\mathbb{E}[\nu_{h}(I_{k})^{p}]\leq C_{p}e^{-(4p^{2}+2p)k}.\]
Proof.: We let \(h=X+h_{\mathrm{L}}\) be the decomposition into the average on vertical lines process \(X\) and the lateral part \(h_{\mathrm{L}}\) of \(h\). By [7, Lemma A.4] we have that \(\mathbb{E}[\nu_{h_{\mathrm{L}}}(I_{k})^{p}]<\infty\) for all \(k\in\mathbb{Z}\) and \(p\in(0,1)\). Moreover, we have that
\[\nu_{h}(I_{k})\leq\nu_{h_{\mathrm{L}}}(I_{k})\exp\Bigl{(}2\sup_{ t\in[k-1,k]}X_{t}\Bigr{)}. \tag{9}\]
Since \(X_{t}\) is non-positive whenever \(t\geq 0\), the first assertion follows. Consider now the case \(k\leq 0\), so that the Brownian motion in the process \(X\) is not conditioned on any event. Then,
\[\sup_{t\in[k-1,k]}X_{t}\leq 1-k+\sup_{t\in[k-1,k]}\widetilde{B}_{2t}=1-k+ \widetilde{B}_{-2k}+\sup_{t\in[k-1,k]}\widetilde{B}_{-2t}-\widetilde{B}_{-2k}.\]
Thus, since \(\widetilde{B}\) has stationary and independent increments and for a Brownian motion \(\widehat{B}\) and any \(c>0\), the expectation \(\mathbb{E}[\exp(c\sup_{t\in[0,2]}\widehat{B}_{t})]\) is finite, it follows from (9) that for \(p\in(0,1)\),
\[\mathbb{E}[\nu_{h}(I_{k})^{p}] \leq\mathbb{E}[\nu_{h_{\mathrm{L}}}(I_{k})^{p}]\mathbb{E}\Biggl{[} \exp\Biggl{(}2p\sup_{t\in[k-1,k]}X_{t}\Biggr{)}\Biggr{]}\] \[=e^{2p(1-k)}\mathbb{E}[\nu_{h_{\mathrm{L}}}(I_{k})^{p}]\mathbb{E }[\exp(2pB_{-2k})]\mathbb{E}\Biggl{[}\exp\Biggl{(}2p\sup_{t\in[0,2]}\widehat{B }_{t}\Biggr{)}\Biggr{]}\] \[\leq C_{p}e^{-(4p^{2}+2p)k}\]
where \(C_{p}\) is some positive constant depending only on \(p\).
Proof of Lemma 3.4.: Let \(\tau=\sup\{t\geq 0:\eta^{0}(t)\in\mathbb{D}\}\). We note that by conformal invariance,
\[\nu_{h^{w,s}}(f_{s}(\mathbb{D}\cap\mathbb{H}))\leq\nu_{h^{w,\tau}}(f_{\tau}(\mathbb{D}\cap\mathbb{H}))=\nu_{h^{w,t}}(f_{t}(\mathbb{D}\cap\mathbb{H}))\]
whenever \(s\leq\tau\leq t\). By [20, Theorem 4.1] we have that
\[\mathbb{P}\biggl{[}\sup_{0\leq t\leq\tau}|\eta^{0}(t)|\geq R \biggr{]}\lesssim R^{-2}. \tag{10}\]
We let \(\tau_{R}=\inf\{t\geq 0:|\eta^{0}(t)|\geq R\}\) and note that for all \(0\leq t\leq\tau_{R}\), \(|f_{t}(0^{-})|,|f_{t}(0^{+})|\leq 4R\). Indeed, this follows immediately from [17, Equation (8)] and a comparison with the compact \(\mathbb{H}\)-hull \(R\overline{\mathbb{D}}\cap\mathbb{H}\). We shall now bound moments of the quantum length of \([-4R,4R]\).
We first bound the \(p\)th moment of the quantum length of \([-4R,-1]\cup[1,4R]\). As above, let \(I_{k}=[k-1,k]\times\{0,\pi\}\). Then, letting \((\mathscr{S},h^{S},-\infty,+\infty)\sim\mathsf{QWedge}_{\gamma=2}^{\boldsymbol{\alpha}=1}\), this corresponds to bounding \(\mathbb{E}[\nu_{h^{S}}([-\log 4R,0]\times\{0,\pi\})^{p}]\). By Lemma 3.5 and the inequality \((\sum_{j}x_{j})^{p}\leq\sum_{j}x_{j}^{p}\) (for \(x_{j}>0\), \(0<p<1\)),
\[\mathbb{E}[\nu_{h^{S}}([-\log 4R,0]\times\{0,\pi\})^{p}] \leq\sum_{k=0}^{\lceil\log 4R\rceil}\mathbb{E}[\nu_{h^{S}}(I_{-k})^{p} ]\leq C_{p}\sum_{k=0}^{\lceil\log 4R\rceil}e^{(4p^{2}+2p)k}\] \[\lesssim\int_{0}^{\lceil\log 4R\rceil}e^{(4p^{2}+2p)x}dx \lesssim R^{4p^{2}+2p},\]
where the implicit constants depend on \(p\). Moreover, as in Remark 2.5, let \(\widehat{B}\) be a standard Brownian motion, let \(\tau=\sup\{t\geq 0:\widehat{B}_{t}-t\geq 0\}\) be such that \(X_{t}=\widehat{B}_{\tau+t}-(\tau+t)\), and let \(M=\sup_{t\geq 0}\widehat{B}_{t}-t\). Then, letting \(X_{k}^{*}=\sup_{t\in[k-1,k]}X_{t}\) and \(k^{*}=\arg\max_{t\in[k-1,k]}\widehat{B}_{t}-t\), we have that
\[\mathbb{E}[\nu_{h^{S}}([0,\infty)\times\{0,\pi\})^{p}] \leq\mathbb{E}\!\left[\sum_{k=1}^{\infty}\exp(2pX_{k}^{*})\,\nu_{ h_{\mathrm{L}}}(I_{k})^{p}\right]\lesssim\mathbb{E}\!\left[\sum_{k=1}^{ \infty}\exp(2pX_{k}^{*})\right]\] \[\leq\mathbb{E}\!\left[\sum_{k=1}^{\infty}\exp\!\left(2p(\widehat{ B}_{k^{*}}-k^{*})\right)\right]=\mathbb{E}\!\left[\mathbb{E}\!\left[\sum_{k=1}^{ \infty}\exp\!\left(2p(\widehat{B}_{k^{*}}-k^{*})\right)\,\bigg{|}\,M\right]\right]\] \[\lesssim\mathbb{E}\!\left[\exp(2pM)\right]\]
and consequently, \(\{\mu^{0}(K)=\infty\}\) is contained in the union of \((E_{T}^{*})^{c}\) and a zero probability event. This, however, is a contradiction since \(\mathbb{P}[(E_{T}^{*})^{c}]\leq p_{0}/2\). Hence \(\mu^{0}(K)\) is a.s. finite.
We now consider a general compact \(\widetilde{K}\subset\mathbb{H}\) and let \(R=\inf\{r>0\colon\widetilde{K}\subset B(0,r/2)\}\) and \(\delta=\operatorname{dist}(\widetilde{K},\partial\mathbb{H})>0\). Let \(\varphi_{R}(z)=z/R\) and note that \(K\coloneqq\varphi_{R}(\widetilde{K})\subset\frac{1}{2}\overline{\mathbb{D}} \cap\mathbb{H}\). Moreover, we note that \(\varphi_{R}(\eta^{0})\sim\operatorname{SLE}_{4}\), \(\widetilde{h}^{0}\coloneqq h^{0}\circ\varphi_{R}^{-1}\) is a zero-boundary GFF on \(\mathbb{H}\). Furthermore, we let \(\sigma\) be the last exit time of \(B(0,R)\) for \(\eta^{0}\) and \((\widetilde{f}_{t})\) be the Loewner chain corresponding to the curve \(\widetilde{\eta}^{0}(t)\coloneqq\varphi_{R}(\eta^{0}(t))\), that is, \(\widetilde{f}_{t}(z)=\frac{1}{R}f_{t}(Rz)\), and we let \(\widetilde{\psi}_{t}(z)=\widetilde{f}_{t}^{-1}(z)=\frac{1}{R}\psi_{t}(Rz)\) and \(\widetilde{h}^{t}=\widetilde{h}^{0}\circ\widetilde{\psi}_{t}+2\log|\widetilde {\psi}_{t}^{\prime}|\). Then by the LQG coordinate change and the scale invariance of \(\eta^{0}\) and \(h^{0}\), it follows that a.s.
\[\nu_{h^{s}}(f_{\sigma}(\widetilde{K}))=\nu_{\widetilde{h}^{s}+2\log R}( \widetilde{f}_{\sigma}(K))\stackrel{{ d}}{{=}}\nu_{h^{s}+2\log R}(f_{ \tau}(K))=R^{2}\nu_{h^{s}}(f_{\tau}(K)).\]
Thus, it follows analogously to the above that \(\mathbb{E}[\nu_{h^{T}}(f_{T}(\widetilde{K}))^{p}]\) is almost surely finite for \(T>0\) and hence that \(\mu^{0}(\widetilde{K})\) is as well.
### Conformal covariance
Fix \(s>0\) and let \(\widetilde{h}^{s}\) be a zero-boundary GFF on \(\mathbb{H}\), independent of \(\eta^{0}\). This then gives us a field \(\widetilde{h}^{0}=\widetilde{h}^{s}\circ f_{s}+2\log|f_{s}^{\prime}|\) by zipping up along \(\eta^{0}\), and hence a family of fields \((\widetilde{h}^{t})_{t\geq 0}\) by letting \(\widetilde{h}^{t}=\widetilde{h}^{0}\circ\psi_{t}+2\log|\psi_{t}^{\prime}|\). We recall that \(f_{s,t}=f_{t}\circ\psi_{s}\colon\mathbb{H}\setminus\eta^{s}([0,t-s])\to \mathbb{H}\) and define the measure \(\widetilde{\mu}^{s}\) by
\[\widetilde{\mu}^{s}|_{\eta^{s}([0,t-s])}(dz)=F(z)\mathbb{E}[\nu_{ \widetilde{h}^{t}}\,|\,\eta^{s}]\circ f_{s,t}(dz). \tag{13}\]
Recall from (3) that \(\mu^{t}(dz)=|\psi_{t}^{\prime}(z)|^{-d_{4}}(\mu^{0}\circ\psi_{t})(dz)\), where \(d_{4}=3/2\).
**Lemma 3.6**.: _Almost surely, \(\widetilde{\mu}^{s}=\mu^{s}\), that is,_
\[\mu^{0}|_{\eta^{0}([s,\infty))}\circ\psi_{s}(dz)=|\psi_{s}^{\prime}(z)|^{3/2} \widetilde{\mu}^{s}(dz).\]
Proof.: Since \(\widetilde{h}^{s}\) is a zero-boundary GFF in \(\mathbb{H}\), we have that \(\widetilde{h}^{s}\circ f_{s}\) is a zero-boundary GFF in \(\mathbb{H}\setminus\eta^{0}([0,s])\). Thus we may assume that \(\widetilde{h}^{s}\) and \(h^{0}\) are coupled together in such a way that we can define a Gaussian field \(\mathfrak{H}\), which is conditionally independent of \(\widetilde{h}^{s}\) given \(\eta^{0}\) and such that \(h^{0}=\widetilde{h}^{s}\circ f_{s}+\mathfrak{H}\). Then the covariance kernel of \(\mathfrak{H}\) is given by
\[G_{\mathbb{H}}(z,w)-G_{\mathbb{H}\setminus\eta^{0}([0,s])}(z,w)=G_{\mathbb{H} }(z,w)-G_{\mathbb{H}}(f_{s}(z),f_{s}(w))\]
and hence the variance of \(\mathfrak{H}\) at a point \(z\) is well-defined and equal to
\[\operatorname{Var}\mathfrak{H}(z)=\lim_{\varepsilon\to 0}\operatorname{Var} \mathfrak{H}_{\varepsilon}(z)=\log r_{\mathbb{H}}(z)-\log r_{\mathbb{H}}(f_{s} (z))+\log|f_{s}^{\prime}(z)|\]
(here \(\mathfrak{H}_{\varepsilon}(z)\) denotes the average value of \(\mathfrak{H}\) on the circle \(\partial B(z,\varepsilon)\)). The term \(\log|f_{s}^{\prime}(z)|\) comes from the change of variables dilating the ball of radius \(\varepsilon\) roughly by a factor of \(|f_{s}^{\prime}(z)|\).
We note that \(h^{t}=h^{0}\circ\psi_{t}+2\log|\psi_{t}^{\prime}|=\widetilde{h}^{t}+\mathfrak{H }\circ\psi_{t}+2\log|\psi_{s}^{\prime}|\circ\psi_{s,t}\) and that since \(\mathfrak{H}\) is centred and \(\operatorname{Var}\mathfrak{H}(z)\) is finite, it follows from [15, Lemma 2.1] that
\[\mathbb{E}\biggl{[}\lim_{\varepsilon\to 0}\varepsilon\biggl{(}\frac{(\mathfrak{H}\circ\psi_{t})_{\varepsilon}}{2}+\log|\psi_{s}^{\prime}|\circ\psi_{s,t}\biggr{)}\,e^{\widetilde{h}^{t}_{\varepsilon}+(\mathfrak{H}\circ\psi_{t})_{\varepsilon}+2\log|\psi_{s}^{\prime}|\circ\psi_{s,t}}\,\biggl{|}\,\eta^{0}\biggr{]}\circ f_{t}(dz)\]
is the zero measure. It follows that
\[\mu^{0}|_{\eta^{0}([0,t])}(dz)=F(z)|f_{s}^{\prime}(z)|^{-2}\mathbb{E}[e^{\mathfrak{H}(z)}\,|\,\eta^{0}]\,\mathbb{E}\Bigg{[}\lim_{\varepsilon\to 0}\varepsilon\Bigg{(}\log\left(\frac{1}{\varepsilon}\right)-\frac{\widetilde{h}_{\varepsilon}^{t}}{2}\Bigg{)}\,e^{\widetilde{h}_{\varepsilon}^{t}}\,\Bigg{|}\,\eta^{0}\Bigg{]}\circ f_{t}(dz),\]
where (8) was used in the last equality. Since the conditional law of \(\mathfrak{H}(z)\) given \(\eta^{0}\) is that of a centred Gaussian, we have that
\[\mathbb{E}[e^{\mathfrak{H}(z)}\,|\,\eta^{0}]=r_{\mathbb{H}}(z)^{1/2}r_{ \mathbb{H}}(f_{s}(z))^{-1/2}|f_{s}^{\prime}(z)|^{1/2}=\frac{F(f_{s}(z))}{F(z)} |f_{s}^{\prime}(z)|^{1/2}\]
and hence
\[\mu^{0}|_{\eta^{0}([0,t])}(dz)=F(f_{s}(z))|f_{s}^{\prime}(z)|^{-3/2}\mathbb{E }\Big{[}\lim_{\varepsilon\to 0}\varepsilon(\log(1/\varepsilon)-\widetilde{h}_{ \varepsilon}^{t}/2)e^{\widetilde{h}_{\varepsilon}^{t}}\,\Big{|}\,\eta^{0} \Big{]}\circ f_{t}(dz).\]
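(In the identity for \(\mathbb{E}[e^{\mathfrak{H}(z)}\,|\,\eta^{0}]\) above we only used the elementary fact that a centred Gaussian \(Z\) satisfies \(\mathbb{E}[e^{Z}]=e^{\operatorname{Var}(Z)/2}\), applied conditionally on \(\eta^{0}\) together with the variance computed earlier; indeed,

\[\mathbb{E}[e^{\mathfrak{H}(z)}\,|\,\eta^{0}]=\exp\!\Big(\tfrac{1}{2}\operatorname{Var}\mathfrak{H}(z)\Big)=\exp\!\Big(\tfrac{1}{2}\big(\log r_{\mathbb{H}}(z)-\log r_{\mathbb{H}}(f_{s}(z))+\log|f_{s}^{\prime}(z)|\big)\Big)=r_{\mathbb{H}}(z)^{1/2}r_{\mathbb{H}}(f_{s}(z))^{-1/2}|f_{s}^{\prime}(z)|^{1/2},\]

which is the prefactor used in the last two displays.)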
Consequently,
\[\mu^{0}|_{\eta^{0}([s,t])}\circ\psi_{s}(dz) =F(z)|f_{s}^{\prime}(\psi_{s}(z))|^{-3/2}\mathbb{E}\Big{[}\lim_ {\varepsilon\to 0}\varepsilon(\log(1/\varepsilon)-\widetilde{h}_{ \varepsilon}^{t}/2)e^{\widetilde{h}_{\varepsilon}^{t}}\,\Big{|}\,\eta^{0} \Big{]}\circ f_{s,t}(dz)\] \[=|\psi_{s}^{\prime}(z)|^{3/2}F(z)\mathbb{E}[\nu_{\widetilde{h}^{ t}}\,|\,\eta^{s}]\circ f_{s,t}(dz)\] \[=|\psi_{s}^{\prime}(z)|^{3/2}\widetilde{\mu}^{s}|_{\eta^{s}([0,t -s])}(dz),\]
where (8) was used in the second equality. Thus, the proof is complete.
The following lemma is proved in the same way as Lemma 3.6, and hence the proof is omitted.
**Lemma 3.7**.: _Let \(\phi_{a}(z)=az\) for \(a>0\). Then,_
\[\mathbb{E}[\mu^{0}]\circ\phi_{a}(dz)=a^{3/2}\mathbb{E}[\mu^{0}](dz).\]
We need that the intensity is absolutely continuous with respect to two-dimensional Lebesgue measure. In essence, the randomness of the curve causes an averaging of the measure \(\mathbb{E}[\mu^{0}]\), so that its mass is spread out over \(\mathbb{H}\), rather than concentrated on a subset of zero Lebesgue measure. The following was proved for the measures on \(\mathrm{SLE}_{\kappa}\) for \(\kappa\in(4,8)\), see [14, Lemma 3.6], but the exact same proof works for the measure \(\mu^{0}\) on \(\mathrm{SLE}_{4}\).
**Lemma 3.8**.: _The measure \(\mathbb{E}[\mu^{0}]\) is absolutely continuous with respect to Lebesgue measure._
Next, we recall the following lemma of [14].
**Lemma 3.9** (Lemma 3.7 of [14]).: _For each \(a>0\), let \(\phi_{a}(z)=az\). Let \(m\) be a measure on \(\mathbb{H}\) which is absolutely continuous with respect to Lebesgue measure and satisfies_
\[m\circ\phi_{a}(dz)=a^{d}m(dz)\]
_for some \(d>1\) and all \(a>0\). Then there exists some function \(H(z)=H(\arg z)\) such that_
\[m(dz)=H(\arg z)\mathrm{Im}(z)^{d-2}dz.\]
By Lemmas 3.7 and 3.8 the conditions of Lemma 3.9 are satisfied for \(\mathbb{E}[\mu^{0}]\). Next, we note the form of \(H\) in the case of \(m=\mathbb{E}[\mu^{0}]\).
**Lemma 3.10**.: _There is some constant \(c>0\) such that_
\[\mathbb{E}[\mu^{0}](dz)=c\sin(\arg z)\mathrm{Im}(z)^{-1/2}dz.\]
Proof.: This is proved in the exact same way as [14, Lemma 3.8].
We wrap up this section by deducing the following. The proof is the same as that of [14, Lemma 3.9], but it is very short so we repeat it here.
**Lemma 3.11**.: _Almost surely, \(\mu^{0}\) is locally finite. That is, almost surely,_
\[\mu^{0}(\eta([s,t]))<\infty\quad\text{for all}\quad 0<s<t.\]
Proof.: Note that by Lemma 3.10, \(\mathbb{E}[\mu^{0}(B(0,R))]<\infty\) for each \(R>0\), so that a.s., \(\mu^{0}(B(0,R))\) is finite. Moreover, for each \(0\leq s\leq t\), we have that \(\mathbb{P}(\eta([s,t])\subset B(0,R))\to 1\) as \(R\to\infty\). Finally, since \(\mu^{0}(\eta([s,t]))\leq\mu^{0}(B(0,R))\) on the event \(\{\eta([s,t])\subset B(0,R)\}\), it follows that for any \(0\leq s\leq t\), \(\mu^{0}(\eta([s,t]))\) is a.s. finite.
Proof of Theorem 1.1.: By Lemmas 3.6 and 3.11 and Theorem 2.1 the conclusion of the theorem holds.
|
2304.07718 | Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value | Data valuation is a powerful framework for providing statistical insights
into which data are beneficial or detrimental to model training. Many
Shapley-based data valuation methods have shown promising results in various
downstream tasks, however, they are well known to be computationally
challenging as it requires training a large number of models. As a result, it
has been recognized as infeasible to apply to large datasets. To address this
issue, we propose Data-OOB, a new data valuation method for a bagging model
that utilizes the out-of-bag estimate. The proposed method is computationally
efficient and can scale to millions of data by reusing trained weak learners.
Specifically, Data-OOB takes less than 2.25 hours on a single CPU processor
when there are $10^6$ samples to evaluate and the input dimension is 100.
Furthermore, Data-OOB has solid theoretical interpretations in that it
identifies the same important data point as the infinitesimal jackknife
influence function when two different points are compared. We conduct
comprehensive experiments using 12 classification datasets, each with thousands
of sample sizes. We demonstrate that the proposed method significantly
outperforms existing state-of-the-art data valuation methods in identifying
mislabeled data and finding a set of helpful (or harmful) data points,
highlighting the potential for applying data values in real-world applications. | Yongchan Kwon, James Zou | 2023-04-16T08:03:58Z | http://arxiv.org/abs/2304.07718v3 | # Data-OOB: Out-of-bag Estimate as a Simple and Efficient Data Value
###### Abstract
Data valuation is a powerful framework for providing statistical insights into which data are beneficial or detrimental to model training. Many Shapley-based data valuation methods have shown promising results in various downstream tasks, however, they are well known to be computationally challenging as it requires training a large number of models. As a result, it has been recognized as infeasible to apply to large datasets. To address this issue, we propose Data-OOB, a new data valuation method for a bagging model that utilizes the out-of-bag estimate. The proposed method is computationally efficient and can scale to millions of data by reusing trained weak learners. Specifically, Data-OOB takes less than \(2.25\) hours on a single CPU processor when there are \(10^{6}\) samples to evaluate and the input dimension is \(100\). Furthermore, Data-OOB has solid theoretical interpretations in that it identifies the same important data point as the infinitesimal jackknife influence function when two different points are compared. We conduct comprehensive experiments using 12 classification datasets, each with thousands of sample sizes. We demonstrate that the proposed method significantly outperforms existing state-of-the-art data valuation methods in identifying mislabeled data and finding a set of helpful (or harmful) data points, highlighting the potential for applying data values in real-world applications.
## 1 Introduction

The Shapley value and its variants have critical limitations. They are based on the fair division axioms in cooperative game theory, but the axioms' relevance to machine learning applications is unclear (Sim et al., 2022; Rozemberczki et al., 2022).
**Our contributions.** In this paper, we propose Data-OOB, a new data valuation framework for a bagging model that uses the out-of-bag (OOB) estimate as illustrated in Figure 1. Our framework is computationally efficient by leveraging trained weak learners and is even faster than KNN-Shapley which has a closed-form expression. Furthermore, Data-OOB is statistically interpretable in that under mild assumptions it identifies the same important data point as the infinitesimal jackknife influence function when two different points are compared. Our comprehensive experiments demonstrate that the proposed method significantly better identifies mislabeled data and determines which data points are beneficial or detrimental for a model's performance than existing state-of-the-art data valuation methods.
## 2 Preliminaries
For \(d\in\mathbb{N}\), we denote an input space and an output space by \(\mathcal{X}\subseteq\mathbb{R}^{d}\) and \(\mathcal{Y}\subseteq\mathbb{R}\), respectively. We denote a training dataset by \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{n}\) where \(x_{i}\in\mathcal{X}\) and \(y_{i}\in\mathcal{Y}\) are an input and its label for the \(i\)-th datum. We denote a utility function by \(U\), which takes as input a subset of the training dataset \(\mathcal{D}\) and outputs a model's performance that is trained on that subset. In classification, for instance, a common choice for \(U\) is the test classification accuracy of an empirical risk minimizer trained on a subset of \(\mathcal{D}\), _i.e._, \(U(S)=\mathbb{E}[\mathds{1}(Y=\hat{f}_{S}(X))]\) where \(\mathds{1}(A)\) is an indicator whose value is one if a statement \(A\) is true and zero otherwise, \(\hat{f}_{S}:=\operatorname*{argmin}_{f\in\mathcal{F}}\sum_{j\in S}\mathds{1 }(y_{j}\neq f(x_{j}))\) for some class of models \(\mathcal{F}=\{f|f:\mathcal{X}\rightarrow\mathcal{Y}\}\), and the expectation \(\mathbb{E}\) is taken with respect to a data distribution or is often approximated by a finite holdout validation dataset. When \(S=\{\}\) is the empty set, \(U(S)\) is set to be the performance of the best constant model by convention. A utility function depends on the choice of learning algorithms and a class \(\mathcal{F}\), but we suppress its dependency as our main focus is on comparing the functional form of data values. For a set \(S\), we denote its power set by \(2^{S}\) and its cardinality by \(|S|\). We set \([j]:=\{1,\ldots,j\}\) for \(j\in\mathbb{N}\).
A standard approach for quantifying data values is to use the marginal contribution, which measures the average change in a utility function when a particular datum is removed from a subset of the entire training dataset \(\mathcal{D}\).
**Definition 2.1** (Marginal contribution).: For a utility function \(U:2^{\mathcal{D}}\rightarrow\mathbb{R}\) and \(j\in[n]\), the marginal contribution of \(z\in\mathcal{D}\) with respect to \(j\) samples is defined as follows.
\[\Delta_{j}(z,U):=\frac{1}{\binom{n-1}{j-1}}\sum_{S\in\mathcal{D}_{j}^{\setminus z}}\left(U(S\cup\{z\})-U(S)\right),\]
where \(\mathcal{D}_{j}^{\setminus z}:=\{S\subseteq\mathcal{D}\setminus\{z\}:|S|=j-1\}\).
Many data valuation methods can be expressed as a function of the marginal contribution. The LOO method is \(\Delta_{n}(z,U)\), measuring the change when one particular datum \(z\) is removed from the entire dataset \(\mathcal{D}\). LOO includes Cook's distance and the approximate empirical influence function (Cook and Weisberg, 1980; Koh and Liang, 2017). Another example is Data Shapley (Ghorbani and Zou, 2019), which is expressed as a simple average of marginal contributions, \(\psi_{\mathrm{Shap}}(z,U):=n^{-1}\sum_{j=1}^{n}\Delta_{j}(z,U)\). As an extension, Beta Shapley, proposed by Kwon and Zou (2022), is expressed as a weighted mean of marginal contributions.
\[\psi_{\mathrm{Beta}}(z,U,\beta):=\sum_{j=1}^{n}\beta_{j}\Delta_{j} (z,U), \tag{1}\]
where \(\beta=(\beta_{1},\ldots,\beta_{n})\) is a predefined weight vector such that \(\sum_{j=1}^{n}\beta_{j}=1\) and \(\beta_{j}\geq 0\) for all \(j\in[n]\). A functional form of Equation (1) is also known as semivalues in cooperative game theory.
The LOO method is known to be computationally feasible, but it often assigns erroneous values that are close to zero (Basu et al., 2020). Data Shapley and Beta Shapley are empirically shown to be more effective than LOO in many downstream tasks such as mislabeled data detection in classification settings (Ghorbani and Zou, 2019; Kwon and Zou, 2022). However, their computational complexity is well known to be expensive, making it infeasible to apply to large datasets (Bachrach et al., 2010; Jia et al., 2019; Wang and Jia, 2022). As a result, most existing work has focused on small datasets, _e.g._, \(n\leq 1000\).
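To make this cost concrete, the following is a minimal sketch (not code from any of the cited papers) of the standard permutation-sampling estimator of \(\psi_{\mathrm{Shap}}\); `data_shapley_mc` and `utility` are hypothetical names, and `utility` is assumed to be a user-supplied function that retrains a model on a list of training indices and returns validation performance, so each permutation costs up to \(n\) model trainings.

```python
import numpy as np

def data_shapley_mc(utility, n, n_perm=500, seed=0):
    """Permutation-sampling estimate of psi_Shap (a sketch)."""
    rng = np.random.default_rng(seed)
    values = np.zeros(n)
    for _ in range(n_perm):
        perm = rng.permutation(n)
        prev = utility([])                          # U({}): best constant model
        for k, i in enumerate(perm):
            cur = utility(perm[: k + 1].tolist())   # retrain on the first k+1 points
            values[i] += cur - prev                 # marginal contribution of point i
            prev = cur
    return values / n_perm
```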
Figure 1: Illustration of the proposed data valuation method. The OOB stands for the out-of-bag set. For each bootstrap sampling procedure, we evaluate an estimate \(T_{b}(\star)\) if the datum \(\star\) is in the OOB set. Here, \(T_{b}(\star)\) is a score of the model trained with the \(b\)-th bootstrap dataset evaluated at \(\star\). The proposed data value summarizes scores \(T_{b}(\star)\) from the \(B\) bootstrap datasets. Details are provided in Section 3.
Many methods have been proposed to reduce computational costs. Wu et al. (2022) proposed a stratified sampling to optimize the number of utility evaluations, and Jia et al. (2019) and Kwon et al. (2021) derived a closed-form expression of the Shapley value. However, these methods still have difficulties in scaling to large datasets or require unusual assumptions on the utility function. For instance, Jia et al. (2019) used a utility function that does not take into account the majority voting in classification settings: for \(S\subseteq\mathcal{D}\), \(U(S)=k^{-1}\sum_{(x_{i},y_{i})\in\mathcal{N}(S)}\mathds{1}(\tilde{y}=y_{i})\) where \(\mathcal{N}(S)\) is a set of \(\min(k,|S|)\) nearest neighbors of \(\tilde{x}\), and \((\tilde{x},\tilde{y})\in\mathcal{X}\times\mathcal{Y}\) is a test datum. Kwon et al. (2021) considered a commonly used utility function (_e.g._, the negative Euclidean distance), but it is limited to linear regression models, which may not be the most favorable in real-world data analysis.
Recently, Lin et al. (2022) proposed an efficient algorithm to estimate a class of data values called the average marginal effect (AME) given as follows.
\[\psi_{\mathrm{AME}}(z,U):=\mathbb{E}_{S}[U(S\cup\{z\})-U(S)],\]
where the expectation \(\mathbb{E}_{S}\) is taken over a random set \(S\) with a user-defined distribution defined on the discrete space \(\cup_{j=1}^{n}\mathcal{D}_{j}^{\setminus z}\). They showed that AME can include the Shapley value and semivalues as a special case, and it can be approximated by the linear coefficient of a LASSO model. Specifically, AME is estimated by a minimizer of the following objective function.
\[\mathrm{argmin}_{\gamma\in\mathbb{R}^{n}}\frac{1}{|\mathcal{S}|}\sum_{S\in \mathcal{S}}\left(U(S)-g(\mathds{1}_{S})^{T}\gamma\right)^{2}+\lambda\sum_{i= 1}^{n}|\gamma_{i}|,\]
where \(\lambda>0\) is a regularization parameter, \(g:\{0,1\}^{n}\rightarrow\mathbb{R}^{n}\) is a predefined transformation function, \(\mathcal{S}=\{S:S\subseteq\mathcal{D}\}\) is a set of subsets of \(\mathcal{D}\) and \(\mathds{1}_{S}\in\{0,1\}^{n}\) is \(n\)-dimensional vector whose element is one if its index is an element of \(S\), zero otherwise. Their algorithm is shown to have better computational efficiency in the semivalue estimation than existing methods. However, it requires some sparsity assumption that is difficult to verify, and also, it needs training a LASSO model.
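A rough sketch of this estimation procedure is given below. The Bernoulli subset sampling, the centring transform used for \(g\), and the fixed regularization strength are simplifying assumptions for illustration rather than the exact choices of Lin et al. (2022), and `utility` is again a hypothetical user-supplied function.

```python
import numpy as np
from sklearn.linear_model import Lasso

def ame_values(utility, n, n_subsets=2000, p=0.5, lam=0.01, seed=0):
    """Sketch of AME estimation via LASSO regression on subset indicators."""
    rng = np.random.default_rng(seed)
    Z = rng.random((n_subsets, n)) < p                         # membership indicators 1_S
    U = np.array([utility(np.flatnonzero(z).tolist()) for z in Z])
    G = Z.astype(float) - p                                    # a simple centred transform g(1_S)
    return Lasso(alpha=lam).fit(G, U).coef_                    # coefficients as AME estimates
```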
Ilyas et al. (2022) proposed a similar idea called datamodels that use a LASSO model to predict a test data point's prediction, _i.e._, the utility \(U(S)\) is evaluated at a particular test data point. Due to its dependency on a particular datum, it is not suitable for capturing the influence on the model performance. Moreover, it needs computational costs for training a LASSO model similar to Lin et al. (2022).
Besides the computational issue, marginal contribution-based methods have another critical issue with theoretical interpretation. Motivated by cooperative game theory, they provide mathematical justifications that are seemingly solid. However, the fair division axioms used in the Shapley value have not been statistically examined, and it raises a fundamental question about the appropriateness of these axioms in machine learning problems (Kumar et al., 2020; Sim et al., 2022; Rozemberczki et al., 2022).
In the following section, we propose a novel data valuation framework for a bagging model that can address the aforementioned issues. Our method is computationally efficient by recycling trained weak learners and does not rely on the fair division axioms, which can be less relevant to machine learning applications.
## 3 Data-OOB: Out-Of-Bag Estimate as Data Value
Suppose we have a trained bagging model that consists of \(B\) weak learner models. For \(b\in[B]\), we denote the \(b\)-th weak learner by \(\hat{f}_{b}:\mathcal{X}\rightarrow\mathcal{Y}\), which is trained on the \(b\)-th bootstrap dataset. It can be expressed as a minimizer of a weighted risk as follows.
\[\hat{f}_{b}:=\mathrm{argmin}_{f\in\mathcal{F}}\frac{1}{n}\sum_{j=1}^{n}w_{bj} \ell(y_{j},f(x_{j})),\]
where \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\) is a loss function and \(w_{bj}\in\mathbb{Z}\) is the number of times the \(j\)-th datum \((x_{j},y_{j})\) is selected in the \(b\)-th bootstrap dataset. We set \(w_{b}:=(w_{b1},\ldots,w_{bn})\) for all \(b\in[B]\). For \(i\in[n]\) and \(\Theta_{B}:=\{(w_{b},\hat{f}_{b})\}_{b=1}^{B}\), we propose to use the following quantity as data values.
\[\psi((x_{i},y_{i}),\Theta_{B}):=\frac{\sum_{b=1}^{B}\mathds{1}(w_{bi}=0)T(y_{i },\hat{f}_{b}(x_{i}))}{\sum_{b=1}^{B}\mathds{1}(w_{bi}=0)}. \tag{2}\]
where \(T:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\) is a score function that represents the goodness of a weak learner \(\hat{f}_{b}\) at the \(i\)-th datum \((x_{i},y_{i})\). For instance, we can use the correctness function \(T(y_{i},\hat{f}_{b}(x_{i}))=\mathds{1}(y_{i}=\hat{f}_{b}(x_{i}))\) in classification settings and the negative Euclidean distance \(T(y_{i},\hat{f}_{b}(x_{i}))=-(y_{i}-\hat{f}_{b}(x_{i}))^{2}\) in regression settings.
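A minimal sketch of Equation (2) for classification is given below; it is not the authors' released implementation. The bagging model is built by hand so that the bootstrap multiplicities \(w_{bj}\) and the indicator \(\mathds{1}(w_{bi}=0)\) are explicit, and the use of decision trees with `max_features="sqrt"` (mimicking a random forest) is an illustrative assumption.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def data_oob_values(X, y, B=800, seed=0):
    """Data-OOB values of Eq. (2) with the correctness score T (a sketch)."""
    X, y = np.asarray(X), np.asarray(y)
    n = len(y)
    rng = np.random.default_rng(seed)
    score_sum = np.zeros(n)                       # numerator of Eq. (2)
    oob_count = np.zeros(n)                       # denominator of Eq. (2)
    for b in range(B):
        idx = rng.integers(0, n, size=n)          # bootstrap sample with replacement
        w_b = np.bincount(idx, minlength=n)       # multiplicities w_{bj}
        oob = w_b == 0                            # indicator 1(w_{bi} = 0)
        tree = DecisionTreeClassifier(max_features="sqrt", random_state=b)
        tree.fit(X[idx], y[idx])                  # the b-th weak learner f_b
        score_sum[oob] += tree.predict(X[oob]) == y[oob]
        oob_count[oob] += 1
    # A point that never falls out of bag (very unlikely for large B) gets value 0 here.
    return np.divide(score_sum, oob_count, out=np.zeros(n), where=oob_count > 0)
```

Since the weak learners are exactly the ones fitted for the bagging model, the only cost beyond training is a single OOB prediction pass per tree.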
Our proposed data value in Equation (2) measures the average score when the datum \((x_{i},y_{i})\) is not selected in the bootstrap dataset. Accordingly, it can be interpreted as a partition of the OOB estimate, which is originally introduced to estimate the prediction error (Efron, 1992; Efron and Tibshirani, 1997). Specifically, the OOB estimate is given as
\[\frac{1}{n}\sum_{i=1}^{n}\frac{\sum_{b=1}^{B}\mathds{1}(w_{bi}=0)T(y_{i},\hat{ f}_{b}(x_{i}))}{\sum_{b=1}^{B}\mathds{1}(w_{bi}=0)},\]
and it is equal to the simple average of the proposed data values \(\frac{1}{n}\sum_{i=1}^{n}\psi((x_{i},y_{i}),\Theta_{B})\). Motivated by this relationship, we call our data valuation method Data-OOB.
Data-OOB has several advantages in computational efficiency. In contrast to existing marginal contribution-based data valuation methods, Data-OOB can leverage trained weak learners \(\hat{f}_{b}\). In other words, Data-OOB does not require training multiple models for the utility evaluation and is readily obtained when there is a trained bagging model. Moreover, it has sample efficiency because it does not use additional validation data points for the utility evaluation that can greatly affect the quality of data values.
**Theoretical interpretation.** We rigorously examine the statistical implications of our proposed method. We show that Data-OOB identifies the same set of important data points as Jaeckel's infinitesimal jackknife influence function (Jaeckel, 1972; Mallows, 1975). We denote the empirical distribution of \(\mathcal{D}\) by \(\hat{\mathbb{P}}=\frac{1}{n}\sum_{j=1}^{n}\delta_{(x_{j},y_{j})}\) where \(\delta_{(x,y)}\) is the Dirac delta measure on \((x,y)\in\mathcal{X}\times\mathcal{Y}\). We reformulate the OOB estimate as a functional defined on a probability measure space: for a probability measure \(\mathbb{Q}\) defined on \(\mathcal{D}\),
\[h(\mathbb{Q}):=\int\psi((x,y),\Theta_{B})d\mathbb{Q}(x,y).\]
Then, the infinitesimal jackknife influence function of \(h\) is defined as its derivative. For \(-1/(n-1)<\varepsilon<1\),
\[\psi_{\text{IJ}}(x_{i},y_{i}):=\left.\frac{\partial h(\hat{\mathbb{P}}_{ \varepsilon,i})}{\partial\varepsilon}\right|_{\varepsilon=0}=\lim_{\varepsilon \to 0}\frac{h(\hat{\mathbb{P}}_{\varepsilon,i})-h(\hat{\mathbb{P}})}{ \varepsilon},\]
where \(\hat{\mathbb{P}}_{\varepsilon,i}=(1-\varepsilon)\hat{\mathbb{P}}+\varepsilon \delta_{(x_{i},y_{i})}\). The infinitesimal jackknife influence function \(\psi_{\text{IJ}}(x_{i},y_{i})\) quantifies how fast the OOB estimate \(h\) changes when the weight on the \(i\)-th datum \((x_{i},y_{i})\) is changed. Given that the OOB estimate \(h\) approximates the test performance, \(\psi_{\text{IJ}}(x_{i},y_{i})\) is expected to capture the influence of individual data points on the test performance.
Although the name suggests similarity, it is important to note that the \(\psi_{\text{IJ}}\) is distinct from the empirical influence function widely studied in the machine learning literature (Koh and Liang, 2017; Basu et al., 2020; Feldman and Zhang, 2020). Specifically, \(\psi_{\text{IJ}}\) measures how fast the OOB estimate changes, but the ordinary influence function measures how fast the test accuracy evaluated on several test data points changes. Moreover, when it comes to the functional form, \(\psi_{\text{IJ}}\) is defined without dependency on test data points, but the ordinary influence function requires them. We also highlight that the OOB estimate has not been studied much in the field of data valuation, even though its derivative \(\psi_{\text{IJ}}(x_{i},y_{i})\) intuitively describes the influence of individual data points on the model performance.
The following proposition shows that the influence function and the proposed method identify the same important data point among two different data points under a mild condition. To begin with, we set \(V_{B}:=B^{-1}\sum_{b=1}^{B}(q_{b}-\bar{q})^{2}\) where \(q_{b}=\frac{1}{n}\sum_{j=1}^{n}\mathds{1}(w_{bj}=0)T(y_{j},\hat{f}_{b}(x_{j}))\) is the normalized OOB score for the \(b\)-th bootstrap dataset and \(\bar{q}:=B^{-1}\sum_{b=1}^{B}q_{b}\).
**Proposition 3.1** (Order consistency between Data-OOB and the infinitesimal influence function).: _For \(i\neq j\in[n]\), if \(\psi_{\text{IJ}}(x_{i},y_{i})>\psi_{\text{IJ}}(x_{j},y_{j})+4\sqrt{2}V_{B}^{1/2}\), then \(\psi((x_{i},y_{i}),\Theta_{B})>\psi((x_{j},y_{j}),\Theta_{B})\)._
A proof is given in Appendix C. Proposition 3.1 provides new statistical insights when two data points are compared: the proposed method and the infinitesimal jackknife influence function induce the same ordering whenever one data point has a sufficiently larger influence function value than the other. Here, \(V_{B}\) is the variance of the OOB scores \(q_{b}\) across different bootstrap datasets, and it is expected to be very small (_e.g._, \(O(n^{-1})\)) when \(n\) and \(B\) are large enough (Efron, 1979). In short, when there is a large enough gap between two influence function values, the proposed method will have the same ordering. Given that many applications of data valuation mainly focus on the order of data points, this theoretical result highlights the potential efficacy of the method in downstream tasks.
## 4 Experiments
In this section, we systematically investigate the practical effectiveness of the proposed data valuation method Data-OOB through three sets of experiments, which are frequently used in previous studies: time comparison, mislabeled data detection, and point removal experiment. We demonstrate that our method is computationally efficient and highly effective in identifying mislabeled data. Furthermore, compared to state-of-the-art data valuation methods, Data-OOB better determines which data points are beneficial or detrimental for model training.
**Experimental settings.** We use 12 classification datasets that are publicly available in OpenML (Feurer et al., 2021) or the Python package 'scikit-learn' (Pedregosa et al., 2011), and have at least \(15000\) samples. Also, we note that many of these datasets were used in previous data valuation papers (Ghorbani and Zou, 2019; Kwon and Zou, 2022). We compare Data-OOB with the following four data valuation methods: KNN Shapley (Jia et al., 2019), Data Shapley (Ghorbani and Zou, 2019), Beta Shapley (Kwon and Zou, 2022), and AME (Lin et al., 2022). We set the training sample size to \(n\in\{1000,10000\}\), but Data Shapley and Beta Shapley are computed only when \(n=1000\) due to their low computational efficiency. All methods except for Data-OOB require additional validation data to evaluate the utility function \(U\). We set the validation sample size to 10% of the training sample size \(n\). As for Data-OOB, we use a random forest model with \(B=800\) decision trees. To make our comparison fair, we use the same number or a
greater number of utility evaluations for Data Shapley, Beta Shapley, and AME compared to Data-OOB. Implementation details are provided in Appendix A.
### Elapsed Time Comparison
We first assess the computational efficiency of Data-OOB using a synthetic binary classification dataset. For \(d\in\{10,100\}\), an input \(X\in\mathbb{R}^{d}\) is randomly generated from a multivariate Gaussian distribution with zero mean and an identity covariance matrix, and an output \(Y\in\{0,1\}\) is generated from a Bernoulli distribution with a success probability \(p(X)\). Here, \(p(X):=1/(1+\exp(-X^{T}\eta))\) and each element of \(\eta\in\mathbb{R}^{d}\) is generated from a standard Gaussian distribution. We only generate \(\eta\) once, and the same \(\eta\) is used to generate different data points. A set of sample sizes \(n\) is \(\{10^{4},2.5\times 10^{4},5\times 10^{4},10^{5},2.5\times 10^{5},5\times 10^{5}\}\). We measure the elapsed time with a single Intel Xeon E5-2640v4 CPU processor. For a fair comparison, the elapsed time for Data-OOB includes the training time for the random forest.
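For concreteness, the synthetic data described above can be generated as in the following sketch; the function name and the seeding scheme are illustrative assumptions.

```python
import numpy as np

def make_synthetic(n, d, seed=0):
    """Synthetic binary classification data used in the timing study (a sketch)."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(d)          # drawn once; a fixed seed keeps it identical across n
    X = rng.standard_normal((n, d))       # X ~ N(0, I_d)
    p = 1.0 / (1.0 + np.exp(-X @ eta))    # success probability p(X)
    y = rng.binomial(1, p)                # Y ~ Bernoulli(p(X))
    return X, y
```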
As Figure 2 shows, Data-OOB achieves better computational efficiency than the existing methods KNN Shapley and AME across various \(n\) and \(d\). Specifically, Data-OOB is \(54\) times faster than KNN Shapley when \((n,d)=(10^{5},10)\). Interestingly, we find KNN Shapley is slow despite having a closed-form expression because it needs to sort \(n\) data points for each validation data point. When \((n,d)=(5\times 10^{5},100)\) and the validation sample size is \(10^{4}\), KNN Shapley exceeds 24 hours. For this reason, we exclude this setting from Figure 2. KNN Shapley can be made more efficient if the validation size is smaller, but this would come at the cost of the quality of data values. In comparison with AME, Data-OOB does not require training LASSO models, achieving better computational efficiency.
As for the algorithmic complexity, when a random forest is used, the computational complexity of Data-OOB will be \(O(Bdn\log(n))\) where \(B\) is the number of trees, \(d\) is the number of features and \(n\) is the number of data points in the training dataset. This is because the computational cost of Data-OOB is mainly from training a random forest model, and its computational complexity is \(O(Bdn\log(n))\)(Hassine et al., 2019). Meanwhile, the computational complexity of KNN Shapley will be \(O(n^{2}\log(n))\) when the number of data points in the validation dataset is \(O(n)\) (e.g. 10% of \(n\)). These results support why the elapsed time for Data-OOB increases linearly and that of the KNN-Shapley increases polynomially in Figure 2. In addition, it shows that ours can be beneficial when \(n\) is increasing but \(B\) and \(d\) are fixed.
Our method is highly efficient and it takes less than 2.25 hours when \((n,d)=(10^{6},100)\) on a single CPU processor. The proposed method can be more efficient with the use of trained multiple weak learners. For instance, when \((n,d)=(10^{5},10)\), the computation of Data-OOB takes only 13% of the entire training time for a random forest.
### Mislabeled Data Detection
Since mislabeled data often negatively affect the model performance, it is desirable to assign low values to these data points. To assess the detection ability of Data-OOB, we conduct a mislabeled data detection experiment. We randomly choose 10% of the data points and change each of their labels to one of the other labels. We first compute data values as if the contaminated dataset were the original dataset, and then we evaluate the precision and the recall of data valuation methods. Note that no method is provided with an annotation of which data points are mislabeled.
Figure 3 compares the precision-recall curves of different data valuation methods. AME is not displayed because it assigns exactly zero values to most data points, resulting in meaningless precision and recall values. Data-OOB shows performance better than or comparable to existing marginal contribution-based methods in various settings. Additional results using different datasets are provided in Appendix B.1, where Data-OOB consistently shows competitive performance relative to Data Shapley, Beta Shapley, and KNN Shapley.
We further assess the detection ability of different data valuation methods. Following the mislabeled data detection task in Kwon and Zou (2022), we apply the K-means algorithm (Arthur and Vassilvitskii, 2007) to the data values and divide the data points into two clusters. We regard data points in the cluster with the lower mean as the predicted mislabeled data points. Then, the F1-score is evaluated by comparing this prediction with the actual annotations. Table 1 shows the F1-score of different data valuation methods for the twelve
Figure 2: Elapsed time comparison between KNN Shapley, AME, and Data-OOB. We use a synthetic binary classification dataset with (left) \(d=10\) and (right) \(d=100\). We exclude the setting \((n,d)=(5\times 10^{5},100)\) as KNN Shapley exceeds 24 hours. The error bar indicates a 95% confidence interval based on 5 independent experiments. Data-OOB is significantly faster than KNN Shapley and AME. The time for training the random forest is included in the time for Data-OOB.
classification datasets. Overall, Data-OOB significantly outperforms other state-of-the-art methods. In particular, when the dataset is 'pol' and \(n=10000\), Data-OOB achieves a \(3.1\) and \(8.7\) times greater F1-score than KNN Shapley and AME, respectively. As noted by Lin et al. (2022), the F1-score for AME can be improved if the Model-X Knock-off procedure is incorporated (Candes et al., 2018). However, this requires training additional LASSO models with dummy variables, resulting in extra computational costs. We demonstrate that Data-OOB shows strong performance in detecting mislabeled data points without such procedures.
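The K-means thresholding described above can be sketched as follows; the initialization settings are illustrative, and `values` / `is_flipped` are hypothetical arrays holding the computed data values and the hidden mislabeling annotations.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import f1_score

def mislabel_f1(values, is_flipped):
    """F1-score of the clustering-based detector (a sketch)."""
    v = np.asarray(values, dtype=float).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(v)
    low_cluster = int(np.argmin([v[labels == k].mean() for k in (0, 1)]))
    predicted = labels == low_cluster          # lower-mean cluster -> flagged as mislabeled
    return f1_score(is_flipped, predicted)
```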
### Point removal experiment
Data valuation methods can be useful in identifying a small subset of the training dataset that is helpful or harmful to a model's performance. To see this, we conduct the point removal experiment. Following Ghorbani and Zou (2019), we remove data points from the entire dataset one by one, starting from the lowest value to the largest. Each time a datum is removed, we fit a logistic regression model with the remaining data and evaluate its test accuracy. We include random removal as one of the baseline methods, and use the same contaminated datasets as in Section 4.2. All test accuracy results are evaluated on a fixed holdout dataset with \(3000\) data points.
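A sketch of this removal-and-retraining loop (assuming numpy arrays and that both classes survive in every retained subset) is:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def removal_curve(values, X, y, X_test, y_test, n_grid=20):
    """Test accuracy as the lowest-valued points are removed first (a sketch)."""
    order = np.argsort(values)                    # ascending data values
    n = len(y)
    accs = []
    for frac in np.linspace(0.0, 0.9, n_grid):
        keep = order[int(frac * n):]              # drop the lowest-valued fraction
        clf = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
        accs.append(clf.score(X_test, y_test))
    return accs
```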
Figure 4 shows the test accuracy curves on the four datasets when \(n\in\{1000,10000\}\). Overall, Data-OOB achieves significantly better test accuracy than other baseline methods in various settings, showing a steeper increase in the first 20% of data removal. We suggest that this increase in performance is due to the mislabeled data detection performance as mislabeled data points are likely to be removed
\begin{table}
\begin{tabular}{l|c c c c c|c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{4}{c|}{\(n=1000\)} & \multicolumn{2}{c}{\(n=1000\)} & \multicolumn{2}{c}{\(n=10000\)} \\ & KNN Shapley & Data Shapley & Beta Shapley & AME & Data-OOB & KNN Shapley & AME & Data-OOB \\ \hline pol & \(0.28\pm 0.003\) & \(0.50\pm 0.011\) & \(0.46\pm 0.010\) & \(0.09\pm 0.009\) & \(\mathbf{0.73\pm 0.004}\) & \(0.28\pm 0.000\) & \(0.10\pm 0.012\) & \(\mathbf{0.88\pm 0.000}\) \\ jamis & \(0.25\pm 0.004\) & \(0.23\pm 0.003\) & \(0.24\pm 0.003\) & \(0.09\pm 0.012\) & \(\mathbf{0.30\pm 0.001}\) & \(0.28\pm 0.001\) & \(0.06\pm 0.012\) & \(\mathbf{0.33\pm 0.000}\) \\ lawschool & \(0.45\pm 0.014\) & \(0.94\pm 0.003\) & \(0.94\pm 0.003\) & \(0.10\pm 0.009\) & \(\mathbf{0.96\pm 0.002}\) & \(0.39\pm 0.005\) & \(0.08\pm 0.012\) & \(\mathbf{0.95\pm 0.000}\) \\ fried & \(0.28\pm 0.005\) & \(0.32\pm 0.003\) & \(0.32\pm 0.004\) & \(0.09\pm 0.011\) & \(\mathbf{0.44\pm 0.004}\) & \(0.35\pm 0.001\) & \(0.08\pm 0.012\) & \(\mathbf{0.54\pm 0.001}\) \\ vehicle\_sensTT & \(0.20\pm 0.004\) & \(0.37\pm 0.006\) & \(0.39\pm 0.006\) & \(0.07\pm 0.011\) & \(\mathbf{0.49\pm 0.004}\) & \(0.21\pm 0.004\) & \(0.09\pm 0.012\) & \(\mathbf{0.52\pm 0.001}\) \\ electricity & \(0.26\pm 0.006\) & \(0.32\pm 0.004\) & \(0.34\pm 0.004\) & \(0.08\pm 0.010\) & \(\mathbf{0.35\pm 0.002}\) & \(0.29\pm 0.001\) & \(0.08\pm 0.012\) & \(\mathbf{0.43\pm 0.001}\) \\ lagplanes & \(0.30\pm 0.007\) & \(0.57\pm 0.006\) & \(0.54\pm 0.006\) & \(0.10\pm 0.009\) & \(0.58\pm 0.004\) & \(0.42\pm 0.004\) & \(0.10\pm 0.012\) & \(\mathbf{0.61\pm 0.001}\) \\ creditcard & \(\mathbf{0.43\pm 0.004}\) & \(0.36\pm 0.006\) & \(\mathbf{0.43\pm 0.005}\) & \(0.09\pm 0.011\) & \(0.41\pm 0.003\) & \(0.42\pm 0.002\) & \(0.12\pm 0.012\) & \(\mathbf{0.44\pm 0.001}\) \\ covertype & \(\mathbf{0.51\pm 0.021}\) & \(0.37\pm 0.004\) & \(0.41\pm 0.003\) & \(0.12\pm 0.011\) & \(0.40\pm 0.002\) & \(\mathbf{0.66\pm 0.003}\) & \(0.08\pm 0.012\) & \(0.47\pm 0.001\) \\ nomoma & \(0.47\pm 0.013\) & \(0.65\pm 0.005\) & \(\mathbf{0.66\pm 0.005}\) & \(0.08\pm 0.009\) & \(0.65\pm 0.004\) & \(0.51\pm 0.012\) & \(0.09\pm 0.012\) & \(\mathbf{0.75\pm 0.001}\) \\ weldu\_wKA & \(0.39\pm 0.003\) & \(0.38\pm 0.006\) & \(\mathbf{0.43\pm 0.006}\) & \(0.07\pm 0.010\) & \(0.40\pm 0.003\) & \(0.38\pm 0.000\) & \(0.12\pm 0.011\) & \(\mathbf{0.41\pm 0.001}\) \\ MiniBooNE & \(0.33\pm 0.008\) & \(0.40\pm 0.006\) & \(0.41\pm 0.006\) & \(0.09\pm 0.011\) & \(\mathbf{0.54\pm 0.004}\) & \(0.41\pm 0.003\) & \(0.09\pm 0.012\) & \(\mathbf{0.63\pm 0.001}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: F1-score of different data valuation methods on the twelve datasets when (left) \(n=1000\) and (right) \(n=10000\). The average and standard error of the F1-score based on 50 independent experiments are denoted by ‘average\(\pm\)standard error’. Bold numbers denote the best method. In almost all situations, the proposed Data-OOB outperforms other methods in detecting mislabeled data.
Figure 3: Precision-recall curves of different data valuation methods on the four datasets when (top) \(n=1000\) and (bottom) \(n=10000\). The larger area under the curve is, the better method is. The proposed method shows superior or comparable identification performance in various settings. Additional results using different datasets are provided in Appendix B.1.
first in the first 20% of data removal.
After 80% of the data have been removed, _i.e._, with only the top 20% most helpful data points remaining, the proposed method maintains test accuracy at a level similar to that of the entire dataset. For instance, when the dataset is 'jannis' and \(n=10000\), the test accuracy after removing 80% of the dataset is 72.2%, which is only a 0.3% degradation compared to the entire dataset. Surprisingly, in the case of the 'pol' dataset with \(n=10000\), the accuracy is improved from 87.9% to 90.9% by removing unhelpful data points. Our experiments demonstrate that Data-OOB can be a practical solution not only for selecting detrimental data but also for finding a pivotal set that can maintain or improve test accuracy with a smaller dataset.
As for AME, we find that it performs similarly to random removal. This is because the LASSO model in AME assigns exactly zero values to most of the data points, leading to behavior similar to random selection. Sparsity is usually regarded as a desirable property in high-dimensional data analysis, but this shows that it may not be critical in data valuation problems. As for KNN Shapley, it shows the worst performance for both \(n=1000\) and \(n=10000\), and the case \(n=10000\) is excluded from Figure 4 for a better presentation. This poor performance may result from the \(k\)-nearest neighbors algorithm that is essential to KNN Shapley. The \(k\)-nearest neighbors algorithm only uses data points in a local neighborhood, and as a result, it can fail to capture information characterized by the data distribution (_i.e._, the margin from a classification boundary).
To further investigate the effect of data removal on the model performance, we illustrate a distribution of the remaining data points when 20%, 50%, or 80% of data points are removed. We compare Data-OOB with KNN Shapley and AME. We use the 'MiniBooNE' dataset with \(n=1000\). As its input dimension is \(50\), we display data points using their first two principal component scores.
Figure 5 shows the four different snapshots, namely 0%, 20%, 50%, and 80% of data removal. Data-OOB shows an increased test accuracy even after 50% of data removal by effectively removing unhelpful data points. When 80% of the data are removed, it clearly shows a region for each class, giving an intuitive classifier boundary. The test accuracy for KNN Shapley after 80% of data removal is not measured, as there are no blue class data points in the top 20%. This is anticipated in that KNN Shapley overly focuses on local neighbors and tends to assign large values to points with homogeneous local neighborhoods. AME shows a better test accuracy than KNN Shapley, but it removes data points almost at random due to its sparse data values. As a result, it does not give meaningful insights into class regions. In Appendix B.3, we show additional experiments with the datasets 'electricity' and 'fried'. Data-OOB consistently identifies informative data points, showing a better capability in finding beneficial data points than KNN Shapley and AME.
We emphasize that Data-OOB is insensitive to different choices of the number of weak learners \(B\). We conduct a mislabeled data detection experiment with \(B\in\{400,800,3200\}\), showing that the F1-score for
Figure 4: Test accuracy curves as a function of the percentage of data removed. We consider the four datasets when (top) \(n=1000\) and (bottom) \(n=10000\). We remove data points one by one starting from the lowest to the largest. Higher curves indicate better performance for data valuation. The error bar indicates a 95% confidence interval based on 50 independent experiments. Additional results using 8 different datasets are provided in Appendix B.2.
Data-OOB remains stable. It shows that our experimental results continue to hold with a smaller \(B\). We provide this result in Appendix B.4.
## 5 Related Works
**Bagging.** Bootstrap aggregation, which is also known as bagging, is an ensemble technique that trains multiple weak learners where each learner is trained using a bootstrap dataset (Breiman, 1996). One popular and powerful bagging model is the random forest, in which a number of decision trees are trained with randomly selected sets of features (Breiman, 2001; Wager et al., 2014; Athey et al., 2019). While the primary usage of bagging is to improve a model's performance by decreasing the variance of its predictions, the proposed Data-OOB presents a distinct application of bagging.
**Marginal contribution-based methods in machine learning.** Marginal contribution-based methods have been studied and applied to various machine learning problems, for instance, feature attribution problems (Lundberg and Lee, 2017; Covert et al., 2021; Kwon and Zou, 2022b), model explanation (Stier et al., 2018; Ghorbani and Zou, 2020), collaborative learning (Sim et al., 2020; Xu et al., 2021), and federated learning (Wang, 2019; Wang et al., 2020). The Shapley value is one of the most widely used marginal contribution-based methods, and many alternative approaches have been studied by relaxing some of the underlying fair division axioms (Yan and Procaccia, 2021; Kwon and Zou, 2022a; Wang and Jia, 2022; Rozemberczki et al., 2022). Alternatively, there have been approaches that are independent of marginal contributions. In the data valuation literature, for instance, Yoon et al. (2020) proposed a data value estimator model using reinforcement learning and Ilyas et al. (2022) proposed data
Figure 5: Distribution after data removal for (top) Data-OOB, (middle) KNN Shapley, and (bottom) AME. We remove data points one by one starting from the lowest to the largest. We illustrate data points with their first two principal components. The color indicates a class and the black solid line indicates a classifier boundary obtained by training the remaining dataset. We intentionally repeat the three figures in the first column to ease the comparison. The proposed method shows a better capability in finding beneficial data points.
models that capture the influence by predicting a model's prediction, similarly to Lin et al. (2022).
## 6 Concluding Remarks
In this paper, we propose Data-OOB, which is suitable for any tabular machine learning dataset, as it is easy to train a random forest model on such data. With comprehensive numerical experiments, we demonstrate that Data-OOB is significantly powerful in identifying helpful and harmful data points for model training. Our method does not require additional validation points and is computationally efficient by reusing trained weak learners. Data-OOB is statistically interpretable, showing order consistency with the infinitesimal jackknife influence function.
While Data-OOB has shown promising results in various classification datasets, there are several limitations and it opens several future avenues of research. One potential extension of Data-OOB is to leverage weak learners in boosting models instead of bagging models. We find that boosting models should be treated differently from a regular bagging model. This is because a weak learner in boosting predicts the residuals obtained from the previous optimization steps, not predicting the ground truth labels. In other words, a weak learner in boosting is sequentially dependent on other weak learners, making a direct application of Data-OOB challenging in downstream machine learning tasks. We believe computing data values with a trained boosting model could be very influential as boosting often performs better than a random forest in practice.
One potential caveat is that Data-OOB can assign erroneously high values for detrimental data points if there are many duplicates. This is because when there are multiple duplicate data, the OOB estimate becomes similar to the training accuracy, not the test accuracy. We believe a simple removal of duplicates can address this issue, but we encourage the community to develop a more principled method for duplicate data.
## Acknowledgements
The authors would like to thank all anonymous reviewers for their helpful comments. We also would like to thank Mert Yukekgonul and Young-geun Kim for their constructive feedback.
|
2307.06156 | It takes two spectral sequences | We study the representation theory of the Lie superalgebra
$\mathfrak{gl}(1|1)$, constructing two spectral sequences which eventually
annihilate precisely the superdimension zero indecomposable modules in the
finite-dimensional category. The pages of these spectral sequences, along with
their limits, define symmetric monoidal functors on $\mathrm{Rep}
(\mathfrak{gl}(1|1))$. These two spectral sequences are related by
contragredient duality, and from their limits we construct explicit
semisimplification functors, which we explicitly prove are isomorphic up to a
twist. We use these tools to prove branching results for the restriction of
simple modules over Kac-Moody and queer Lie superalgebras to
$\mathfrak{gl}(1|1)$-subalgebras. | Inna Entova-Aizenbud, Vera Serganova, Alexander Sherman | 2023-07-12T13:30:08Z | http://arxiv.org/abs/2307.06156v1 | # It takes two spectral sequences
###### Abstract.
We study the representation theory of the Lie superalgebra \(\mathfrak{gl}(1|1)\), constructing two spectral sequences which eventually annihilate precisely the superdimension \(0\) indecomposable modules in the finite-dimensional category. The pages of these spectral sequences, along with their limits, define symmetric monoidal functors on \(\operatorname{Rep}\mathfrak{gl}(1|1)\). These two spectral sequences are related by contragredient duality, and from their limits we construct explicit semisimplification functors, which we explicitly prove are isomorphic up to a twist. We use these tools to prove branching results for the restriction of simple modules over Kac-Moody and queer Lie superalgebras to \(\mathfrak{gl}(1|1)\)-subalgebras.
Dedicated to the memory of Georgia Benkart.
## 1. Introduction
Consider the general linear Lie superalgebra \(\mathfrak{gl}(1|1)\) presented as matrices of the form
\[\begin{bmatrix}c+h/2&x\\ y&c-h/2\end{bmatrix}.\]
Given a \(\mathfrak{gl}(1|1)\)-module \(V\) on which \(h\) and \(c\) act semisimply, if we take invariants \(\operatorname{nder}c\) we obtain a super vector space \(V^{c}\) with a supercommuting action of \(x\) and \(y\), along with a semisimple action of \(h\). The supercommuting actions of \(x\) and \(y\) allow one to write down a natural double complex whose terms are all \(M:=V^{c}\), and then consider the two spectral sequences associated to it. Let us consider the one in which we take cohomology with respect to \(x\) first. All entries of this spectral sequence are the same at any given page, so picking out what happens at a fixed position, we obtain a sequence of semisimple \(\mathbb{C}\langle h\rangle\)-modules which we write as \(DS^{n}_{x,y}M\). Here \(DS\) refers to Duflo-Serganova, in reference to the connection of our functors to the Duflo-Serganova functors in the representation theory of Lie superalgebras.
As one might expect, the assignment \(M\mapsto DS^{n}_{x,y}M\) is functorial; in fact it defines a symmetric monoidal functor to \(\operatorname{Rep}\mathbb{C}\langle h,d_{1-2n}\rangle\), which is the category of finite-dimensional representations of the Lie superalgebra with even generator \(h\) acting semisimply, odd generator \(d_{1-2n}\), and such that
\[[h,d_{1-2n}]=(1-2n)d_{1-2n},\quad[d_{1-2n},d_{1-2n}]=0.\]
Thus we have a sequence of symmetric monoidal functors, where \(DS^{1}_{x,y}M=M_{x}:=DS_{x}M\), and \(DS^{2}_{x,y}M=(M_{x})_{y}:=DS_{y}(DS_{x}M)\). Here \(DS_{x}\) and \(DS_{y}\) are Duflo-Serganova functors, see [10].
In general, we have \(DS^{n+1}_{x,y}M\) is the cohomology of \(d_{1-2n}\) on \(DS^{n}_{x,y}M\). These functors detect, to some degree, the indecomposable components of \(M\) as a \(\mathfrak{gl}(1|1)\)-module. In particular if \(M\) is finite-dimensional, the spectral sequence converges, and we write
the object it converges to as \(DS^{\infty}_{x,y}M\), where \(DS^{\infty}_{x,y}\) itself defines a symmetric monoidal functor. We of course may do the same procedure with the other spectral sequence, whose terms we write as \(DS^{n}_{y,x}\), which will define symmetric monoidal functors to \(\operatorname{Rep}\mathbb{C}\langle h,d_{2n-1}\rangle\).
### Relations between functors
The functors \(DS^{\infty}_{x,y}\) and \(DS^{\infty}_{y,x}\) are useful on their own, but they also have precise, useful connections to the functor \(DS_{x+y}\) and semisimplification functors on \(\operatorname{Rep}_{\mathfrak{g}_{m}}\mathfrak{gl}(1|1)\), i.e. the category of \(\mathfrak{gl}(1|1)\)-modules with semisimple action of \(h\) and \(c\). We summarize these connections in the diagram below.
Note that \(\operatorname{Rep}_{\mathfrak{g}_{m}}\mathfrak{gl}(1|1)\) is an abelian rigid symmetric monoidal category, and (as we will show) its semisimplification is \(\operatorname{Rep}\mathfrak{g}_{m}\times\operatorname{Rep}\mathbb{G}_{m}\). We construct two semisimplification functors as depicted, \(DS^{ss}_{x,y}\) and \(DS^{ss}_{y,x}\); we show that these two functors are equivalent up to an autoequivalence \(\Phi_{\phi}\), which comes from twisting by the automorphism \(\phi\) of \(\mathbb{C}\langle h\rangle\times\operatorname{Lie}\mathbb{G}_{m}\) given by \((h,a)\mapsto(h+2a,a)\).
A little more on notation: \(\operatorname{Rep}\mathfrak{g}_{m}^{fil}\) denotes the category of \(\mathbb{Z}\)-filtered modules in \(\operatorname{Rep}\mathfrak{g}_{m}\), and \(\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}\) is a certain category of filtered super vector spaces (see Section 4.4). The functors \(\operatorname{Gr}\) and \(\operatorname{Gr}_{a_{i}}\) are functors given by taking an associated graded space.
Note that all smaller, enclosed triangles in the diagram are commutative in the sense that we have a natural isomorphism of symmetric monoidal functors. Further, contragredient duality acts on the picture by reflecting it about the central vertical axis.
_Remark 1.1_.: We note that the top half of the above diagram can be thought of morally as a consequence of the theory of spectral sequences. This theory tells us that the cohomology of the total complex, which should correspond to \(DS_{x+y}\) in the above picture, has two filtrations whose associated graded (\(\operatorname{Gr}_{a_{1}}\) and \(\operatorname{Gr}_{a_{2}}\) in the above) give the last pages of the two spectral sequences (\(DS^{\infty}_{x,y}\) and \(DS^{\infty}_{y,x}\)).
Written in more explicit terms we have the following theorem summarizing the situation:
**Theorem 1.2**.:
1. _The functors_ \(DS^{\infty}_{x,y}\) _and_ \(DS^{\infty}_{y,x}\) _define full, essentially surjective functors_ \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\to \operatorname{Rep}\mathfrak{g}_{m}^{fil}\) _such that_ \(DS^{\infty}_{x,y}\circ(-)^{\vee}\cong(-)^{\vee}\circ DS^{\infty}_{y,x}\)_, where_ \((-)^{\vee}\) _denotes the contragredient duality functor._
2. _If we denote by_ \(\operatorname{Gr}:\operatorname{Rep}\mathfrak{g}_{m}^{fil}\to\operatorname{Rep }\mathfrak{g}_{m}\times\operatorname{Rep}\mathbb{G}_{m}\) _the functor of taking associated graded, then_ \(DS^{ss}_{x,y}:=\operatorname{Gr}\circ DS^{\infty}_{x,y}\) _and_ \(DS^{ss}_{y,x}:=\operatorname{Gr}\circ DS^{\infty}_{y,x}\) _define semisimplification functors, and we have an isomorphism of functors_ \[DS^{ss}_{x,y}\cong\Phi_{\phi}\circ DS^{ss}_{y,x}.\]
3. _The functor_ \(DS_{x+y}\) _defines a full, essentially surjective functor_ \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\to \operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}\)_._
4. _If we denote by_ \(\operatorname{Gr}_{a_{1}}\)_,_ \(\operatorname{Gr}_{a_{2}}:\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}\to \operatorname{Rep}\mathfrak{g}_{m}^{fil}\) _the two associated graded functors (defined in Section_ 4_), we have natural isomorphisms of functors_ \[DS^{\infty}_{x,y}\cong\operatorname{Gr}_{a_{1}}\circ DS_{x+y},\ \ \ \ \ DS^{\infty}_{y,x}\cong \operatorname{Gr}_{a_{2}}\circ DS_{x+y}.\]
### Applications
Using the functors defined above and their connections to one another, we prove results about branching to a subalgebra \(\mathfrak{gl}(1|1)\).
**Theorem 1.3**.: _Let \(\mathfrak{g}\) be a finite-type Kac-Moody Lie superalgebra or \(\mathfrak{q}(n)\), and consider a diagonal \(\mathfrak{gl}(1|1)\)-subalgebra of \(\mathfrak{g}\) (see Definition 5.3). Assume that \(L\) is a simple finite-dimensional module, and if \(\mathfrak{g}=\mathfrak{q}(n)\) also assume that its highest weight is integral: then the restriction to \(\mathfrak{gl}(1|1)\) contains no non-projective indecomposable summands of superdimension zero._
We note that for \(\mathfrak{q}(n)\)-modules of half-integral weight, non-projective indecomposable summands of superdimension \(0\) do appear; however, for certain subalgebras \(\mathfrak{gl}(1|1)\) we are able to compute which ones appear, and with what multiplicity (see Section 5.3).
For a Lie superalgebra \(\mathfrak{g}\), write \(\operatorname{Rep}^{+}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{g})\) for the Karoubian monoidal subcategory of \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{g})\) generated by all simple modules, where \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{g})\) is the symmetric monoidal category of \(\mathfrak{g}\)-modules that are semisimple over \(\mathfrak{g}_{\overline{0}}\).
**Theorem 1.4**.: _If \(\mathfrak{g}=\mathfrak{gl}(m|n)\) or \(\mathfrak{osp}(2m|2n)\), and \(\mathfrak{gl}(1|1)\) is a root subalgebra of \(\mathfrak{g}\), then for a simple finite-dimensional \(\mathfrak{g}\)-module \(L\) we have \(\operatorname{Res}_{\mathfrak{gl}(1|1)}L\in\operatorname{Rep}^{+}_{\mathfrak{ g}_{\overline{0}}}(\mathfrak{gl}(1|1))\)._
See Theorem 5.9 for the proof; we expect Theorem 1.4 to extend to \(\mathfrak{osp}(2m+1|2n)\); however, we do not yet see a way to prove this using our techniques.
### Summary of article
In Section 2 we recall facts about \(\mathfrak{gl}(1|1)\)-modules, including listing all indecomposables, which will be used throughout the article. Section 3 defines the spectral sequences and gives the main results, namely that we obtain two sequences of symmetric monoidal functors \(DS^{n}_{x,y}\) and \(DS^{n}_{y,x}\) for all \(n\in\mathbb{N}\cup\{\infty\}\). In addition, contragredient duality is shown to interchange \(DS^{n}_{x,y}\) and \(DS^{n}_{y,x}\). Section 4 uses \(DS^{\infty}_{x,y}\) and \(DS^{\infty}_{y,x}\) to explicitly realize the semisimplification of \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{gl}(1|1))\), and proves directly the important relationship between the two semisimplification functors \(DS^{ss}_{x,y}\) and \(DS^{ss}_{y,x}\). Further, the functor \(DS_{x+y}\) is studied and its relationship to \(DS^{\infty}_{x,y}\) and \(DS^{\infty}_{y,x}\) is explained. In Section 5 we give applications of our constructions, in particular proving results about branching simple \(\mathfrak{g}\)-modules to \(\mathfrak{gl}(1|1)\)-subalgebras when \(\mathfrak{g}\) is finite-type Kac-Moody or \(\mathfrak{q}(n)\). Also for \(\mathfrak{q}(n)\) we explicitly compute the functors \(DS^{n}_{x,y}\) on simple
modules when \(x,y\) are opposite root vectors. Finally, the appendix gives all details of the spectral sequence.
### Acknowledgements
The authors would like to thank Maria Gorelik, Thorsten Heidersdorf, and Vladimir Hinich for many helpful discussions. The authors were supported by the NSF-BSF grant 2019694.
## 2. Preliminaries and notation: \(\mathfrak{gl}(1|1)\) and \(\mathfrak{pgl}(1|1)\)
### General notation
We work over the field \(\mathbb{C}\) of complex numbers; all categories and functors will be \(\mathbb{C}\)-linear. We will denote by \(\mathtt{sVec}\) the category of \(\mathbb{C}\)-linear finite-dimensional vector superspaces and grading-preserving maps. For a vector superspace \(V\) we write \(V=V_{\overline{0}}\oplus V_{\overline{1}}\) for its \(\mathbb{Z}_{2}\)-grading.
We will be considering quasireductive Lie superalgebras \(\mathfrak{g}\), i.e. those with \(\mathfrak{g}_{\overline{0}}\) reductive and \(\mathfrak{g}_{\overline{1}}\) a semisimple \(\mathfrak{g}_{\overline{0}}\) module. We will write \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{g})\) for the category of finite-dimensional \(\mathfrak{g}\)-modules which are semisimple over \(\mathfrak{g}_{\overline{0}}\).
We also denote by \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}^{+}(\mathfrak{g})\) the Karoubian symmetric monoidal subcategory of \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{g})\) generated by irreducible modules.
### The category \(\operatorname{Rep}\mathfrak{g}_{m}\)
We denote by \(\operatorname{Rep}\mathfrak{g}_{m}\) the category of semisimple finite-dimensional \(\mathbb{C}\langle h\rangle\)-modules, and by \(\operatorname{Rep}\mathfrak{g}_{m}^{fil}\) the category of \(\mathbb{Z}\)-filtered, semisimple finite-dimensional \(\mathbb{C}\langle h\rangle\)-modules.
Note that we do not require that \(h\) act by integer eigenvalues.
### Lie superalgebra \(\mathfrak{gl}(1|1)\)
Consider the Lie superalgebra \(\mathfrak{g}=\mathfrak{gl}(1|1)\) presented with basis \(h,c,x,y\) such that
\[c\text{ is central},\quad[h,x]=x,\quad[h,y]=-y,\quad[x,y]=c.\]
We see that \([\mathfrak{g},\mathfrak{g}]=\langle c,x,y\rangle\), so that \(\mathfrak{g}/[\mathfrak{g},\mathfrak{g}]\cong\mathbb{C}\langle h\rangle\). Write \(\mathbf{str}:\mathfrak{g}\to\mathbb{C}\) for the character sending \(h\) to \(1\), so that for \(r\in\mathbb{C}\) we have \(r\mathbf{str}:\mathfrak{g}\to\mathbb{C}\) with \(r\mathbf{str}(h)=r\). Then given a \(\mathfrak{g}\)-module \(V\), we write \(V_{r}\) for its twist by the character \(r\mathbf{str}\).
For each block of \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{g})\), either \(c\) acts nontrivially, in which case the block consists of one simple projective module of dimension \((1|1)\) along with its parity shift, or \(c\) acts trivially. When \(c\) acts by \(0\), we obtain the representations of \(\mathfrak{pgl}(1|1)=\mathfrak{gl}(1|1)/\langle c\rangle\). Observe that we have an exact functor
\[\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\to \operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{pgl}(1|1),\quad \quad V\mapsto V^{c},\]
where \(V^{c}\) denotes the invariants under \(c\).
The blocks on which \(c\) acts trivially are indexed by \(\mathbb{C}/2\mathbb{Z}\), corresponding to the possible eigenvalues of \(h\) on even vectors, modulo \(2\mathbb{Z}\). The principal block corresponds to \(0\in\mathbb{C}/2\mathbb{Z}\). Factoring out by \(2\mathbb{Z}\) corresponds to the fact that twisting by \(2\mathbf{str}\) preserves blocks with trivial central character.
### Lie superalgebra \(\mathfrak{pgl}(1|1)\) and its indecomposable representations
From our presentation of \(\mathfrak{gl}(1|1)\) above, we have that \(\mathfrak{pgl}(1|1)\) has generators \(h\) even, \(x,y\) odd, satisfying:
\[[h,x]=x,\quad\ [h,y]=-y,\quad\ [x,y]=0.\]
We present below the finite-dimensional indecomposable modules (up to parity shift and weight shift) of \(\mathfrak{pgl}(1|1)\) with semisimple action of \(h\), which follows from [10] or [11]; note that as explained above these are also the finite-dimensional modules over \(\mathfrak{gl}(1|1)\) with trivial action of \(c\). When we refer to the weights of the module, we mean the eigenvalues of \(h\).
* The projective indecomposable \(P\) on the trivial module.
* \(X(n)\), \(n\in\mathbb{Z}_{>0}\), a \({}^{\prime}Z^{\prime}\)-module: dimension \((n|n)\), lowest weight is \(-n+1/2\) and even, highest weight is \(n-1/2\) and odd.
* \(Y(n)\), \(n\in\mathbb{Z}_{>0}\), a \({}^{\prime}Z^{\prime}\)-module: dimension \((n|n)\), lowest weight is \(-n+1/2\) and even, highest weight is \(n-1/2\) and odd.
* \(W(n)\) for \(n\geq 0\), a \({}^{\prime}W^{\prime}\)-module: dimension \((n+1|n)\), lowest weight is \(-n\), highest weight is \(n\), and both are even.
* \(W(-n):=W(n)^{*}\) for \(n\geq 0\), a \({}^{\prime}W^{\prime}\)-module: dimension \((n+1|n)\), lowest weight is \(-n\), highest weight is \(n\), and both are even.
Observe that \(W(0)\) is isomorphic to the trivial module. Further we can twist any of these above modules by \(r\mathbf{str}\), or take their parity shift. This will give all finite-dimensional indecomposables in \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{pgl}(1|1)\).
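To fix conventions in the smallest nontrivial case, here is one consistent explicit model (matching the bases used later in the proof of Proposition 4.8): \(W(-1)\) has a homogeneous basis \(w_{-1},w_{0},w_{1}\), with \(w_{\pm 1}\) even of weights \(\pm 1\) and \(w_{0}\) odd of weight \(0\), and
\[xw_{0}=w_{1},\qquad yw_{0}=w_{-1},\qquad xw_{\pm 1}=yw_{\pm 1}=0,\]
so that the even vectors span the socle; in \(W(1)\cong W(-1)^{*}\) the even vectors instead generate, and the odd vector spans the socle.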
### Contragredient duality
On \(\mathfrak{gl}(1|1)\) we have a Chevalley automorphism \(\sigma=\sigma_{\mathfrak{gl}(1|1)}\) given by minus supertranspose, i.e.
\[\sigma(h)=-h,\ \ \sigma(c)=-c,\ \ \sigma(x)=-y,\ \ \sigma(y)=x.\]
Further, \(\sigma\) descends to an automorphism of \(\mathfrak{pgl}(1|1)\). This automorphism induces contragredient duality \(V\mapsto V^{\vee}:=(V^{*})^{\sigma_{\mathfrak{gl}(1|1)}}\). It acts on indecomposables in the following way (up to isomorphism):
\[P\mapsto P,\ \ \ \ X(n)\mapsto Y(n),\ \ \ \ Y(n)\mapsto X(n),\ \ \ \ W(n) \mapsto W(-n).\]
For any finite-dimensional \(\mathfrak{gl}(1|1)\)-module \(V\) we have a natural isomorphism
\[V\xrightarrow{\ \sim\ }(V^{\vee})^{\vee},\qquad v\mapsto(-1)^{\overline{v}}v.\]
### Lemma on properties of certain morphisms
**Lemma 2.1**.: _Let \(r,s\in\mathbb{C}\)._
1. _Suppose that_ \(m<n\leq 0\)_. Then any map_ \(W(m)_{r}\to W(n)_{s}\) _must contain the socle of_ \(W(m)_{r}\) _in its kernel._
2. _Suppose that_ \(0\leq m<n\)_. Then the image of any map_ \(W(m)_{r}\to W(n)_{s}\) _is contained in the radical (equivalently, the socle) of_ \(W(n)_{s}\)_._
3. _For_ \(m\leq 0\)_,_ \(n>0\)_, any maps_ \(X(n)_{r}\to W(m)_{s}\) _or_ \(Y(n)_{r}\to W(m)_{s}\) _must vanish on the socle._
4. _For_ \(m\geq 0\)_,_ \(n>0\)_, any maps_ \(W(m)_{s}\to X(n)_{r}\) _or_ \(W(m)_{s}\to Y(n)_{r}\) _must have image contained in the radical (equivalently, the socle)._
Proof.: Notice that (1) \(\iff\) (2) via contragredient duality. To prove (1), assume without loss of generality that \(r=0\), and \(s\in\mathbb{Z}\) and let \(\phi:W(m)\to W(n)_{s}\) be a morphism with \(m<n\leq 0\). Write \(v_{m},\ldots,v_{-m}\) for a weight basis of \(W(m)\). Then necessarily one of \(v_{m}\) or \(v_{-m}\) must lie in the kernel of \(\phi\); let us assume that \(\phi(v_{m})=0\), with the other case being a similar argument. Then necessarily \(y\phi(v_{m+1})=0\); we see that from the structure of \(W(n)_{s}\) this forces \(\phi(v_{m+1})\) to lie in the socle of \(W(n)_{s}\), which implies that \(\phi(v_{m+2})=x\phi(v_{m+1})=0\). Thus we may continue inductively to obtain that \(\phi(v_{m+2i})=0\) for all \(i\), proving (1).
Again by contragredient duality we have that (3) \(\iff\) (4), so we show (3). Further, since \(X(n)^{\sigma}\cong Y(n)\) and \(W(n)^{\sigma}\cong W(n)\), it suffices to deal with \(X(n)\). Assume WLOG that \(r=0\) and \(s\in\mathbb{Z}\), and let \(\phi:X(n)\to W(m)_{s}\) be a morphism. Write \(v_{-n+1/2},\ldots,v_{n-1/2}\) for a weight basis of \(X(n)\); then since \(y\phi(v_{-n+1/2})=0\), we must have \(\phi(v_{-n+1/2})\) lies in the socle, implying that \(\phi(v_{-n+3/2})=x\phi(v_{-n+1/2})=0\). This in turn implies that \(y\phi(v_{-n+5/2})=0\), and we may again continue the argument inductively.
### Duflo-Serganova functors
#### 2.7.1. Definition
The Duflo-Serganova functors were introduced in [10]; see the recent survey [11] for more on the role of this functor in the representation theory of Lie superalgebras.
We consider a slight extension of this functor. For a Lie superalgebra \(\mathfrak{g}\), set \(x^{2}:=\frac{1}{2}[x,x]\) for \(x\in\mathfrak{g}_{\overline{1}}\), and define
\[\mathfrak{g}_{\overline{1}}^{hom}:=\{x\in\mathfrak{g}_{\overline{1}}:\operatorname{ad}(x^{2})\text{ is semisimple}\}.\]
Let \(x\in\mathfrak{g}_{\overline{1}}^{hom}\), and take \(M\) to be a \(\mathfrak{g}\)-module on which \(x^{2}\) acts semisimply. Then we define
\[DS_{x}M:=M_{x}=\frac{\ker\left(x:M^{x^{2}}\to M^{x^{2}}\right)}{\operatorname{Im}\left(x:M^{x^{2}}\to M^{x^{2}}\right)}.\]
In particular \(\mathfrak{g}_{x}:=DS_{x}(\mathfrak{g})\) (computed with respect to the adjoint action) will be a Lie superalgebra, and \(M_{x}\) will have the natural structure of a \(\mathfrak{g}_{x}\)-module.
This defines a functor
\[DS_{x}:\text{Rep}(\mathfrak{g})\to\text{Rep}(\mathfrak{g}_{x})\]
which is symmetric monoidal and \(\mathbb{C}\)-linear, but not exact: given a short exact sequence of \(\mathfrak{g}\)-modules
\[0\to M^{\prime}\to M\to M^{\prime\prime}\to 0\]
we obtain a sequence
\[M_{x}^{\prime}\to M_{x}\to M_{x}^{\prime\prime}\]
which is exact in the middle, but not necessarily exact on either side (see [11]). We call this property of the Duflo-Serganova functors "middle exactness".
#### 2.7.2. \(DS\) functors on \(\mathfrak{gl}(1|1)\)-modules
For \(\mathfrak{g}=\mathfrak{gl}(1|1)\) we have \(\mathfrak{g}_{\overline{1}}^{hom}=\mathfrak{g}_{\overline{1}}\). The following lemma is straightforward.
**Lemma 2.2**.: _We have the following:_
1. \(DS_{u}Q=0\) _for all non-zero_ \(u\in\mathfrak{gl}(1|1)_{\overline{1}}\) _and all projective modules_ \(Q\) _._
2. \(DS_{x}X(n)=0\)_,_ \(DS_{y}X(n)\) _is_ \((1|1)\)_-dimensional._
3. \(DS_{y}Y(n)=0\)_,_ \(DS_{x}Y(n)\) _is_ \((1|1)\)_-dimensional._
4. \(DS_{u}W(n)\) _is_ \((1|0)\)_-dimensional for all_ \(n\in\mathbb{Z}\) _and all non-zero_ \(u\in\mathfrak{gl}(1|1)_{\overline{1}}\)_._
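Here is one explicit model of \(X(1)\), consistent with item (2) and the weight data of Section 2.4: it has a basis \(v\) (even, of weight \(-1/2\)) and \(w\) (odd, of weight \(1/2\)) with \(xv=w\) and \(y\) acting by zero. Then
\[DS_{x}X(1)=\frac{\ker x}{\operatorname{Im}x}=\frac{\langle w\rangle}{\langle w\rangle}=0,\qquad DS_{y}X(1)=\frac{\ker y}{\operatorname{Im}y}=X(1),\]
the latter being \((1|1)\)-dimensional, as claimed.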
## 3. A spectral sequence
### The setting
Let \(M\) be a \(\mathfrak{pgl}(1|1)\)-module, of any dimension. Then we have a double complex all of whose terms are \(M\), with the two (supercommuting) differentials given by the actions of \(x\) and \(y\). We obtain two spectral sequences from this, induced by taking either the horizontal or the vertical filtration. Let us consider the spectral sequence obtained from the horizontal filtration. Every entry of its first page is \(M_{y}\), and the differential is the induced action of \(x\), which we write as \(d_{1}\). Notice that \(h\) acts on \(M_{y}\) because it normalizes \(y\), and we have \([h,d_{1}]=d_{1}\). Every entry of the next page is \((M_{y})_{x}\), equipped with an induced odd differential. We call this odd operator \(d_{3}\); again \(h\) will act on \((M_{y})_{x}\), and \(d_{3}\) increases the weight of \(h\) by \(3\).
In general on the \(n\)th page we obtain the same super vector space at every position, which we write as \(DS^{n}_{y,x}M\), and which admits an action of \(h\) such that the differential \(d_{2n-1}\) increases the \(h\)-weight by \(2n-1\).
### The functors \(DS^{n}_{y,x}\) and \(DS^{n}_{x,y}\)
By picking out one position in the above spectral sequence, we obtain a sequence of spaces which admit a semisimple action of the even operator \(h\), and the action of a square-zero, odd differential \(d_{2n-1}\) such that
\[[h,d_{2n-1}]=(2n-1)d_{2n-1}.\]
Similarly, if we take the spectral sequence associated to the vertical filtration, we obtain a sequence of spaces which admit a semisimple action of \(h\) and the action of the square-zero, odd operator \(d_{1-2n}\) such that \([h,d_{1-2n}]=(1-2n)d_{1-2n}\).
In general, define functors
\[DS^{n}_{y,x}:\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1 |1)\to\operatorname{Rep}\mathbb{C}\langle h,d_{2n-1}\rangle,\]
where \(DS^{n}_{y,x}M\) is the \((0,0)\)-position on the \(n\)th page of the spectral sequence via the horizontal filtration obtained from \(M^{c}\). Similarly, we have
\[DS^{n}_{x,y}:\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1 |1)\to\operatorname{Rep}\mathbb{C}\langle h,d_{1-2n}\rangle,\]
obtained as the \((0,0)\)-position on the \(n\)th page of the spectral sequence via the vertical filtration on \(M^{c}\).
In particular:
\[DS^{0}_{y,x}M=M^{c},\hskip 14.226378ptDS^{1}_{y,x}M=M_{y},\hskip 14.226378ptDS^{2 }_{y,x}M=(M_{y})_{x}.\]
We say these spectral sequences stabilize if the differential vanishes after some page. In this case we write
\[DS^{\infty}_{y,x}M,\hskip 14.226378ptDS^{\infty}_{x,y}M\]
for what they stabilize to, and these spaces will admit a semisimple action of \(h\). The spectral sequences clearly stabilize whenever the weights of \(M\) under the action of \(h\) are bounded; in particular this holds in the case when \(M\) is finite-dimensional. Thus we obtain functors
\[DS^{\infty}_{x,y},DS^{\infty}_{y,x}:\operatorname{Rep}_{\mathfrak{g}_{ \overline{0}}}\mathfrak{gl}(1|1)\to\operatorname{Rep}\mathbb{C}\langle h\rangle\]
The proof of the following theorem, along with all technical details surrounding the spectral sequence, are given in the appendix.
**Theorem 3.1**.: _The functors:_
* \(DS^{n}_{y,x}:\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1 |1)\to\operatorname{Rep}\mathbb{C}\langle h,d_{2n-1}\rangle\)_,_
* \(DS^{n}_{x,y}:\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1 |1)\to\operatorname{Rep}\mathbb{C}\langle h,d_{1-2n}\rangle\)_,_
* \(DS^{\infty}_{x,y},DS^{\infty}_{y,x}:\operatorname{Rep}_{\mathfrak{g}_{ \overline{0}}}\mathfrak{gl}(1|1)\to\operatorname{Rep}\mathbb{C}\langle h\rangle\)_,_
_are symmetric monoidal functors for all \(n\in\mathbb{N}\)._
_Remark 3.2_ (Caution).: Unlike \(DS^{1}_{y,x}=DS_{y}\), which is middle exact, \(DS^{k}_{y,x}\) will not be middle exact for \(k>1\).
### Relationship to contragredient duality
Recall the order \(4\) automorphism \(\sigma_{\mathfrak{gl}(1|1)}\) of \(\mathfrak{gl}(1|1)\) given by \(X\mapsto-X^{st}\), where \(X^{st}\) denotes the supertranspose. In particular, \(\sigma_{\mathfrak{gl}(1|1)}(x)=\pm y\).
Write \(\sigma_{n}:\langle h,d_{2n-1}\rangle\to\langle h,d_{1-2n}\rangle\) for the isomorphism of Lie superalgebras given by \(h\mapsto-h\) and \(d_{2n-1}\mapsto d_{1-2n}\). Then we may twist a \(\mathbb{C}\langle h,d_{2n-1}\rangle\)-module by \(\sigma_{n}\) to obtain a \(\mathbb{C}\langle h,d_{1-2n}\rangle\)-module.
**Lemma 3.3**.: _We have a natural isomorphism of functors_
\[DS^{n}_{x,y}\circ(-)^{\sigma_{\mathfrak{gl}(1|1)}}\cong(-)^{\sigma_{n}}\circ DS^{n}_{y,x}.\]
Proof.: See Section 6.5 of the appendix.
Now define the contragredient duality functors \((-)^{\vee}\) as the composition of dualizing with twisting by either \(\sigma_{\mathfrak{gl}(1|1)}\) or \(\sigma_{h}\). Note that the functor \((-)^{\vee}\) has already been defined and discussed for \(\operatorname{Rep}\mathfrak{gl}(1|1)\) in Section 2.5. Even though for any object \(V\) in \(\operatorname{Rep}\mathfrak{g}_{m}\) we have an isomorphism \(V\cong V^{\vee}\), this isomorphism is not natural, so it is important not to identify these modules.
**Theorem 3.4**.: _For \(n\in\mathbb{N}\cup\{\infty\}\), we have a canonical isomorphism in \(\operatorname{Rep}\mathfrak{g}_{m}\):_
\[DS^{n}_{x,y}(M^{\vee})\cong(DS^{n}_{y,x}M)^{\vee}.\]
Proof.: Combine the isomorphisms from Proposition 6.11 and Lemma 3.3.
### Action of \(d_{n}\) on indecomposable \(\mathfrak{pgl}(1|1)\)-modules
We now describe the action of \(DS^{n}_{y,x}\) and \(DS^{n}_{x,y}\) on indecomposable \(\mathfrak{pgl}(1|1)\)-modules. For \(r\in\mathbb{C}\), we write \(\mathbb{C}_{r}\) for the one-dimensional even \(\mathbb{C}\langle h\rangle\)-module on which \(h\) acts by \(r\).
**Lemma 3.5**.:
1. \(DS^{n}_{y,x}Q=DS^{n}_{x,y}Q=0\) _for all_ \(n>0\) _and for any projective module_ \(Q\in Rep(\mathfrak{pgl}(1|1))\)_._
2. \(DS^{n}_{y,x}W(m)=\mathbb{C}_{-m}\) _for all_ \(n>0\) _and all_ \(m\in\mathbb{Z}\)_._
3. \(DS^{n}_{x,y}W(m)=\mathbb{C}_{m}\) _for all_ \(n>0\) _for all_ \(m\in\mathbb{Z}\)_._
4. \(DS^{n}_{y,x}Y(m)=0\) _for all_ \(n>0\)_._
5. \(DS^{n}_{x,y}X(m)=0\) _for all_ \(n>0\)_._
6. \(DS^{n}_{y,x}X(m)=\mathbb{C}_{-m+1/2}\oplus\Pi\mathbb{C}_{m-1/2}\) _for_ \(n\leq m\)_,_ \(DS^{n}_{y,x}X(m)=0\) _for_ \(n>m\)_. If we write_ \(\mathbb{C}_{-m+1/2}=\langle u_{-m+1/2}\rangle\) _and_ \(\Pi\mathbb{C}_{m-1/2}=\langle u_{m-1/2}\rangle\)_, then_ \(d_{i}=0\) _for_ \(i\neq 0,m\) _and_ \(d_{m}(u_{-m+1/2})=u_{m-1/2}\)_._
7. \(DS^{n}_{x,y}Y(m)=\mathbb{C}_{-m+1/2}\oplus\Pi\mathbb{C}_{m-1/2}\) _for_ \(n\leq m\)_,_ \(DS^{n}_{x,y}Y(m)=0\) _for_ \(n>m\)_. If we write_ \(\mathbb{C}_{-m+1/2}=\langle u_{-m+1/2}\rangle\) _and_ \(\Pi\mathbb{C}_{m-1/2}=\langle u_{m-1/2}\rangle\)_, then_ \(d_{i}=0\) _for_ \(i\neq 0,m\) _and_ \(d_{m}(u_{m-1/2})=u_{-m+1/2}\)_._
Proof.: See Section 6.6.
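As a sanity check of item (6) in the smallest case \(m=1\), computed in the explicit model of \(X(1)\) from Section 2.7.2:
\[DS^{1}_{y,x}X(1)=DS_{y}X(1)=\mathbb{C}_{-1/2}\oplus\Pi\mathbb{C}_{1/2},\qquad d_{1}(u_{-1/2})=u_{1/2},\qquad DS^{n}_{y,x}X(1)=0\ \text{ for all }n\geq 2,\]
where \(d_{1}\) is the induced action of \(x\) on \(DS_{y}X(1)\).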
### Tensor product rules for \(\operatorname{Rep}\mathfrak{gl}(1|1)\)
Using Lemma 3.5 and the fact that \(DS^{r}_{y,x}\) and \(DS^{r}_{x,y}\) are symmetric monoidal functors, one can easily determine the tensor products of modules in \(\operatorname{Rep}\mathfrak{gl}(1|1)\) up to projective summands (see [10] for the precise formulas which include the projective summands).
In the next proposition, we write \(M\cong N\oplus Proj\) whenever \(M\) is isomorphic to a direct sum of \(N\) with some projective \(\mathfrak{gl}(1|1)\)-module.
**Proposition 3.6**.: _We have the following tensor product rules on \(\operatorname{Rep}\mathfrak{gl}(1|1)\) up to a projective summand:_
1. _For_ \(m,n\in\mathbb{Z}\) _we have_ \(W(m)\otimes W(n)\equiv W(m+n)\oplus Proj\)_;_
2. _For_ \(n\in\mathbb{Z}\)_,_ \(m\geq 0\)_, we have_ \(W(n)\otimes X(m)\cong X(m)_{-n}\oplus Proj\)_;_
3. _For_ \(n\in\mathbb{Z}\)_,_ \(m\geq 0\)_, we have_ \(W(n)\otimes Y(m)\cong Y(m)_{n}\oplus Proj\)_;_
4. _For_ \(0<m\leq n\)_, we have_ \(X(m)\otimes X(n)\cong X(m)_{-n+1/2}\oplus\Pi X(m)_{n-1/2}\oplus Proj\)_;_
5. _For_ \(0<m\leq n\)_, we have_ \(Y(m)\otimes Y(n)\cong Y(m)_{-n+1/2}\oplus\Pi Y(m)_{n-1/2}\oplus Proj\)_;_
6. _For_ \(m,n\in\mathbb{Z}\)_,_ \(X(m)\otimes Y(n)\) _is projective._
Proof.: In each case we apply \(DS^{\infty}_{x,y}\) and \(DS^{\infty}_{y,x}\), using that these are symmetric monoidal functors, and determine the decomposition.
From part (1) of Proposition 3.6, one can compute the semisimplification of \(\operatorname{Rep}_{\mathfrak{g_{0}}}\mathfrak{gl}(1|1)\); however, we prove this directly later on.
_Remark 3.7_.: One could determine the full decomposition of the tensor products in Proposition 3.6 using the \(\mathbb{C}\langle h\rangle\)-character of the tensor product and the description of the indecomposable projectives of \(\mathfrak{pgl}(1|1)\) given in Section 2.4.
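For instance, in the smallest case of rule (1), a dimension count (recorded here for convenience) pins down the projective part: \(W(1)\otimes W(1)\) has dimension \((5|4)\) and \(W(2)\) has dimension \((3|2)\), so
\[W(1)\otimes W(1)\cong W(2)\oplus Q,\qquad\dim Q=(2|2),\]
and since \(c\) acts by zero, the \((2|2)\)-dimensional projective summand \(Q\) must be a twist of the projective indecomposable \(P\) on the trivial module, or of its parity shift.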
### Application: tensor products in \(\operatorname{Rep}_{\mathfrak{g_{0}}}\mathfrak{gl}(1|n)\)
Let \(\mathfrak{g}=\mathfrak{gl}(1|n)\). Choose an isotropic root \(\alpha\) and a corresponding root subalgebra \(\mathfrak{gl}(1|1)\subseteq\mathfrak{gl}(1|n)\). We write \(x\in\mathfrak{g}_{\alpha}\), \(y\in\mathfrak{g}_{-\alpha}\), \(c=[x,y]\), and \(h\) as usual. We are interested in the stable category of \(\operatorname{Rep}_{\mathfrak{g_{0}}}\mathfrak{gl}(1|n)\): this is the category obtained from \(\operatorname{Rep}_{\mathfrak{g_{0}}}\mathfrak{gl}(1|n)\) after quotienting by the ideal of morphisms which factor through a projective \(\mathfrak{gl}(1|n)\)-module. The blocks of this category, up to parity, are indexed by irreducible representations of \(\mathfrak{gl}(n-1)\), see [10]. Write this set as \(\mathcal{I}_{n-1}\), and denote by \(\mathcal{B}^{\prime}_{V}\) the block corresponding to \(V\in\mathcal{I}_{n-1}\). The block \(\mathcal{B}^{\prime}_{V}\) is equivalent to the principal block of \(\operatorname{Rep}(\mathfrak{gl}(1|1))\) (see [10]).
Thus for each \(V\in\mathcal{I}_{n-1}\), we may consider a sum of blocks \(\mathcal{B}_{V}:=\mathcal{B}^{\prime}_{V}\oplus\Pi\mathcal{B}^{\prime}_{V}\) of \(\operatorname{Rep}_{\mathfrak{g_{0}}}\mathfrak{gl}(1|n)\). We may index the simple modules of \(\mathcal{B}_{V}\) up to parity with integers, and we write \(L_{V}(n)\) with \(n\in\mathbb{Z}\) for the unique simple module in \(\mathcal{B}_{V}\) such that \(DS_{x}L_{V}(n)=V\) and the \(h\) action on \(DS_{x}L_{V}(n)\) is multiplication by the scalar \(n\).
Given \(n_{1},n_{2}\in\mathbb{Z}\), we set \(W_{V}(n_{1};n_{2})\) to be the module in \(\mathcal{B}_{V}\) corresponding to the \(\mathfrak{gl}(1|1)\)-module \(W(n_{1})_{n_{2}}\). Define \(X_{V}(n_{1};n_{2})\), \(Y_{V}(n_{1};n_{2})\) when \(n_{1}\in\mathbb{N}\), \(n_{2}\in\mathbb{Z}\) to be the modules corresponding to \(X(n_{1})_{n_{1}-1/2+n_{2}}\) and \(Y(n_{1})_{n_{1}-1/2+n_{2}}\). Because the functors \(DS^{m}_{x,y}\) and \(DS^{m}_{y,x}\) commute with translation functors, we are able to use methods similar to those in, for instance, [10] to obtain the following:
1. \(DS^{m}_{x,y}W_{V}(n_{1};n_{2})=DS_{x}W_{V}(n_{1};n_{2})=V_{n_{1}+n_{2}}\) for all \(m>0\);
2. \(DS^{m}_{y,x}W_{V}(n_{1};n_{2})=DS_{y}W_{V}(n_{1};n_{2})=V_{n_{1}-n_{2}}\) for all \(m>0\);
3. \(DS^{m}_{x,y}X_{V}(n_{1};n_{2})=0=DS^{m}_{y,x}Y_{V}(n_{1};n_{2})\) for \(m>0\);
4. \(DS^{m}_{y,x}X_{V}(n_{1};n_{2})=V_{n_{2}}\oplus\Pi V_{2n_{1}-1+n_{2}}\) for \(m\leq n_{1}\);
5. \(DS^{m}_{y,x}X_{V}(n_{1};n_{2})=0\) for \(m>n_{1}\);
6. \(DS^{m}_{x,y}Y_{V}(n_{1};n_{2})=V_{n_{2}}\oplus\Pi V_{2n_{1}-1+n_{2}}\) for \(m\leq n_{1}\);
7. \(DS^{m}_{x,y}Y_{V}(n_{1};n_{2})=0\) for \(m>n_{1}\).
For irreducible representations \(V(\lambda),V(\mu),V(\gamma)\) of \(\mathfrak{gl}(n-1)\) we denote by \(c^{\gamma}_{\lambda\mu}\) the multiplicity of \(V(\gamma)\) in \(V(\lambda)\otimes V(\mu)\) (the Littlewood-Richardson coefficient): in other words, \(V(\lambda)\otimes V(\mu)=\bigoplus\limits_{\gamma}V(\gamma)^{c^{\gamma}_{ \lambda\mu}}\).
**Theorem 3.8**.: _We have the following tensor product relations in the stable category of \(\operatorname{Rep}_{\mathfrak{g_{0}}}\mathfrak{gl}(1|n)\):_
1. \(W_{V(\lambda)}(n_{1};n_{2})\otimes W_{V(\mu)}(m_{1};m_{2})\equiv\bigoplus \limits_{\gamma}W_{V(\gamma)}(n_{1}+m_{1};n_{2}+m_{2})^{c^{\gamma}_{\lambda\mu}}\)_;_
2. \(W_{V(\lambda)}(n_{1};n_{2})\otimes X_{V(\mu)}(m_{1};m_{2})\equiv\bigoplus \limits_{\gamma}X_{V(\gamma)}(m_{1};m_{2}-n_{1}+n_{2})^{c^{\gamma}_{\lambda\mu}}\)
3. \(W_{V(\lambda)}(n_{1};n_{2})\otimes Y_{V(\mu)}(m_{1};m_{2})\equiv\bigoplus\limits_{ \gamma}Y_{V(\gamma)}(m_{1};m_{2}+n_{1}+n_{2})^{c^{\gamma}_{\lambda\mu}}\)_;_
4. \(0<n_{1}\leq m_{1}\)_:_ \[X_{V(\lambda)}(n_{1};n_{2})\otimes X_{V(\mu)}(m_{1};m_{2})\equiv\bigoplus \limits_{\gamma}X_{V(\gamma)}(n_{1};n_{2}+m_{2})^{c^{\gamma}_{\lambda\mu}} \oplus\Pi X_{V(\gamma)}(n_{1};n_{2}+m_{2}+2m_{1}-1)^{c^{\gamma}_{\lambda\mu}};\]
5. \(0<n_{1}\leq m_{1}\)_:_ \[Y_{V(\lambda)}(n_{1};n_{2})\otimes Y_{V(\mu)}(m_{1};m_{2})\equiv\bigoplus \limits_{\gamma}Y_{V(\gamma)}(n_{1};n_{2}+m_{2})^{c^{\gamma}_{\lambda\mu}} \oplus\Pi Y_{V(\gamma)}(n_{1};n_{2}+m_{2}+2m_{1}-1)^{c^{\gamma}_{\lambda\mu}}.\]
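For example, taking \(V(\lambda)=V(\mu)\) to be the trivial \(\mathfrak{gl}(n-1)\)-module (write \(\mathbf{1}\) for it here), the only nonzero Littlewood-Richardson coefficient is \(c^{\gamma}_{\lambda\mu}=1\) for \(V(\gamma)=\mathbf{1}\), and relation (1) reduces to
\[W_{\mathbf{1}}(n_{1};n_{2})\otimes W_{\mathbf{1}}(m_{1};m_{2})\equiv W_{\mathbf{1}}(n_{1}+m_{1};n_{2}+m_{2}),\]
a direct analogue of Proposition 3.6(1).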
From the above relations, we see that the semisimplification of the category \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|n)\) is \(\operatorname{Rep}\mathfrak{g}_{m}\times\operatorname{Rep}GL(n-1)\times\operatorname{Rep}\mathbb{G}_{m}\). This computation was originally done in [119].
## 4. Realizing the semisimplification
Recall the category \(\operatorname{Rep}\mathfrak{g}_{m}\) of semisimple representations of \(\mathbb{C}\langle h\rangle\). So far we have constructed, for each \(n\in\mathbb{N}\cup\{\infty\}\), symmetric monoidal functors
\[DS^{n}_{y,x}:\operatorname{Rep}_{\mathfrak{g}_{\mathbb{G}}}\mathfrak{gl}(1|1) \to\operatorname{Rep}\mathbb{C}\langle h,d_{2n-1}\rangle,\]
\[DS^{n}_{x,y}:\operatorname{Rep}_{\mathfrak{g}_{\mathbb{G}}}\mathfrak{gl}(1|1) \to\operatorname{Rep}\mathbb{C}\langle h,d_{1-2n}\rangle,\]
where \(d_{\pm\infty}=0\), and which are interchanged by contragredient duality. For the following, recall that for a \(\mathfrak{gl}(1|1)\)-module \(V\) and \(r\in\mathbb{C}\), we write \(V_{r}\) for the tensor product of \(V\) by the one dimensional module of character \(r\mathbf{str}\).
**Lemma 4.1**.: _Let \(m,n\in\mathbb{Z}\), \(r,s\in\mathbb{C}\)._
1. _The_ \(\mathbb{C}\langle h\rangle\)_-module_ \(DS^{\infty}_{x,y}W(n)_{r}\) _is one-dimensional, with_ \(h\) _acting by the scalar_ \(n+r\)_. We have_ \[DS^{\infty}_{x,y}\operatorname{Hom}_{\mathfrak{gl}(1|1)}(W(n)_{r},W(m)_{s})= \begin{cases}\mathbb{C}&\text{ if }s-r=n-m\geq 0\\ 0&\text{ else}\end{cases}.\]
2. _The_ \(\mathbb{C}\langle h\rangle\)_-module_ \(DS^{\infty}_{y,x}W(n)_{r}\) _is one-dimensional, with_ \(h\) _acting by the scalar_ \(-n+r\)_. We have_ \[DS^{\infty}_{y,x}\operatorname{Hom}_{\mathfrak{gl}(1|1)}(W(n)_{r},W(m)_{s})= \begin{cases}\mathbb{C}&\text{ if }s-r=m-n\leq 0\\ 0&\text{ else}\end{cases}.\]
Proof.: The statements about the modules \(DS^{\infty}_{x,y}W(n)_{r}\), \(DS^{\infty}_{y,x}W(n)_{r}\) follow from Lemma 3.5. To compute the images of the \(\operatorname{Hom}\)-spaces, recall from Proposition 3.6 that
\[\operatorname{Hom}_{\mathfrak{gl}(1|1)}(W(n)_{r},W(m)_{s}) \cong\operatorname{Hom}_{\mathfrak{gl}(1|1)}(W(n),W(m)_{s-r}) \cong\operatorname{Hom}_{\mathfrak{gl}(1|1)}(\mathbb{C},W(-n)\otimes W(m)_{s-r})\] \[\cong\operatorname{Hom}_{\mathfrak{gl}(1|1)}(\mathbb{C},W(m-n)_{s-r} \oplus Q)\]
for some projective \(\mathfrak{gl}(1|1)\)-module \(Q\). Since the functors \(DS^{\infty}_{x,y}\), \(DS^{\infty}_{y,x}\) are symmetric monoidal and send \(Q\) to zero, we conclude that it is enough to prove the statements in the case \(n=r=0\). In this case the non-zero \(\mathfrak{gl}(1|1)\)-maps \(W(0)\cong\mathbb{C}\to W(m)_{s}\) are embeddings, so the required statements follow immediately from the definitions of the functors.
The following diagram illustrates the morphisms between such objects \(DS^{\infty}_{x,y}W(n)_{r}\), \(DS^{\infty}_{y,x}W(n)_{r}\) on which \(h\) acts with integral eigenvalues. We write \(\overline{(-)}\) for the images of the indecomposable \(\mathfrak{gl}(1|1)\)-modules in \(\operatorname{Rep}\mathfrak{g}_{m}\); the red (respectively, blue) arrows show the images of \(\mathfrak{gl}(1|1)\)-morphisms under the functor \(DS^{\infty}_{x,y}\) (respectively, \(DS^{\infty}_{y,x}\)).
_Remark 4.2_.: In particular, although the objects \(DS^{\infty}_{x,y}W(n)_{r}\), \(DS^{\infty}_{x,y}W(n+1)_{r-1}\) are isomorphic as \(\mathbb{C}\langle h\rangle\)-modules, the inverse map is not in the image of \(DS^{\infty}_{x,y}\). Thus we see that the functors \(DS^{\infty}_{x,y}\) and \(DS^{\infty}_{y,x}\) are not full.
### Filtrations on \(DS^{\infty}_{x,y}\) and \(DS^{\infty}_{y,x}\)
#### 4.2.1. Definition
The functors \(DS^{\infty}_{x,y}\), \(DS^{\infty}_{y,x}\) as defined before have a significant downside: they send non-isomorphic indecomposable \(\mathfrak{gl}(1|1)\)-modules to isomorphic one-dimensional \(\mathbb{C}\langle h\rangle\)-modules. In order to remedy this, we will now show that these functors are naturally equipped with additional structure.
We define natural filtrations on our two functors, both of which we denote by \(F_{\bullet}\). Informally, the filtration works as follows: for \(V\in\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\), we filter the object \(DS^{\infty}_{x,y}V\) using the above diagram by images of indecomposable summands \(\overline{W(k)}_{\bullet}\) lying to the left of some vertical line, i.e. such that \(k\leq n\) for some corresponding \(n\).
_Example 4.3_.: If \(V=W(n)_{s}\), we see from the above diagram that the \((1|0)\)-dimensional module \(DS^{\infty}_{x,y}W(n)_{s}\) has a natural filtration, which is essentially "2-step": \(F_{k}DS^{\infty}_{x,y}W(n)_{s}=0\) for \(k<n\), and \(F_{k}DS^{\infty}_{x,y}W(n)_{s}=DS^{\infty}_{x,y}W(n)_{s}\) for \(k\geq n\).
More precisely, for \(V\in\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\) and \(n\in\mathbb{Z}\), set
\[F_{n}DS^{\infty}_{x,y}V:=\operatorname{Im}(\operatorname{Hom}_{\mathfrak{sl}( 1|1)}(W(n),V)\otimes DS^{\infty}_{x,y}W(n)\to DS^{\infty}_{x,y}V\,),\]
and
\[F_{n}DS^{\infty}_{y,x}V:=\operatorname{Im}(\operatorname{Hom}_{\mathfrak{sl}( 1|1)}(W(n),V)\otimes DS^{\infty}_{y,x}W(n)\to DS^{\infty}_{y,x}V\,),\]
In other words, \(F_{n}DS^{\infty}_{x,y}V\) contains all vectors in \(DS^{\infty}_{x,y}V\) which lie in the images of morphisms of the form \(DS^{\infty}_{x,y}f:DS^{\infty}_{x,y}W(n)\to DS^{\infty}_{x,y}V\) for some \(f\in\operatorname{Hom}_{\mathfrak{sl}(1|1)}(W(n),V)\), and similarly for the second functor.
_Remark 4.4_.: To avoid dealing with the twists of the \(W(n)\)'s, which would make the definition more cumbersome, we used maps of \(\mathfrak{sl}(1|1)\)-modules instead of \(\mathfrak{gl}(1|1)\)-modules.
Let us show that this is indeed a filtration:
**Lemma 4.5**.: _For any \(V\in\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\), we have: \(F_{n}DS^{\infty}_{x,y}V\subset F_{n+1}DS^{\infty}_{x,y}V\) for any \(n\), and_
\[\bigcap_{n}F_{n}DS^{\infty}_{x,y}V=0,\ \ \ \ \bigcup_{n}F_{n}DS^{\infty}_{x,y}V= DS^{\infty}_{x,y}V.\]
_An analogous statement holds for the filtration \(F_{\bullet}DS^{\infty}_{y,x}V\)._
Proof.: We prove the statements for the functor \(DS^{\infty}_{x,y}\), for the other functor the proof is analogous.
Fix maps \(f_{n}:W(n+1)_{-1}\to W(n)\) such that \(DS^{\infty}_{x,y}(f_{n})\neq 0\) (so \(DS^{\infty}_{x,y}(f_{n})\) is an isomorphism in \(\operatorname{Rep}\mathfrak{g}_{m}\)). Let \(\phi:W(n)\to V\). Then
\[\operatorname{Im}(DS^{\infty}_{x,y}(\phi)\circ DS^{\infty}_{x,y}(f_{n}))= \operatorname{Im}(DS^{\infty}_{x,y}(\phi))\]
so \(\operatorname{Im}DS^{\infty}_{x,y}(\phi)\subset F_{n+1}DS^{\infty}_{x,y}V\), proving the first statement.
Next, decomposing \(V\) into a direct sum of indecomposable \(\mathfrak{gl}(1|1)\)-modules, we see that \(F_{n}DS^{\infty}_{x,y}(V)\neq 0\) iff \(V\) contains a summand of the form \(W(k)_{r}\), \(k\leq n\), \(r\in\mathbb{C}\). This implies that \(\bigcap_{n}F_{n}DS^{\infty}_{x,y}V=0\). Moreover, since \(DS^{\infty}_{x,y}V\) is a direct sum of \(DS^{\infty}_{x,y}W(k)_{r}\) where \(W(k)_{r}\) are direct summands of \(V\), we conclude that \(\bigcup_{n}F_{n}DS^{\infty}_{x,y}V\) is the entire space \(DS^{\infty}_{x,y}V\).
#### 4.2.2. A new target category
It is easy to check that if \(\varphi:V\to W\) is a morphism of \(\mathfrak{gl}(1|1)\)-modules, then the map \(DS^{\infty}_{x,y}\varphi\) (respectively, \(DS^{\infty}_{y,x}\varphi\)) respects the filtrations defined above.
This implies that the filtrations define functors \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\to \operatorname{Rep}\mathfrak{g}_{m}^{fil}\), where \(\operatorname{Rep}\mathfrak{g}_{m}^{fil}\) is the category of filtered, semisimple \(\mathbb{C}\langle h\rangle\)-modules. By abuse of notation, we will denote these functors again by \(DS^{\infty}_{x,y}\) and \(DS^{\infty}_{y,x}\).
**Lemma 4.6**.: _The functors \(DS^{\infty}_{x,y},DS^{\infty}_{y,x}:\operatorname{Rep}_{\mathfrak{g}_{ \overline{0}}}\mathfrak{gl}(1|1)\to\operatorname{Rep}\mathfrak{g}_{m}^{fil}\) are symmetric monoidal functors._
Proof.: We prove the statement for the functor \(DS^{\infty}_{x,y}\), the proof for \(DS^{\infty}_{y,x}\) being analogous.
To show that \(DS^{\infty}_{x,y}\) is a symmetric monoidal functor, we only need to establish a natural transformation
\[DS^{\infty}_{x,y}V\otimes DS^{\infty}_{x,y}W\longrightarrow DS^{\infty}_{x,y}(V\otimes W)\]
and show that it is an isomorphism (the fact that this functor respects the unit and the symmetry morphisms is obvious). Indeed, given \(V,V^{\prime}\in\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl }(1|1)\), the filtered \(\mathbb{C}\langle h\rangle\)-module \(DS^{\infty}_{x,y}(V)\otimes DS^{\infty}_{x,y}(V^{\prime})\) has \(n\)-th filtration given by
\[F_{n}\left(DS^{\infty}_{x,y}(V)\otimes DS^{\infty}_{x,y}(V^{\prime})\right)= \sum_{n_{1}+n_{2}=n}F_{n_{1}}DS^{\infty}_{x,y}(V)\otimes F_{n_{2}}DS^{\infty}_ {x,y}(V^{\prime})\]
This is a subspace of the vector superspace \(DS^{\infty}_{x,y}(V)\otimes DS^{\infty}_{x,y}(V^{\prime})\) which is isomorphic (as a superspace) to \(DS^{\infty}_{x,y}(V\otimes V^{\prime})\). This subspace is spanned by the subspaces
\(\operatorname{Im}DS^{\infty}_{x,y}(f)\otimes\operatorname{Im}DS^{\infty}_{x,y}(f^{ \prime})\) for all \(f\in\operatorname{Hom}_{\mathfrak{sl}(1|1)}(W(n_{1}),V)\) and \(f^{\prime}\in\operatorname{Hom}_{\mathfrak{sl}(1|1)}(W(n_{2}),V^{\prime})\), \(n_{1}+n_{2}=n\). Such a pair \((f,f^{\prime})\) defines a map \(f\otimes f^{\prime}:W(n_{1})\otimes W(n_{2})\to V\otimes V^{\prime}\). By Proposition 3.6, \(W(n_{1})\otimes W(n_{2})\) is isomorphic to a direct sum of \(W(n)\) with some projective \(\mathfrak{gl}(1|1)\) module. Thus \(f\otimes f^{\prime}\) induces a map \(DS^{\infty}_{x,y}(f\otimes f^{\prime}):DS^{\infty}_{x,y}(W(n))\to DS^{ \infty}_{x,y}(V\otimes V^{\prime})\). Hence we obtain a natural inclusion
\[F_{n}\left(DS^{\infty}_{x,y}(V)\otimes DS^{\infty}_{x,y}(V^{\prime})\right) \,\subset\,F_{n}DS^{\infty}_{x,y}(V\otimes V^{\prime}) \tag{4.1}\]
making the functor \(DS^{\infty}_{x,y}\) lax monoidal. To show that it is, in fact, strongly monoidal, we need to show that the above inclusion is actually an isomorphism, a statement which it is enough to verify whenever \(V,V^{\prime}\) are indecomposable \(\mathfrak{gl}(1|1)\)-modules. The only case of interest here is when \(V\cong W(k)_{r}\), \(V^{\prime}\cong W(m)_{s}\) for some \(k,m\in\mathbb{Z}\), \(r,s\in\mathbb{C}\). In that case, Example 4.3 shows that the dimensions of both sides of the inclusion (4.1) are equal, which implies that this inclusion is an isomorphism.
#### 4.2.3. The associated graded
We may now define
\[DS^{\infty}_{x,y}V[n] :=F_{n}DS^{\infty}_{x,y}V/F_{n-1}DS^{\infty}_{x,y}V, \tag{4.2}\] \[DS^{\infty}_{y,x}V[n] :=F_{n}DS^{\infty}_{y,x}V/F_{n-1}DS^{\infty}_{y,x}V. \tag{4.3}\]
Define corresponding cofiltrations \(F^{\bullet}\) on \(DS^{\infty}_{x,y}\) and \(DS^{\infty}_{y,x}\) as
\[F^{n}DS^{\infty}_{x,y}V=DS^{\infty}_{x,y}V/F_{-n-1}DS^{\infty}_{x,y}V,\]
and
\[F^{n}DS^{\infty}_{y,x}V=DS^{\infty}_{y,x}V/F_{-n-1}DS^{\infty}_{y,x}V.\]
Clearly we have canonical isomorphisms:
\[\ker(F^{n}DS^{\infty}_{x,y}V\to F^{n-1}DS^{\infty}_{x,y}V)\cong DS ^{\infty}_{x,y}V[-n] \tag{4.4}\] \[\ker(F^{n}DS^{\infty}_{y,x}V\to F^{n-1}DS^{\infty}_{y,x}V)\cong DS ^{\infty}_{y,x}V[-n] \tag{4.5}\]
Now we can prove:
**Proposition 4.7**.: _The natural isomorphism \(DS^{\infty}_{x,y}V^{\vee}\to(DS^{\infty}_{y,x}V)^{\vee}\) of Theorem 3.4 takes the filtration \(F_{\bullet}DS^{\infty}_{x,y}V^{\vee}\) to the cofiltration \(F^{\bullet}DS^{\infty}_{y,x}V\). In particular it induces a natural isomorphism_
\[DS^{\infty}_{x,y}V^{\vee}[n]\cong(DS^{\infty}_{y,x}V[-n])^{\vee}.\]
_where \((-)^{\vee}\) on the left hand side stands for contragredient duality of \(\mathfrak{gl}(1|1)\) and on the right hand side stands for contragredient duality of \(\mathbb{C}\langle h\rangle\)-modules._
Proof.: The isomorphism \(DS^{\infty}_{x,y}V^{\vee}\to(DS^{\infty}_{y,x}V)^{\vee}\) gives rise to a natural filtration on \((DS^{\infty}_{x,y}V)^{\vee}\), which thus gives a natural cofiltration on \(DS^{\infty}_{y,x}V\). In order to show this cofiltration agrees with the one defined above, it suffices to check this is so on indecomposable, and argue by naturality.
Thus consider the indecomposable module \(W(n)_{r}\). Since \(W(n)_{r}^{\vee}=W(-n)_{-r}\), we have
\[F_{k}DS^{\infty}_{x,y}W(n)_{r}^{\vee}=\begin{cases}0,&\text{if }k<-n\\ DS^{\infty}_{x,y}W(n)_{r}^{\vee},&\text{if }k\geq-n.\end{cases}\]
It follows that the induced cofiltration on \(DS^{\infty}_{y,x}W(n)_{r}\) is given by
\[F^{k}DS^{\infty}_{y,x}W(n)_{r}=\begin{cases}0,&\text{if }k<-n\\ DS^{\infty}_{y,x}W(n)_{r},&\text{if }k\geq-n.\end{cases}\]
This is exactly the cofiltration defined above.
The following result should be viewed as a concrete realization of the uniqueness of semisimplification up to isomorphism of functors; see Section 4.3 for more precise statements.
**Proposition 4.8**.: _For each \(n\in\mathbb{Z}\) we have a natural isomorphism of \(\mathbb{C}\langle h\rangle\)-modules_
\[DS^{\infty}_{x,y}V[n]\to DS^{\infty}_{y,x}V_{2n}[n]\]
_Here on the right hand side we have the \((2n)\)-twist of \(V\) by the Berezinian._
Proof.: In this proof all the isomorphisms will be up to parity shift. Let \(v\in DS^{\infty}_{x,y}V[n]\). There exists a summand \(W\cong W(n)\) of \(V\) such that \(v\) lifts to the lowest (respectively, the highest) weight vector of \(W\) if \(n\leq 0\) (respectively, \(n>0\)). We denote this vector by \(\tilde{v}\). Now, within this summand \(W\) there is a natural isomorphism between the highest and lower weight spaces; namely, the maps \(x,y\) define isomorphisms between the weight spaces, so we may repeatedly uniquely lift and project. Explicitly, our isomorphism is given by:
\[\tilde{v}\mapsto(xy^{-1})^{n}\tilde{v}\text{ for }n\leq 0,\ \ \ \ \tilde{v}\mapsto(x^{-1}y)^{n}\tilde{v}\text{ for }\ n\geq 0.\]
In this way we may map \(\tilde{v}\) to the highest (respectively, the lowest) weight vector, which we call \(w\). Then \(w\) defines an element in \(DS^{\infty}_{y,x}V[n]\) with weight shifted by \(-2n\); thus to make this map \(h\)-equivariant, we need to twist by \(2n\). Once we show the above procedure is well-defined, it is clear that it is natural (i.e. respects \(\mathbb{C}\langle h\rangle\)-maps), and so we will be done.
To check that it is well-defined, we need to check what happens if we choose another summand \(W^{\prime}\cong W(n)\) of \(V\) in which \(v\) lifts to the lowest (respectively highest) weight vector \(\tilde{v}^{\prime}\). We will explain what happens in the case of \(n=-m\leq 0\), with the case \(n>0\) following from applying Proposition 4.7. Notice that since \(DS^{\infty}_{x,y}W(n)=DS_{x}W(n)\), our choice of \(\tilde{v},\tilde{v}^{\prime}\) have that \(\tilde{v}-\tilde{v}^{\prime}\in\operatorname{im}x\).
Under our setup, \(\tilde{v},\tilde{v}^{\prime}\) lie in the socles of \(W\), \(W^{\prime}\) respectively, and give rise to well-defined bases
\[w_{-m}=\tilde{v},w_{1-m},\dots,w_{m},\ \text{ and }\ w^{\prime}_{-m}=\tilde{v}^{ \prime},w^{\prime}_{1-m},\dots,w^{\prime}_{m}\]
of \(W\) and \(W^{\prime}\) respectively. These bases are obtained by using the fact that \(x\) and \(y\) define isomorphisms between weight spaces. Here these bases have the property that \(w_{-m+2i}\) lies in the socle of \(W\) for all \(i\), \(yw_{-m+2i+1}=w_{-m+2i}\), \(xw_{-m+2i+1}=w_{-m+2i+2}\) for all \(0\leq i\leq m-1\), and similarly for the corresponding vectors in \(W^{\prime}\).
We want to show that \(w_{m}\) and \(w^{\prime}_{m}\) admit equivalent projections to \(DS^{\infty}_{y,x}V[n]_{2n}\); assume for a contradiction that this is not the case.
Take \(z_{i}=w_{i}-w^{\prime}_{i}\); then we claim that \(z_{m}\) will be the highest weight vector of a submodule of \(V\) isomorphic to either \(X(j)_{k}\) for some \(j>0\) and \(k\in\mathbb{Z}\) or to \(W(j)_{k}\) for some \(j<n\), \(k\in\mathbb{Z}\). Indeed, \(xz_{m-1}=z_{m}\neq 0\) by our assumption, so we must have \(z_{m-1}\neq 0\); if \(z_{m-2}=yz_{m-1}=0\) then \(z_{m-1},z_{m}\) define a submodule isomorphic to \(X(1)_{k}\)
and if \(z_{m-2}\neq 0\), then we may continue the argument: either for some \(i\) we have \(z_{m-2i}=0\), and then \(z_{m-2i+1},\ldots,z_{m}\) span a submodule isomorphic to \(X(i)_{k}\) for some \(k\in\mathbb{C}\), or \(z_{m-2i}\neq 0\) for all \(i=0,1,\ldots,2m\). In the latter case, \(z_{-m},\ldots,z_{m}\) span a submodule isomorphic to \(W(n)_{k}\) for some \(k\). But as noted above, \(z_{-m}=\tilde{v}-\tilde{v}^{\prime}\in\operatorname{Im}x\), so we must have that \(z_{-m}=xz_{-m-1}\) for some element \(z_{-m-1}\in V\); if \(yz_{-m-1}=0\), then we obtain \(X(m+1)_{k}\) for some \(k\), and if \(yz_{-m-1}\neq 0\) then we obtain a submodule isomorphic to \(W(-m-1)_{k}\).
Let \(M\) denote this indecomposable submodule of \(V\) containing \(z_{m}\) that is isomorphic to either \(X(j)_{k}\) for some \(k\) or \(W(j)_{k}\) with \(j<n\) for some \(k\). Since by assumption \(z_{m}\) defines a nontrivial element in \(DS^{\infty}_{y,x}V[n]_{2n}\), there must exist a split copy of \(W(n)\) in \(V\) such that \(z_{m}\) is the highest weight vector. Thus we obtain a nontrivial map \(M\to W(n)\) which is nonzero on \(z_{m}\); but by Lemma 2.1 such a map must be zero on the socle of \(M\), which contains \(z_{m}\), which leads to a contradiction. This proves the required statement.
We now obtain the main theorem.
**Theorem 4.9**.: _We have a natural isomorphism of \(\mathbb{C}\langle h\rangle\)-modules, respecting tensor products:_
\[DS^{\infty}_{x,y}V^{\vee}[n]\cong\left(DS^{\infty}_{x,y}V_{-2n}[-n]\right)^{ \vee}.\]
_Here on the right hand side we have the \((-2n)\)-twist of \(V\) by the Berezinian, and \((-)^{\vee}\) on the right hand side again stands for contragredient duality of \(\mathbb{C}\langle h\rangle\)-modules._
### Categorical viewpoint
#### 4.3.1.
Recall that the filtrations defined functors \(DS^{\infty}_{x,y},DS^{\infty}_{y,x}:\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\to\operatorname{Rep}\mathfrak{g}_{m}^{fil}\) into the category \(\operatorname{Rep}\mathfrak{g}_{m}^{fil}\) of filtered semisimple \(\mathbb{C}\langle h\rangle\)-modules. Unlike the functors we previously defined into \(\operatorname{Rep}\mathfrak{g}_{m}\), these functors turn out to be very nice:
**Theorem 4.10**.: _The functors \(DS^{\infty}_{x,y},DS^{\infty}_{y,x}:\operatorname{Rep}_{\mathfrak{g}_{0}} \mathfrak{gl}(1|1)\to\operatorname{Rep}\mathfrak{g}_{m}^{fil}\) are essentially surjective, full, symmetric monoidal functors._
Proof.: We prove the theorem for the functor \(DS^{\infty}_{x,y}\), the proof for \(DS^{\infty}_{y,x}\) being analogous. We have already seen in Lemma 4.6 that \(DS^{\infty}_{x,y}\) is symmetric monoidal, so we only need to show that it is full and essentially surjective.
To prove that \(DS^{\infty}_{x,y}\) is essentially surjective, it is enough to show that any indecomposable filtered semisimple \(\mathbb{C}\langle h\rangle\)-module is obtained as the image of some \(W(n)_{r}\) or its parity shift. Indeed, any such indecomposable is one-dimensional, and thus isomorphic (up to parity shift) to some filtered \(\mathbb{C}\langle h\rangle\)-module \(DS^{\infty}_{x,y}W(n)_{r}\) as in Example 4.3.
We now show that \(DS^{\infty}_{x,y}\) is full. It is enough to check that the map
\[DS^{\infty}_{x,y}:\operatorname{Hom}_{\mathfrak{gl}(1|1)}(V,V^{\prime}) \longrightarrow\operatorname{Hom}_{\operatorname{Rep}\mathfrak{g}_{m}^{fil}}( DS^{\infty}_{x,y}V,DS^{\infty}_{x,y}V^{\prime})\]
is surjective when \(V,V^{\prime}\) are indecomposable \(\mathfrak{gl}(1|1)\)-modules, and more specifically, both of the form \(W(n)_{r}\) for some \(n,r\) (otherwise \(DS^{\infty}_{x,y}V=0\) for indecomposable \(V\)). In that case, by Lemma 4.1, the map
\[DS^{\infty}_{x,y}:\operatorname{Hom}_{\mathfrak{gl}(1|1)}(W(n)_{r},W(m)_{s}) \longrightarrow\operatorname{Hom}_{\operatorname{Rep}\mathfrak{g}_{m}^{fil}}( DS^{\infty}_{x,y}W(n)_{r},DS^{\infty}_{x,y}W(m)_{s})\]
has a one-dimensional image if \(s-r=n-m\geq 0\), and \(0\) otherwise. On the other hand, \(DS^{\infty}_{x,y}W(n)_{r}\) is a filtered \(\mathbb{C}\langle h\rangle\)-module of weight \(n+r\) with filtration
\[F_{k}DS^{\infty}_{x,y}W(n)_{r}=\begin{cases}0&\text{ if }k<n\\ DS^{\infty}_{x,y}W(n)_{r}&\text{ if }k\geq n\end{cases}\]
so the space \(\operatorname{Hom}_{\operatorname{Rep}\mathfrak{g}_{m}^{fil}}(DS^{\infty}_{x,y}W(n)_{r},DS^{\infty}_{x,y}W(m)_{s})\) is \(0\) if \(n<m\) and one-dimensional otherwise. This proves that \(DS^{\infty}_{x,y}\) is full.
#### 4.3.2. Semisimplifications
Recall from [1] the notion of the semisimplification of a rigid symmetric monoidal category. Namely, let \(\mathcal{A}\) be a rigid symmetric monoidal category, and let \(\mathcal{N}\) be the monoidal ideal of \(\mathcal{A}\) given by all the negligible morphisms:
\[\mathcal{N}=\{f:X\to Y\in Mor(\mathcal{A})\ :\ \ \forall g:Y\to X,\,Tr(g\circ f)=0\}.\]
Then one may consider the quotient \(S:\mathcal{A}\to\mathcal{A}^{ss}:=\mathcal{A}/\mathcal{N}\). We call the pair \((S,\mathcal{A}^{ss})\) the semisimplification of \(\mathcal{A}\); if \(\mathcal{A}\) satisfies the property that the trace of any nilpotent endomorphism is zero, then \(\mathcal{A}^{ss}\) is a semisimple category.
By [1, Proposition 2.3.4], if \(\mathcal{S}\) is any semisimple rigid symmetric monoidal category for which there exists a full, essentially surjective symmetric monoidal functor \(S^{\prime}:\mathcal{A}\to\mathcal{S}\), then there exists an isomorphism of categories \(F:\mathcal{A}^{ss}\to\mathcal{S}\) such that the two functors \(F\circ S,S^{\prime}:\mathcal{A}\to\mathcal{S}\) are isomorphic.
Consider the category \(\operatorname{Gr}\operatorname{Rep}\mathfrak{g}_{m}:=\operatorname{Rep} \mathfrak{g}_{m}\times\operatorname{Rep}\mathbb{G}_{m}\) and the functor \(\operatorname{Gr}:\operatorname{Rep}\mathfrak{g}_{m}^{fil}\to\operatorname{ Gr}\operatorname{Rep}\mathfrak{g}_{m}\) sending a filtered \(\mathbb{C}\langle h\rangle\)-module to its associated graded. This functor is clearly symmetric monoidal, as well as full and essentially surjective, making the pair \((\operatorname{Gr},\operatorname{Gr}\operatorname{Rep}\mathfrak{g}_{m})\) a semisimplification of the rigid symmetric monoidal category \(\operatorname{Rep}\mathfrak{g}_{m}^{fil}\).
We define functors \(DS^{ss}_{x,y},DS^{ss}_{y,x}:\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\to\operatorname{Gr}\operatorname{Rep}\mathfrak{g}_{m}\) by
\[DS^{ss}_{x,y}:=\operatorname{Gr}\circ DS^{\infty}_{x,y},\ \ \ \ DS^{ss}_{y,x}:= \operatorname{Gr}\circ DS^{\infty}_{y,x}.\]
The grading can be written explicitly as follows:
\[DS^{ss}_{x,y}V=\bigoplus_{n\in\mathbb{Z}}DS^{\infty}_{x,y}V[n],\ \ \ DS^{ss}_{y,x}V= \bigoplus_{n\in\mathbb{Z}}DS^{\infty}_{y,x}V[n],\]
where \(V[n]\) lies in \(\operatorname{Rep}\mathfrak{g}_{m}\).
The following is an immediate consequence of Theorem 4.10.
**Corollary 4.11**.: _The functors \(DS^{ss}_{x,y},DS^{ss}_{y,x}\) are full, essentially surjective symmetric monoidal functors. This makes the pairs \((DS^{ss}_{x,y},\operatorname{Rep}\mathfrak{g}_{m}\times\operatorname{Rep} \mathbb{G}_{m})\), \((DS^{ss}_{y,x},\operatorname{Rep}\mathfrak{g}_{m}\times\operatorname{Rep} \mathbb{G}_{m})\) into (isomorphic) semisimplifications of the category \(\operatorname{Rep}_{\mathfrak{g}_{0}}\mathfrak{gl}(1|1)\)._
_Remark 4.12_.: It was first proven in [16] that the semisimplification of \(\operatorname{Rep}GL(1|1)\) is \(\operatorname{Rep}\mathbb{G}_{m}\times\mathbb{G}_{m}\).
Corollary 4.11 along with [1, Proposition 2.3.4] gives an immediate (albeit less satisfying) proof of Proposition 4.8. Stated in terms of our semisimplification functors, the result becomes:
**Corollary 4.13**.: _Consider the automorphism \(\phi\) of \(\mathbb{C}\langle h\rangle\times\mathbb{C}\) satisfying \(\phi(h,z)=(h+2z,z)\) where \(\mathbb{C}\) represents the Lie algebra of \(\mathbb{G}_{m}\). This defines an autofunctor \(\Phi_{\phi}\)
_on \(\operatorname{Rep}\mathfrak{g}_{m}\times\operatorname{Rep}\mathbb{G}_{m}\) given by twisting by \(\phi\). Then we have a natural isomorphism of symmetric monoidal functors:_
\[DS^{ss}_{x,y}\cong\Phi_{\phi}\circ DS^{ss}_{y,x}.\]
This isomorphism is realized explicitly by the natural isomorphisms given in Proposition 4.8.
#### 4.3.3. Summarizing diagram
We may summarize the situation with the following diagram:
The left and right triangles in this diagram commute, and the square gives an isomorphism of symmetric monoidal functors after composing with \(\Phi_{\phi}\).
### Extending the story to the functor \(DS_{x+y}\)
We now consider the functor \(DS_{x+y}:\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\to\operatorname{\mathsf{sVec}}\).
**Lemma 4.14**.: _If \(M\) is an indecomposable \(\mathfrak{gl}(1|1)\)-module, then \(DS_{x+y}M\neq 0\) if and only if \(M\cong W(n)_{r}\) for some \(n\in\mathbb{Z}\) and \(r\in\mathbb{C}\). Furthermore, \(\dim DS_{x+y}W(n)_{r}=(1|0)\)._
Proof.: For projective \(M\) and for \(M=W(n)_{r}\) this follows from Lemma 2.2. For \(M\) of the form \(X(n)_{r}\) or \(Y(n)_{r}\), this is a straightforward computation.
**Lemma 4.15**.: _Let \(n,m\in\mathbb{Z}\), \(r,s\in\mathbb{C}\)._
_If \(n\geq m\) and \(s-r\in\{m-n,m-n+2,\ldots,n-m\}\) then_
\[DS_{x+y}\operatorname{Hom}_{\mathfrak{gl}(1|1)}(W(n)_{r},W(m)_{s})=\mathbb{C}.\]
_Otherwise \(DS_{x+y}\operatorname{Hom}_{\mathfrak{gl}(1|1)}(W(n)_{r},W(m)_{s})=0\)._
Proof.: Similarly to the proof of Lemma 4.1, we may reduce the problem to the question of computing the spaces \(DS_{x+y}\operatorname{Hom}_{\mathfrak{gl}(1|1)}(W(0),W(m)_{s})\) for all \(m\in\mathbb{Z},s\in\mathbb{C}\). Since \(\dim DS_{x+y}W(m)_{s}=(1|0)\), we know that this space is of dimension at most \(1\). From here one can compute directly.
We give the local picture of the morphisms given by \(DS_{x+y}(f)\) where \(f\) is a map between \(\mathfrak{gl}(1|1)\)-modules of the form \(W(n)_{r}\) with \(n,r\in\mathbb{Z}\). We write \(\overline{W(n)_{r}}:=DS_{x+y}W(n)_{r}\).
_Remark 4.16_.: One observes that in the above picture there are two "blocks of morphisms": the block containing \(DS_{x+y}W(0)\), and the block containing \(DS_{x+y}W(0)_{1}\). This is reflected in the action of the group \(\mathbb{Z}/2\mathbb{Z}\) on the functor \(DS_{x+y}\), which acts by \(1\) on the "block" containing \(DS_{x+y}W(0)\) and by \(-1\) on the "block" containing \(DS_{x+y}W(0)_{1}\).
Let \(\mathbf{a_{1}}=(1,-1)\), \(\mathbf{a_{2}}=(1,1)\) be elements of \(\mathbb{Z}\times\mathbb{C}\). Define an order on \(\mathbb{Z}\times\mathbb{C}\) by setting \(\mathbf{b_{1}}\leq\mathbf{b_{2}}\) if \(\mathbf{b_{2}}-\mathbf{b_{1}}=j\mathbf{a_{1}}+k\mathbf{a_{2}}\) for some \(j,k\in\mathbb{N}\). Next, let \(\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}\) be the category of pairs \((V,\{V^{\mathbf{b}}\}_{b\in\mathbb{Z}\times\mathbb{C}})\) such that
* \(V\) is a finite-dimensional super vector space;
* for every \(\mathbf{b}\in\mathbb{Z}\times\mathbb{C}\), \(V^{\mathbf{b}}\subseteq V\) is a subspace;
* if \(\mathbf{b_{1}},\mathbf{b_{2}}\in\mathbb{Z}\times\mathbb{C}\) such that \(\mathbf{b_{1}}\leq\mathbf{b_{2}}\), then \(V^{\mathbf{b_{1}}}\subseteq V^{\mathbf{b_{2}}}\);
* \(\bigcup_{\mathbf{b}\in\mathbb{Z}\times\mathbb{C}}V^{\mathbf{b}}=V\).
The morphisms in \(\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}\) will be morphisms of vector superspaces \(\phi:V\to W\) such that \(\phi(V^{\mathbf{b}})\subset W^{\mathbf{b}}\) for each \(\mathbf{b}\in\mathbb{Z}\times\mathbb{C}\). This category has an obvious symmetric monoidal structure, \((V,\{V^{\mathbf{b}}\}_{\mathbf{b}})\otimes(W,\{W^{\mathbf{b}}\}_{\mathbf{b}}):= (V\otimes W,\{(V\otimes W)^{\mathbf{b}}\}_{\mathbf{b}})\), where
\[\forall\,\mathbf{b}\in\mathbb{Z}\times\mathbb{C},\qquad(V\otimes W)^{\mathbf{ b}}:=\sum_{\mathbf{b}^{\prime}+\mathbf{b}^{\prime\prime}=\mathbf{b}}V^{\mathbf{b}^{ \prime}}\otimes W^{\mathbf{b}^{\prime\prime}}.\]
Then \(\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}\) is a Karoubian monoidal category whose indecomposables are indexed (up to parity shift) by \(v\in\mathbb{Z}\times\mathbb{C}\). The indecomposable object corresponding to \(v\in\mathbb{Z}\times\mathbb{C}\) is \(\mathbb{C}_{\mathbf{v}}:=(\mathbb{C},\{\mathbb{C}_{\mathbf{v}}^{\mathbf{b}} \}_{\mathbf{b}})\), where \(\mathbb{C}_{\mathbf{v}}^{\mathbf{b}}:=\mathbb{C}\) if \(\mathbf{b}\geq\mathbf{v}\) and \(\mathbb{C}_{\mathbf{v}}^{\mathbf{b}}=0\) otherwise.
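To make the order concrete (the following unpacking is an added remark, not part of the original text): if \(\mathbf{b_{2}}-\mathbf{b_{1}}=j\mathbf{a_{1}}+k\mathbf{a_{2}}=(j+k,\,k-j)\) with \(j,k\in\mathbb{N}\), then writing \(\mathbf{b_{1}}=(m,s)\) and \(\mathbf{b_{2}}=(n,r)\) we obtain

\[(n,r)\geq(m,s)\ \iff\ n-m=j+k,\ r-s=k-j\ \text{ for some }j,k\in\mathbb{N}\ \iff\ n\geq m\ \text{ and }\ s-r\in\{m-n,\,m-n+2,\ldots,n-m\},\]

which is exactly the condition appearing in Lemma 4.15.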
Following the same ideas as in Section 4.2, for any \(\mathfrak{gl}(1|1)\)-module \(V\) we may give \(\overline{V}:=DS_{x+y}V\) the structure of an object in \(\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}\): for any \(\mathbf{b}=(n,r)\in\mathbb{Z}\times\mathbb{C}\),
\[\overline{V}^{\mathbf{b}}:=\operatorname{Im}\left(\,\operatorname{Hom}_{ \mathfrak{gl}(1|1)}(W(n)_{r},V)\otimes DS_{x+y}W(n)_{r}\to\overline{V}\, \right).\]
The vectors in the subspace \(\overline{V}^{\mathbf{b}}\subset\overline{V}\) are exactly those which lie in the images of morphisms of the form \(DS_{x+y}(f):DS_{x+y}W(n)_{r}\to\overline{V}\) for some \(f\in\operatorname{Hom}_{\mathfrak{gl}(1|1)}(W(n)_{r},V)\).
The same arguments as in Lemma 4.5 show that this endows \(\overline{V}\) with the structure of an object in \(\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}\). This construction is clearly functorial, defining a functor
\[DS_{x+y}:\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1) \to\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}.\]
_Example 4.17_.: Let \(V:=W(m)_{s}\), \(m\in\mathbb{Z},s\in\mathbb{C}\). Set \(\mathbf{v}:=(m,s)\).
Then \(\overline{V}:=DS_{x+y}W(m)_{s}\) is a \((1|0)\)-dimensional vector superspace, and by Lemma 4.15, we have: \(DS_{x+y}\operatorname{Hom}_{\mathfrak{gl}(1|1)}(W(n)_{r},W(m)_{s})=\mathbb{C}\) iff \((n,r)\geq(m,s)\) in the above order. Thus we have:
\[\overline{V}^{\mathbf{b}}=\mathbb{C}^{b}_{\mathbf{v}}:=\begin{cases}\mathbb{C }&\text{ if }\mathbf{b}\geq\mathbf{v}\\ 0&\text{ else}\end{cases}\]
for any \(\mathbf{b}\in\mathbb{Z}\times\mathbb{C}\). In other words,
\[\left(DS_{x+y}W(m)_{s},\left\{(DS_{x+y}W(m)_{s})^{\mathbf{b}}\right\}_{\mathbf{b}}\right)\cong\mathbb{C}_{\mathbf{v}}.\]
The following result becomes a natural extension of the ideas used in Theorem 4.10.
**Proposition 4.18**.: _The functor \(DS_{x+y}:\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1) \to\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}\) is an essentially surjective, full, symmetric monoidal functor._
Proof.: The proof that this functor is symmetric monoidal is a direct analogy of the proof of Lemma 4.6.
From Example 4.17 we see that for any indecomposable object \(\mathbb{C}_{\mathbf{v}}\in\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}\) there exists a corresponding indecomposable object in \(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\) which is sent to \(\mathbb{C}_{\mathbf{v}}\). Thus the functor is essentially surjective.
To see that this functor is full, we only need to check that for any indecomposable \(V,W\in\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}\mathfrak{gl}(1|1)\), the map
\[DS_{x+y}:\operatorname{Hom}_{\mathfrak{gl}(1|1)}(V,W)\to\operatorname{Hom}_{ \operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}}(DS_{x+y}V,DS_{x+y}W)\]
is surjective. By Lemma 4.14, it is enough to show this when \(V=W(n)_{r}\), \(W=W(m)_{s}\) for \(n,m\in\mathbb{Z}\), \(r,s\in\mathbb{C}\). In that case, by Example 4.17 we have: \(DS_{x+y}W(n)_{r}\cong\mathbb{C}_{(n,r)}\), \(DS_{x+y}W(m)_{s}\cong\mathbb{C}_{(m,s)}\),
\[\operatorname{Hom}_{\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}}( \mathbb{C}_{(n,r)},\mathbb{C}_{(m,s)})=\begin{cases}\mathbb{C}&\text{ if }(n,r)\geq(m,s)\\ 0&\text{ else}\end{cases}.\]
So by Lemma 4.15,
\[\operatorname{Hom}_{\operatorname{Fil}^{\mathbb{Z}\times\mathbb{C}}}( \mathbb{C}_{(n,r)},\mathbb{C}_{(m,s)})\cong DS_{x+y}\operatorname{Hom}_{ \mathfrak{gl}(1|1)}(W(n)_{r},W(m)_{s}).\]
This completes the proof of the proposition.
Consider the vector superspace
\[\operatorname{Gr}_{a_{i}}(V^{\bullet})=\bigoplus_{r\in\mathbb{C}}\left(\sum_ {k\in\mathbb{Z}}V^{(0,r)+ka_{i}}\right)\Bigg{/}\left(\sum_{k\in\mathbb{Z}}V^{(- 2,r)+ka_{i}}\right).\]
To see that this is well defined, recall that \(V^{\mathbf{b}}\subset V^{\mathbf{b}+(2,0)}\) for any \(\mathbf{b}\in\mathbb{Z}\times\mathbb{C}\), since \((2,0)=\mathbf{a}_{1}+\mathbf{a}_{2}\).
For a pictorial explanation of what \(Gr_{a_{1}}(V^{\bullet})\) looks like, we consider the following diagram, where we consider only the direct summand corresponding to \(r=0\): the
colored portions (in blue and red) correspond to \(\sum\limits_{k\in\mathbb{Z}}V^{(0,r)+ka_{i}}\), while the red portion alone corresponds to \(\sum\limits_{k\in\mathbb{Z}}V^{(-2,r)+ka_{i}}\), which we quotient out by.
This allows us to define two functors \(\operatorname{Gr}_{a_{1}},\operatorname{Gr}_{a_{2}}:\operatorname{Fil}^{ \mathbb{Z}\times\mathbb{C}}\to\operatorname{Rep}\mathfrak{g}_{m}^{fil}\), where the \(h\) action on \(\operatorname{Gr}_{a_{i}}(V^{\bullet})\) is given by the above \(\mathbb{C}\)-grading, and the filtration on each eigenspace of \(h\) is given by
\[F_{j}\left[\left.\left(\sum\limits_{k\in\mathbb{Z}}V^{(0,r)+ka_{i}}\right) \right/\!\left(\sum\limits_{k\in\mathbb{Z}}V^{(-2,r)+ka_{i}}\right)\right]\,=\,V ^{(0,r)+ja_{i}}\middle/\left[\left(\sum\limits_{k\in\mathbb{Z}}V^{(-2,r)+ka_{i} }\right)\cap V^{(0,r)+ja_{i}}\right]\]
**Theorem 4.19**.: _We have natural isomorphisms of symmetric monoidal functors:_
\[\operatorname{Gr}_{a_{1}}\circ DS_{x+y}\simeq DS_{x,y}^{\infty},\quad\ \operatorname{Gr}_{a_{2}}\circ DS_{x+y}\simeq DS_{y,x}^{\infty}.\]
Proof.: For this proof, whenever we have a semisimple \(\mathbb{C}\langle h\rangle\)-module \(M\), we will write \(M_{(a)}\) for the \(h\)-eigenspace corresponding to the eigenvalue \(a\).
We prove these isomorphisms in the case when \(M\) has integral eigenvalues for \(h\), with the general case following from twisting by the appropriate multiples of the Berezinian character. Consider the double complex
The total complex \(C^{\bullet}\) of this double complex is
\[\cdots\to M^{-}\xrightarrow{x+y}M^{+}\xrightarrow{x+y}M^{-}\to\cdots.\]
where \(C^{k}=M^{+}:=\sum\limits_{j\in 2\mathbb{Z}}M_{(j)}\) for any \(k\in 2\mathbb{Z}\) and \(C^{k}=M^{-}:=\sum\limits_{j\in 2\mathbb{Z}+1}M_{(j)}\) for any \(k\in 2\mathbb{Z}+1\). In particular, \(C^{0}=M^{+}\) and \(C^{1}=M^{-}\).
This double complex induces two spectral sequences in the usual way, via the horizontal or vertical filtrations, or equivalently, via a choice of \(x\) or \(y\).
We consider here the vertical filtration, so that we take cohomology in \(x\) first, to prove the isomorphism \(\operatorname{Gr}_{a_{1}}\circ DS_{x+y}\simeq DS_{x,y}^{\infty}\).
The other filtration gives an analogous argument proving the second isomorphism in the statement of the theorem.
Our vertical filtration gives us a spectral sequence whose \(n\)-th page has \((DS_{x,y}^{n}M)_{(i+j)}\) at the \((i,j)\)-position. If \(M\) is finite-dimensional, then this spectral sequence is regular (see [23, Theorem 5.5.10]), and thus weakly converges. The statement of convergence gives us exactly our natural isomorphism of functors; we spell out what this precisely means for this spectral sequence.
Consider the filtration on our total complex given by \(C_{\leq n}^{\bullet}=\sum\limits_{k}C_{\leq(k+n)}^{k}\), where for an \(h\)-module \(V\), we set \(V_{\leq\ell}=\sum\limits_{i\leq\ell}V_{(i)}\).
Then the differential preserves this filtration, and since this filtration is complete, it induces a natural filtration \(F_{k}H^{i}(C^{\bullet})\) on the cohomology via taking the image of the filtration in cohomology.
By definition of weak convergence of our spectral sequence we have functorial isomorphisms:
\[F_{k}H^{0}(C^{\bullet})/F_{k-1}H^{0}(C^{\bullet})\cong(DS_{x,y}^{\infty}M)_{(2 k)},\quad F_{k}H^{1}(C^{\bullet})/F_{k-1}H^{1}(C^{\bullet})\cong(DS_{x,y}^{\infty}M)_{(2 k+1)}\]
This gives the desired natural isomorphisms. We now need to check that the filtered objects \(F_{k}H^{0}(C^{\bullet})\) and \(F_{k}H^{1}(C^{\bullet})\) agree with the filtration on \(DS_{x+y}(M)\) with respect to \(\mathbf{a_{1}}\) that we are taking the associated graded of. However, by naturality, it suffices to check this on indecomposables alone. If we consider \(M=W(n)_{a}\) with \(a+n\) even, then we obtain \(F_{k}H^{0}(C^{\bullet})=H^{0}(C^{\bullet})\) if \(k\geq n+a\), and is \(0\) otherwise. If \(M=W(n)_{a}\) with \(a+n\) odd, then \(F_{k}H^{1}(C^{\bullet})=H^{1}(C^{\bullet})\) if \(k\geq n+a+1\), and is \(0\) otherwise. This exactly corresponds to the filtration which induces the associated graded \(Gr_{a_{1}}\), as desired.
Our diagram from Section 4.3 can now be enhanced:
## 5. Applications
### Lie superalgebra \((\mathfrak{p})\mathfrak{gl}(1|1)\)-subalgebras
Suppose that \(\mathfrak{g}\) is an arbitrary Lie superalgebra, and that we have a map \(\mathfrak{gl}(1|1)\to\mathfrak{g}\) which is either an embedding or has kernel spanned by \(c\). Let \(\mathfrak{k}\) be a subalgebra of \(\mathfrak{g}\) which commutes with the image of \(\mathfrak{gl}(1|1)\) in \(\mathfrak{g}\). If \(\mathfrak{g}\) is a finite type Kac-Moody Lie superalgebra or \(\mathfrak{q}(n)\), we will be interested in the case when \(\mathfrak{gl}(1|1)\) is a diagonal copy of \(\mathfrak{gl}(1|1)\) inside a product of root subalgebras isomorphic to \(\mathfrak{gl}(1|1)\) (see explanation below), and \(\mathfrak{k}\) is an embedded copy of \(\mathfrak{g}_{x}\).
**Lemma 5.1**.: _Let \(M\) be a finite-dimensional \(\mathfrak{g}\)-module which is semisimple over \(\mathfrak{gl}(1|1)_{\overline{0}}\)._
1. _For each_ \(n\in\mathbb{N}\cup\{\infty\}\)_,_ \(DS^{n}_{x,y}M\) _and_ \(DS^{n}_{y,x}M\) _naturally have the structure of_ \(\mathfrak{k}\)_-modules such that the differentials_ \(d_{n}(M)\) _define_ \(\mathfrak{k}\)_-equivariant odd differentials._
2. _The spaces_ \(DS^{ss}_{x,y}M\)_, and_ \(DS^{ss}_{y,x}M\) _naturally have the structure of a_ \(\mathfrak{k}\times\mathbb{C}\langle h\rangle\)_-module._
Proof.: This follows from the fact that \([\mathfrak{gl}(1|1),\mathfrak{k}]=0\) and thus \(\mathfrak{k}\) defines \(\mathfrak{gl}(1|1)\)-equivariant morphisms on \(M\).
**Corollary 5.2**.: _We have natural isomorphisms of \(\mathfrak{k}\times\mathbb{C}\langle h\rangle\)-modules:_
\[DS^{\infty}_{x,y}V[n]\to DS^{\infty}_{y,x}V[n]_{2n}\]
### Finite-type Kac-Moody and queer Lie superalgebras
If \(\mathfrak{g}\) is a finite-type Kac-Moody Lie superalgebra or is the queer Lie superalgebra \(\mathfrak{q}(n)\), then \(\mathfrak{g}\) admits a Chevalley automorphism \(\sigma=\sigma_{\mathfrak{g}}\) which acts by \((-1)\) on a maximal even torus and satisfies \(\sigma^{2}=\delta\), where \(\delta(u)=(-1)^{\overline{u}}u\) is the grading automorphism on \(\mathfrak{g}\).
Now let \(\alpha_{1},\dots,\alpha_{k}\) be \(k\) linearly independent roots such that \(\dim(\mathfrak{g}_{\alpha_{i}})_{\overline{1}}\neq 0\), and such that \(\alpha_{i}\pm\alpha_{j}\) is not a root for \(i\neq j\). Choose nonzero root vectors \(x_{i}\in\mathfrak{g}_{\alpha_{i}}\) and \(y_{i}\in\mathfrak{g}_{-\alpha_{i}}\), and set \(x=\sum\limits_{i}x_{i}\), \(y=\sum\limits_{i}y_{i}\). Finally let \(c=[x,y]\). Now choose a semisimple element
\(h\in\mathfrak{g}_{\overline{0}}\) such that \([h,x]=x\), \([h,y]=-y\), and \(\beta(h)=0\) for all roots \(\beta\) such that \(\beta\pm\alpha_{i}\) is not a root. Then we have constructed a subalgebra of \(\mathfrak{g}\) isomorphic to \(\mathfrak{gl}(1|1)\).
**Definition 5.3**.: We call a subalgebra isomorphic to \(\mathfrak{gl}(1|1)\) as constructed above a diagonal \(\mathfrak{gl}(1|1)\) subalgebra.
In this setup, by [10] we may embed \(\mathfrak{g}_{x}=\mathfrak{g}_{y}\) into \(\mathfrak{g}\) such that it commutes with our subalgebra \(\mathfrak{gl}(1|1)\).
**Corollary 5.4**.: _Let \(\mathfrak{g}\) be a finite-type Kac-Moody Lie superalgebra or \(\mathfrak{q}(n)\), and choose a diagonal \(\mathfrak{gl}(1|1)\) subalgebra. Assume that \(L\) is a simple finite-dimensional module, and if \(\mathfrak{g}=\mathfrak{q}(n)\) also assume that its highest weight is integral: then the restriction to \(\mathfrak{gl}(1|1)\) contains no copies of \(X(n)\) or \(Y(n)\)._
See Section 5.3 for a reminder on the meaning of integral weights for \(\mathfrak{q}(n)\).
Proof.: We prove that in each case the spectral sequences collapse on the first page, which is clearly equivalent.
Observe that for each spectral sequence, the differential defines an odd, \(\mathfrak{g}_{x}\)-equivariant endomorphism on each page. For the Kac-Moody case, by [13], [12], and [10], \(DS_{x}L\cong DS_{y}L\) are pure, meaning that \([DS_{x}L:L^{\prime}][DS_{x}L:\Pi L^{\prime}]=0\) for all simple \(\mathfrak{g}_{x}\)-modules \(L^{\prime}\). Thus all differentials must vanish after the \(0\)th page, and the spectral sequence collapses on the \(1\)st page.
For the \(\mathfrak{q}(n)\) case we use the computations of [10]. Given any simple \(\mathfrak{g}_{x}=\mathfrak{g}_{y}\)-module \(L^{\prime}\) of integral weight, it is shown there that for any two composition factors of \(DS_{x}L\) or \(DS_{y}L\) isomorphic to \(L^{\prime}\), the difference of their \(h\)-weights must be even. Indeed, this follows from the fact that \(DS_{x}L\) is a subquotient of \(DS_{x_{1}}\circ\cdots\circ DS_{x_{r}}(L)\), and we know the statement for the latter module. However, as was noted in Section 3.2, the differential \(d_{n}\) in each case has weight \(2n-1\) or \(1-2n\), so \(d_{n}\) must be \(0\) after the zeroth page.
### Half-integral simple modules for \(\mathfrak{q}(n)\)
#### 5.3.1.
Let \(\mathfrak{h}\subseteq\mathfrak{b}\subseteq\mathfrak{q}(n)\) be a choice of Cartan subalgebra and Borel subalgebra for \(\mathfrak{q}(n)\), so that \(\mathfrak{t}=\mathfrak{h}_{\overline{0}}\) is a maximal torus of \(\mathfrak{q}(n)_{\overline{0}}\). Then there is a basis \(\epsilon_{1},\ldots,\epsilon_{n}\) of \(\mathfrak{t}\) such that the dominant integral weights for \(\mathfrak{q}(n)\) with respect to this basis are given by:
\[\lambda=(a_{1},\ldots,a_{n}),\ \ a_{i}\in\mathbb{C}\text{ such that }a_{i}-a_{i+1}\in \mathbb{N},\]
and if \(a_{i}=a_{i+1}\) then \(a_{i}=0\). We say a weight \(\lambda\) is integral if \(a_{i}\in\mathbb{Z}\) for all \(i\), and we say \(\lambda\) is half-integral if \(a_{i}+1/2\in\mathbb{Z}\) for all \(i\). We say an arbitrary module for \(\mathfrak{q}(n)\) is of half-integral weight if all of its composition factors are half-integral. If \(\lambda\) is neither integral nor half-integral then \(L(\lambda)\) is projective, so we ignore this case (see for instance [14]).
In contrast to integral weight modules for \(\mathfrak{q}(n)\) as explained in Corollary 5.4, half-integral simple modules may contain 'Z'-modules; in fact we have the following:
**Lemma 5.5**.: _If \(V\) is a half-integral weight module for \(\mathfrak{q}(n)\), and \(\mathfrak{gl}(1|1)\subseteq\mathfrak{q}(n)\) is a diagonal subalgebra as in Definition 5.3, then the restriction of \(V\) to \(\mathfrak{gl}(1|1)\) contains no \(W\)-submodules as direct summands._
Proof.: Because half-integral modules are projective over the Cartan subalgebra of \(\mathfrak{q}(n)\), we must have that \(DS_{x+y}V=0\). From this the statement follows.
The above lemma tells us that our spectral sequence will have interesting terms for half-integral modules; we now compute them in the case when \(V=L\) is a simple module and \(\mathfrak{gl}(1|1)\) is a root subalgebra, i.e. we only use one root in our construction of \(\mathfrak{gl}(1|1)\).
#### 5.3.2. Arc diagrams
We recall from [1] the arc diagrams associated to half-integral weights. First of all, the simple half-integral weight modules for \(\mathfrak{q}(n)\) are indexed by their highest weights, which are given by a strictly decreasing sequence of half-integers, i.e. \(\lambda=(a_{1}/2,\ldots,a_{n}/2)\) where \(a_{1}/2>a_{2}/2>\cdots>a_{n}/2\) and \(a_{1},\ldots,a_{n}\) are odd integers. Given a highest weight \(\lambda=(a_{1}/2,\ldots,a_{n}/2)\), we associate a weight diagram to it, which will be a map \(f_{\lambda}:\frac{2\mathbb{N}+1}{2}\to\{\circ,>,<,\times\}\), i.e. a map from elements of the form \(a/2\) for \(a\) a positive odd integer. It is defined as follows:
* If \(2b\neq\pm a_{i}\) for any \(i\), then we declare \(f_{\lambda}(b)=\circ\);
* if \(2b=a_{i}\) and \(2b\neq-a_{j}\) for any \(j\), then \(f_{\lambda}(b)=>\);
* if \(2b=-a_{i}\) for some \(i\) and \(2b\neq a_{j}\) for any \(j\), then \(f_{\lambda}(b)=<\);
* finally, if \(2b=a_{i},-a_{j}\) for some \(i,j\) then \(f_{\lambda}(b)=\times\).
We visualize \(f_{\lambda}\) with its graph; for example, if
\[\lambda=(15/2,13/2,5/2,1/2,-1/2,-3/2,-5/2,-15/2),\]
then \(f_{\lambda}\) looks as follows:
The above association defines a bijection between half-integral highest weights for \(\mathfrak{q}(n)\) and weight diagrams with the \(r\) symbols \(>\), \(s\) symbols \(<\), and \(t\) symbols \(\times\) such that \(2t+r+s=n\).
Associated to \(f_{\lambda}\) we define an arc diagram, which consists of the above weight diagram along with arcs connecting each symbol \(\times\) to a symbol \(\circ\) to the right of it, such that (1) the arcs do not intersect and (2) no symbol \(\circ\) lies underneath an arc. This uniquely specifies the arc diagram; for example, associated to the weight diagram above we obtain the arc diagram:
Before we can state our theorem, we need some terminology. We say that an arc is maximal if it does not lie under another arc. Maximal arcs have the special property that if they are removed along with the symbol \(\times\) to which they are attached, an arc diagram for \(\mathfrak{q}(n-2)\) is obtained.
We say that a position \(\circ\) in an arc diagram is free if it is not the end of any arc. Given a dominant half-integral weight \(\lambda\) and a half-integer \(n/2\), define \(\ell(\lambda,n/2)\) to be the number of free positions to the left of \(n/2\) in the arc diagram of \(\lambda\) (see [1] for examples).
We caution that in the following theorem we only consider a root subalgebra \(\mathfrak{gl}(1|1)\), which is less general than the setting considered in Lemma 5.5 and Corollary 5.4.
**Theorem 5.6**.: _Let \(\lambda\) be a dominant half-integral weight for \(\mathfrak{q}(n)\), and \(\mu\) a dominant half-integral weight for \(\mathfrak{q}(n-2)\). Let \(\mathfrak{gl}(1|1)\subseteq\mathfrak{q}(n)\) be a root subalgebra, and let \(x,y,h,c\) be its generators. Then:_
1. \(DS^{k}_{x,y}L(\lambda)\) _is semisimple;_
2. _Either_ \([DS^{k}_{x,y}L(\lambda):L(\mu)]=0\) _or_ \(L(\mu)\) _appears in_ \(DS^{k}_{x,y}L(\lambda)\) _with multiplicity_ \((1|1)\)_;_
3. _We have_ \([DS^{k}_{x,y}L(\lambda):L(\mu)]\neq 0\) _if and only if the arc diagram of_ \(\mu\) _is obtained from the arc diagram of_ \(\lambda\) _by removing a maximal arc such that if the_ \(\times\) _end of the arc lies at_ \(j/2\)_, then we have_ \(k\leq\ell(\lambda,j/2)+1\)_._
Proof.: The proof is essentially identical to the computation of \(DS_{x}\) given in [10], and we will explain it using the language from that article. Namely, because \(DS^{k}_{x,y}\) is a symmetric monoidal functor, and it takes the standard module to the standard module, it will commute with translation functors. Further, it commutes with the operation of removing core symbols for stable weights, because this is given by taking the eigenspace of a semisimple element \(z\) which commutes with both \(x\) and \(y\). Thus following the algorithm described in [10], we obtain the formula
\[[DS^{k}_{x,y}(L):L(\mu)]=\dim DS^{k}_{x,y}L(\lambda^{\prime}),\]
where \(\lambda^{\prime}\) is the weight for \(\mathfrak{q}(2)\) whose arc diagram is obtained by 'shrinking' all arcs of \(\mu\) within the arc diagram of \(\lambda\). In this way we reduce the computation to the case of \(\mathfrak{q}(2)\). Now one may use that the simple module \(L\left(\frac{2n-1}{2}(\epsilon_{1}-\epsilon_{2})\right)\) decomposes over \(\mathfrak{gl}(1|1)\) as \(X(n)_{-n/2}\oplus\Pi Y(n)_{-n/2}\). From this the statement follows.
### Contragredient duality
We continue to let \(\mathfrak{g}\) denote a finite-type Kac-Moody Lie superalgebra or \(\mathfrak{q}(n)\). As previously noted, these Lie superalgebras all admit Chevalley automorphisms \(\sigma=\sigma_{\mathfrak{g}}\). Thus we may define contragredient duality functors \(V\mapsto V^{\vee}\) on the category of finite-dimensional modules. If \(V\) is finite-dimensional, we have a canonical isomorphism \(V\cong(V^{\vee})^{\vee}\).
Let \(\mathfrak{gl}(1|1)\subseteq\mathfrak{g}\) be a subalgebra constructed as in Section 5.2. Let us further assume now that it is stable under \(\sigma_{\mathfrak{g}}\). Let \(\mathfrak{k}\) be a root subalgebra of \(\mathfrak{g}\) commuting with \(\mathfrak{gl}(1|1)\) such that if \(\mathfrak{g}_{\alpha}\subseteq\mathfrak{k}\) then \(\mathfrak{g}_{-\alpha}\subseteq\mathfrak{k}\). In particular this implies that \(\mathfrak{k}\) is stable under \(\sigma_{\mathfrak{g}}\).
**Theorem 5.7**.: _We have a natural isomorphism of \(\mathfrak{k}\times\mathbb{C}\langle h\rangle\)-modules:_
\[DS^{\infty}_{x,y}V^{\vee}[n]\cong\left(DS^{\infty}_{x,y}V[-n]_{-2n}\right)^{ \vee}.\]
_where the outer \(\vee\) on the right hand side denotes the contragredient duality functor on \(\mathfrak{k}\times\mathbb{C}\langle h\rangle\)-modules which is induced by the restriction of \(\sigma\) to this subalgebra._
Proof.: This follows from the naturality of the isomorphism in Theorem 4.9.
### The category \(\operatorname{Rep}^{+}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{g})\)
Given an abelian symmetric monoidal category \(\mathcal{C}\), let \(\mathcal{C}^{+}\) denote the Karoubian symmetric monoidal subcategory generated by all simple objects. In other words, the objects of \(\mathcal{C}^{+}\) are the direct summands of arbitrary tensor products of simple modules. We write \(\operatorname{Rep}^{+}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{g}):=(\operatorname{Rep}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{g}))^{+}\). If \(\mathfrak{g}=\mathfrak{gl}(1|1)\), then \(\operatorname{Rep}^{+}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{g})\) has indecomposables given by all projective modules along with the modules \(W(0)_{r}\) for any \(r\in\mathbb{C}\). These categories (and in particular their semisimplifications) have been studied in the case of \(\mathfrak{g}=\mathfrak{gl}(m|n)\), see [12].
**Corollary 5.8**.: _In addition to the hypotheses of Theorem 5.7, assume that \(L\) is simple and \(DS_{x}L\) is a multiplicity-free \(\mathfrak{k}\)-module. Then the restriction of \(L\) to \(\mathfrak{gl}(1|1)\) lies in \(\operatorname{Rep}^{+}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{gl}(1|1))\)._
Proof.: By Corollary 5.4, no 'Z'-modules appear. Since \(L\) is simple we have \(L^{\vee}\cong L\), and since \(DS^{\infty}_{x,y}L(-n)\) is a subquotient of \(DS_{x}L\) as a \(\mathfrak{k}\)-module, we have \((DS^{\infty}_{x,y}L(-n))^{\vee}\cong DS^{\infty}_{x,y}L(-n)\) as \(\mathfrak{k}\)-modules. Thus ignoring the \(h\)-action, Theorem 5.7 becomes:
\[DS^{\infty}_{x,y}L(n)\cong DS^{\infty}_{x,y}L(-n)_{-2n}.\]
Now suppose that for some \(n\neq 0\) we had \(DS^{\infty}_{x,y}L(n)\neq 0\), and let \(L^{\prime}\) be a composition factor of it as a \(\mathfrak{k}\)-module. Then \(L^{\prime}\) must be a composition factor of \(DS^{\infty}_{x,y}L(-n)\). However then \(L^{\prime}\) would need to appear with multiplicity greater than one in \(DS^{\infty}_{x,y}L\); since this is a subquotient of \(DS_{x}L\), we obtain a contradiction.
**Theorem 5.9**.: _If \(L\) is a simple module over \(\mathfrak{gl}(m|n)\) or \(\mathfrak{osp}(2m|2n)\), then the restriction of \(L\) to a root subalgebra \(\mathfrak{gl}(1|1)\) lies in \(\operatorname{Rep}^{+}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{gl}(1|1))\)._
Proof.: We use Corollary 5.8 in the case when \(\mathfrak{k}=\mathfrak{g}_{x}\) is an embedded subalgebra.
We obtain the cases of \(\mathfrak{gl}(m|n)\) and blocks of \(\mathfrak{osp}(2m|2n)\) which are equivalent to the principal block of \(\mathfrak{osp}(2k|2k)\) for some \(k\), because \(DS_{x}L\) is multiplicity-free in these cases (see [10] for the \(\mathfrak{gl}\) case and [11] for the \(\mathfrak{osp}\) case).
The blocks of \(\mathfrak{osp}(2m|2n)\) that are not equivalent to the principal block of \(\mathfrak{osp}(2k|2k)\) for any \(k\) are instead equivalent to the principal block of \(\mathfrak{osp}(2k+2|2k)\) for some \(k\). Let \(\mathcal{B}\) be a block equivalent to the principal block of \(\mathfrak{osp}(2k+2|2k)\), and let \(L\) be a simple module. Then to show that \(\operatorname{Res}_{\mathfrak{gl}(1|1)}L\in\operatorname{Rep}^{+}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{gl}(1|1))\), we may apply translation functors to assume that \(L\) is stable. Having done this, we may find a simple \(\mathfrak{osp}(2m|2n)\)-module \(L^{\prime}\) lying in a block \(\mathcal{B}^{\prime}\) equivalent to the principal block of \(\mathfrak{osp}(2k|2k)\), such that \(L\) is a direct summand of \(T^{\mathcal{B}}_{\mathcal{B}^{\prime}}L^{\prime}\), where \(T^{\mathcal{B}}_{\mathcal{B}^{\prime}}\) is the translation functor from \(\mathcal{B}^{\prime}\) to \(\mathcal{B}\) (see [12], or Sec. 7.6.3 of [11]).
Now we know already that if we restrict \(L^{\prime}\) to \(\mathfrak{gl}(1|1)\) it will lie in \(\operatorname{Rep}^{+}_{\mathfrak{g}_{\overline{0}}}(\mathfrak{gl}(1|1))\). Since the standard module for \(\mathfrak{osp}(m|2n)\) also has this property, \(T^{\mathcal{B}}_{\mathcal{B}^{\prime}}L^{\prime}\) will inherit this property from \(L^{\prime}\), meaning that \(L\) will also have this property, finishing the proof.
_Remark 5.10_.: We conjecture that Theorem 5.9 extends to \(\mathfrak{osp}(2m+1|2n)\). However the result does not extend to \(\mathfrak{q}(n)\); in fact for \(\mathfrak{q}(2)\) already one sees that if \(\alpha\) is the unique simple positive root, then for \(n\in\mathbb{N}\) we have
\[\operatorname{Res}^{Q(2)}_{\mathfrak{gl}(1|1)}L(n\alpha)=W(n)\oplus\Pi W(-n).\]
## 6. Appendix: the spectral sequences
Here we give all details behind the construction of our functors obtained via the spectral sequences. Because we are not _exactly_ using a full spectral sequence, rather only one position of it, we give all details below. Further, we will need the notation to give a clear proof of Lemma 3.5.
### Explicit terms of the sequence
Since every position on the spectral sequence introduced in Section 3 at any page is the same, we fix the position \((0,0)\) and study what is happening there. We give an explicit description of the spectral sequence at this position with inspiration from Ravi Vakil's definition in terms of \((p,q)\)-strips, see [20]. We will use this to obtain formulas for boundary maps, which will be odd endomorphisms on each page.
_Remark 6.1_.: The spectral sequences we construct do not require \(M\) to be finite-dimensional; however in order to have convergence this will be a necessary condition.
For each \(r\in\mathbb{Z}_{\geq 0}\) we will define spaces \(B_{r}(M)=B_{r}\), \(Z_{r}(M)=Z_{r}\) and \(E_{r}(M)=E_{r}\). The terms of the spectral sequence will be given by \(E_{r}\), while \(Z_{r}\) and \(B_{r}\) will denote respectively the cycles and boundaries in \(E_{r-1}\) (we set \(E_{-1}:=M\) with \(d_{-1}=0\)).
**Definition 6.2**.: Let \(r\geq 1\). A sequence \((v_{1},\dots,v_{r})\), where \(v_{1},\dots,v_{r}\in M\), is called an _\(r\)-chain_ in \(M\) if it satisfies:
\[yv_{i}=xv_{i+1}\text{ for all }i\leq r-1,\,yv_{r}=0.\]
Thus an \(r\)-chain is a sequence of the form
Now set \(Z_{0}=M\), and for \(r>0\) set:
\[Z_{r}=\{v_{r}\in M\,:\,\exists\text{ an }r\text{-chain }(v_{1},\dots,v_{r-1},v_{r})\}. \tag{6.1}\]
In particular we see that \(Z_{r}\subseteq\ker y\) for \(r>0\), and \(Z_{1}=\ker y\). Now set \(B_{0}=0\), \(B_{1}=\operatorname{Im}y\) and for \(r>1\) define
\[B_{r}=\operatorname{Im}y+\{xw_{1}\,:\,\exists\text{ an }(r-1)\text{-chain }(w_{1},\dots,w_{r-1})\}. \tag{6.2}\]
**Lemma 6.3**.: _We have:_
1. \(Ker(x)\cap Ker(y)\subset Z_{r}\) _for all_ \(r\geq 0\)_._
2. \(Z_{r+1}\subset Z_{r}\) _for all_ \(r\geq 0\)_._
3. \(B_{r}\subset B_{r+1}\) _for all_ \(r\geq 0\)_._
4. \(B_{r^{\prime}}\subseteq Z_{r}\) _for all_ \(r,r^{\prime}\geq 0\)_._
Proof.: For the first statement, if \(v\in Ker(x)\cap Ker(y)\), then the sequence \((0,\dots,0,v_{r}:=v)\) satisfies the conditions of Definition 6.2, so \(v\in Z_{r}\).
For the second statement, let \(v_{r+1}\in Z_{r+1}\) and let \((v_{1},v_{2},\dots,v_{r},v_{r+1})\) be an \((r+1)\)-chain. Then \((v_{2},\dots,v_{r},v_{r+1})\) is an \(r\)-chain, so \(v_{r+1}\in Z_{r}\).
Next, to show that \(B_{r}\subset B_{r+1}\), we only need to check that for \(xw_{1}\in B_{r}\) as in Definition (6.2), we also have \(xw_{1}\in B_{r+1}\). Indeed, the \((r-1)\)-chain \((w_{1},\dots,w_{r-1})\) can be extended to an \(r\)-chain \((w_{1},\dots,w_{r-1},0)\) and thus \(xw_{1}\in B_{r+1}\) as well.
For the last statement, if we take an element \(yv\in\operatorname{Im}y\) then we have \(y(yv)=0\) and \(x(yv)=-y(xv)\), so we may set \(v_{r-1}=-xv\). Then \(xv_{r-1}=-x^{2}v=0\), so we can take \(v_{i}=0\) for \(i<r-1\). Then \((v_{1},\dots,v_{r-1},v_{r}:=yv)\) is an \(r\)-chain, so \(yv\in Z_{r}\).
On the other hand, if we have an \((r-1)\)-chain \((w_{1},\ldots,w_{r-1})\) then \(yw_{1}=xw_{2}\), so that \(x(xw_{1})=0\), and \(yxw_{1}=-xyw_{1}=-x^{2}w_{2}=0\). Thus \(xw_{1}\in Ker(x)\cap Ker(y)\) and so \(xw_{1}\in Z_{r}\) for any \(r\).
Thus we have two chains of subspaces, one increasing and one decreasing:
\[0=B_{0}\subset B_{1}\subset B_{2}\subset\ldots\subset\ldots\subset Z_{2} \subset Z_{1}\subset Z_{0}=M.\]
We define
\[E_{r}(M)=E_{r}:=Z_{r}/B_{r}.\]
Given \(v\in Z_{r}\), we write \(\overline{v}\) for its projection to \(E_{r}\). The following is clear.
**Lemma 6.4**.: _The definitions of \(Z_{r},B_{r}\), and \(E_{r}\) are functorial in \(M\)._
_Example 6.5_.: \(E_{0}=M\), \(E_{1}=M_{y}\), \(E_{2}=DS_{x}(DS_{y}M)\).
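To spell out the case \(r=2\) of this example (an added verification, using only the definitions (6.1) and (6.2)): a \(2\)-chain \((v_{1},v_{2})\) means \(yv_{1}=xv_{2}\) and \(yv_{2}=0\), while a \(1\)-chain \((w_{1})\) means \(yw_{1}=0\), so

\[Z_{2}=\{v\in\ker y\,:\,xv\in\operatorname{Im}y\},\qquad B_{2}=\operatorname{Im}y+x(\ker y).\]

Writing \(\bar{x}\) for the operator induced by \(x\) on \(DS_{y}M=\ker y/\operatorname{Im}y\), we have \(Ker(\bar{x})=Z_{2}/\operatorname{Im}y\) and \(\operatorname{Im}(\bar{x})=B_{2}/\operatorname{Im}y\), so that \(E_{2}=Z_{2}/B_{2}\cong DS_{x}(DS_{y}M)\).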
### The differential
We now define a differential \(d_{r}(M)=d_{r}\) on \(E_{r}\). First, we define a parity-shifting map \(\widetilde{d}_{r}:Z_{r}\to E_{r}\) by setting \(\widetilde{d}_{0}:=y\), and for \(r>0\) and any \(v_{r}\in Z_{r}\) we let
\[\widetilde{d}_{r}(v_{r}):=\overline{xv_{1}}.\]
where \((v_{1},\ldots,v_{r})\) is an \(r\)-chain.
First, we show that this map is well defined:
**Lemma 6.6**.: _We have: \(xv_{1}\in Z_{r}\), and \(\overline{xv_{1}}\) does not depend on the choice of sequence \((v_{1},\ldots,v_{r})\)._
Proof.: First we check that \(xv_{1}\in Z_{r}\). Indeed, \(xv_{1}\in B_{r+1}\) and by Lemma 6.3 we have: \(B_{r+1}\subset Z_{r}\). So \(xv_{1}\in Z_{r}\).
Secondly, suppose that we have two \(r\)-chains \((v_{1},\ldots,v_{r-1},v_{r})\) and \((v^{\prime}_{1},\ldots,v^{\prime}_{r-1},v_{r})\). Let \(w_{i}=v_{i}-v^{\prime}_{i}\). Then we have that \(yw_{r-1}=0\), \(xw_{r-1}=yw_{r-2},\ldots,xw_{2}=yw_{1}\), so \((w_{1},\ldots,w_{r-1})\) is an \((r-1)\)-chain. Thus \(xw_{1}\in B_{r}\), so \(\overline{xv_{1}}=\overline{xv^{\prime}_{1}}\) and the map \(\widetilde{d}_{r}\) is well-defined.
**Lemma 6.7**.: _We have \(B_{r}\subset Z_{r+1}=Ker(\widetilde{d}_{r})\)._
Proof.: We only need to show that \(Z_{r+1}=Ker(\widetilde{d}_{r})\) (the inclusion \(B_{r}\subset Z_{r+1}\) was shown in Lemma 6.3).
For \(r=0\) the statement is obvious, so we may assume that \(r>0\).
Given \(v_{r+1}\in Z_{r+1}\) with a corresponding \((r+1)\)-chain \((v_{1},\ldots,v_{r+1})\), the truncated sequence \((v_{2},\ldots,v_{r+1})\) is an \(r\)-chain and so \(\widetilde{d}_{r}(v_{r+1})=\overline{xv_{2}}\). But \(xv_{2}=yv_{1}\in\operatorname{Im}y\), so \(\overline{xv_{2}}=0\), and so \(v_{r+1}\in Ker(\widetilde{d}_{r})\). Thus \(Z_{r+1}\subset Ker(\widetilde{d}_{r})\).
On the other hand, given \(v_{r}\in Ker(\widetilde{d}_{r})\) with a corresponding \(r\)-chain \((v_{1},\ldots,v_{r-1},v_{r})\), we may write \(xv_{1}=yv+xw_{1}\) where \(v\in M\) and \((w_{1},\ldots,w_{r-1})\) is an \((r-1)\)-chain. Then set \(v^{\prime}_{i}:=v_{i}-w_{i}\). We have: \(xv^{\prime}_{1}=yv\), \(yv^{\prime}_{i}=yv_{i}-yw_{i}=xv_{i+1}-xw_{i+1}=xv^{\prime}_{i+1}\) for \(1\leq i\leq r-2\) and \(yv^{\prime}_{r-1}=yv_{r-1}=xv_{r}\). Thus \((v,v^{\prime}_{1},\ldots,v^{\prime}_{r-1},v_{r})\) is an \((r+1)\)-chain and so \(v_{r}\in Z_{r+1}\).
Hence \(Ker(\widetilde{d}_{r})\subset Z_{r+1}\) and the statement is proved.
Thus the map \(\widetilde{d}_{r}\) factors through \(E_{r}=Z_{r}/B_{r}\) and we obtain a map
\[d_{r}:E_{r}\longrightarrow E_{r}.\]
It is not hard to see that \(d_{r}^{2}=0\) (since \(x(xv_{1})=0\) already), and so we have obtained our differential.
**Lemma 6.8**.: _The cohomology of \(d_{r}\) on \(E_{r}\) is isomorphic to \(E_{r+1}\)._
Proof.: We have already seen that \(Ker(\widetilde{d}_{r})=Z_{r+1}\), so \(Ker(d_{r})=Z_{r+1}/B_{r}\).
We now show that \(\operatorname{Im}(\widetilde{d}_{r})=(B_{r+1}+B_{r})/B_{r}\) (the image of \(B_{r+1}\) under the quotient map \(Z_{r}\to E_{r}\)). This would prove the required statement, since
\[\bigl(Z_{r+1}/B_{r}\bigr)\big/\bigl((B_{r+1}+B_{r})/B_{r}\bigr)\cong Z_{r+1}\big/B_{r+1}=E_{r+1}.\]
Indeed, let \(v_{r}\in Z_{r}\) with a corresponding \(r\)-chain \((v_{1},\dots,v_{r-1},v_{r})\). Then \(xv_{1}\in B_{r+1}\) and thus \(\widetilde{d}_{r}(v_{r})\in(B_{r+1}+B_{r})/B_{r}\). Hence \(\operatorname{Im}(\widetilde{d}_{r})\subset(B_{r+1}+B_{r})/B_{r}\).
Vice versa, any equivalence class in the quotient \((B_{r+1}+B_{r})/B_{r}\) is of the form \(\overline{xw_{1}}\) for some \(xw_{1}\in B_{r+1}\) with a corresponding \(r\)-chain \((w_{1},\dots,w_{r})\). Then \(w_{r}\in Z_{r}\) and \(\widetilde{d}_{r}(w_{r})=\overline{xw_{1}}\). Hence \((B_{r+1}+B_{r})/B_{r}\subset\operatorname{Im}(\widetilde{d}_{r})\).
### Leibniz property of \(d_{r}\)
We seek to show that the functor of taking the \(r\)-th page of the above spectral sequence defines a symmetric monoidal functor, for any \(r\). Let \(M,N\) be \(PGL(1|1)\)-modules.
**Lemma 6.9**.: _We have natural inclusions_
\[Z_{r}(M)\otimes Z_{r}(N)\subseteq Z_{r}(M\otimes N),\qquad B_{r}(M)\otimes Z _{r}(N)+Z_{r}(M)\otimes B_{r}(N)\subseteq B_{r}(M\otimes N).\]
Proof.: For \(r=0,1\) this is clear, so we assume \(r>1\). Suppose \(v_{r}\in Z_{r}(M)\), \(w_{r}\in Z_{r}(N)\) and let \((v_{1},\dots v_{r})\), \((w_{1},\dots,w_{r})\) be the corresponding \(r\)-chains in \(M\), \(N\) respectively. Then \(\left(\sum_{i+j=k}v_{r-i}\otimes w_{r-j}\right)_{k=r-1,\dots,0}\) is an \(r\)-chain in \(M\otimes N\). Indeed, we have
\[y(v_{r}\otimes w_{r}) = 0\] \[x(v_{r}\otimes w_{r}) = y(v_{r-1}\otimes w_{r}+v_{r}\otimes w_{r-1})\] \[x(v_{r-1}\otimes w_{r}+v_{r}\otimes w_{r-1}) = y(v_{r-2}\otimes w_{r}+v_{r-1}\otimes w_{r-1}+v_{r}\otimes w_{r-2})\] \[\vdots\] \[\sum_{i+j=k}x(v_{r-i}\otimes w_{r-j}) = \sum_{i+j=k+1}y(v_{r-i}\otimes w_{r-j}).\]
Thus \(Z_{r}(M)\otimes Z_{r}(N)\subseteq Z_{r}(M\otimes N)\). Next, we check that
\[B_{r}(M)\otimes Z_{r}(N)+Z_{r}(M)\otimes B_{r}(N)\subseteq B_{r}(M\otimes N).\]
First we clearly have \(\operatorname{Im}y\otimes Z_{r}(N)+Z_{r}(M)\otimes\operatorname{Im}y\subseteq \operatorname{Im}y|_{M\otimes N}\) for any \(r\geq 0\). Next, let \((v_{1},\dots v_{r-1})\) be an \((r-1)\)-chain in \(M\), and \((w_{1},\dots,w_{r})\) be an \(r\)-chain in \(N\), so that \(xv_{1}\in B_{r}(M),w_{r}\in Z_{r}(N)\). We want to show that \(xv_{1}\otimes w_{r}\in B_{r}(M\otimes N)\). In the
following we work modulo \(\operatorname{Im}y|_{M\otimes N}\), since it lies in \(B_{r}(M\otimes N)\).
\[xv_{1}\otimes w_{r} = x(v_{1}\otimes w_{r})-(-1)^{\overline{v_{1}}}v_{1}\otimes xw_{r}\] \[= x(v_{1}\otimes w_{r})-(-1)^{\overline{v_{1}}}v_{1}\otimes yw_{r-1}\] \[= x(v_{1}\otimes w_{r})+yv_{1}\otimes w_{r-1}\] \[= x(v_{1}\otimes w_{r})+xv_{2}\otimes w_{r-1}\] \[= x(v_{1}\otimes w_{r}+v_{2}\otimes w_{r-1})+(-1)^{\overline{v_{2 }}}v_{2}\otimes xw_{r-1}\] \[\vdots\] \[= x\left(\sum_{i+j=r-2}v_{r-1-i}\otimes w_{r-j}\right)\pm v_{r-1} \otimes xw_{2}\]
Now we use that \(xw_{2}=yw_{1}\) and that \(yv_{r-1}=0\) so that the second term \(v_{r-1}\otimes xw_{2}\) lies in \(\operatorname{Im}y\). Thus we only need to show the first term lies in \(B_{r}(M\otimes N)\). Consider the sequence \(\left(\sum\limits_{i+j=k}v_{r-1-i}\otimes w_{r-j}\right)_{k=r-2,r-3,\ldots,0}\).
Let us show that this is an \((r-1)\)-chain:
\[y(v_{r-1}\otimes w_{r}) = 0\] \[y(v_{r-2}\otimes w_{r}+v_{r-1}\otimes w_{r-1}) = x(v_{r-1}\otimes w_{r})\] \[\vdots\] \[y(\sum_{i+j=k}v_{r-1-i}\otimes w_{r-j}) = x(\sum_{i+j=k-1}v_{r-1-i}\otimes w_{r-j})\]
From the above equations we learn that
\[x\left(\sum_{i+j=r-2}v_{r-1-i}\otimes w_{r-j}\right)\in B_{r}(M\otimes N).\]
Therefore
\[B_{r}(M)\otimes Z_{r}(N)\subseteq B_{r}(M\otimes N).\]
A similar argument shows that
\[Z_{r}(M)\otimes B_{r}(N)\subseteq B_{r}(M\otimes N).\]
It follows that we have a natural map
\[\Phi_{r}:E_{r}(M)\otimes E_{r}(N)\to E_{r}(M\otimes N).\]
**Proposition 6.10**.: _For each \(r\geq 1\), \(\Phi_{r}\) is an isomorphism._
_The action of \(d_{r}\) on \(E_{r}(M\otimes N)\) is by the Leibniz rule:_
\[d_{r}(v_{r}\otimes w_{r})=d_{r}(v_{r})\otimes w_{r}+(-1)^{\overline{v_{r}}}v_ {r}\otimes d_{r}(w_{r}).\]
Proof.: First of all, recall that given odd operators \(d_{M}:M\to M,d_{N}:N\to N\) on two supervector spaces \(M\) and \(N\), we may consider an odd operator \(d:M\otimes N\to M\otimes N\) given by the Leibniz rule
\[d(m\otimes n)=d_{M}(m)\otimes n+(-1)^{\bar{m}}m\otimes d_{N}(n).\]
Then clearly \(Ker(d_{M})\otimes Ker(d_{N})\subset Ker(d)\) and it is a well-known fact that this induces an isomorphism
\[Ker(d_{M})\big/\operatorname{Im}(d_{M})\,\otimes\,Ker(d_{N})\big/\operatorname{Im}(d_{N})\;\cong\;Ker(d)\big/\operatorname{Im}(d).\]
We now have a functor \(E_{r}:Rep(\mathfrak{psl}(1|1))\to\mathbf{DsVec}\) and we just proved that it is symmetric monoidal.
By a general argument for monoidal functors (see e.g. [2], 2.10.6), we obtain a natural isomorphism \(E_{r}(M)^{*}\cong E_{r}(M^{*})\) for any \(M\in Rep(\mathfrak{psl}(1|1))\).
### Contragredient duality
Recall \(\sigma_{\mathfrak{gl}(1|1)}\) and \(\sigma_{n}\) from Section 2.5. We now prove Lemma 3.3:
**Lemma 6.12**.: _We have a natural isomorphism of functors_
\[DS^{i}_{x,y}\circ(-)^{\sigma_{\mathfrak{gl}(1|1)}}\cong(-)^{\sigma_{n}}\circ DS ^{i}_{y,x}.\]
Proof.: For a module \(M\), observe that there is a canonical map
\[(DS^{i}_{y,x}M)^{\sigma_{n}}\to DS^{i}_{x,y}(M^{\sigma_{\mathfrak{gl}(1|1)}})\]
given by \(v_{r}\mapsto v_{r}\), and it is obviously an isomorphism. Checking that the action of \(h\) agrees with this map is straightforward, so we get the result.
### Action of spectral sequence on finite-dimensional modules
We give here the proof of Lemma 3.5.
Proof.: The spectral sequence converges after the first page for all cases but \(DS^{r}_{y,x}\) acting on \(X(n)\) and \(DS^{r}_{x,y}\) acting on \(Y(n)\), so these are the cases we need to study. By contragredient duality it is enough to consider \(DS^{r}_{y,x}\) acting on \(X(n)\).
For \(X(n)\) we have \(\operatorname{Im}y\subseteq Z_{r}(X(n))\subseteq\ker y\) for all \(r>0\), and \(\operatorname{Im}y\) has codimension \(2\) in \(\ker y\), with complement spanned by \(u_{-n+1/2},u_{n-1/2}\), where \(u_{-n+1/2}\) is even of weight \(-n+1/2\) and \(u_{n-1/2}\) is odd of weight \(n-1/2\). It is clear that \(u_{n-1/2}\in Z_{r}\) for all \(r\). On the other hand, it is not difficult to check that \(u_{-n+1/2}\in Z_{r}\) only for \(r\leq n\). Thus we have
\[Z_{1}=Z_{2}=\cdots=Z_{n}=\operatorname{Im}y+\langle u_{-n+1/2},u_{n-1/2} \rangle,\ \ \ \ Z_{r}=\operatorname{Im}y+\langle u_{n-1/2}\rangle\ \text{ for }r>n.\]
On the other hand \(\operatorname{Im}y\subseteq B_{r}\) for \(r>0\), and \(u_{-n+1/2}\notin B_{r}\) for \(r>0\) since \(u_{-n+1/2}\) is not in the image of \(x\). However \(u_{n-1/2}\in B_{r}\) exactly when \(r>n\). Thus we have
\[E_{1}=\cdots=E_{n}=\langle u_{-n+1/2},u_{n-1/2}\rangle,\ \ \ \ E_{r}=0\ \text{ for }r>n.\]
From our setup one can now compute the maps \(d_{r}\), and we find that on \(X(n)\) we have: \(d_{0}=y\), \(d_{n}:E_{n}\to E_{n}\) is given by \(d_{n}(u_{-n+1/2})=u_{n-1/2}\), and \(d_{r}=0\) otherwise.
|
2309.01395 | AVATAR: Robust Voice Search Engine Leveraging Autoregressive Document
Retrieval and Contrastive Learning | Voice, as input, has progressively become popular on mobiles and seems to
transcend almost entirely text input. Through voice, the voice search (VS)
system can provide a more natural way to meet user's information needs.
However, errors from the automatic speech recognition (ASR) system can be
catastrophic to the VS system. Building on the recent advanced lightweight
autoregressive retrieval model, which has the potential to be deployed on
mobiles, leading to a more secure and personal VS assistant. This paper
presents a novel study of VS leveraging autoregressive retrieval and tackles
the crucial problems facing VS, viz. the performance drop caused by ASR noise,
via data augmentations and contrastive learning, showing how explicit and
implicit modeling the noise patterns can alleviate the problems. A series of
experiments conducted on the Open-Domain Question Answering (ODSQA) confirm our
approach's effectiveness and robustness in relation to some strong baseline
systems. | Yi-Cheng Wang, Tzu-Ting Yang, Hsin-Wei Wang, Bi-Cheng Yan, Berlin Chen | 2023-09-04T06:47:46Z | http://arxiv.org/abs/2309.01395v1 | AVATAR: Robust Voice Search Engine Leveraging Autoregressive Document Retrieval and Contrastive Learning
###### Abstract
Voice, as input, has progressively become popular on mobiles and seems set to largely supersede text input. Through voice, the voice search (VS) system can provide a more natural way to meet users' information needs. However, errors from the automatic speech recognition (ASR) system can be catastrophic to the VS system. We build on the recent advanced lightweight autoregressive retrieval model, which has the potential to be deployed on mobiles, leading to a more secure and personal VS assistant. This paper presents a novel study of VS leveraging autoregressive retrieval and tackles a crucial problem facing VS, viz. the performance drop caused by ASR noise, via data augmentation and contrastive learning, showing how explicitly and implicitly modeling the noise patterns can alleviate the problem. A series of experiments conducted on the Open-Domain Spoken Question Answering dataset (ODSQA) confirm our approach's effectiveness and robustness in relation to some strong baseline systems.
Voice search, Information retrieval, Autoregressive information retrieval, Contrastive learning
## I Introduction
Flourishing with recent advances in speech technology, voice-based input is gradually replacing text input as the primary method of interaction between humans and machines on small-screen mobile devices, in contrast to desktop and laptop computers [1]. Through voice inputs, how to provide users with an effective mechanism to access the content they want from the overwhelming amount of information available on the Internet, viz. voice search (VS), has gained a place with other spoken language technologies at the center of the stage. VS is related to, but distinct from, spoken document retrieval (SDR) [2, 3]. In SDR, text queries are given to the system to search for relevant spoken documents, and an essential issue is that the queries are usually too short to convey the information need. VS, in contrast, searches for relevant text documents using a more natural modality, viz. voice, which leads to longer queries and thus better-expressed information needs [4].
Over the years, many efforts have been devoted to investigating deep neural network-based retrieval methods, showing good promise in many IR and SDR tasks. These models can be applied to various VS scenarios. Recently, dense retrieval models have become the primary neural-based document retrieval approach, complementary to sparse retrieval models such as TF-IDF [5] and BM25 [6], which match keywords efficiently with an inverted index. Dense retrieval models [7, 8] leverage pre-trained language models to encode information in a latent semantic space, thus better capturing the semantic relationships between queries and documents. A typical dense retrieval model adopts a dual-encoder design, containing two independent neural networks optimized for embedding the queries and documents, respectively. The advantage of the dual-encoder design is that the entire corpus can be encoded and indexed offline. At inference time, the score of a query-document pair can be efficiently computed as the inner product of the corresponding query and document embeddings.
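To make the dual-encoder design concrete, the following is a minimal sketch (it is not the architecture of any cited system; the embedding-bag encoders and mean pooling are simplifying assumptions made purely for illustration):

```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    """Toy dual encoder: two independent encoders, inner-product scoring."""
    def __init__(self, vocab_size=30000, dim=256):
        super().__init__()
        # In practice these would be pre-trained language models (e.g. BERT);
        # here they are plain embedding bags purely for illustration.
        self.query_encoder = nn.EmbeddingBag(vocab_size, dim, mode="mean")
        self.doc_encoder = nn.EmbeddingBag(vocab_size, dim, mode="mean")

    def score(self, query_ids, doc_ids):
        q = self.query_encoder(query_ids)   # (batch, dim)
        d = self.doc_encoder(doc_ids)       # (batch, dim)
        return (q * d).sum(dim=-1)          # inner product per query-document pair

# Document embeddings can be computed and indexed offline; at query time only
# the query needs to be encoded and scored against the pre-computed index.
model = DualEncoder()
query = torch.randint(0, 30000, (1, 12))   # a short (tokenized) query
doc = torch.randint(0, 30000, (1, 200))    # a longer (tokenized) document
print(model.score(query, doc))
```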
Recently, another novel autoregressive retrieval approach orthogonal to the dual-encoder models, dubbed Differentiable Search Index (DSI) [9], has been proposed. DSI first encodes all the information about the corpus into a single transformer model's parameter space, on top of which DSI can generate the relevant document identifiers (docids) in an autoregressive manner in response to a user query. DSI has many advantages over the dual-encoder models: 1) It avoids using only dot products, which could miss the fine-grained interaction between the query and the document meta information. 2) It lowers memory requirements; storing dense vectors for the whole corpus requires a large memory footprint. Although
Fig. 1: Flow diagram of our proposed Avatar VS system. Given a voice query, Avatar first uses an ASR system to obtain a text query (errors generated from the ASR system were denoted in orange). Then, an autoregressive retriever directly generates the ranked list of relevant document ID (docid) through constrained beam search.
with the benefits mentioned above, DSI suffers serious data distribution problems during the model training and inference phases [10]. Specifically, DSI learns to build connections between long document texts and their docids, but at inference time, relatively short queries are input into the model to retrieve their relevant docids. Differentiable search index with query generation (DSI-QG) [10] mitigates this problem using another query generation model to generate relevant pseudo queries from documents. It uses these short pseudo queries instead of long text documents to build the connection with their docids in the training phase. After incorporating these pseudo queries [11], the autoregressive retrieval model has become effective while requiring a smaller memory footprint and having the potential to be deployed on mobile or edge devices, making it a more secure and personal retrieval system.
Nevertheless, to the best of our knowledge, the autoregressive retrieval model has not been sufficiently and systematically studied in either ad-hoc information retrieval (IR) or VS, and its retrieval effectiveness is mostly unknown. Based on this background, we present an empirical VS evaluation that sheds light on the efficacy of the autoregressive retrieval model in this paper. Further, we address one of the critical problems with VS, viz. noise in the query caused by the ASR system, which has an enormous impact on the retrieval model [12, 13], by explicitly modeling the ASR noise pattern using data augmentation and implicitly teaching the model to distinguish the features invariant to noise using contrastive learning. A series of experiments conducted on the Open-Domain Spoken Question Answering dataset (ODSQA) [14] confirm our approach's effectiveness and robustness in relation to some strong baseline systems.
## II Methodology
In this paper, we propose Avatar, a robust voice search engine leveraging autoregressive retrieval and contrastive learning. Figure 1 shows the main workflow. Given a voice query, Avatar first uses an ASR system to transcribe the voice query into a text query. Then, an autoregressive retriever built from a standard transformer architecture directly generates the relevant docids through constrained beam search. In this section, we first introduce autoregressive retrieval and then demonstrate our approach to making the model robust.
### _Autoregressive Retrieval_
Unlike classic retrieval techniques, autoregressive retrieval methods use a sequence-to-sequence (seq2seq) language model for encoding all the corpus information into the model's parameter space. After receiving the user query, beam search is used to generate the ranked list of docids. Specifically, the autoregressive retriever ranks each document \(d\in D\) in the corpus by computing a score with an autoregressive formulation:
\[score(d|q)=p_{\theta}(y|q)=\prod_{m=1}^{M}p_{\theta}(y_{m}|y_{<m},q), \tag{1}\]
where \(q\) is the query entered by the user, \(y\) is the sequence of \(M\) tokens in the docid of \(d\), and \(\theta\) represents the parameters of the model. Original DSI models suffer from data distribution problems during the model training and inference phases. To mitigate this problem, we follow DSI-QG and use pseudo queries generated from a seq2seq query generation model, incorporating them into the model training phase. In other words, we generate pseudo queries \(Q_{qg}\) from their relevant documents using a query generation model. Following that, we combine them with the original queries \(Q_{sup}\) from the training data to form the new training set \(Q_{seq}=Q_{sup}\cup Q_{qg}\). After that, we train the model with the general seq2seq training objective and teacher forcing:
\[\mathcal{L}_{seq}(\theta)=-\sum_{q\in Q_{seq}}\log p_{\theta}(y|q). \tag{2}\]
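The following sketch shows how Eq. (1) and Eq. (2) can be computed from decoder logits. The `logits` tensor is a stand-in for the step-wise output of an encoder-decoder transformer conditioned on the query and the docid prefix; it is an illustration of the formulas, not the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

def docid_score(logits, docid_ids):
    """Eq. (1): log score(d|q) = sum_m log p(y_m | y_<m, q).

    logits:    (M, V) per-step distributions over the docid vocabulary, where
               step m is assumed to be conditioned on y_<m and the query q.
    docid_ids: (M,) token ids of the docid y of document d.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    return log_probs.gather(1, docid_ids.unsqueeze(1)).sum()

def seq2seq_loss(logits, docid_ids):
    """Eq. (2): negative log-likelihood of the gold docid under teacher forcing."""
    return F.cross_entropy(logits, docid_ids)

# Dummy stand-in for the decoder output of a seq2seq model (e.g. a T5-style
# transformer); only the shapes matter for this illustration.
M, V = 6, 100                        # docid length, docid vocabulary size
logits = torch.randn(M, V)
docid = torch.randint(0, V, (M,))
print(docid_score(logits, docid), seq2seq_loss(logits, docid))
```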
A well-built docid must be able to identify the different documents while reflecting their semantic information. Since exploring different docids is not the focus of this study, we adopt the semantic docid proposed in DSI and leave the extension of docids for future work. First, the semantic docid
Fig. 2: Pre-training: supervised contrastive learning (SCL). SCL takes data of the same class as positive samples and consists of three query types: queries from the original dataset, pseudo queries generated from the query generation model, and augmented queries.
uses a BERT language model [15] to encode all the documents in the corpus and obtain their semantic vectors. Second, a hierarchical clustering algorithm is employed to group semantically similar documents together in a hierarchical fashion. Finally, we assign each document an identifier composed of its group numbers by traversing the hierarchical tree. The docids generated by beam search do not necessarily exist in the corpus. Inspired by [16], we use constrained beam search to guide the decoder to search in a limited token space at each step so as to generate only valid docids. Concretely, we define constraints based on a prefix tree built on all docid strings.
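A minimal sketch of the prefix-tree constraint follows; the class and method names are illustrative assumptions rather than the interface of any particular toolkit. At each decoding step, the beam search only scores tokens returned by `allowed_tokens` for the current docid prefix, so every completed hypothesis is a docid that exists in the corpus:

```python
class DocidTrie:
    """Prefix tree over tokenized docid strings, used to constrain beam search."""
    END = -1  # sentinel marking the end of a complete docid

    def __init__(self, docid_token_lists):
        self.root = {}
        for tokens in docid_token_lists:
            node = self.root
            for t in tokens:
                node = node.setdefault(t, {})
            node[self.END] = {}  # mark a complete docid

    def allowed_tokens(self, prefix):
        """Tokens that may follow `prefix` so the output stays a valid docid."""
        node = self.root
        for t in prefix:
            if t not in node:
                return []        # invalid prefix: no continuation allowed
            node = node[t]
        return [t for t in node if t != self.END]

# Example: three semantic docids sharing hierarchical-cluster prefixes.
trie = DocidTrie([[3, 1, 4], [3, 1, 5], [2, 7, 1]])
print(trie.allowed_tokens([]))      # [3, 2]  -> first-level cluster numbers
print(trie.allowed_tokens([3, 1]))  # [4, 5]  -> leaves under cluster (3, 1)
```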
### _Explicit: Data Augmentation_
Data augmentation (DA) is a simple and effective method often used to strengthen models by explicitly exposing them to data containing ASR noise alongside clean data, so that the learned model remains invariant to noise. Specifically, we generate three random augmentations for each query in \(Q_{seq}\): \(Q_{da}=Aug(Q_{seq})\), where \(Aug(\cdot)\) is a data augmentation module similar to [17] that introduces random substitution, deletion, or insertion errors. For substitution, we replace words with phonologically similar ones. Finally, we train the model using the training set \(Q=\{Q_{seq}\cup Q_{da}\}\).
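A rough sketch of such an augmentation module is given below (ours, for illustration); the phonological confusion table is a hypothetical placeholder that in practice would be derived from a pronunciation lexicon or ASR confusion statistics, and for Chinese queries it would operate on characters or pinyin rather than English words.

```python
import random

def augment_query(tokens, confusions, p=0.15, rng=None):
    """Inject ASR-like noise into a token list: phonologically similar substitutions,
    deletions, or insertions, each triggered with total probability p per token."""
    rng = rng or random.Random()
    noisy = []
    for tok in tokens:
        r = rng.random()
        if r < p / 3 and tok in confusions:          # substitution with a confusable word
            noisy.append(rng.choice(confusions[tok]))
        elif r < 2 * p / 3:                          # deletion
            continue
        elif r < p:                                  # insertion (add a spurious confusable word)
            noisy.extend([tok, rng.choice(list(confusions.keys()) or [tok])])
        else:
            noisy.append(tok)
    return noisy

# Hypothetical confusion table for illustration only.
confusions = {"flight": ["fright"], "fare": ["fair", "fear"]}
print(augment_query("what is the fare for this flight".split(), confusions,
                    p=0.6, rng=random.Random(0)))
```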
### _Implicit: Contrastive Learning_
Contrastive learning (CL) can help the model distinguish the invariant features between ASR-noisy and clean text queries. Firstly, we pre-train the model's encoder to bring the original queries closer to the augmented queries and push the others away. By closely looking into the autoregressive retrieval model, we find that it is fundamentally similar to a sequence classification task. Thus, bringing instances of the same class close together in the latent space can benefit the classification later.
Supervised Contrastive Learning (SCL) [18] takes data of the same class as positive samples and pulls their embeddings closer together. In the end, representations from the same class form clusters, and different classes are discriminated by the margins created between them. This makes SCL well suited for enhancing the robustness of the autoregressive retrieval model against ASR errors. Given a mini-batch of \(N\) queries randomly sampled from the training set \(Q\), \(B=\{(q_{i},y_{i})\}_{i=1..N}\), we first obtain the representation of each query through the model's encoder \(Enc(\cdot)\) and use a projection network \(Proj(\cdot)\), \(z=Proj(Enc(q))\), to obtain the sequence-level representation. Let \(i\in I=\{1,...,N\}\) be the index of \(B\). The following equation describes our SCL objective:
\[\mathcal{L}_{scl}=-\sum_{i\in I}\frac{1}{|S(i)|}\sum_{s\in S(i)}\log\frac{\exp (z_{i}\cdot z_{s})}{\sum_{a\in A(i)}\exp(z_{i}\cdot z_{a})} \tag{3}\]
Here \(\cdot\) denotes inner product operations, \(A(i)=I\setminus\{i\}\) is the set of index \(I\) minus \(i\), \(S(i)=\{s\in A(i):y_{s}=y_{i}\}\) is the index of all positive samples of index \(i\), and \(|S(i)|\) is its cardinality. The process is illustrated in Figure 2.
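A compact PyTorch sketch of the objective in Eq. (3) is shown below (our illustration, not the released implementation); it uses plain dot-product similarity with no temperature, as in the equation, and averages over anchors rather than summing.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Eq. (3): z are (N, D) sequence-level representations, labels are (N,) docid classes."""
    sim = z @ z.T                                               # z_i . z_a
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))                   # exclude a = i from A(i)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # log softmax over A(i)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye   # S(i)
    per_anchor = -(log_prob.masked_fill(~pos, 0.0)).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.any(1)].mean()                        # skip anchors without positives

# Toy usage: four queries (original, pseudo, augmented, ...) from two docid classes.
z = F.normalize(torch.randn(4, 16), dim=1)
labels = torch.tensor([0, 0, 1, 1])
print(supervised_contrastive_loss(z, labels))
```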
### _Combine_
Combining the above explicit and implicit approaches makes our proposed Avatar model more robust when encountering ASR noise. Overall, we first use the SCL objective in Eq. (3) to pre-train the model's encoder and then use DA with the general seq2seq objective in Eq. (2) to fine-tune the model further.
## III Experiments
### _Experimental Setting_
**Dataset and Evaluation.** We used Open-Domain Spoken Question Answering (ODSQA) for our experiments. ODSQA consists of 30,461 query-document pairs, whose queries are natural-language questions and whose 2,051 documents come from the Delta Reading Comprehension Dataset (DRCD) [19]. Since ODSQA only releases 1,465 query audios equipped with official ASR transcriptions from the ODSQA-test, we adopt these as our testing set and the remaining 28,996 query-document pairs as our training set. To observe the influence on the model of different WER levels and Entity Error Rates (EER), in addition to the official ODSQA ASR transcriptions, we apply another two ASR systems to obtain the testing set's transcriptions. Specifically, we used a Conformer Mask-CTC ASR system trained on in-house data and a Conformer ASR system trained on Aishell [20]. Details are summarized in Table I.
Like the original DSI, we utilize Hits@1 and Hits@10 to evaluate the effectiveness of the baselines and the model. This metric reports the proportion of the correct docid ranked in the top 1 and top 10 predictions.
**Baselines.** We compare Avatar with the following baselines: 1) Okapi BM25: a classic sparse retrieval method based on an inverted index. 2) DPR: a dual-encoder dense retriever trained with contrastive loss and hard negatives. 3) DSI: an autoregressive retrieval method that uses document texts as input for indexing. 4) DSI-QG: an improved version of DSI that mitigates the data distribution mismatch problem by using generated queries as inputs for indexing.
**Implementation details.** In this study, we use two multilingual T5 (mT5) [21] base models provided by Hugging Face for our system, one as the Avatar model and the other as the query generation model. We apply the same training method as DSI-QG for the query generation model and generate three relevant pseudo queries for each document. For the Avatar model pre-training, we employ the same
\(MeanPooling\) as [22] to obtain the sequence-level representations. We utilize the same mT5 model for all the learning-based baselines to ensure a fair comparison. Since the documents in the corpus are too long for the model to accommodate, we only keep the document title and the first 100 tokens as the model's input.
### _Main Results_
The evaluation performance is presented in Table III. Based on term matching, the BM25 model has the lowest performance among all models, showing the importance of assessing semantic information. Looking at DSI-QG, incorporating pseudo queries effectively improves retrieval ability compared to the autoregressive DSI model. Moreover, the noise caused by the ASR system has a catastrophic impact on all the baseline models, including the strong DPR model, and the magnitude of the drop rises with increasing WER. Through data augmentation and supervised contrastive learning, our proposed Avatar alleviates the influence of imperfect ASR transcriptions: it not only keeps its performance in a clean environment but also simultaneously increases retrieval ability compared with the other strong baseline systems.
### _Ablation Study_
**Study of different WER.** We now focus on evaluating the effectiveness of the different ASR noise-robust methods in the model and show the performance in Table III. Not surprisingly, explicit modeling of the noise pattern, viz. data augmentation, has the biggest effect on the model's performance. By implicitly teaching the model to distinguish the invariant features between ASR-noisy and clean query text, viz. pre-training with the SCL objective, we can further enhance the model's robustness; after combining the two, the model's performance increases substantially.
**Study of different EER.** The entities mentioned in the query significantly impact the success of retrieving the relevant documents. To further test our proposed Avatar's robustness, we split the testing sets into two subsets: one contains ASR noise in the entities mentioned in the queries, and the other contains noise only in non-entity positions; the results are shown in Table IV. First, the model's retrieval effectiveness is dramatically reduced when ASR noise occurs in entity queries, which illustrates the importance of the entities mentioned in the query for retrieval. We also find that adding data augmentation and contrastive learning improves the effectiveness of our model.
## IV Conclusions and Future Work
In this paper, we have presented a novel robustness method to improve the performance of the autoregressive retrieval model when exposed to noisy ASR transcriptions, whose effectiveness has been validated and analyzed through a series of empirical experiments. The proposed model, Avatar, sheds light on the potential of an on-device VS engine, which can bring more convenience, security, and a personalized experience. In future work, we intend first to tackle the more demanding case of entity errors caused by the ASR system, and second to scale the model up to larger corpora, which requires designing an autoregressive retriever with more capacity. |
2304.09735 | Rehabilitation Exercise Repetition Segmentation and Counting using
Skeletal Body Joints | Physical exercise is an essential component of rehabilitation programs that
improve quality of life and reduce mortality and re-hospitalization rates. In
AI-driven virtual rehabilitation programs, patients complete their exercises
independently at home, while AI algorithms analyze the exercise data to provide
feedback to patients and report their progress to clinicians. To analyze
exercise data, the first step is to segment it into consecutive repetitions.
There has been a significant amount of research performed on segmenting and
counting the repetitive activities of healthy individuals using raw video data,
which raises concerns regarding privacy and is computationally intensive.
Previous research on patients' rehabilitation exercise segmentation relied on
data collected by multiple wearable sensors, which are difficult to use at home
by rehabilitation patients. Compared to healthy individuals, segmenting and
counting exercise repetitions in patients is more challenging because of the
irregular repetition duration and the variation between repetitions. This paper
presents a novel approach for segmenting and counting the repetitions of
rehabilitation exercises performed by patients, based on their skeletal body
joints. Skeletal body joints can be acquired through depth cameras or computer
vision techniques applied to RGB videos of patients. Various sequential neural
networks are designed to analyze the sequences of skeletal body joints and
perform repetition segmentation and counting. Extensive experiments on three
publicly available rehabilitation exercise datasets, KIMORE, UI-PRMD, and
IntelliRehabDS, demonstrate the superiority of the proposed method compared to
previous methods. The proposed method enables accurate exercise analysis while
preserving privacy, facilitating the effective delivery of virtual
rehabilitation programs. | Ali Abedi, Paritosh Bisht, Riddhi Chatterjee, Rachit Agrawal, Vyom Sharma, Dinesh Babu Jayagopi, Shehroz S. Khan | 2023-04-19T15:22:15Z | http://arxiv.org/abs/2304.09735v1 | # Rehabilitation Exercise Repetition Segmentation
###### Abstract
Physical exercise is an essential component of rehabilitation programs that improve quality of life and reduce mortality and re-hospitalization rates. In AI-driven virtual rehabilitation programs, patients complete their exercises independently at home, while AI algorithms analyze the exercise data to provide feedback to patients and report their progress to clinicians. To analyze exercise data, the first step is to segment it into consecutive repetitions. There has been a significant amount of research performed on segmenting and counting the repetitive activities of healthy individuals using raw video data, which raises concerns regarding privacy and is computationally intensive. Previous research on patients' rehabilitation exercise segmentation relied on data collected by multiple wearable sensors, which are difficult to use at home by rehabilitation patients. Compared to healthy individuals, segmenting and counting exercise repetitions in patients is more challenging because of the irregular repetition duration and the variation between repetitions. This paper presents a novel approach for segmenting and counting the repetitions of rehabilitation exercises performed by patients, based on their skeletal body joints. Skeletal body joints can be acquired through depth cameras or computer vision techniques applied to RGB videos of patients. Various sequential neural networks, including many-to-many models (with binary sequence output and density map output) and many-to-one models (with a single output), are designed to analyze the sequences of skeletal body joints and perform repetition segmentation and counting. Extensive experiments on three publicly available rehabilitation exercise datasets, KIMORE, UI-PRMD, and IntelliRehabDS, demonstrate the superiority of the proposed method compared to previous methods. The proposed method enables accurate exercise analysis while preserving privacy, facilitating the effective delivery of virtual rehabilitation programs.
exercise segmentation, exercise repetition counting, skeletal body joints, LSTM, transformer, convolutional neural network, virtual rehabilitation
## I Introduction
Referral of patients to rehabilitation programs following a stroke, cardiac event, or injury is a common practice aimed at improving patients' quality of life and reducing re-hospitalization and death rates [1]. Central to these programs are regular and repetitive exercises that enable patients to regain mobility and strength [1]. Recently, Artificial Intelligence (AI)-driven virtual rehabilitation has emerged as a promising approach to delivering rehabilitation programs remotely to patients in their homes [2]. This approach involves the use of various sensors to capture patients' movements and the use of AI algorithms to analyze patients' movements during exercise [2, 3]. The analysis results can be used to provide patients with feedback on the quality or completion of their exercises [3, 4]. Additionally, clinicians can also use the analysis results to monitor patients' progress and take appropriate interventions.
In rehabilitation programs, patients are typically prescribed specific exercises with designated numbers of sets and repetitions [1, 5, 6, 7, 8]. Evaluating exercise performance relies on objective criteria such as compliance with the prescribed number of sets and repetitions, repeating exercises in a constant manner, proper technique and quality of movements, and correct posture of various body parts [5, 6]. Therefore, repetition (temporal) segmentation, the process of dividing a continuous sequence of movement data into individual repetitions, is the first step in an AI-driven exercise evaluation pipeline [5]. Exercise repetition counting can either be derived from the segmentation process or executed as a separate task.
A variety of data modalities were used for repetition segmentation and counting [20], including Inertial Measurement Unit (IMU) sensor data [9, 10, 11, 12, 13, 14, 15], video data [16, 17, 18], and skeletal body joints [19]. Existing algorithms for segmenting human movement can be divided into unsupervised and supervised algorithms [20]. Unsupervised algorithms, which do not require labeled data to develop, include thresholding, template matching, and exemplar-based approaches [20, 21]. Supervised algorithms requiring labeled data to be trained on include Support Vector Machines (SVM) [11], Hidden Markov Model (HMM) [13], Convolutional Neural Networks (CNNs) [10], and the combination of CNNs and Finite State Machines (FSMs) [9]. The existing works on repetition counting [15, 18, 19, 28] are not capable of segmenting movement data into individual repetitions, that is, they are not capable of determining the start and end timestamps of individual repetitions. Video-based approaches are not privacy-preserving and are computationally prohibitive [16, 17, 18]. The IMU-based approaches require wearing multiple IMU sensors while exercising [9, 10, 11, 12, 13, 14, 15], which is challenging in the real world, i.e., in rehabilitation patients' homes. The previous works on healthy individuals [16, 17, 18, 19] cannot be directly used on rehabilitation patients due to the fact that segmenting and counting exercise repetitions is more challenging in patients due to irregularities
in the duration and completion of exercises resulting from their respective impairments [5, 6].
This paper presents novel methods for rehabilitation exercise repetition segmentation and counting from the skeletal body joints of patients. Various many-to-many and many-to-one deep sequential neural network architectures, including Long Short Term Memory (LSTM) and a combination of LSTM and CNN, are designed to analyze the sequences of skeletal body joints and perform repetition segmentation and counting. Our primary contributions are as follows:
* This is the first work on rehabilitation exercise repetition segmentation using skeletal body joints collected by depth cameras or extracted from RGB video using advanced computer-vision techniques.
* We developed neural network architectures capable of analyzing sequences of body joints, i.e., multivariate time series.
* We conducted extensive experiments on three publicly available rehabilitation exercise datasets and demonstrated the effectiveness of the proposed method compared to the previous methods.
As a point of clarification, the purpose of this paper is not rehabilitation exercise recognition/classification nor rehabilitation exercise quality/correctness assessment. Specifically, this paper focuses on the temporal (not spatial) segmentation of rehabilitation exercises into individual repetitions and counting the number of repetitions.
## II Related Work
This section reviews existing research on repetitive action segmentation and counting, with an emphasis on rehabilitation exercise segmentation and counting. The literature review is organized according to the data modality used for segmentation and counting.
### _Rehabilitation Exercise Segmentation from IMU sensors_
Lin et al. [11] proposed an approach for segmenting rehabilitation exercises into individual repetitions using the data collected by IMU wearable sensors worn on the hip, knee, and ankle. Segmentation was defined as a binary classification problem in which movement data at consecutive timestamps are classified into segment points or non-segment points. To perform segmentation, after several steps of preprocessing, including filtering, down-sampling, and windowing, the IMU signal is classified into segment or non-segment classes by an SVM. Lin et al. [13] proposed a two-stage approach in which first, segment point candidates are identified by analyzing the velocity features extracted from the IMU signal. Then, HMMs are used to recognize segment locations from segment point candidates. The method was evaluated on three publicly available IMU datasets and achieved high accuracy. Brennan et al. [9, 10] proposed a two-stage approach in which first, a CNN classifies sliding windows of the IMU signals into either "dynamic" or "dormant" classes. The classification results are then streamed into an FSM that keeps track of the classes of consecutive windows and outputs the starting and ending points of repetitions. They achieved high accuracy in shoulder and knee exercises. Bevilacqua et al. [14] proposed a joint rehabilitation exercise motion primitive segmentation and classification using a mixture of LSTMs and boosting aggregation. Their method was evaluated on accelerometer and gyroscope data and achieved high exercise primitive classification accuracy.
### _Rehabilitation Exercise Repetition Counting from Skeletal Body Joints_
Hsu et al. [19] proposed a rehabilitation exercise repetition counting method based on skeletal body joint data. Initially, the pairwise cosine similarity of the skeleton time series data is calculated. A spectrogram is then constructed based on the pairwise cosine similarity, and the repetition count is obtained by integrating over the spectrogram. The method was evaluated on the UI-PRMD rehabilitation exercise dataset [7] and the MM-Fit fitness exercise dataset [34] and achieved low Mean Absolute Error (MAE).
### _Repetitive Action Segmentation from Videos_
Hu et al. [16] introduced a large-scale repetitive action counting dataset, named RepCount, containing 1451 videos with about 20000 annotations of the start and end of repetitions. The dataset contains in-the-wild videos of healthy individuals exercising. They have also proposed a deep neural network architecture, named TransRAC [16], for repetition counting that was trained and evaluated on the RepCount dataset. In TransRAC, with step sizes of 1, 2, and 4, multi-scale video sequences are generated from the input video. After extracting features from the multi-scale video sequences by an encoder neural network, temporal correlations are calculated between the extracted features, and correlation matrices are created. The concatenation of the correlation matrices is input to a transformer to output a density map as a prediction. The ground-truth density maps are generated by approximating a Gaussian distribution between the start and end of each repetition [16]. To overcome a major limitation of TransRAC, its inability to handle long videos, Chung et al. [17] proposed a video transformer equipped with class token distillation and marginally improved the repetition counting results of TransRAC. Zhang et al. [18] proposed an approach for repetition counting from videos incorporating the corresponding sound of the video. An S3D-based architecture was used for repetition counting from video, while a ResNet-18-based neural network was used for repetition counting from audio. The results on an audiovisual dataset showed improvements when audio data is added to video data for repetition counting. To learn more about repetitive action segmentation and counting from videos using deep learning algorithms, please refer to [18, 24, 25, 26, 27, 28].
The major disadvantage of the IMU-based methods [9, 10, 11, 13, 14] is that they require the determination of various parameters, including window sizes and thresholds. Additionally, these methods are capable of analyzing multivariate time series with a small number of variables, corresponding to the
number of IMU sensors worn during exercise, but are unable to analyze multivariate time series with high dimensionalities, such as all the joints of the skeletal system. Moreover, it may be infeasible for rehabilitation patients to wear multiple IMU sensors while exercising independently at home. The method proposed by Hsu et al. [19] was designed for rehabilitation exercise counting and is not capable of segmenting rehabilitation exercises. Exercise temporal segmentation into individual repetitions is required for accurate exercise assessment [5]. A limitation of video-based methods [16, 17, 18, 24, 25, 26, 27, 28] is their complexity and their large number of training parameters, in addition to privacy concerns. There is no previous method for segmenting rehabilitation exercises based on skeletal body joints [5]. In order to fill the gap in the literature, this paper proposes deep-learning algorithms for segmenting and counting repetitions in rehabilitation exercises performed by patients. The proposed method facilitates exercise assessment and feedback generation for patients in virtual rehabilitation programs without the need for body-worn sensors.
## III Method
The input to the proposed method is the sequence of skeletal body joints of a patient doing rehabilitation exercises. The output is the segmented sequence into individual exercise repetitions and the total number of repetitions. The sequence of features extracted from the sequence of skeletal body joints is fed to a sequential neural network for exercise repetition segmentation and counting.
The proposed method can work with either a depth camera capable of capturing skeletal body joints or a regular RGB camera. In the latter case, an additional module is required in order to extract skeletal body joints from RGB video frames. Several libraries are available for extracting body joints from videos, including MediaPipe [22] and OpenPose [23].
The sequential neural networks in the proposed method can be provided with three types of data: raw body joints, exercise-specific features extracted from the body joints, and their concatenation. Inspired by Guo and Khan [29], who worked on the KIMORE dataset, exercise-specific features are those calculated from the angles between the body joints that move in specific exercises. For instance, for upper-extremity rehabilitation exercises for stroke patients, the shoulder-wrist angle is an exercise-specific feature [6, 29]. The body joints or features extracted from each frame of the input data are provided to each timestamp of a sequential neural network.
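As an illustration of such an angle-based feature (ours, not the authors' exact feature set), the sketch below computes the angle at a middle joint from 3D joint coordinates, e.g., an elbow or shoulder angle.

```python
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle (degrees) at joint b formed by segments b->a and b->c, e.g. an elbow angle."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Toy frame: shoulder, elbow, wrist 3D positions (metres).
shoulder = np.array([0.0, 1.4, 0.0])
elbow = np.array([0.0, 1.1, 0.1])
wrist = np.array([0.0, 0.8, 0.0])
print(joint_angle(shoulder, elbow, wrist))  # ~143 degrees for this slightly bent arm
```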
### _Sequential Neural Network_
Three sequential neural networks are used to analyze the sequence of features and perform rehabilitation exercise segmentation and counting. The core of all the networks is an LSTM trailed by a 1D CNN followed by a fully connected neural network.
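A minimal PyTorch sketch of this backbone is given below (our reading of the description; per Section IV-C the hidden size is twice the input dimension, and the remaining numbers here are only illustrative). The same backbone can serve the binary-sequence and density-map heads described next, and summing the per-frame outputs yields the many-to-one counting variant.

```python
import torch
import torch.nn as nn

class RepetitionSeqModel(nn.Module):
    """LSTM -> 1D CNN -> fully connected head, one output per timestamp (frame)."""
    def __init__(self, in_dim: int = 75, hidden: int = 150, num_layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=num_layers, batch_first=True)
        self.conv = nn.Conv1d(hidden, hidden, kernel_size=5, padding=2)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                               # x: (batch, frames, in_dim)
        h, _ = self.lstm(x)                             # (batch, frames, hidden)
        h = self.conv(h.transpose(1, 2)).transpose(1, 2).relu()
        return self.head(h).squeeze(-1)                 # (batch, frames): density map or logits

model = RepetitionSeqModel()
out = model(torch.randn(2, 300, 75))                    # 2 samples, 300 frames, 75 joint coordinates
print(out.shape)                                        # torch.Size([2, 300])
```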
#### Iii-A1 Many-to-many with Binary Sequence Output
The sequential neural network is trained to output a binary sequence. Corresponding to the input at every timestamp (the body joints or extracted feature vector from every frame), the network generates one or zero. The output of one indicates that the input frame at a particular timestamp is a frame after a repetition of an exercise is complete or before the beginning of the next repetition, whereas an output of zero indicates that the input frame is during a repetition of an exercise. Using the generated output binary sequence, repetition segmentation is performed based on the occurrences of outputs of one among outputs of zero. Repetition counting is done by counting the number of segmented repetitions.
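The post-processing that turns the predicted binary sequence into repetition segments can be as simple as the following sketch (ours, for illustration), where frames labelled 0 belong to repetitions and runs of 1s act as separators.

```python
def segments_from_binary(pred):
    """pred: per-frame 0/1 predictions; returns (start, end) frame index pairs of repetitions."""
    segments, start = [], None
    for i, v in enumerate(pred):
        if v == 0 and start is None:
            start = i                        # a repetition begins
        elif v == 1 and start is not None:
            segments.append((start, i - 1))  # repetition ended before this separator frame
            start = None
    if start is not None:                    # repetition still open at the end of the sequence
        segments.append((start, len(pred) - 1))
    return segments

pred = [1, 0, 0, 0, 1, 1, 0, 0, 1]
print(segments_from_binary(pred), "count =", len(segments_from_binary(pred)))  # [(1, 3), (6, 7)] count = 2
```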
#### Iii-A2 Many-to-many with Density Map Output
The sequential neural network is trained to output a density map. A density map is a vector consisting of the same number of elements as the number of frames in the input, i.e., the number of timestamps in the sequential neural network. The ground-truth density maps are generated by approximating a Gaussian distribution between the start and end of each repetition [16]. Repetition segmentation is performed by finding the peaks in the predicted density map. Repetition counting is done by counting the number of segmented repetitions.
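Ground-truth density map construction and peak-based counting can be prototyped as follows (our sketch of the Gaussian-approximation idea of [16]; the exact normalization used in the paper may differ).

```python
import numpy as np
from scipy.signal import find_peaks

def density_map(num_frames, repetitions, sigma_scale=6.0):
    """Place a Gaussian over each (start, end) repetition; each bump integrates to ~1."""
    y = np.zeros(num_frames)
    t = np.arange(num_frames)
    for start, end in repetitions:
        mu, sigma = (start + end) / 2.0, max((end - start) / sigma_scale, 1.0)
        g = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
        y += g / g.sum()
    return y

def count_from_density(pred, min_height=0.01):
    """Count repetitions as local maxima of the (predicted) density map."""
    peaks, _ = find_peaks(pred, height=min_height)
    return len(peaks), peaks

gt = density_map(200, [(10, 60), (80, 130), (150, 190)])
print(count_from_density(gt))   # (3, array of the three peak frame indices)
```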
#### Iii-A3 Many-to-one with Repetition Counts Output
By summing the outputs of the sequential neural network at consecutive timestamps, it will be converted to a many-to-one sequential neural network and is trained to output the number of repetitions. Unlike the two previous architectures, this architecture can only perform repetition counting and not segmentation.
The details of the parameters of the sequential neural networks are explained in Section IV-C.
## IV Experiments
This section evaluates the performance of the proposed method on three publicly available datasets using different evaluation metrics and in comparison to previous methods.
### _Datasets_
#### Iv-A1 Kimore
The KIMORE dataset [6] contains RGB and depth videos along with body joint position and orientation data captured by the Kinect camera. The data were collected from 78 subjects, including 44 healthy subjects and 34 patients with motor dysfunction (stroke, Parkinson's disease, and low back pain). Each data sample in this dataset is composed of one subject performing multiple repetitions of one of the five exercises: (1) lifting of the arms, (2) lateral tilt of the trunk with the arms in extension, (3) trunk rotation, (4) pelvis rotations on the transverse plane, and (5) squatting. The data samples were annotated in terms of exercise quality and technique.
The focus of this paper is on repetition segmentation and counting. KIMORE, however, does not contain such annotations. Two co-authors of this paper annotated the start and end of repetitions in each data sample using RGB video playback and the Aegisub software [35]. The annotations were verified by another two co-authors. The annotations were used as ground-truth labels for training and evaluation of neural network models. The annotations are available at [https://github.com/abedicodes/repetition-segmentation](https://github.com/abedicodes/repetition-segmentation). The
mean (standard deviation) of the number of repetitions across the total 353 samples in the dataset is 4.70 (2.21).
In this paper, two modalities of data from the KIMORE dataset were used. See Section III. In the first setting, RGB videos were used as input data. The body joints were extracted by OpenPose [23] and then inputted into the sequential models for analysis. In the second setting, the body joints captured by Kinect were directly considered as input to the sequential models.
#### Iv-A2 Ui-Prnd
UI-PRMD [7] is a dataset of physical therapy rehabilitation collected from 10 healthy individuals who performed 10 physical therapy rehabilitation exercises correctly and incorrectly to represent the performance of patients. Each data sample in this dataset is composed of one subject performing multiple repetitions of one of the 10 exercises, (1) deep squat, (2) hurdle step, (3) inline lunge, (4) side lunge, (5) sit to stand, (6) standing active straight leg raise, (7) standing shoulder abduction, (8) standing shoulder extension, (9) standing shoulder internal-external rotation, and (10) standing shoulder scaption. The annotations for repetition segmentation were available in UI-PRMD and were used in our experiments. The mean (standard deviation) of the number of repetitions across the total 200 samples in the dataset is 10.00 (0.00). This dataset contains body joints captured by Vicon and Kinect cameras. The Kinect data was used in our experiments.
#### Iv-A3 IntelliRehabDS
IntelliRehabDS [8] is a dataset of body joints captured by the Kinect camera collected from 29 subjects, 15 patients and 14 healthy subjects. Each data sample in this dataset is composed of one subject performing multiple repetitions of one of the 9 physical rehabilitation exercises, (1) elbow flexion left, (2) elbow flexion right, (3) shoulder flexion left, (4) shoulder flexion right, (5) shoulder abduction left, (6) shoulder abduction right, (7) shoulder forward elevation, (8) side tap left, and (9) side tap right. The annotations for repetition segmentation were available in IntelliRehabDS and were used in our experiments. The mean (standard deviation) of the number of repetitions across the total 361 samples in the dataset is 5.69 (2.48).
### _Evaluation Metrics_
In accordance with the literature, Off-By-One count accuracy (OBO) and MAE were used as evaluation metrics for repetition counting [16, 17, 19]. Intersection-Over-Union (IOU) [31] was used for repetition segmentation. In addition, a new metric for evaluating repetition segmentation was developed in this study, the Mean Average Error in Frames (MAE-F). First, for every data sample, the average number of frames by which the start and end points of the predicted repetitions deviate from those of the ground-truth repetitions is calculated. Then, these deviations are averaged over all data samples.
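For concreteness, the counting metrics and the proposed MAE-F can be computed as in the sketch below (our implementation of the stated definitions; MAE is shown in the normalized form common in the repetition-counting literature, and MAE-F assumes predicted and ground-truth repetitions are matched in order, which may differ in detail from the authors' evaluation scripts).

```python
import numpy as np

def counting_metrics(pred_counts, gt_counts):
    """Normalized MAE and off-by-one accuracy over a set of samples."""
    pred, gt = np.asarray(pred_counts, float), np.asarray(gt_counts, float)
    mae = np.mean(np.abs(pred - gt) / np.maximum(gt, 1))   # normalized counting error
    obo = np.mean(np.abs(pred - gt) <= 1)                   # off-by-one accuracy
    return mae, obo

def mae_frames(pred_segs, gt_segs):
    """Average |start/end deviation| in frames for order-matched repetitions of one sample."""
    errs = [abs(p[0] - g[0]) + abs(p[1] - g[1]) for p, g in zip(pred_segs, gt_segs)]
    return sum(errs) / (2 * max(len(errs), 1))

print(counting_metrics([5, 4, 7], [5, 6, 7]))                     # (~0.111, ~0.667)
print(mae_frames([(9, 58), (82, 131)], [(10, 60), (80, 130)]))    # 1.5 frames
```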
### _Experimental Settings_
The dimension of the input to the sequential neural networks is determined based on the dimension of the feature vectors extracted from frames of the data samples. This dimension is 75, 43, and 118, when using the Kinect body joints, exercise-specific features [29], and their concatenation, respectively. This dimension is the number of neurons in the input layer of the LSTM. The number of neurons in the hidden layers of LSTM is set to two times the dimension of the feature vectors. In different exercises of different datasets, the number of LSTM layers varies from one to three layers.
For comparison, we implemented a _modified_ version of the TransRAC model [16], described in Section II-C. In place of the encoder used in TransRAC for feature extraction from multi-scale videos, multi-scale body joints were used as features. The other parts of the network remained unchanged, and the network was trained from scratch to predict the density map. As a _video-based_ method, the TransRAC model pre-trained on RepCount (_Pre-trained TransRAC_) was also used to predict the density map for the RGB videos of exercises in the KIMORE dataset [6].
Since none of the datasets provides separate training and test sets, five-fold cross-validation was used. The loss function was a linear combination of the Kullback-Leibler divergence loss and the L1 loss, minimized by the Adam optimizer [32]. The experiments were implemented in PyTorch [32] and scikit-learn [33] on a server with 128 GB of RAM and an NVIDIA 2080 Ti 12 GB GPU. The code of our implementation is available at [https://github.com/abedicodes/repetition-segmentation](https://github.com/abedicodes/repetition-segmentation).
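One possible reading of this training loss is sketched below (ours; the relative weight `alpha` and the normalization of the density maps for the KL term are not specified in the text and are placeholders here).

```python
import torch
import torch.nn.functional as F

def density_loss(pred, target, alpha=1.0):
    """Linear combination of KL divergence and L1 between predicted and ground-truth density maps."""
    # Treat each map as a distribution over frames for the KL term (an assumption of this sketch).
    p_log = F.log_softmax(pred, dim=-1)
    q = target / target.sum(dim=-1, keepdim=True).clamp(min=1e-8)
    kl = F.kl_div(p_log, q, reduction="batchmean")
    l1 = F.l1_loss(pred, target)
    return kl + alpha * l1

pred = torch.randn(2, 200)      # raw per-frame predictions for 2 samples
target = torch.rand(2, 200)     # ground-truth density maps
print(density_loss(pred, target))
```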
### _Experimental Results_
#### Iv-D1 Kimore
Tables I (a)-(b) and (c)-(d) present the results of repetition counting and segmentation, respectively, on the KIMORE dataset [6] through five-fold cross-validation. The body joints captured by the Kinect camera in the dataset (rows #4-11 in the tables), the raw RGB videos in the dataset (#2), and the body joints extracted from the RGB videos by OpenPose (#1 and #3) [23] were analyzed in the experiments in Table I. It should be noted that in rows #1 and #3-10 of Tables I (a)-(b), repetition counting was carried out as a by-product of repetition segmentation: segmentation was performed first, and the segmented repetitions were then counted. Repetition counting was performed directly in rows #2 and #11 of Tables I (a)-(b).
The results of the many-to-many model with density map output (#8) are superior to those of the many-to-many model with binary sequence output (#10). The ground-truth density maps were generated by approximating a Gaussian distribution between the start and end of each repetition [16]. The model that was trained on and predicted based on density maps was more robust to noise and fluctuations in body joints, as well as irregularities and imperfections in completing exercises in the patients in the KIMORE dataset.
There is a significant difference between the results of the two models described above (#8 and #10) and those of the many-to-one model with repetition counts output (#11). One reason for this is that more ground-truth information was available for the training of the first two models. The first two models described above were provided with labels for every
TABLE I: (a) Mean absolute error and (b) off-by-one accuracy of repetition counting, and (c) intersection over union and (d) mean average error in frames of repetition segmentation on the KIMORE dataset [6] through five-fold cross-validation. Density Map: many-to-many model with density map output, General: one single model trained and evaluated on all the samples in the dataset, Pre-trained TransRAC: pre-trained TransRAC model on the RepCount video dataset [16], Modified TransRAC: a modified version of the TransRAC model [16] obtained by removing the autoencoder feature extractor in TransRAC and using the body joints as the features, Exercise Specific: exercise-specific models trained and evaluated on the samples of specific exercises in the dataset, Density Map (LSTM Only): many-to-many model without CNN with density map output, Binary sequence: many-to-many model with binary sequence output, Counts: many-to-one model with repetition counts output.
timestamp; however, the many-to-one model was provided with only one label for the entire network.
The results based on body joints captured by the Kinect camera (#8) were far better than those extracted from RGB videos (#1). Due to the low quality of videos in the KIMORE dataset, the high distance of the subjects to the camera, imperfect lighting conditions, and blurred faces in the dataset to preserve privacy, OpenPose [23] had difficulty extracting body joints from the dataset, resulting in low-quality body joints compared to Kinect body joints. Therefore, the OpenPose [23] input to the models was noisy, resulting in suboptimal performance.
Comparing the results of the many-to-many model with density map output (#8) as described in Section III-A2 (an LSTM trailed by a 1D CNN and a linear layer) with the results of the many-to-many model with density map with only an LSTM trailed by a linear layer and without 1D CNN (#9) shows the importance of including 1D CNN in the proposed model.
Across all settings, the proposed method outperformed the modified TransRAC method (#3-4), Section IV-C. The reason for this can be attributed to two factors: the extreme complexity of the modified TransRAC model in comparison to the simple yet effective sequential models used in this paper, as well as the limited number of training samples in the KIMORE dataset. It is common for healthcare and rehabilitation datasets to have a small number of training samples. In this regard, it illustrates the necessity of taking the amount of data into account when selecting and designing the architecture of deep neural networks.
The RGB videos of exercises included in the KIMORE dataset were input into the original TransRAC model which was pre-trained on the RepCount video dataset [16] (#2). The results are much inferior to those predicted by the proposed architectures. The RepCount dataset contains videos of healthy subjects performing repetitive actions perfectly, while the KIMORE dataset contains rehabilitation exercises conducted by patients with imperfect repetitive actions.
Comparing the results of different features provided to the many-to-many model with density map output, body joints (#8), features extracted from body joints (#6), and their concatenation (#7) indicates that for the repetition counting and segmentation tasks, there is no need to extract handcrafted features from body joints [29] and the body joints as the input to the models result in the best performance.
According to the results of the general model trained and evaluated on all the samples in the dataset (#8), as compared to the exercise-specific models trained and evaluated on specific exercises (#5), exercise-specific models are not needed for repetition counting and segmentation tasks, and general models perform marginally better than exercise-specific models.
Comparing the performance of the models for specific exercises reveals that almost all methods had difficulty segmenting and counting repetitions for Ex. 4 in the KIMORE dataset. This is due to the fact that Ex. 4 involved pelvis rotations, i.e., movements along the z axis, which is relatively difficult to capture by depth cameras and analyze by models. As the movements in Ex. 4 differed from those in other exercises, which involved movements of the hands and feet along the x and y axes, it was difficult to generalize models to the samples in Ex. 4.
There is a similar overall trend in the performance of different methods in different settings across all four evaluation metrics, MAE, OBO, IOU, and MAE-F. Overall, the general many-to-many model with density map output trained with Kinect body joints achieved superior results with the lowest total MAE (0.5313) and the highest total OBO (0.9233) for repetition counting, as well as the highest total IOU (0.6886) and the lowest total MAE-F (28) for repetition segmentation.
#### Iv-C2 Ui-PRMD
Table II (a) shows the results of repetition segmentation and counting on the UI-PRMD dataset [7] using the general many-to-many model with density map output and the general many-to-one model with repetition counts output both trained and evaluated on Kinect body joints through five-fold cross-validation. Table II (a) illustrates that the repetition counting results of the many-to-many model with density map output are very similar to those of the many-to-one model with repetition counts output having zero errors in counting and outperforming the previous repetition counting method [19]. It is important to note that the number of repetitions across samples in the UI-PRMD dataset has a very low standard deviation. Using the proposed method, repetition segmentation is successfully performed with a total IOU of 0.82 and a total MAE-F of 12.
#### Iv-C3 IntelliRehabDS
Table II (b) shows the results of repetition segmentation and counting on the IntelliRehabDS dataset [8] using the general many-to-many model with density map output and the general many-to-one model with repetition counts output both trained and evaluated on Kinect body joints through five-fold cross-validation. In Table II (b), the results of the many-to-many model with density map output are superior to those of the many-to-one model with repetition counts output. Using the proposed method, repetition segmentation is successfully performed with a total IOU of 0.68 and a total MAE-F of 39.
Figure 1 illustrates the ground truth values and predictions for repetition counts of the proposed many-to-many model with density map output on (a) healthy subjects and (b) patients in all the exercises in the KIMORE dataset using five-fold cross-validation. The predictions closely follow ground-truth values in both populations, however, there are some deviations in patients as a result of irregularities in the duration and completion of exercise repetitions in patients. Figure 1 (b) shows large deviations in predictions for samples with a large number of repetitions, such as samples with 12 and 13 repetitions. This can be attributed to the imbalanced distribution of samples in the dataset, i.e., having a small number of samples with large numbers of repetitions.
## V Conclusion and Future Works
The purpose of this study was to develop a learning-based method for segmenting and counting repetitions in rehabil
TABLE II: Mean Absolute Error (MAE) and Off-By-One count accuracy (OBO) of repetition counting and Intersection-Over-Union (IOU) and Mean Average Error in Frames (MAE-F) of repetition segmentation on (a) the UI-PRMD [7] and (b) the IntelliRehabDS [8] dataset through five-fold cross-validation. Density Map – General: many-to-many model with density map output trained and evaluated on all the samples in the dataset, Counts – General: many-to-one model with repetition counts output trained and evaluated on all the samples in the dataset.
Fig. 1: The ground truth values and predictions for repetition counts of the proposed many-to-many model with density map output on (a) healthy subjects and (b) patients in all the exercises in the KIMORE dataset [6].
itation exercises. On three publicly available rehabilitation exercise datasets, the proposed method successfully segmented and counted repetitions and outperformed previous works, including a video-based method. This study represents the first work on repetition segmentation using skeletal body joints and data collected from patients, which is more challenging due to irregularities in exercise duration and completion. While much lighter than video-based models [24, 25, 26, 27, 28, 16], our sequential models require an initial stage of body-joint extraction from videos, or the direct capture of body joints by depth cameras. Our body-joint-based method has the advantage of being more interpretable and of lending itself to a framework that provides patients with actionable feedback on their exercises. The successful segmentation and counting of rehabilitation exercises using the proposed method is the first step in the development of an automated virtual rehabilitation platform capable of assessing exercise quality and providing feedback to patients and reports to clinicians. Future research may involve incorporating the attention mechanism [30, 36] into the current sequential models and extending them to multi-task learning, performing exercise segmentation and assessment jointly.
|
2306.07161 | Minimal Terracini loci in projective spaces | We characterize the number of points for which there exist non-empty
Terracini sets of points in $\mathbb{P}^n$. Then we study minimally Terracini
finite sets of points in $\mathbb{P}^n$ and we obtain a complete description in
the case of $\mathbb{P}^3$, when the number of points is less than twice the
degree of the linear system. | Edoardo Ballico, Maria Chiara Brambilla | 2023-06-12T14:48:42Z | http://arxiv.org/abs/2306.07161v2 | # On minimally Terracini finite sets of points in projective spaces
###### Abstract.
We characterize the number of points for which there exist non-empty Terracini sets of points in \(\mathbb{P}^{n}\). Then we study _minimally Terracini_ finite sets of points in \(\mathbb{P}^{n}\) and we obtain a complete description in the case of \(\mathbb{P}^{3}\), when the number of points is less than twice the degree of the linear system.
Key words and phrases:interpolation problems, minimal Terracini locus, Terracini locus, zero-dimensional schemes 2010 Mathematics Subject Classification: Primary: 14C20; Secondary:14N07 Partially supported by GNSAGA of INdAM
## 1. Introduction
The notion of _Terracini locus_ in projective spaces has been recently introduced in [3] and then extended to other projective varieties and investigated in [2, 4, 5, 10]. This property encodes the fact that a set of double points imposes independent conditions to a linear system, hence it gives information for interpolation problems over double points in special position. Moreover it can be interpreted in terms of special loci contained in higher secant varieties to projective varieties, for general reference see e.g. [11, 6].
The interest in this subject is also motivated by the connection with the theory of tensors, see e.g. [12]. In particular, since symmetric tensors can be identified with homogeneous polynomials, the development of geometric methods in projective spaces can give contribution to the study of the rank and decompositions of symmetric tensors.
A finite set of points \(S\) of \(\mathbb{P}^{n}\) is said to be _Terracini_ with respect to \(\mathcal{O}_{\mathbb{P}^{n}}(d)\) if
\[h^{0}(\mathcal{I}_{2S}(d))>0,\ h^{1}(\mathcal{I}_{2S}(d))>0,\ \text{and}\ \langle S \rangle=\mathbb{P}^{n}.\]
We denote by \(\mathbb{T}(n,d;x)\) all the sets of points \(S\subset\mathbb{P}^{n}\) of cardinality \(x\) which are Terracini with respect to \(\mathcal{O}_{\mathbb{P}^{n}}(d)\).
The first result of this paper characterizes the triples \(n,d,x\) such that the Terracini locus is non-empty, as follows:
**Theorem 1.1**.: _Fix positive integers \(n\), \(d\) and \(x\)._
_(i) If either \(n=1\) or \(d=2\), then \(\mathbb{T}(n,d;x)=\emptyset\) for any \(x\)._
_(ii) \(\mathbb{T}(2,3;x)=\emptyset\) for any \(x\)._
_(iii) If \(n\geq 2\), \(d\geq 3\) and \((n,d)\neq(2,3)\), then \(\mathbb{T}(n,d;x)\neq\emptyset\) if and only if \(x\geq n+\lceil d/2\rceil\)._
In order to make a finer description it is very useful to study _minimally Terracini loci_. The _minimally Terracini_ property has been introduced in [2, Definition 2.2]
for any projective variety. A Terracini set of points \(S\subset\mathbb{P}^{n}\) is said to be minimally Terracini with respect to \(\mathcal{O}_{\mathbb{P}^{n}}(d)\) if
\[h^{1}(\mathcal{I}_{2A}(d))=0\text{ for all }A\subsetneq S.\]
We denote by \(\mathbb{T}(n,d;x)^{\prime}\) the set of all \(S\in\mathbb{T}(n,d;x)\) which are minimally Terracini with respect to \(\mathcal{O}_{\mathbb{P}^{n}}(d)\).
In Theorem 3.1 we see that if \(S\in S(\mathbb{P}^{n},x)\) is minimally Terracini for some \(\mathcal{O}_{\mathbb{P}^{n}}(d)\), then such \(d\) is unique and it is the maximal integer \(t\) such that \(h^{1}(\mathcal{I}_{2S}(t))>0\). Moreover, the dimension \(n\) for which a set can be minimally Terracini is also unique, by the concision result proved in Proposition 3.3.
Note that, for fixed \(n,d\), we know that \(\mathbb{T}(n,d;x)\) is not empty for infinitely many \(x\), by Theorem 1.1. On the other hand, if we consider the subsets of minimally Terracini sets \(\mathbb{T}(n,d;x)^{\prime}\subseteq\mathbb{T}(n,d;x)\) we have that they are not empty only for finitely many \(x\), as proved in Proposition 3.5. In other words the minimality property is a strong condition which allows us to prove interesting bounds and characterizations of the triples \(n,d,x\) for which \(\mathbb{T}(n,d;x)^{\prime}\) is or is not empty.
In Section 4 we investigate the sets of points on rational normal curves and on their degenerations (reducible rational normal curves). In particular Theorem 4.2 and Proposition 4.7 completely describe the minimal Terracini sets related to such curves. Since rational normal curves give rise to elements of \(\mathbb{T}(n,d;1+\lceil nd/2\rceil)^{\prime}\), we may formulate the following conjecture:
**Conjecture 1.2**.: _For any \(x\leq\lfloor\frac{nd+1}{2}\rfloor\), we have \(\mathbb{T}(n,d;x)^{\prime}=\emptyset\)._
Here we prove the conjecture for \(\mathbb{P}^{2}\), Proposition 5.2, and for \(\mathbb{P}^{3}\), Theorem 1.3.
After the easy description of the situation in the plane (see Section 5), we focus on the case of \(\mathbb{P}^{3}\), and we obtain the following three results, which are the main results of this paper.
**Theorem 1.3**.: _Fix integers \(d\geq 4\) and \(x\) such that \(2x\leq 3d+1.\) Then \(\mathbb{T}(3,d;x)^{\prime}=\emptyset\)._
**Theorem 1.4**.: _Fix integers \(d\geq 7\) and \(x=1+\lceil 3d/2\rceil\). Then \(S\in\mathbb{T}(3,d;x)^{\prime}\) if and only if \(S\) is contained in a rational normal curve._
**Theorem 1.5**.: _Fix integers \(d\geq 17\) and \(x\) such that \(1+\lceil 3d/2\rceil<x<2d\). Then \(\mathbb{T}(3,d;x)^{\prime}=\emptyset\)._
The bound in the last theorem is sharp, as shown in Example 6.2, where the points lie on an elliptic curve. Hence we notice that there are gaps: for \(d\) large enough, the first values of \(x\) for which \(\mathbb{T}(3,d;x)^{\prime}\) is not empty are \(1+\lceil 3d/2\rceil\) and \(2d\), corresponding respectively to rational normal curves and to elliptic curves. The situation is completely analogous in \(\mathbb{P}^{2}\), see Section 5.
The paper is organized as follows: in Section 2 we present the preliminary results and in particular we introduce the notion of critical scheme, which is a crucial tool in our proofs. Section 3 contains the first properties of Terracini and minimal Terracini sets and the proof of Theorem 1.1. In Section 4 we characterize the minimally Terracini sets of points on rational normal curves and their degenerations. Section 5 is devoted to the plane, while Section 6 to the case of \(\mathbb{P}^{3}\) and to the proofs of Theorems 1.3, 1.4 and 1.5.
## 2. Preliminaries and notation
For any \(x\in\mathbb{N}\) let \(S(\mathbb{P}^{n},x)\) denote the set of all the union of \(x\) points of a projective space \(\mathbb{P}^{n}\). For any set \(E\subset\mathbb{P}^{n}\), let \(\langle E\rangle\) denote the linear span of \(E\) in \(\mathbb{P}^{n}\).
We denote by \(\mathbb{T}_{1}(n,d;x)\) the set of all \(S\in S(\mathbb{P}^{n},x)\) such that \(h^{0}(\mathcal{I}_{2S}(d))>0\) and \(h^{1}(\mathcal{I}_{2S}(d))>0\).
**Definition 2.1**.: A set \(S\) of points of \(\mathbb{P}^{n}\) is said to be _Terracini with respect to \(\mathcal{O}_{\mathbb{P}^{n}}(d)\)_ if
* \(h^{0}(\mathcal{I}_{2S}(d))>0\) and \(h^{1}(\mathcal{I}_{2S}(d))>0\),
* \(\langle S\rangle=\mathbb{P}^{n}\).
We denote by \(\mathbb{T}(n,d;x)\) the set of all \(S\in S(\mathbb{P}^{n},x)\) which are Terracini with respect to \(\mathcal{O}_{\mathbb{P}^{n}}(d)\).
Obviously \(\mathbb{T}(n,d;x)=\emptyset\) for all \(x\leq n\), since every \(S\in\mathbb{T}(n,d;x)\) spans \(\mathbb{P}^{n}\).
We recall from [2, Definition 2.2] the following important definition; it applies to any projective variety, but we state it here only in the case of \(\mathbb{P}^{n}\).
**Definition 2.2**.: A set \(S\) is said to be _minimally Terracini with respect to \(\mathcal{O}_{\mathbb{P}^{n}}(d)\)_ if it is Terracini and moreover
* \(h^{1}(\mathcal{I}_{2A}(d))=0\) for all \(A\subsetneq S\).
We denote by \(\mathbb{T}(n,d;x)^{\prime}\) the set of all \(S\in\mathbb{T}(n,d;x)\) which are minimally Terracini with respect to \(\mathcal{O}_{\mathbb{P}^{n}}(d)\).
In the next remark we recall the exceptional cases of the Alexander-Hirschowitz theorem, which are all the cases when any general set of points is minimally Terracini.
**Remark 2.3**.: Assume \((n,d;x)\in\{(2,4;5),(3,4;9),(4,4;14),(4,3;7)\}\). Fix a general \(S\in S(\mathbb{P}^{n},x)\). Since \(x\geq n+1\), we have \(\langle S\rangle=\mathbb{P}^{n}\) and hence \(S\in\mathbb{T}(n,d;x)\) by the Alexander-Hirschowitz theorem [1]. Since \(\sigma_{x-1}(X_{n,d})\) is not defective, then \(S\in\mathbb{T}(n,d;x)^{\prime}\).
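The cohomological conditions in these exceptional cases can be checked by elementary linear algebra: in an affine chart, \(h^{0}(\mathcal{I}_{2S}(d))\) and \(h^{1}(\mathcal{I}_{2S}(d))\) are determined by the rank of the matrix whose rows are the values at the points of \(S\) of the polynomials of degree \(\leq d\) and of their first partial derivatives. The following small script (our illustration, not part of the paper; random rational points are used as a stand-in for general points, and it assumes the chosen points avoid degenerate positions) verifies the case \((n,d;x)=(2,4;5)\).

```python
import random
from itertools import combinations_with_replacement
from sympy import symbols, diff, Matrix, Rational, binomial, Mul

def double_point_cohomology(n, d, points):
    """Return (h0, h1) of I_{2S}(d) for S = points of P^n lying in the affine chart x0 = 1."""
    xs = symbols(f"x1:{n + 1}")
    monomials = [Mul(*c) for k in range(d + 1)
                 for c in combinations_with_replacement(xs, k)]   # basis of degree <= d in the chart
    rows = []
    for p in points:
        subs = dict(zip(xs, p))
        rows.append([m.subs(subs) for m in monomials])                     # vanishing at p
        rows += [[diff(m, x).subs(subs) for m in monomials] for x in xs]   # vanishing of first partials
    rank = Matrix(rows).rank()
    N = binomial(n + d, n)                  # h^0(O_{P^n}(d))
    return int(N - rank), int((n + 1) * len(points) - rank)

random.seed(1)
pts = [[Rational(random.randint(-9, 9), random.randint(1, 9)) for _ in range(2)]
       for _ in range(5)]
print(double_point_cohomology(2, 4, pts))   # expect (1, 1) for points in general position
```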
We collect here some preliminary results we will use in the sequel.
**Remark 2.4**.: Fix a zero-dimensional scheme \(Z\subset\mathbb{P}^{n}\). Since \(Z\) is zero-dimensional, \(h^{i}(\mathcal{O}_{Z}(t))=0\) for all \(t\in\mathbb{N}\) and all \(i\geq 1\). Obviously \(h^{j}(\mathcal{O}_{\mathbb{P}^{n}}(t))=0\) for all \(j>0\). Thus the exact sequence
\[0\xrightarrow{}\mathcal{I}_{Z}(t)\xrightarrow{}\mathcal{O}_{\mathbb{P}^{n}}(t )\xrightarrow{}\mathcal{O}_{Z}(t)\xrightarrow{}0\]
gives \(h^{i}(\mathcal{I}_{Z}(t))=0\) for all \(i>1\).
We will often use the following obvious observation.
**Remark 2.5**.: Let \(W\subset Z\subset\mathbb{P}^{n}\) be zero-dimensional schemes. Since \(\dim Z=0\), \(h^{i}(\mathcal{I}_{W,Z}(d))=0\) for all \(i\geq 1\). Thus
\[h^{0}(\mathcal{I}_{Z}(d))\leq h^{0}(\mathcal{I}_{W}(d))\ \ \text{and}\ \ h^{1}( \mathcal{I}_{W}(d))\leq h^{1}(\mathcal{I}_{Z}(d)).\]
Furthermore, it is easy to prove that
\[h^{0}(\mathcal{I}_{Z}(d))\leq h^{0}(\mathcal{I}_{Z}(d+1))\ \ \text{and}\ \ h^{1}( \mathcal{I}_{Z}(d+1))\leq h^{1}(\mathcal{I}_{Z}(d)).\]
**Remark 2.6**.: Fix a hyperplane \(H\subset\mathbb{P}^{n}\) and any finite set \(S\subset H\). The residual exact sequence of \(H\)
\[0\xrightarrow{}\mathcal{I}_{S}(d-1)\xrightarrow{}\mathcal{I}_{2S}(d) \xrightarrow{}\mathcal{I}_{2S\cap H,H}(d)\xrightarrow{}0 \tag{1}\]
and Remark 2.4 give
\[h^{1}(H,\mathcal{I}_{2S\cap H,H}(d))\leq h^{1}(\mathcal{I}_{2S}(d))\leq h^{1}( H,\mathcal{I}_{2S\cap H,H}(d))+h^{1}(\mathcal{I}_{S}(d-1)). \tag{2}\]
We recall from [7] the following useful observation.
**Remark 2.7**.: Let \(Z\) be a zero dimensional scheme in \(\mathbb{P}^{n}\), such that \(h^{1}(\mathcal{I}_{Z}(d))>0\). By [7, Lemma 34], we know that if \(\deg(Z)\leq 2d+1\), then there is a line \(L\) such that \(\deg(Z\cap L)\geq d+2\). In particular we have that \(\deg(Z)\geq d+2\).
We recall the following lemma which we learned from K. Chandler ([8, 9]).
**Lemma 2.8**.: _Let \(W\) be an integral projective variety, \(\mathcal{L}\) a line bundle on \(W\) with \(h^{1}(\mathcal{L})=0\) and \(S\subset W_{\mathrm{reg}}\) a finite collection of points. Then \(h^{1}(\mathcal{I}_{(2S,W)}\otimes\mathcal{L})>0\) if and only if there is a scheme \(Z\subset 2S\) such that any connected component of \(Z\) has degree \(\leq 2\) and such that \(h^{1}(\mathcal{I}_{Z}\otimes\mathcal{L})>0\)._
The schemes \(Z\) appearing in Lemma 2.8 are curvilinear subscheme of a collection of double points. More precisely in the following definition we introduce the notion of _critical schemes_, which are the crucial tools in our proofs.
**Definition 2.9**.: Given \(S\) a collection of \(x\) points in \(\mathbb{P}^{n}\), we say that a zero-dimensional scheme \(Z\) is _\(d\)-critical for \(S\)_ if:
* \(Z\subseteq 2S\) and any connected component of \(Z\) has degree \(\leq 2\),
* \(h^{1}(\mathcal{I}_{Z}(d))>0\),
* \(h^{1}(\mathcal{I}_{Z^{\prime}}(d))=0\) for any \(Z^{\prime}\subsetneq Z\).
The next lemmas describe the properties of a critical scheme.
**Lemma 2.10**.: _Let \(Z\) be a zero-dimensional scheme such that \(h^{1}(\mathcal{I}_{Z}(d))>0\) and \(h^{1}(\mathcal{I}_{Z^{\prime}}(d))=0\) for any \(Z^{\prime}\subsetneq Z\). Then \(h^{1}(\mathcal{I}_{Z}(d))=1\)._
Proof.: Assume \(h^{1}(\mathcal{I}_{Z}(d))\geq 2\) and take a subscheme \(Z^{\prime}\subset Z\) such that \(\deg(Z^{\prime})=\deg(Z)-1\). We have \(h^{1}(\mathcal{I}_{Z^{\prime}}(d))\geq h^{1}(\mathcal{I}_{Z}(d))-\deg(Z)+ \deg(Z^{\prime})>0\). Thus \(Z\) is not critical, a contradiction.
**Lemma 2.11**.: _Fix \(S\in\mathbb{T}(n,d;x)^{\prime}\) and take \(Z\) critical for \(S\). Then \(Z_{\mathrm{red}}=S\)._
Proof.: Assume \(S^{\prime}:=Z_{\mathrm{red}}\neq S\). Lemma 2.8 gives \(h^{1}(\mathcal{I}_{2S^{\prime}}(d))>0\). Thus \(S\notin\mathbb{T}(n,d;x)^{\prime}\), a contradiction.
**Lemma 2.12**.: _Fix integers \(n\geq 2\), \(d>t\geq 1\) and \(x>1\). Take \(S\in\mathbb{T}(n,d;x)^{\prime}\) and a critical scheme \(Z\) for \(S\). Take \(D\in|\mathcal{O}_{\mathbb{P}^{n}}(t)|\) with \(Z\nsubseteq D\). Then \(h^{1}(\mathcal{I}_{\mathrm{Res}_{D}(Z)}(d-t))>0\)._
Proof.: Since \(Z\nsubseteq D\) and \(Z\) is critical, Lemma 2.8 gives \(h^{1}(\mathcal{I}_{Z\cap D}(d))=0\). Thus the residual exact sequence of \(D\) gives \(h^{1}(\mathcal{I}_{\mathrm{Res}_{D}(Z)}(d-t))>0\)
## 3. First results on minimally Terracini sets of points
We prove now the fact that if \(S\in S(\mathbb{P}^{n},x)\) is minimally Terracini for some \(\mathcal{O}_{\mathbb{P}^{n}}(d)\), then such \(d\) is unique and it is the maximal integer \(t\) such that \(h^{1}(I_{2S}(t))>0\).
**Theorem 3.1**.: _Fix \(n\geq 2\) and \(S\in\mathbb{T}(n,d;x)^{\prime}\). Then_
_(i) \(h^{1}(\mathcal{I}_{2S}(d+1))=0\),_
_(ii) \(S\notin\mathbb{T}(n,t;x)\) for any \(t\geq d+1\),_
_(iii) \(S\notin\mathbb{T}(n,t;x)^{\prime}\) for any \(t\leq d-1\)._
Proof.: We prove now (i). Assume \(h^{1}(\mathcal{I}_{2S}(d+1))>0\). By Lemma 2.8, there is a \((d+1)\)-critical scheme for \(S\), that is a zero-dimensional scheme \(S\subset Z\subset 2S\) with every connected component of degree \(\leq 2\) and satisfying \(h^{1}(\mathcal{I}_{Z}(d+1))=1\) (by Lemma 2.11).
Fix \(p\in Z_{\mathrm{red}}\) and call \(Z(p)\) the connected component of \(Z\) supported at \(p\). Set \(L:=\langle Z(p)\rangle\). Hence \(L\) is either a line, or a point \(L=Z(p)=\{p\}\).
Let \(H\subset\mathbb{P}^{n}\) be a general hyperplane containing \(L\). Since \(Z\) is curvilinear, by generality of \(H\) we can assume that the scheme \(Z\cap H\) is equal to the scheme \(Z\cap L\). Let us denote \(\zeta=Z\cap H=Z\cap L\).
Now assume that \(h^{1}(\mathcal{I}_{\zeta,H}(d+1))>0\). Then \(L\) is a line. Since \(\zeta\subset L\), we have the exact sequence
\[0\xrightarrow{}\mathcal{I}_{L,H}(d+1)\xrightarrow{}\mathcal{I}_{\zeta,H}(d+1)\xrightarrow{}\mathcal{I}_{\zeta,L}(d+1)\xrightarrow{}0,\]
and, since \(h^{1}(\mathcal{I}_{L,H}(d+1))=0\), we get \(h^{1}(\mathcal{I}_{\zeta,L}(d+1))>0\) (which implies \(\deg(Z\cap L)\geq d+3\)).
In particular \(\deg(2(S\cap L)\cap L)\geq\deg(Z\cap L)\geq d+2\), hence \(h^{1}(\mathcal{I}_{2(S\cap L)}(d))>0\). Since \(\langle S\rangle=\mathbb{P}^{n}\) and \(n\geq 2\), we have \(S\cap L\subsetneq S\), and therefore \(S\notin\mathbb{T}(n,d;x)^{\prime}\), a contradiction.
Now assume \(h^{1}(\mathcal{I}_{Z\cap H,H}(d+1))=0\). The residual exact sequence of \(H\) gives \(h^{1}(\mathcal{I}_{\mathrm{Res}_{H}(Z)}(d))>0\). Since \(\mathrm{Res}_{H}(Z)_{\mathrm{red}}\subseteq S\setminus\{p\}\), by Lemma 2.8 we have \(h^{1}(\mathcal{I}_{2(S\setminus\{p\})}(d))>0\). This contradicts the minimality of \(S\), i.e. the assumption \(S\in\mathbb{T}(n,d;x)^{\prime}\).
To prove (ii) it is enough to use (i) and Remark 2.5.
Claim (iii) follows easily by contradiction, using again (i) and Remark 2.5.
**Remark 3.2**.: Take any \(S\in\mathbb{T}(n,d;x)^{\prime}\). Since \(h^{1}(\mathcal{I}_{2S^{\prime}}(d))=0\) for all \(S^{\prime}\subset S\) such that \(\#S^{\prime}=x-1\), then \(h^{1}(\mathcal{I}_{2S}(d))\leq n+1\). This property holds for any projective variety of dimension \(n\).
The following result is a kind of _concision_ or _autarky_ for Terracini loci of Veronese varieties.
**Proposition 3.3**.: _Take a finite set of points \(S\subset\mathbb{P}^{n}\) such that \(M:=\langle S\rangle\subseteq\mathbb{P}^{n}\). Then_
\[h^{1}(M,\mathcal{I}_{2S\cap M,M}(d))>0\ \ \text{if and only if}\ \ h^{1}(\mathcal{I}_{2S}(d))>0.\]
Proof.: By Remark 2.5, we have \(h^{1}(\mathcal{I}_{2S}(d))\geq h^{1}(\mathcal{I}_{2S\cap M}(d))\). Since \(M\) is arithmetically Cohen-Macaulay, we get \(h^{1}(M,\mathcal{I}_{2S\cap M,M}(d))=h^{1}(\mathcal{I}_{2S\cap M}(d))\). Hence the _only if_ part is obvious.
Now assume \(h^{1}(\mathcal{I}_{2S}(d))>0\). Take a hyperplane \(H\subset\mathbb{P}^{n}\) such that \(H\supseteq M\) and use induction on \(n-\dim M\). It is sufficient to prove that \(h^{1}(H,\mathcal{I}_{2S\cap H,H}(d))>0\).
Take a critical scheme \(Z\) for \(S\). In order to conclude it is enough to find a zero-dimensional scheme \(W\subset H\) such that \(h^{1}(H,\mathcal{I}_{W,H}(d))>0\), \(W_{\mathrm{red}}=Z_{\mathrm{red}}\) and for each \(p\in Z_{\mathrm{red}}\) the connected components \(Z_{p}\) and \(W_{p}\) of \(Z\) and \(W\) containing \(p\) have the same degree. Fix a general \(o\in\mathbb{P}^{n}\setminus H\). Let \(h_{o}:\mathbb{P}^{n}\setminus\{o\}\to H\) denote the linear projection from \(o\). Since \(o\) is general, \(o\) is not contained in any of the finitely many lines spanned by the degree \(2\) connected components of \(Z\). Since \(Z_{\mathrm{red}}\subset H\), \(o\) is not contained in a line spanned by \(2\) points of \(Z_{\mathrm{red}}\). Thus \(h_{o|Z}\) is an isomorphism. Set \(W:=h_{o}(Z)\). By the semicontinuity theorem for cohomology, to prove that \(h^{1}(H,\mathcal{I}_{W,H}(d))>0\) it is sufficient to prove that \(W\) is a flat limit of a flat family \(\{W_{c}\}_{c\in\mathbb{K}\setminus\{0\}}\) of schemes projectively equivalent to \(Z\). Fix a system \(x_{0},\ldots,x_{n}\) of homogeneous coordinates of \(\mathbb{P}^{n}\) such that \(H=\{x_{0}=0\}\) and \(o=[1:0:\cdots:0]\). For any \(c\in\mathbb{K}\setminus\{0\}\) let \(h_{c}\) denote the automorphism of \(\mathbb{P}^{n}\) defined by the formula \(h_{c}([x_{0}:x_{1}:\cdots:x_{n}])=[cx_{0}:x_{1}:\cdots:x_{n}]\). Note that \(h_{c|H}:H\to H\) is the identity map. Set \(W_{c}:=h_{c}(Z)\). Since \(h_{c}\) converges to the projection \(h_{o}\) as \(c\to 0\), the flat limit of the family \(\{W_{c}\}_{c\neq 0}\) at \(c=0\) is \(W\), which concludes the proof.
We start now the classification of Terracini and minimal Terracini sets of points in \(\mathbb{P}^{n}\). Obviously \(\mathbb{T}(n,d;x)=\emptyset\) for all \(x\leq n\), since every \(S\in\mathbb{T}(n,d;x)\) spans \(\mathbb{P}^{n}\).
**Lemma 3.4**.: \(\mathbb{T}(1,d;x)=\mathbb{T}_{1}(1,d;x)=\emptyset\) _for all \(d>0\) and \(x>0\)._
Proof.: Assume the existence of \(S\in\mathbb{T}_{1}(1,d;x)\). Then \(h^{1}(\mathcal{I}_{2S}(d))>0\), which forces \(2x\geq d+2\), and \(h^{0}(\mathcal{I}_{2S}(d))>0\), which forces \(2x\leq d+1\), a contradiction.
The following lemma shows a key difference between \(\mathbb{T}(n,d;x)\) and its subset \(\mathbb{T}(n,d;x)^{\prime}\). In particular for fixed \(n\) and \(d\), we have \(\mathbb{T}(n,d;x)^{\prime}\neq\emptyset\) for only finitely many integers \(x\).
**Proposition 3.5**.: _Fix integers \(n\geq 2\) and \(d\geq 3\). Set \(\rho:=\lceil({n+d\choose n}+1)/(n+1)\rceil\). Then \(\mathbb{T}(n,d;x)^{\prime}=\emptyset\) for all \(x>\rho\). Moreover, if \({n+d\choose n}/(n+1)\in\mathbb{Z}\), then \(\mathbb{T}(n,d;1+{n+d\choose n}/(n+1))^{\prime}=\emptyset\)._
Proof.: Assume \(S\in\mathbb{T}(n,d;x)\) and take \(S^{\prime}\subset S\) with \(\#(S^{\prime})=x-1\geq\rho\). Then \((n+1)(x-1)>{n+d\choose n}\), hence \(h^{1}(\mathcal{I}_{2S^{\prime}}(d))>0\). Thus \(S\) is not minimally Terracini.
Now assume \({n+d\choose n}\equiv 0\pmod{n+1}\). In this case \(\mathbb{T}(n,d;1+{n+d\choose n}/(n+1))^{\prime}=\emptyset\) for the following reason. Take \(S\) with \(\#S=1+{n+d\choose n}/(n+1)\) and such that \(h^{0}(\mathcal{I}_{2S}(d))>0\) and \(h^{1}(\mathcal{I}_{2S}(d))>0\). Let \(S^{\prime}\subset S\) be such that \(\#S^{\prime}={n+d\choose n}/(n+1)\). Since \(\deg(2S^{\prime})=(n+1)\#S^{\prime}={n+d\choose n}\), we have \(h^{1}(\mathcal{I}_{2S^{\prime}}(d))=h^{0}(\mathcal{I}_{2S^{\prime}}(d))\geq h^{0}(\mathcal{I}_{2S}(d))>0\), hence \(S\notin\mathbb{T}(n,d;1+{n+d\choose n}/(n+1))^{\prime}\).
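As a quick numerical illustration of the bound (an arithmetic check only, not used in the sequel), take \(n=2\) and \(d=4\): then
\[{6\choose 2}=15,\qquad \rho=\left\lceil\frac{15+1}{3}\right\rceil=6,\qquad \frac{15}{3}=5\in\mathbb{Z},\]
so Proposition 3.5 gives \(\mathbb{T}(2,4;x)^{\prime}=\emptyset\) for all \(x\geq 6\).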
**Lemma 3.6**.: \(\mathbb{T}(n,2;x)=\emptyset\) _for all \(x>0\) and all \(n>0\)._
Proof.: We may assume \(x\geq n+1\). By the Alexander-Hirschowitz theorem any quadratic form is the sum of at most \(n+1\) squares of linear forms. Hence the Terracini lemma gives \(h^{0}(\mathcal{I}_{2S}(2))=0\) for any \(S\in S(\mathbb{P}^{n},x)\) spanning \(\mathbb{P}^{n}\).
The following result shows that many elements of \(\mathbb{T}_{1}(n,d;x)\setminus\mathbb{T}(n,d;x)\) are easily produced and not interesting.
**Lemma 3.7**.: _Fix \(n\geq 2\), \(d\geq 2\) and \(x\geq\lceil d/2\rceil+1\). Let \(S\) be a collection of \(x\) points on a line \(L\subset\mathbb{P}^{n}\). Then \(S\in\mathbb{T}_{1}(n,d;x)\)._
Proof.: We need to prove that \(h^{1}(\mathcal{I}_{2S}(d))>0\) and \(h^{0}(\mathcal{I}_{2S}(d))>0\). Fix a hyperplane \(H\) containing \(L\). Take \(G:=2H\) if \(d=2\) and call \(G\) the union of \(2H\) and a hypersurface of degree \(d-2\) if \(d>2\). Since \(S\subset\operatorname{Sing}(G)\), \(h^{0}(\mathcal{I}_{2S}(d))>0\). Since \(\deg(2S\cap L)=2x\geq d+2\), \(h^{1}(\mathcal{I}_{2S\cap L}(d))>0\). Thus \(h^{1}(\mathcal{I}_{2S}(d))>0\), by Remark 2.5.
**Lemma 3.8**.: _For any \(x>0\), we have \(\mathbb{T}(3,3;x)^{\prime}=\emptyset\)._
Proof.: We know that \(\mathbb{T}(3,3;x)^{\prime}=\emptyset\) for \(x\leq 4\) by Proposition 3.10.
We assume \(x\geq 5\) and let \(S\in\mathbb{T}(3,3;x)^{\prime}\). If four of the points of \(S\) lie in a plane \(H\), then these four double points cannot impose independent conditions on the \(10\)-dimensional space of cubics of \(H\), so \(h^{1}(\mathcal{I}_{2S^{\prime}}(3))>0\) for the proper subset \(S^{\prime}\) formed by the four points and \(S\) is not minimal. Therefore the points of \(S\) are in linear general position and thus \(h^{0}(\mathcal{I}_{2S}(3))=0\), because the sets of \(n+2\) points of \(\mathbb{P}^{n}\) in linear general position form an open orbit for the action of \(\operatorname{Aut}(\mathbb{P}^{n})\) and the secant varieties of \(\nu_{3}(\mathbb{P}^{3})\) have the expected dimension by the Alexander-Hirschowitz theorem.
### Proof of Theorem 1.1
We are now in position to give the proof of Theorem 1.1 which classifies Terracini loci. We start with the following lemma.
**Lemma 3.9**.: _Assume \(n\geq 1\) and \(d\geq 2\). Let \(Z\subset\mathbb{P}^{n}\) be a zero-dimensional scheme such that \(\deg(Z)\leq d+n+1\), \(h^{1}(\mathcal{I}_{Z}(d))>0\) and \(\langle Z\rangle=\mathbb{P}^{n}\). Then there is a line \(L\) such that \(\deg(L\cap Z)\geq d+2\) and \(\deg(Z)=d+n+1\)._
Proof.: The lemma is trivial for \(n=1\). Now assume \(n=2\). Since \(\deg(Z)\leq 2d+1\), there is a line \(L\) such that \(\deg(Z\cap L)\geq d+2\), by Remark 2.7. Clearly, since \(\langle Z\rangle=\mathbb{P}^{2}\), we get \(\deg(Z)=d+3\).
Now assume \(n>2\). Set \(z:=\deg(Z)\) and take a hyperplane \(H\subset\mathbb{P}^{n}\) such that \(w:=\deg(Z\cap H)\) is maximal. Since \(\langle Z\rangle=\mathbb{P}^{n}\), we have \(n\leq w<z\) and \(\langle Z\cap H\rangle=H\). If \(h^{1}(H,\mathcal{I}_{Z\cap H,H}(d))>0\), then we conclude by induction on \(n\).
Now assume \(h^{1}(H,\mathcal{I}_{Z\cap H,H}(d))=0\) and by the residual exact sequence of \(H\)
\[0\xrightarrow{}\mathcal{I}_{\operatorname{Res}_{H}(Z)}(d-1)\xrightarrow{} \mathcal{I}_{Z}(d)\xrightarrow{}\mathcal{I}_{Z\cap H,H}(d)\xrightarrow{}0, \tag{3}\]
we have \(h^{1}(\mathcal{I}_{\operatorname{Res}_{H}(Z)}(d-1))>0\). By Remark 2.7, since \(\deg(\operatorname{Res}_{H}(Z))\leq z-w\leq d+1\leq 2d+1\), we have a line \(L\) with \(\deg(L\cap\operatorname{Res}_{H}(Z))\geq d+2\), a contradiction.
The following proposition proves the emptiness of the Terracini locus for a small number of points.
**Proposition 3.10**.: _Assume \(n,d\geq 2\) and fix an integer \(x\) such that \(x\leq n+\lceil d/2\rceil-1\). Then \(\mathbb{T}(n,d;x)=\emptyset\)._
Proof.: The case \(d=2\) is true by Lemma 3.6, hence we can assume \(d\geq 3\).
Assume \(n=2\). If \(S\in\mathbb{T}(2,d;x)\) and \(Z\) is a critical scheme for \(S\), then we have \(\deg(Z)\leq 2x\leq d+3\). Hence by Lemma 3.9 there exists a line \(L\) such that \(\deg(Z\cap L)\geq d+2\) and hence \(x>\#(S\cap L)\geq\lceil d/2\rceil+1\), a contradiction.
Assume \(n\geq 3\) and we use induction on \(n\). Consider \(S\in\mathbb{T}(n,d;x)\) and let \(S^{\prime}\subseteq S\) be the minimal subset such that \(h^{1}(\mathcal{I}_{2S^{\prime}}(d))>0\). Set \(y:=\#S^{\prime}\), \(M:=\langle S^{\prime}\rangle\), and \(m:=\dim M\). Proposition 3.3 gives \(h^{1}(\mathcal{I}_{2S^{\prime}\cap M,M}(d))>0\).
If \(m<n\) then we get that either \(y\geq m+\lceil d/2\rceil\) by induction (and hence \(x\geq y+(n-m)\geq n+\lceil d/2\rceil\), a contradiction), or \(h^{0}(M,\mathcal{I}_{2S^{\prime}\cap M}(d))=0\) and hence \(y(m+1)\geq\binom{m+d}{m}\), which is possible only if \(M\) is a line. In this case we have again a contradiction because, since \(S\) spans \(\mathbb{P}^{n}\) and \(h^{1}(\mathcal{I}_{2S^{\prime}\cap M,M}(d))>0\), we have
\[x\geq y+(n-1)\geq\frac{d+2}{2}+n-1=n+\frac{d}{2}.\]
Thus we may assume \(m=n\). Let \(H\subset\mathbb{P}^{n}\) be any hyperplane spanned by \(S^{\prime}\cap H\). Let \(S^{\prime\prime}=S^{\prime}\cap H\). Thus \(n\leq\#(S^{\prime\prime})<y\). Since \(\operatorname{Res}_{H}(2S^{\prime\prime})=S^{\prime\prime}\), we have the exact sequence:
\[0\longrightarrow\mathcal{I}_{S^{\prime\prime}}(d-1)\longrightarrow\mathcal{I }_{2S^{\prime\prime}}(d)\longrightarrow\mathcal{I}_{2S^{\prime\prime}\cap H,H }(d)\to 0\]
The minimality of \(S^{\prime}\) and Proposition 3.3 give \(h^{1}(H,\mathcal{I}_{2S^{\prime\prime}\cap H,H}(d))=0\). Now, if \(h^{1}(\mathcal{I}_{S^{\prime\prime}}(d-1))>0\), then either \(\#(S^{\prime\prime})\geq n+d\), a contradiction, or \(\#(S^{\prime\prime})\leq n+d-1\). Then Lemma 3.9 applied in \(H\) gives \(\#(S^{\prime\prime})=n+d-1\) which is again impossible. Hence \(h^{1}(\mathcal{I}_{S^{\prime\prime}}(d-1))=0\). Thus from the previous exact sequence we have \(h^{1}(\mathcal{I}_{2S^{\prime\prime}}(d))=0\).
We consider now the exact sequence with respect to the quadric hypersurface \(2H\):
\[0\longrightarrow\mathcal{I}_{S^{\prime}\setminus S^{\prime\prime}}(d-2) \longrightarrow\mathcal{I}_{2S^{\prime}}(d)\longrightarrow\mathcal{I}_{2S^{ \prime\prime},2H}(d)\to 0\]
where \(\operatorname{Res}_{2H}(2S^{\prime})=S^{\prime}\setminus S^{\prime\prime}\).
Since the quadric hypersurface \(2H\) in \(\mathbb{P}^{n}\) is arithmetically Cohen-Macaulay, we get \(h^{1}(\mathcal{I}_{2S^{\prime\prime},2H}(d))=0\), which implies \(h^{1}(\mathcal{I}_{S^{\prime}\setminus S^{\prime\prime}}(d-2))>0\). But since \(\#(S^{\prime}\setminus S^{\prime\prime})\leq y-n\leq\lceil d/2\rceil-1\) we have a contradiction.
We now give the proof of the main result of this section.
Proof of Theorem 1.1.: Part (i) is true by Lemmas 3.4 and 3.6.
Now assume \(n=2\) and \(d=3\). A singular plane cubic \(C\) with at least \(3\) singular points is either the union of \(3\) lines, or a triple line, or the union of a double line and another line. Thus if \(\operatorname{Sing}(C)\) spans \(\mathbb{P}^{2}\), then \(\#\operatorname{Sing}(C)=3\) and \(\operatorname{Sing}(C)\) is projectively equivalent to \(3\) non-collinear points. Hence \(\mathbb{T}(2,3;x)=\emptyset\) for all \(x\geq 4\). Thus we have proved (ii) because clearly \(\mathbb{T}(2,3;3)=\emptyset\).
Assume now \(n\geq 2\), \(d\geq 3\) and \((n,d)\neq(2,3)\). By Proposition 3.10 we have that if \(x<n+\lceil d/2\rceil\), then \(\mathbb{T}(n,d;x)=\emptyset\). Hence it is enough to prove the other implication.
Consider first the case \(n=2\) and \(d\geq 4\). We have \(x\geq\lceil d/2\rceil+2\). Let \(L,M,N\) be three lines in general position and \(G:=(d-2)L\cup M\cup N\). Take as \(S\) the point \(M\cap N\) and \(x-1\) points of \(L\setminus(M\cup N)\). Since \(S\subset\operatorname{Sing}(G)\), \(h^{0}(\mathcal{I}_{2S}(d))>0\). We have \(h^{1}(\mathcal{I}_{2S}(d))>0\). Indeed \(L\) contains at least \(\lceil d/2\rceil+1\) points of \(S\), hence \(\deg(2S\cap L)\geq d+2\) and by Remark 2.6 we have \(h^{1}(\mathcal{I}_{2S}(d))\geq h^{1}(\mathcal{I}_{2S\cap L,L}(d))>0\).
Now assume \(n\geq 3\), \(d=3\) and \(x\geq n+2\). Fix hyperplanes \(H,K,U\) of \(\mathbb{P}^{n}\) such that \(\dim H\cap K\cap U=n-3\). Since \(H\cap K\) and \(H\cap U\) are \(2\) different codimension \(1\) subspaces of \(H\), their union spans \(H\). Let \(S\) be the union of \(n+1\) general points in \((H\cap K)\cup(H\cap U)\) and a point on \((K\cap U)\setminus(H\cap K\cap U)\). Hence \(\langle S\rangle=\mathbb{P}^{n}\) and \(h^{0}(\mathcal{I}_{2S}(3))\neq 0\), since the cubic \(H\cup K\cup U\) is singular along the pairwise intersections of \(H\), \(K\) and \(U\), hence at each point of \(S\); moreover it is easy to show (by induction on \(n\)) that \(h^{1}(\mathcal{I}_{2S}(3))\neq 0\). Hence by Remark 2.5, it follows that \(\mathbb{T}(n,3;x)\neq\emptyset\) for all \(x\geq n+2\) and \(n\geq 3\).
Now assume \(n\geq 3\) and \(d\geq 4\). Recall that \(x\geq n+\lceil d/2\rceil\). Take a line \(L\subset H\) and set \(G:=(d-2)H\cup K\cup U\). Consider a collection \(E\) of \(x-n+1\) points on the line \(L\). Since \(\#E\geq\lceil d/2\rceil+1\), by Lemma 3.7 we have \(h^{1}(\mathcal{I}_{2E}(d))>0\). Let \(A\subset H\)
be a collection of \(n-2\) general points. Note that \(\langle E\cup A\rangle=H\). Take as \(S\) the union of \(A\cup E\) and a point of \((U\cap K)\setminus(H\cap K\cap U)\). Obviously \(S\) spans \(\mathbb{P}^{n}\) and \(h^{1}(\mathcal{I}_{2S}(d))>0\) by Remark 2.5. Moreover \(S\subset\operatorname{Sing}(G)\), hence \(h^{0}(\mathcal{I}_{2S}(d))>0\) and \(S\in\mathbb{T}(n,d;x)\).
Notice that the set of points \(S\in\mathbb{T}(3,3;5)\) produced in the previous proof is not minimally Terracini, because \(4\) points belong to a plane. Indeed by Lemma 3.8 we already know that \(\mathbb{T}(3,3;5)^{\prime}=\emptyset\).
## 4. Rational normal curves
We now start to analyze the sets of points lying on a rational normal curve. For each \(n>1\), we denote by \(\mathcal{C}_{n}\) the set of all rational normal curves of \(\mathbb{P}^{n}\).
**Lemma 4.1**.: _Fix integers \(n\geq 2\), \(d\geq 4\) and \(x=\lceil nd/2\rceil\). Take a rational normal curve \(C\in\mathcal{C}_{n}\) and let \(S\subset C\) be a collection of \(x\) points on \(C\). Then \(h^{1}(\mathcal{I}_{2S}(d))=0\)._
Proof.: Assume \(h^{1}(\mathcal{I}_{2S}(d))>0\). By Lemma 2.8, there exists a \(d\)-critical scheme \(Z\) for \(S\). Since \(C\) is scheme-theoretically cut-out by quadrics, there is \(Q\in|\mathcal{I}_{C}(2)|\) such that \(Q\cap Z=C\cap Z:=\zeta\) and we have
\[0\rightarrow\mathcal{I}_{C,Q}(d)\rightarrow\mathcal{I}_{\zeta,Q}(d) \rightarrow\mathcal{I}_{\zeta,C}(d)\to 0\]
Since \(\deg(Z)\leq 2x\leq nd+1\), we have \(h^{1}(\mathcal{I}_{\zeta,C}(d))=0\), and since \(C\) is projectively normal we get \(h^{1}(\mathcal{I}_{\zeta,Q}(d))=0\).
Thus the residual exact sequence of \(Q\) gives \(h^{1}(\mathcal{I}_{\operatorname{Res}_{Q}(Z)}(d-2))>0\).
Since \(\operatorname{Res}_{Q}(Z)\subseteq S\subset C\), we have
\[0\rightarrow\mathcal{I}_{C}(d-2)\rightarrow\mathcal{I}_{\operatorname{Res}_{ Q}(Z)}(d-2)\rightarrow\mathcal{I}_{\operatorname{Res}_{Q}(Z),C}(d-2) \to 0,\]
hence, to get a contradiction, it is sufficient to prove that \(\deg(\operatorname{Res}_{Q}(Z))<n(d-2)+2\). But indeed
\[\deg(\operatorname{Res}_{Q}(Z))\leq x\leq\lceil nd/2\rceil\leq n(d-2)+1,\]
where the last inequality is true for \(d\geq 4\).
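For the reader's convenience, the last inequality can be checked directly (a pure arithmetic verification): since \(\lceil nd/2\rceil\leq(nd+1)/2\),
\[n(d-2)+1-\lceil nd/2\rceil\geq n(d-2)+1-\frac{nd+1}{2}=\frac{nd-4n+1}{2}\geq\frac{1}{2}>0\qquad\text{for }d\geq 4.\]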
**Theorem 4.2**.: _Fix integers \(n\geq 2\), \(d\geq 3\) and assume \((n,d)\neq(2,3)\). Let \(C\in\mathcal{C}_{n}\) be a rational normal curve and let \(S\subset C\) be a collection of \(x\) points on the curve. Then_
_(i) if \(n\geq 3,d\geq 4\) and \(x\geq 1+\lceil nd/2\rceil\), then \(S\in\mathbb{T}(n,d;x)\);_
_(ii) if \(n\geq 4,d=3\) and \(x=1+\lceil nd/2\rceil\), then \(S\in\mathbb{T}(n,d;x)\);_
_(iii) if \(n\geq 2\), \(d\geq 4\) and \(x=1+\lceil nd/2\rceil\), then \(S\in\mathbb{T}(n,d;x)^{\prime}\)._
Proof.: By the exact sequence
\[0\rightarrow\mathcal{I}_{C\cup 2S}(d)\rightarrow\mathcal{I}_{2S}(d) \rightarrow\mathcal{I}_{2S\cap C,C}(d)\to 0\]
since \(h^{1}(\mathcal{I}_{2S\cap C,C}(d))=h^{1}(\mathcal{O}_{\mathbb{P}^{1}}(nd-2x))>0\), we have \(h^{1}(\mathcal{I}_{2S}(d))>0\). Since \(x\geq n+1\) and \(C\) is a rational normal curve, \(\langle S\rangle=\mathbb{P}^{n}\).
If \(n\geq 3\) then \(h^{0}(\mathcal{I}_{C}(2))\geq 2\), hence \(C\) is contained in \(2\) different quadric hypersurfaces. Thus if \(d\geq 4\), we have \(h^{0}(\mathcal{I}_{2S}(d))>0\). Hence \(S\in\mathbb{T}(n,d;x)\) and we have proved (i).
Assume now \(x=1+\lceil nd/2\rceil\). Fix any collection \(A\) of \(x\) general points and note that \(h^{0}(\mathcal{I}_{2S}(d))\geq h^{0}(\mathcal{I}_{2A}(d))\).
Hence, assuming \(d=3\) we have
\[h^{0}(\mathcal{I}_{2S}(3))\geq\binom{n+3}{3}-(n+1)x>0\]
where the last inequality is true for any \(n\geq 5\). If \(n=4\) and \(x=7\), we have \(h^{0}(\mathcal{I}_{2S}(3))\geq h^{0}(\mathcal{I}_{2A}(3))=1\), by the Alexander-Hirschowitz theorem. Thus we have proved (ii).
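As a quick check of the displayed bound in the smallest admissible case (arithmetic only): for \(n=5\) and \(d=3\) we have \(x=1+\lceil 15/2\rceil=9\) and
\[\binom{8}{3}-6\cdot 9=56-54=2>0.\]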
Now assume \(n=2\) and \(x=d+1\). We have \(h^{0}(\mathcal{I}_{2S}(d))\geq\binom{d+2}{2}-3(d+1)>0\), for \(d\geq 5\). If \(d=4\) and \(x=5\), then \(h^{0}(\mathcal{I}_{2S}(4))\geq h^{0}(\mathcal{I}_{2A}(4))=1\), again by the Alexander-Hirschowitz theorem. Hence \(S\in\mathbb{T}(n,d;x)\) for \(n=2\) and \(d\geq 4\).
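The inequality used for \(n=2\) can also be verified by a direct factorization (again an arithmetic check only):
\[\binom{d+2}{2}-3(d+1)=\frac{(d+2)(d+1)}{2}-3(d+1)=\frac{(d+1)(d-4)}{2},\]
which is positive exactly for \(d\geq 5\).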
In order to complete the proof of (iii) we need to prove the minimality, and this follows by Lemma 4.1.
**Remark 4.3**.: Recall that by Theorem 1.1 we know that \(\mathbb{T}(2,3;x)=\emptyset\) for all \(x>0\). Moreover in the proof of Lemma 3.8 we have seen that \(x\geq 5\) points of \(\mathbb{P}^{3}\) in linear general position are not Terracini. Hence if \(S\) is a collection of \(x\geq 5\) points on a rational normal cubic we have \(S\not\in\mathbb{T}(3,3;x)\).
### Degenerations of rational normal curves
We introduce now the notion of _reducible rational normal curves_.
**Definition 4.4**.: A reduced, connected and reducible curve \(T\subset\mathbb{P}^{n}\), for \(n\geq 2\), such that \(\deg(T)=n\), \(\langle T\rangle=\mathbb{P}^{n}\) is called _reducible rational normal curve_.
Of course, in \(\mathbb{P}^{2}\) a reducible rational normal curve is a reducible conic.
Since \(T\) is connected, there is an ordering \(T_{1},\ldots,T_{s}\) of the irreducible components such that each \(T[i]:=T_{1}\cup\cdots\cup T_{i}\), \(1\leq i\leq s\), is connected. We say that each such ordering of the irreducible components of \(T\) is a _good ordering_.
Set \(n_{i}:=\deg(T_{i})\). Note that \(n=n_{1}+\cdots+n_{s}\) and \(\dim\langle T_{i}\rangle\leq n_{i}\) with equality if and only if \(T_{i}\) is a rational normal curve in its linear span. For \(i=1,\ldots,s-1\) we have the following Mayer-Vietoris exact sequence
\[0\xrightarrow{}\mathcal{O}_{T[i+1]}(t)\xrightarrow{}\mathcal{O}_{T[i]}(t) \oplus\mathcal{O}_{T_{i+1}}(t)\xrightarrow{}\mathcal{O}_{T[i]\cap T_{i+1}}(t )\xrightarrow{}0, \tag{4}\]
in which \(T[i]\cap T_{i+1}\) is the scheme-theoretic intersection. Since \(T[i+1]\) is connected, \(\deg(T[i]\cap T_{i+1})>0\). Thus (4) gives \(\dim\langle T[i+1]\rangle\leq\dim\langle T[i]\rangle+n_{i+1}\) with equality if and only if \(\deg(T[i]\cap T_{i+1})=1\), \(T_{i+1}\) is a rational normal curve in its linear span and \(\langle T[i]\rangle\cap\langle T_{i+1}\rangle\) is the point \(T[i]\cap T_{i+1}\).
Since \(n=n_{1}+\cdots+n_{s}\), by induction on \(i\) we get \(p_{a}(T)=0\) and that each \(T_{i}\) is a rational normal curve in its linear span. Using (4) and induction on \(i\) we also get \(h^{1}(\mathcal{O}_{T}(t))=0\) and \(h^{0}(\mathcal{O}_{T}(t))=nt+1\) for all \(t\geq 0\), and that the restriction map \(H^{0}(\mathcal{O}_{\mathbb{P}^{n}}(t))\xrightarrow{}H^{0}(\mathcal{O}_{T}(t))\) is surjective, i.e. \(T\) is arithmetically Cohen-Macaulay. In the same way we see that each \(T[i]\) is arithmetically Cohen-Macaulay in its linear span.
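To illustrate the dimension count in the simplest situation (an illustrative special case only): if every \(T_{i}\) is a line and each \(T[i]\cap T_{i+1}\) is a single point, then (4) gives \(h^{0}(\mathcal{O}_{T[i+1]}(t))=h^{0}(\mathcal{O}_{T[i]}(t))+(t+1)-1\), hence inductively
\[h^{0}(\mathcal{O}_{T}(t))=(t+1)+(s-1)t=st+1=nt+1,\]
in agreement with the general formula.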
Recall that each \(T_{i}\) is smooth. For any \(p\in T_{i}\) let \(L_{i}(p)\) denote the tangent line of \(T_{i}\) at \((p)\). Take \(p\in\operatorname{Sing}(T)\) and let \(T_{i_{1}},\ldots T_{i_{k}}\), \(k\geq 2\), be the irreducible components of \(T\) passing through \(p\). Since \(n=n_{1}+\cdots+n_{s}\) and \(p_{a}(T)=0\), the \(k\) lines \(L_{i_{1}}(p),\ldots,L_{i_{k}}(p)\) through \(p\) span a \(k\)-dimensional linear space (such a singularity is often called a seminormal or a weakly normal curve singularity).
An irreducible component \(T_{i}\) of \(T\) is said to be a _final component_ if \(\#(T_{i}\cap\operatorname{Sing}(T))=1\). Since \(s\geq 2\), \(T\) has at least \(2\) final components (e.g. \(T_{1}\) and \(T_{s}\) for any good ordering of the irreducible components of \(T\)), but it may have many final components (e.g. for some \(T\) with \(s\geq 3\) we may have \(\#(T_{i}\cap\operatorname{Sing}(T))=1\) for all \(i\geq 2\), and there is one \(T\), unique up to a projective transformation, formed by \(n\) lines through the same point).
**Remark 4.5**.: Take a (reducible) rational normal curve \(T\subset\mathbb{P}^{n}\). Since \(h^{1}(\mathcal{O}_{T})=0\), the exact sequence
\[0\longrightarrow\mathcal{I}_{T}\longrightarrow\mathcal{O}_{\mathbb{P}^{n}} \longrightarrow\mathcal{O}_{T}\to 0\]
gives \(h^{2}(\mathcal{I}_{T})=0\). Since \(h^{1}(\mathcal{I}_{T}(1))=0\), the Castelnuovo-Mumford Lemma gives that the homogeneous ideal of \(T\) is generated by quadrics. Thus \(T\) is scheme-theoretically cut out by quadrics.
**Lemma 4.6**.: _Fix \(n\geq 2\), \(d\geq 4\). Let \(T\) be a reducible rational normal curve in \(\mathbb{P}^{n}\) and \(S\in S(\mathbb{P}^{n},x)\) such that \(S\subset T_{\mathrm{reg}}\) and \(\langle S\rangle=\mathbb{P}^{n}\). If \(2x\geq dn+2\), then \(S\in\mathbb{T}(n,d;x)\)._
Proof.: Since \(h^{0}(\mathcal{I}_{T}(2))=\binom{n}{2}\), we have that \(h^{0}(\mathcal{I}_{2S}(d))>0\) if \(d\geq 4\).
Set \(Z:=2S\cap T\). Since \(S\cap\mathrm{Sing}(T)=\emptyset\), \(\deg(Z)=2x\) and \(Z\) is a Cartier divisor of \(T\). Since \(h^{0}(\mathcal{O}_{T}(d))=nd+1\) and \(\deg(Z)=2x\geq nd+2\), we get \(h^{1}(\mathcal{I}_{Z,T}(d))\geq 1\). Hence \(h^{1}(\mathcal{I}_{2S}(d))\geq h^{1}(\mathcal{I}_{Z}(d))\geq 1\), since \(T\) is arithmetically Cohen-Macaulay. As \(\langle S\rangle=\mathbb{P}^{n}\) by assumption, we conclude \(S\in\mathbb{T}(n,d;x)\).
**Proposition 4.7**.: _Assume \(n\geq 2\) and \(d\geq 5\) and set \(x=1+\lceil nd/2\rceil\). Fix a reducible rational normal curve \(T=T_{1}\cup\cdots\cup T_{s}\subset\mathbb{P}^{n}\), \(s\geq 2\). Assume the existence of \(S\in\mathbb{T}(n,d;x)^{\prime}\) such that \(S\subset T\). Set \(n_{i}:=\deg(T_{i})\) and \(x_{i}:=\#(S\cap T_{i})\). Then:_
1. \(S\subset T_{\mathrm{reg}}\)_;_
2. \(n\) _is even and_ \(d\) _is odd;_
3. _every final component_ \(T_{i}\) _of_ \(T\) _has_ \(n_{i}\) _odd and_ \(2x_{i}=n_{i}d+1\)_._
Proof.: Note that \(x_{1}+\cdots+x_{s}\geq x\) and that \(x_{1}+\cdots+x_{s}=x\) if and only if \(S\subset T_{\mathrm{reg}}\). We have \(n=n_{1}+\cdots+n_{s}\), \(2x=nd+2\) if \(nd\) is even and \(2x=nd+3\) if \(n\) and \(d\) are odd. Set \(W:=2S\cap T\) as schemes.
Set \(S_{1}:=S\cap\mathrm{Sing}(T)\) and \(S_{2}:=S\setminus S_{1}\). For each \(o\in\mathrm{Sing}(T)\) let \(m(o)\) denote the number of irreducible components of \(T\) passing through \(o\). We saw that the Zariski tangent space of \(T\) at \(o\) has dimension \(m(o)\), hence the connected component \(W(o)\) of \(W\) with \(o\) as its reduction has degree \(m(o)+1\). Thus \(w:=\deg(W)=2\#S_{2}+\sum_{o\in S_{1}}(m(o)+1)\). Since \(T\) has at most \(s-1\) singular points, \(S_{2}\neq\emptyset\). To prove (1), assume \(S_{1}\neq\emptyset\). Fix \(u\in S_{2}\) and set \(S^{\prime}:=S\setminus\{u\}\). If \(\deg(W)\geq nd+4\), then \(h^{1}(\mathcal{I}_{2S^{\prime}}(d))>0\) and hence \(S\notin\mathbb{T}(n,d;x)^{\prime}\), a contradiction. Now assume \(\deg(W)\leq nd+3\). Thus \(nd\) is even, \(\#S_{1}=1\), say \(S_{1}=\{q\}\), and \(T\) is nodal at \(q\). Since \(p_{a}(T)=0\), \(T\) is connected, the irreducible components of \(T\) are smooth and \(T\) is nodal at \(q\), the set \(T\setminus\{q\}\) has \(2\) connected components. Call \(T^{\prime}\) and \(T^{\prime\prime}\) the closures in \(\mathbb{P}^{n}\) of the \(2\) connected components of \(T\setminus\{q\}\). Since \(\deg(W)=\deg(W\cap T^{\prime})+\deg(W\cap T^{\prime\prime})\) and \(n=\dim\langle T^{\prime}\rangle+\dim\langle T^{\prime\prime}\rangle\), either \(\deg(W\cap T^{\prime})\geq d\dim\langle T^{\prime}\rangle+2\) or \(\deg(W\cap T^{\prime\prime})\geq d\dim\langle T^{\prime\prime}\rangle+2\). Thus \(S\notin\mathbb{T}(n,d;x)^{\prime}\), a contradiction, which proves (1). Note that \(2x_{i}\leq n_{i}d+1\) is equivalent to \(2x_{i}\leq n_{i}d\) if \(n_{i}d\) is even. This is sufficient to exclude the case \(d\) even.
From now on we assume \(d\) odd. Recall that \(2x_{i}\leq n_{i}d+1\) for all odd \(n_{i}\) and \(2x_{i}\leq n_{i}d\) for all even \(n_{i}\). Since \(d\geq 5\) and \(2x_{i}\leq n_{i}d+1\) for all \(i\), a good ordering of the irreducible components of \(T\) and \(s-1\) Mayer-Vietoris exact sequences give \(h^{1}(\mathcal{I}_{S}(d-2))=0\). Let \(Z\) be a critical scheme for \(S\). Since \(h^{1}(\mathcal{I}_{T}(1))=0\) and \(h^{2}(\mathcal{I}_{T})=0\) (Remark 4.5), the Castelnuovo-Mumford lemma gives that \(\mathcal{I}_{T}(2)\) is globally generated. Since \(\mathcal{I}_{T}(2)\) is globally generated and every connected component of \(Z\) has degree \(\leq 2\), \(Q\cap Z=T\cap Z\) for a general \(Q\in|\mathcal{I}_{T}(2)|\). Since \(\mathrm{Res}_{Q}(Z)\subseteq S\) and \(h^{1}(\mathcal{I}_{S}(d-2))=0\) and \(Q\) is arithmetically Cohen-Macaulay, the residual exact sequence of \(Q\) gives \(h^{1}(\mathcal{I}_{Z\cap Q}(d))>0\); since \(Z\) is critical, this forces \(Z\cap Q=Z\), and hence \(Z\subset T\). Thus
\(Z\subseteq W\). Since \(T\) is arithmetically Cohen-Macaulay, we get \(h^{1}(\mathcal{I}_{Z,T}(d))>0\) and hence \(h^{1}(\mathcal{I}_{W,T}(d))>0\).
(a) Assume \(n\) odd. In particular \(s\geq 3\) and there are at least three odd \(n_{i}\) with \(2x_{i}=n_{i}d+1\). Let \(T^{\prime}\) be a minimal connected subcurve of \(T\) such that \(\deg(T^{\prime}\cap W)\geq 2+d\dim(\langle T^{\prime}\rangle)\). Since \(2x_{i}\leq n_{i}d+1\) for all \(i\) and each subcurve \(T^{\prime\prime}\) of \(T\) has at least one final component (a final component of \(T^{\prime\prime}\), not necessarily of \(T\)) the minimality of \(T^{\prime}\) gives \(\deg(T^{\prime}\cap W)=2+d\dim(\langle T^{\prime}\rangle)\). The set \(S\cap T^{\prime}\) shows that \(S\notin\mathbb{T}(n,d;x)^{\prime}\), a contradiction.
(b) Assume \(n\) even and let \(T_{i}\) be any final component of \(T\). Let \(Y\) be the union of all other components of \(T\). Since \(T_{i}\) is a final component, \(Y\) is connected; hence \(\deg(Y)=\dim\langle Y\rangle\), \(Y\) is a (possibly reducible) rational normal curve in \(\langle Y\rangle\), \(\langle Y\rangle\cap\langle T_{i}\rangle\) is a point, \(p\), and \(\{p\}\) is the scheme-theoretic intersection of \(T_{i}\) and \(Y\). We proved that \(p\notin S\). Since \(\langle S\rangle=\mathbb{P}^{n}\) and \(p\notin S\), then \(\langle S\cap T_{i}\rangle=\langle T_{i}\rangle\) and \(\langle S\cap Y\rangle=\langle Y\rangle\) and in particular \(S\cap T_{i}\neq\emptyset\) and \(S\cap Y\neq\emptyset\). Since \(S\) is minimal and \(T_{i}\), \(Y\) are arithmetically Cohen-Macaulay, \(h^{1}(\mathcal{I}_{Z\cap T_{i},T_{i}}(d))=h^{1}(\mathcal{I}_{Z\cap Y,Y}(d))=0\). The following Mayer-Vietoris type sequence on \(T\)
\[0\to\mathcal{I}_{Z,T}(d)\to\mathcal{I}_{Z\cap T_{i},T_{i}}(d)\oplus\mathcal{ I}_{Z\cap Y,Y}(d)\to\mathcal{O}_{p}(d)\to 0 \tag{5}\]
is exact, because \(p\notin S\). We proved that \(h^{1}(\mathcal{I}_{Z\cap T_{i},T_{i}}(d))=h^{1}(\mathcal{I}_{Z\cap Y,Y}(d))=0\). Assume \(2x_{i}\leq n_{i}d\) (which is always the case if \(n_{i}\) is even). The restriction map \(H^{0}(\mathcal{I}_{Z\cap T_{i},T_{i}}(d))\to H^{0}(\mathcal{O}_{p}(d))\) is surjective, because \(T_{i}\cong\mathbb{P}^{1}\) and \(\deg(Z\cap T_{i})\leq\deg(\mathcal{O}_{T_{i}}(d))\). Thus (5) gives \(h^{1}(T,\mathcal{I}_{Z,T}(d))=0\), a contradiction.
## 5. Minimally Terracini finite sets in the plane
In this section we focus on the case of the plane. We deduce from [7] the following result, which we will need in the sequel.
**Remark 5.1**.: Fix positive integers \(d,z\) such that \(z\leq 3d\). Let \(Z\subset\mathbb{P}^{2}\) be a zero-dimensional scheme, \(Z\neq\emptyset\). If \(\deg(Z)=z\) and \(d\) is the maximal integer \(t\) such that \(h^{1}(\mathcal{I}_{Z}(t))>0\), then either there is a line \(L\) such that \(\deg(L\cap Z)\geq d+2\), or there is a conic \(D\) such that \(\deg(Z\cap D)\geq 2d+2\), or \(z=3d\) and \(Z\) is the complete intersection of a plane cubic and a degree \(d\) plane curve (see [7, Remarque (i) p. 116]).
**Proposition 5.2**.: _Fix integers \(x>0\) and \(d\geq 4\)._
_(a) If \(x\leq d\), then \(\mathbb{T}(2,d;x)^{\prime}=\emptyset\)._
_(b) Let \(S\in S(\mathbb{P}^{2},d+1)\). Then \(S\in\mathbb{T}(2,d;d+1)^{\prime}\) if and only if \(S\) is contained in a reduced conic \(D\). Moreover, if \(D=R\cup L\) is reducible (with \(L\) and \(R\) lines), then \(d\) is odd, \(\#(S\cap R)=\#(S\cap L)=(d+1)/2\) and \(S\cap R\cap L=\emptyset\)._
_(c) Assume \(d\geq 5\). Then \(\mathbb{T}(2,d;x)^{\prime}=\emptyset\) for all \(x\) such that \(d+2\leq x<3d/2\)._
Proof.: Fix \(S\in\mathbb{T}(2,d;x)^{\prime}\) and let \(Z\) be a critical scheme for \(S\). We have \(\deg(Z)\leq 2x\) and \(d\) is the maximal integer such that \(h^{1}(\mathcal{I}_{Z}(d))>0\) by Theorem 3.1.
Assume first \(x\leq d\). Then \(\deg(Z)\leq 2d\) and, by Remark 2.7, there is a line \(L\) such that \(\deg(Z\cap L)\geq d+2\). Thus \(h^{1}(\mathcal{I}_{Z\cap L}(d))>0\). Since \(\langle S\rangle=\mathbb{P}^{2}\), \(S\) is not minimal, and this proves (a).
The _if_ implication of part (b) follows from Theorem 4.2 (iii).
We prove now the other implication. Take \(S\in\mathbb{T}(2,d;d+1)^{\prime}\) and let \(Z\) be a critical scheme for \(S\). By Lemma 2.11, \(Z_{\text{red}}=S\). Assume that \(S\) is not contained in a reduced conic. Since \(\langle S\rangle=\mathbb{P}^{2}\), \(S\) is not contained in a double line, therefore \(S\) is not contained in any conic. Since \(\deg(Z)\leq 2d+2<3d\) (as \(d\geq 4\)), Remark 5.1 then implies that there is a line \(L\subset\mathbb{P}^{2}\) such that \(\deg(L\cap Z)\geq d+2\) and hence \(h^{1}(\mathcal{I}_{Z\cap L}(d))>0\). Hence \(S\) is not minimal, a contradiction.
Finally Proposition 4.7 gives the last part of (b).
We prove now (c). Assume \(d+2\leq x<3d/2\) and let \(S\in\mathbb{T}(2,d;x)^{\prime}\) with \(Z\) critical for \(S\). Since \(S\) is minimal, \(\#(S\cap L)\leq(d+1)/2\) for all lines \(L\) and \(\#(S\cap D)\leq 2d+1\) for each conic \(D\). Since \(Z\) is critical, \(\deg(Z\cap L)\leq d+1\) for each line \(L\) and \(\deg(D\cap Z)\leq 2d+1\) for any conic \(D\). Thus, since \(\deg(Z)\leq 3d-1\), by Remark 5.1 we have \(h^{1}(\mathcal{I}_{Z}(d))=0\), a contradiction.
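To make the statement concrete in the smallest case (a direct combination of the results above, recorded only as an illustration): for \(d=5\), part (a) gives \(\mathbb{T}(2,5;x)^{\prime}=\emptyset\) for \(x\leq 5\), part (c) covers only \(x=7\), and Proposition 3.5, with
\[\binom{7}{2}=21,\qquad \rho=\left\lceil\frac{22}{3}\right\rceil=8,\qquad \frac{21}{3}=7\in\mathbb{Z},\]
gives emptiness for all \(x\geq 8\); hence \(\mathbb{T}(2,5;x)^{\prime}\neq\emptyset\) exactly for \(x=6\), where part (b) describes the elements.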
Just above the range covered by Proposition 5.2 we have the following examples.
**Example 5.3**.: Assume \(d=2k\) even, \(d\geq 6\) and take \(x:=3k\). Let \(C\subset\mathbb{P}^{2}\) be a smooth plane cubic and \(T\) a smooth plane curve of degree \(k\). Take as \(S\) the complete intersection \(C\cap T\). Set \(Z:=C\cap 2T=2S\cap C\). Since \(\deg(Z)=3d\) and \(h^{0}(\mathcal{O}_{C}(d))=3d\), then \(h^{1}(\mathcal{I}_{Z,C}(d))=h^{0}(\mathcal{I}_{Z,C}(d))=1\). Since \(h^{0}(\mathcal{O}_{C}(d-3))=3d-9\geq 3k=\#S\), we get \(h^{1}(\mathcal{I}_{S,C}(d-3))=0\). Since \(C\) is arithmetically normal, \(h^{1}(\mathcal{I}_{S}(d-3))=0\). Thus the residual exact sequence of \(C\) gives \(h^{1}(\mathcal{I}_{2S}(d))=h^{1}(\mathcal{I}_{Z,C}(d))=1\). Moreover the double curve \(2T\) has degree \(d\) and is singular along \(T\supset S\), hence \(h^{0}(\mathcal{I}_{2S}(d))>0\). We also get \(h^{1}(\mathcal{I}_{2S^{\prime}\cap C,C}(d))=0\) for all \(S^{\prime}\subseteq S\), since \(\deg(2S^{\prime}\cap C)\leq 3d-2\). Thus \(S\in\mathbb{T}(2,d;3d/2)^{\prime}\).
**Example 5.4**.: Take \(d\) odd, \(d\geq 7\), and set \(x:=(3d+1)/2\). Let \(C\subset\mathbb{P}^{2}\) be a smooth plane cubic. Take \(S\subset C\) such that \(\#S=(3d+1)/2\). Since \(C\) is smooth, \(S\) is a Cartier divisor of \(C\). Since \(p_{a}(C)=1\) and \(\deg(\mathcal{O}_{C}(d-3))=3d-9>\#S\), then \(h^{1}(C,\mathcal{I}_{S,C}(d-3))=0\). Since \(C\) is arithmetically normal, \(h^{1}(\mathcal{I}_{S}(d-3))=0\). Thus the residual exact sequence of \(C\) gives \(h^{1}(\mathcal{I}_{2S}(d))=h^{1}(\mathcal{I}_{2S\cap C,C}(d))\). Since \(p_{a}(C)=1\), we get \(h^{1}(\mathcal{I}_{2S\cap C,C}(d))=1\). Moreover \(h^{0}(\mathcal{I}_{S}(d-3))\geq\binom{d-1}{2}-\#S>0\), and a curve of degree \(d-3\) through \(S\) together with \(C\) gives a degree \(d\) curve singular at \(S\), hence \(h^{0}(\mathcal{I}_{2S}(d))>0\). We also have \(h^{1}(\mathcal{I}_{2S^{\prime}\cap C,C}(d))=0\) for all \(S^{\prime}\subsetneq S\), since \(\deg(2S^{\prime}\cap C)\leq 3d-1\), hence \(S\in\mathbb{T}(2,d;(3d+1)/2)^{\prime}\).
## 6. Minimally Terracini finite sets in \(\mathbb{P}^{3}\)
Now we consider the case of finite sets of points in \(\mathbb{P}^{3}\). The following proposition extends Remark 5.1 to the case of schemes of \(\mathbb{P}^{3}\).
**Proposition 6.1**.: _Fix a positive integer \(d\). Let \(Z\subset\mathbb{P}^{3}\) be a zero-dimensional scheme such that \(\langle Z\rangle=\mathbb{P}^{3}\), its connected components have degree \(\leq 2\) and \(z:=\deg(Z)\leq 3d+1\). We have \(h^{1}(\mathcal{I}_{Z}(d))>0\) if and only if one of the following cases occur:_
_(i) there is a line \(L\subset\mathbb{P}^{3}\) such that \(\deg(Z\cap L)\geq d+2\);_
_(ii) there is a conic \(D\) such that \(\deg(D\cap Z)\geq 2d+2\);_
_(iii) there is a plane cubic \(T\) such that \(\deg(T\cap Z)=3d\) and \(T\cap Z\) is the complete intersection of \(T\) and a degree \(d\) plane curve._
Proof.: Since the _if_ part is trivial, we only need to prove the _only if_ part.
We use induction on \(d\). The case \(d=1\) is obvious, since conditions \(\deg(Z)\leq 4\) and \(\langle Z\rangle=\mathbb{P}^{3}\) imply that \(Z\) is linearly independent.
Assume \(d\geq 2\) and that the proposition is true for lower degrees. If there is a plane \(H\) such that \(h^{1}(\mathcal{I}_{Z\cap H}(d))>0\), then we may use Remark 5.1.
Now assume that \(h^{1}(\mathcal{I}_{Z\cap H}(d))=0\) for any plane \(H\subset\mathbb{P}^{3}\). Take a plane \(H\subset\mathbb{P}^{3}\) such that \(w:=\deg(Z\cap H)\) is maximal. Since \(\langle Z\rangle=\mathbb{P}^{3}\) then \(z\geq 4\), and \(w\geq 3\) and hence \(\deg(\operatorname{Res}_{H}(Z))=z-w\leq 3(d-1)+1\). Since \(h^{1}(\mathcal{I}_{Z\cap H}(d))=0\), the residual exact sequence of \(H\) gives \(h^{1}(\mathcal{I}_{\operatorname{Res}_{H}(Z)}(d-1))>0\). The inductive assumption applied to the scheme \(\operatorname{Res}_{H}(Z)\) implies that either there is a line \(R\) such that
\(\deg(R\cap\operatorname{Res}_{H}(Z))\geq d+1\), or there is a conic \(D\) such that \(\deg(D\cap\operatorname{Res}_{H}(Z))\geq 2d\), or there is a plane cubic \(C\) such that \(\deg(\operatorname{Res}_{H}(Z)\cap C)=3d-3\) and \(\operatorname{Res}_{H}(Z)\cap C\) is the complete intersection of \(C\) and a degree \(d-1\) plane curve.
Assume for the moment \(d=2\). In this case either there is a conic \(D\) such that \(\deg(D\cap\operatorname{Res}_{H}(Z))\geq 4\) (and hence \(w\geq 4\) and \(z\geq 8\), a contradiction), or there is a line \(R\) such that \(\deg(R\cap\operatorname{Res}_{H}(Z))\geq d+1\). If \(d\geq 3\) the existence of the conic or the cubic implies \(z\geq 4d\), a contradiction.
Thus for any \(d\geq 2\) there is a line \(R\) such that \(\deg(R\cap\operatorname{Res}_{H}(Z))\geq d+1\).
Since we already excluded case (i) (because \(h^{1}(\mathcal{I}_{Z\cap M}(d))=0\) for every plane \(M\)) we have \(\deg(R\cap Z)=d+1=\deg(R\cap\operatorname{Res}_{H}(Z))\).
Take any plane \(M\) containing \(R\) and spanned by \(M\cap Z\). Since \(h^{1}(\mathcal{I}_{M\cap Z}(d))=0\), the residual exact sequence of \(M\) first gives \(h^{1}(\mathcal{I}_{\operatorname{Res}_{M}(Z)}(d-1))>0\) and then the existence of a line \(L\) such that \(\deg(L\cap\operatorname{Res}_{M}(Z))\geq d+1\). As before we have \(\deg(L\cap Z)=d+1=\deg(L\cap\operatorname{Res}_{M}(Z))\).
First assume \(L=R\). Since each connected component of \(Z\) has degree \(\leq 2\), we get \(\#(Z_{\operatorname{red}}\cap R)=d+1\), \(\operatorname{Res}_{M}(Z)=Z_{\operatorname{red}}\) and each connected component of \(Z\) with reduction contained in \(R\) has degree \(2\). Fix \(p\in Z_{\operatorname{red}}\cap R\) and let \(Z(p)\) denote the connected component of \(Z\) supported at \(p\).
Note that \(N:=\langle R\cup Z(p)\rangle\) is a plane. Taking \(N\) as \(M\) we see (since \(\deg(R\cap\operatorname{Res}_{N}(Z))\leq d\)) that there is at least one line \(J\neq R\) (maybe \(J=L\)) such that \(\deg(J\cap\operatorname{Res}_{N}(Z))=d+1\). If \(J\cap R\neq\emptyset\), then \(\deg((J\cup R)\cap Z)\geq\deg(J\cap\operatorname{Res}_{M}(Z))+\deg(R\cap Z) \geq 2d+2\) (since \(M\supset R\)). Hence we are in case (ii) with a reducible conic, but this case is impossible because we assumed \(h^{1}(\mathcal{I}_{N\cap Z}(d))=0\).
Thus we may assume \(J\cap R=\emptyset\) and \(\deg(J\cap Z)=\deg(R\cap Z)=d+1\). Let \(Q\) be a general quadric containing \(J\cup R\). Since each connected component of \(Z\) has degree \(\leq 2\) and \(Q\) is general, \(Z\cap Q=Z\cap(J\cup R)\). Since \(R\cap J=\emptyset\), \(Q\) is a smooth quadric and \(J,R\) are in the same ruling. Since \(\deg(J\cap Z)=\deg(R\cap Z)=d+1\), the residual exact sequence of \(J\cup R\) in \(Q\) gives \(h^{1}(\mathcal{I}_{Z\cap Q,Q}(d))=0\). Thus the residual exact sequence of \(Q\) gives \(h^{1}(\mathcal{I}_{\operatorname{Res}_{Q}(Z)}(d-2))>0\). Hence \(\deg(\operatorname{Res}_{Q}(Z))\geq d\). Thus \(z\geq d+2d+2\), a contradiction.
Notice that if \(z\leq 3d\), case (iii) of the previous proposition never occurs since \(\langle Z\rangle=\mathbb{P}^{3}\).
Thanks to Proposition 6.1, we can easily prove Theorem 1.3 which states the emptiness of the minimal Terracini loci \(\mathbb{T}(3,d;x)^{\prime}\) for \(0<2x\leq 3d+1\).
Proof of Theorem 1.3.: Consider \(S\in\mathbb{T}(3,d;x)^{\prime}\) and let \(Z\) be a critical scheme for \(S\). By Lemma 2.11 we know that \(Z_{\operatorname{red}}=S\) hence \(\langle Z\rangle=\mathbb{P}^{3}\). Since \(\deg(Z)\leq 2x\leq 3d+1\), we can apply Proposition 6.1.
In any of the three cases there is a plane \(H\) and a subset \(S^{\prime}=S\cap H\) which contradicts the minimality of \(S\).
Now we will prove Theorem 1.4, which characterizes the first non-empty minimal Terracini loci in \(\mathbb{P}^{3}\), i.e. \(\mathbb{T}(3,d;1+\lceil 3d/2\rceil)^{\prime}\). Notice that one implication follows from Theorem 4.2 (iii). By Proposition 4.7, we also know that if \(S\) is contained in a reducible rational normal curve, then \(S\not\in\mathbb{T}(3,d;1+\lceil 3d/2\rceil)^{\prime}\).
Proof of Theorem 1.4.: We only need to prove that any \(S\in\mathbb{T}(3,d;1+\lceil 3d/2\rceil)^{\prime}\) is contained in a rational normal curve.
Given \(d\geq 7\) and \(x=1+\lceil 3d/2\rceil\), we set \(\varepsilon:=1\) if \(d\) is even and \(\varepsilon:=0\) if \(d\) is odd. Given \(S\in\mathbb{T}(3,d;x)^{\prime}\), let \(Z\) be a critical scheme for \(S\) and \(z:=\deg(Z)\). Recall that \(Z_{\operatorname{red}}=S\) and \(z\leq 3d+3-\varepsilon\).
Take a quadric \(Q\in|\mathcal{O}_{\mathbb{P}^{3}}(2)|\) such that \(w:=\deg(Z\cap Q)\) is maximal.
(a) Assume \(Z\nsubseteq Q\). Since \(\dim|\mathcal{O}_{\mathbb{P}^{3}}(2)|=9\), we have \(w\geq 9\). By minimality of \(S\), we also have \(h^{1}(\mathcal{I}_{Z\cap Q}(d))=0\), hence \(h^{1}(\mathcal{I}_{\operatorname{Res}_{Q}(Z)}(d-2))>0\). Since \(\deg(\operatorname{Res}_{Q}(Z))=z-w\leq 3d+3-9=3(d-2)\), if \(\langle\operatorname{Res}_{Q}(Z)\rangle=\mathbb{P}^{3}\), then Proposition 6.1 implies that:
(i) either there is a line \(L\) such that \(\deg(\operatorname{Res}_{Q}(Z)\cap L)\geq d\),
(ii) or there is a plane conic \(D\) such that \(\deg(\operatorname{Res}_{Q}(Z)\cap D)\geq 2d-2\).
If \(\dim\langle\operatorname{Res}_{Q}(Z)\rangle=1\), then by Remark 2.7 we are in case (i); if \(\dim\langle\operatorname{Res}_{Q}(Z)\rangle=2\), by Remark 5.1 we are in case (i) or (ii). Indeed we can exclude that \(z-w=3(d-2)\), because in this case \(w\leq 9\) and this contradicts again the assumption on \(Q\).
First we exclude case (ii): assume the existence of the conic \(D\). Since \(\dim|\mathcal{I}_{D}(2)|=4\), from the assumption on \(Q\) we have that \(w\geq\max(2d-2+4,z)\). Hence \(\deg(\operatorname{Res}_{Q}(Z))=z-w<2d-2\), a contradiction.
Hence assume the existence of a line \(L\) as in case (i). Note that there is a plane \(H\) such that \(L\subset H\) and \(\deg(H\cap Z)\geq d+1\). We have \(h^{1}(\mathcal{I}_{\operatorname{Res}_{H}(Z)}(d-1))>0\), by the minimality of \(S\), and \(\deg(\operatorname{Res}_{H}(Z))\leq 3d+3-d-1=2d+2<3(d-1)\).
Now we consider again the following possibilities: if \(\dim\langle\operatorname{Res}_{H}(Z)\rangle=1\) then obviously there is a line \(R\) such that \(\deg(\operatorname{Res}_{H}(Z)\cap R)\geq d+1\). If \(\dim\langle\operatorname{Res}_{H}(Z)\rangle=2\) then by Remark 5.1 either there is a line \(R\) as above, or there is a conic \(D^{\prime}\) such that \(\deg(\operatorname{Res}_{H}(Z)\cap D^{\prime})\geq 2d\).
Finally if \(\operatorname{Res}_{H}(Z)\) spans \(\mathbb{P}^{3}\), then we can apply Proposition 6.1 and we have again the same two cases: a line \(R\) or a conic \(D^{\prime}\) as above.
We exclude first the existence of the conic \(D^{\prime}\). As above we get \(w\geq\max(2d+3,z)\) and hence \(z=3d+3\), \(d\) odd, \(\deg(\operatorname{Res}_{Q}(Z))=d\) and \(\operatorname{Res}_{Q}(Z)=Z\cap L\). We also have \(\deg(Z\cap D^{\prime})=2d\) and \(Q\supset D^{\prime}\). There exists a quadric \(Q^{\prime}\) containing \(D^{\prime}\) and a degree \(3\) subscheme \(W\) of \(L\cap Z\). We get \(\deg(\operatorname{Res}_{Q^{\prime}}(Z))=d\) and the existence of a line \(L^{\prime}\neq L\) such that \(\operatorname{Res}_{Q^{\prime}}(Z)\subset L^{\prime}\) and \(\deg(\operatorname{Res}_{Q^{\prime}}(Z))=d\). Since \(\deg(L\cap L^{\prime})\leq 1\) and \(\operatorname{Res}_{Q}(Z)\) and \(\operatorname{Res}_{Q^{\prime}}(Z)\) differ at most by a degree \(3\) scheme, we get a contradiction (since \(d>4\)).
Now assume the existence of the line \(R\).
(a1) First assume \(R=L\subset H\). Since \(Z\) is critical, every connected component of \(\operatorname{Res}_{H}(Z)\) is a simple point, hence we get \(\#(S\cap R)\geq d+1\), contradicting the minimality of \(S\).
(a2) Now assume \(R\neq L\) and \(R\cap L\neq\emptyset\). Consider the plane \(M=\langle L\cup R\rangle\). Since \(\deg(L\cap R)=1\), then \(\deg(Z\cap M)\geq 2d\). Since \(h^{1}(\mathcal{I}_{\operatorname{Res}_{M}(Z)}(d-1))>0\) and \(\deg(\operatorname{Res}_{M}(Z))\leq d+3\), there is a line \(E\) such that \(\deg(E\cap\operatorname{Res}_{M}(Z))\geq d+1\). As above we get \(E\neq L\) and \(E\neq R\). Take \(Q^{\prime}\in|\mathcal{I}_{E\cup L\cup R}(2)|\). Since \(Z\nsubseteq Q\) and \(w\) is maximal, \(Z\nsubseteq Q^{\prime}\). Hence \(h^{1}(\mathcal{I}_{\operatorname{Res}_{Q^{\prime}}(Z)}(d-2))>0\), hence by Remark 2.7, we have \(\deg(\operatorname{Res}_{Q^{\prime}}(Z))\geq d-1\). Hence \(z\geq(d-1)+\deg(Z\cap(L\cup R\cup E))=(d-1)+(2d+d+1-3)=4d-3\), a contradiction.
(a3) Now assume \(R\cap L=\emptyset\). Take \(Q^{\prime\prime}\in|\mathcal{I}_{R\cup L}(2)|\) with \(\deg(Z\cap Q^{\prime\prime})\) maximal. The maximality of \(w\) gives \(Z\nsubseteq Q^{\prime\prime}\). Thus \(h^{1}(\mathcal{I}_{\operatorname{Res}_{Q^{\prime\prime}}(Z)}(d-2))>0\) and \(\deg(\operatorname{Res}_{Q^{\prime\prime}}(Z))\leq 3d+3-(d+1+d)=d+2\leq 2(d-2)+1\). Hence there is a line \(F\) such that \(\deg(F\cap\operatorname{Res}_{Q^{\prime\prime}}(Z))\geq d\). We conclude as in case (a2), using \(L\), \(R\) and \(F\) instead of \(L\), \(R\) and \(E\).
(b) Now we know that \(Z\subset Q\). Here we will show that \(Q\) is integral.
First note that \(Z\) is not contained in a double plane, \(2H\). Indeed since each component of \(Z\) has degree \(\leq 2\), we would get \(S\subset H\), a contradiction.
Now assume that \(Z\) is contained in a reducible quadric \(H\cup M\).
With no loss of generality we may assume \(a:=\deg(H\cap Z)\geq\deg(Z\cap M)\). Clearly \(a\geq z/2\). Since \(\deg(\operatorname{Res}_{H}(Z))=z-a\leq z/2\leq 2(d-1)+1\), and \(h^{1}(\mathcal{I}_{\operatorname{Res}_{H}(Z)}(d-1))>0\), then, by Remark 2.7, there is a line \(L\) such that \(\deg(L\cap\operatorname{Res}_{H}(Z))\geq d+1\). Let \(N\) be a general plane containing \(L\). Since \(\deg(\operatorname{Res}_{H\cup N}(Z))\leq x-d-1\), we have \(h^{1}(\mathcal{I}_{\operatorname{Res}_{H\cup N}(Z)}(d-2))=0\), again by Remark 2.7. Thus \(Z\subset H\cup N\) and the generality of \(N\) implies \(Z\subset H\cup L\). Since \(\#(L\cap S)\geq\lceil(d+1)/2\rceil\) and \(S\in\mathbb{T}(3,d;x)^{\prime}\), we get \(d\) odd, \(\#(S\cap L)=(d+1)/2\) and \(L\cap H\notin S\).
Let \(N\) be again a general plane containing \(L\). Since \(\deg(\operatorname{Res}_{N}(Z))\leq 3d+3-d-1\), by Proposition 6.1, either there is a line \(R\) such that \(\deg(R\cap\operatorname{Res}_{N}(Z))\geq d+1\), or there is a conic \(D\) with \(\deg(D\cap\operatorname{Res}_{N}(Z))\geq 2d\).
First assume the existence of \(D\). There exists a cubic surface \(\Sigma\) such that \(L\cup D\subset\Sigma\) and \(H\cap Z\cap\Sigma=D\cap Z\). Since \(\deg(\operatorname{Res}_{\Sigma}(Z))\leq z-(d+1)-2d\leq 2\), we get \(h^{1}(\mathcal{I}_{\operatorname{Res}_{\Sigma}(Z)}(d-3))=0\) and \(Z\subset\Sigma\), hence we have \(Z\subset D\cup L\). Since \(H\cap L\notin S\), we get \(\#(S\cap D)=d+1\). Thus \(S\notin\mathbb{T}(3,d;x)^{\prime}\), a contradiction.
Now assume the existence of the line \(R\). Since \(R\cap L\cap S=\emptyset\) and \(S\in\mathbb{T}(3,d;x)^{\prime}\), we have \(L\cap R=\emptyset\) and \(\deg(R\cap Z)=d+1\). We get \(\#(S\cap R)=(d+1)/2\). Take a general \(Q^{\prime}\in|\mathcal{I}_{R\cup L}(2)|\). Since \(\mathcal{I}_{R\cup L}(2)\) is globally generated, \(Q^{\prime}\) is smooth and \(Z\cap Q^{\prime}=Z\cap(R\cup L)\). Since \(h^{1}(\mathcal{I}_{\operatorname{Res}_{Q^{\prime}}(Z)}(d-2))>0\) and \(\deg(\operatorname{Res}_{Q^{\prime}}(Z))\leq d+1\), there is a line \(E\) such that \(\deg(E\cap Z)\geq d\). Since \(d\) is odd, we get \(\#(E\cap S)=(d+1)/2\). Since \(Z\subset H\cup L\) and \(L\subset H\), \(L\cap E\neq\emptyset\). The conic \(L\cup E\) gives \(S\notin\mathbb{T}(3,d;x)^{\prime}\), a contradiction.
(c) We have proved that \(Z\) is contained in an integral quadric \(Q\). Since \(\dim|\mathcal{O}_{\mathbb{P}^{3}}(2)|=9\), there is a quadric \(T\subset\mathbb{P}^{3}\) such that \(\deg(T\cap Z)\geq 8-\varepsilon\) and \(T\neq Q\). We want to prove that \(Z\subset T\).
Assume \(Z\not\subset T\). Since \(\deg(\operatorname{Res}_{T}(Z))\leq 3(d-2)+1\), the residual exact sequence of \(T\) gives \(h^{1}(\mathcal{I}_{\operatorname{Res}_{T}(Z)}(d-2))>0\). First assume \(\deg(\operatorname{Res}_{T}(Z))=3d-5\) and that \(\langle\operatorname{Res}_{T}(Z)\rangle\) is contained in a plane \(M\). Since \(Q\) is irreducible, \(Q\cap M\) is a conic containing at least \(\lceil(3d-5)/2\rceil\) points of \(S\), a contradiction with the minimality of \(S\).
Since \(\operatorname{Res}_{T}(Z)\) is not a scheme of degree \(3d-5\) contained in a plane, Proposition 6.1 implies that either there is a line \(L_{1}\) such that \(\deg(L_{1}\cap\operatorname{Res}_{T}(Z))\geq d\) or there is a conic \(D_{1}\) such that \(\deg(D_{1}\cap\operatorname{Res}_{T}(Z))\geq 2d-2\) or there is a plane cubic \(C_{1}\) such that \(\deg(C_{1}\cap\operatorname{Res}_{T}(Z))\geq 3d-6\).
The existence of the plane cubic \(C_{1}\) is excluded, for the following reason: \(Z\) is contained in an integral quadric \(Q\), hence \(\langle C_{1}\rangle\not\subset Q\). Then \(\deg(C_{1}\cap Q)\leq 6\) and this gives a contradiction because \(6<3d-6\).
Assume the existence of the conic \(D_{1}\). The scheme \(\operatorname{Res}_{\langle D_{1}\rangle}(Z)\) has degree \(\leq d+5-\varepsilon\) and \(h^{1}(\mathcal{I}_{\operatorname{Res}_{\langle D_{1}\rangle}(Z)}(d-1))>0\), because \(Z\) is critical. Thus by Remark 2.7, there is a line \(L_{2}\) such that \(\deg(Z\cap(L_{2}\cup D_{1}))\geq(d+1)+(2d-2)=3d-1\). Bezout's theorem gives \(L_{2}\subset Q\) and \(D_{1}\subset Q\). Since \(Q\) is an integral quadric, the conic \(D_{1}\) is a plane section of \(Q\). Hence \(L_{2}\cap D_{1}\neq\emptyset\). Since \(Z\) is critical, \(L_{2}\nsubseteq\langle D_{1}\rangle\). Thus \(D_{1}\cup L_{2}\) is a reducible rational normal curve. Note that \(\mathcal{I}_{L_{2}\cup D_{1}}(2)\) is globally generated. Since each connected component of \(Z\) has degree \(\leq 2\), a general
\(Q_{1}\in|\mathcal{I}_{L_{2}\cup D_{1}}(2)|\) satisfies \(Q_{1}\cap Z=(L_{2}\cup D_{1})\cap Z\). Since \(\deg(\operatorname{Res}_{Q_{1}}(Z))\leq 4-\varepsilon\leq d\) and \(Z\) is critical, \(Z\subset L_{2}\cup D_{1}\), contradicting Proposition 4.7.
Now assume the existence of the line \(L_{1}\). Bezout's theorem gives \(L_{1}\subset Q\). Take a general plane \(U\supset L_{1}\). Since each connected component of \(Z\) has degree \(\leq 2\), then \(L_{1}\cap Z=U\cap Z\). Since \(\deg(\operatorname{Res}_{U}(Z))\leq 2d+3-\varepsilon\) and \(d\geq 6\), by Proposition 6.1 either there is a line \(L_{3}\) such that \(\deg(\operatorname{Res}_{U}(Z)\cap L_{3})\geq d+1\) or there is a conic \(D_{3}\) such that \(\deg(D_{3}\cap\operatorname{Res}_{U}(Z))\geq 2d\).
The existence of \(D_{3}\) is excluded again by Proposition 4.7. Thus there exists \(L_{3}\) and we have \(\#(S\cap(L_{1}\cup L_{3}))\geq\lceil d/2\rceil+\lceil(d+1)/2\rceil=d+1\); we also get that \(d\) is odd. Since \(S\) is minimal, \(L_{1}\cap L_{3}=\emptyset\) and we get a contradiction as in step (a3), using the lines \(L_{1}\) and \(L_{3}\) instead of \(R\) and \(L\).
(d) By the previous steps, \(Z\) is contained in no reducible quadric and in infinitely many integral quadrics. Moreover, every quadric containing a degree \(8-\varepsilon\) subscheme of \(Z\) contains \(Z\).
Since in every pencil of quadrics at least one member is singular and \(Z\) is contained in no reducible quadric, we may take a quadric cone \(T\) containing \(Z\). Call \(o\) its vertex. Every line \(L\) such that \(\deg(L\cap Z)\geq 3\) is contained in \(T\) and any union of \(2\) lines of \(T\) is a reducible conic, because they contain \(o\). Set \(E:=Q\cap T\), where \(Q\) is a general quadric containing \(Z\). Since \(E\) is the complete intersection of \(2\) quadric surfaces, the adjunction formula gives \(\omega_{E}\cong\mathcal{O}_{E}\). The Koszul complex of the equations of \(Q\) and \(T\) gives \(h^{0}(\mathcal{O}_{E})=1\). Hence by duality we have \(h^{1}(\mathcal{O}_{E})=1\).
First assume \(E\) integral. Since the rank \(1\) torsion free sheaf \(\mathcal{I}_{Z,E}(d)\) has degree \(4d-\deg(Z)>0\), then \(h^{1}(\mathcal{I}_{Z,E}(d))=0\). Since \(E\) is arithmetically Cohen-Macaulay, \(h^{1}(\mathcal{I}_{Z}(d))=0\), a contradiction.
Now assume that \(E\) is not integral. If \(E\) is not reduced, it may have multiple components, but no embedded point. If \(E_{\operatorname{red}}\neq E\), then \(E_{\operatorname{red}}\) is a reduced curve of degree \(\leq 3\) containing \(S\). Since \(h^{0}(\mathcal{O}_{E})=1\), \(E_{\operatorname{red}}\) is connected, hence Proposition 4.7 gives a contradiction.
Thus \(E=E_{\operatorname{red}}\) is reduced and each irreducible component of \(E\) is either a line or a smooth conic or a rational normal curve. First assume \(E=E_{1}\cup E_{2}\) with \(E_{1}\) and \(E_{2}\) reduced conics. Since \(Z\) is critical, \(h^{1}(\mathcal{I}_{\operatorname{Res}_{(E_{i})}(Z)}(d-1))>0\), \(i=1,2\) and hence \(\deg(Z\cap E_{1})+\deg(Z\cap E_{2})-\deg(Z\cap E_{1}\cap E_{2})\geq 4d\), a contradiction. Thus \(E\) has at most one smooth conic among its irreducible components and it is not formed by \(4\) lines through \(o\). Hence there is a connected degree three curve \(C\subset E\), which is either a rational normal curve, or a reducible rational normal curve. If \(Z\subset C\), then \(C\) is a rational normal curve by Proposition 4.7. In order to conclude it is enough to prove that this is the only possibility.
Indeed assume \(Z\nsubseteq C\). Since \(\mathcal{I}_{C}(2)\) is globally generated and every connected component of \(Z\) has degree \(\leq 2\), for a general \(Q^{\prime}\in|\mathcal{I}_{C}(2)|\) we have \(Q^{\prime}\cap Z=C\cap Z\). Hence \(h^{1}(\mathcal{I}_{\operatorname{Res}_{Q^{\prime}}(Z)}(d-2))>0\). We write \(E=L_{4}\cup C\) with \(L_{4}\) a line. We have \(\operatorname{Res}_{Q^{\prime}}(Z)\subset L_{4}\) and \(\deg(\operatorname{Res}_{Q^{\prime}}(Z))\geq d\). Take a general plane \(M\supset L_{4}\).
Since \(h^{1}(\mathcal{I}_{\operatorname{Res}_{M}(Z)}(d-1))>0\) and \(\deg(\operatorname{Res}_{M}(Z))\leq 2d+3-\varepsilon\leq 3d\), then by Proposition 6.1 either there is a line \(L_{5}\subset C\) such that \(\deg(L_{5}\cap\operatorname{Res}_{M}(Z))\geq d+1\) (excluded because \(E\) would be a union of \(2\) reduced conics) or there is a conic \(D_{4}\) such that \(\deg(\operatorname{Res}_{M}(Z)\cap D_{4})\geq 2d\) (excluded because \(E\) would be a union of \(2\) reduced conics).
We finally prove our last main result, which is Theorem 1.5. We point out that the bound obtained in the theorem is sharp, as shown in the following example, which implies that \(\mathbb{T}(3,d;2d)^{\prime}\neq\emptyset\) for all \(d\geq 5\).
**Example 6.2**.: Take \(d\geq 5\). Let \(C\subset\mathbb{P}^{3}\) be a smooth linearly normal elliptic curve. Let \(\mathcal{L}\) be a line bundle on \(C\) such that \(\mathcal{L}^{\otimes 2}\cong\mathcal{O}_{C}(d)\). Since \(\deg(\mathcal{L})=2d\) and \(C\) has genus \(1\), \(\mathcal{L}\) is very ample.
Fix any \(S\in|\mathcal{L}|\) formed by \(2d\) distinct points. We will show that \(S\in\mathbb{T}(3,d;2d)^{\prime}\). Obviously \(\langle S\rangle=\mathbb{P}^{3}\). Since \(2S\cap C\in|\mathcal{O}_{C}(d)|\), we have \(h^{i}(\mathcal{I}_{2S\cap C,C}(d))=1\), \(i=0,1\).
The curve \(C\) is the smooth complete intersection of \(2\) quadric surfaces, say \(C=Q\cap Q^{\prime}\). Clearly \(Q\) and \(Q^{\prime}\) are smooth at each point of \(S\) and \(\operatorname{Res}_{Q}(2S)=S\) and \(\operatorname{Res}_{Q^{\prime}}(2S\cap Q)=S\), hence the residual exact sequence of \(Q\) in \(\mathbb{P}^{3}\) and of \(C\) in \(Q\) gives:
\[0\xrightarrow{}\mathcal{I}_{S}(d-2)\xrightarrow{}\mathcal{I}_{2S}(d) \xrightarrow{}\mathcal{I}_{2S\cap Q,Q}(d)\xrightarrow{}0, \tag{6}\]
\[0\xrightarrow{}\mathcal{I}_{S,Q}(d-2)\xrightarrow{}\mathcal{I}_{2S\cap Q,Q}(d )\xrightarrow{}\mathcal{I}_{2S\cap C,C}(d)\xrightarrow{}0. \tag{7}\]
Since \(d\geq 5\), we have \(\#S=2d<4d-8=\deg(\mathcal{O}_{C}(d-2))\). Thus \(h^{1}(\mathcal{I}_{S,C}(d-2))=0\). Since \(C\) is arithmetically Cohen-Macaulay, we have \(h^{1}(\mathcal{I}_{S}(d-2))=0\), and hence \(h^{1}(\mathcal{I}_{S,Q}(d-2))=0\). Using (7) and (6), we get \(h^{1}(\mathcal{I}_{2S}(d))=1\) and \(h^{0}(\mathcal{I}_{2S}(d))\geq 1\).
Take now \(S^{\prime}\subsetneq S\). Since \(\deg(2S^{\prime}\cap C)<4d\), we have \(h^{1}(\mathcal{I}_{2S^{\prime}\cap C,C}(d))=0\). Moreover \(h^{1}(Q,\mathcal{I}_{S^{\prime},Q}(d-2))=0\), by Remark 2.5. Hence, using again (7) and (6) (with \(S^{\prime}\) instead of \(S\)), we get \(h^{1}(\mathcal{I}_{2S^{\prime}}(d))=0\).
Thus \(S\in\mathbb{T}(3,d;2d)^{\prime}\).
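As a quick check in the smallest case (arithmetic only): for \(d=5\) we have \(\#S=10<12=\deg(\mathcal{O}_{C}(3))\), and for every \(S^{\prime}\subsetneq S\)
\[\deg(2S^{\prime}\cap C)\leq 18<20=\deg(\mathcal{O}_{C}(5)),\]
so both vanishings used above hold.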
From the previous example we can deduce the following remark.
**Remark 6.3**.: Fix an integer \(x<2d\). Let \(E\subset\mathbb{P}^{3}\) be an integral complete intersection of two quadric surfaces and let \(S\subset E\) be a collection of \(x\) points; then \(h^{1}(\mathcal{I}_{2S}(d))=0\).
The following technical lemma generalizes the previous remark to reducible quartic curves satisfying further suitable conditions.
**Lemma 6.4**.: _Fix \(d\geq 5\). Let \(T\subset\mathbb{P}^{3}\) be a reduced curve with \(\deg(T)\leq 4\) and such that any irreducible component of \(T\) is a line or a conic or a rational normal cubic. Assume also that no plane contains a subcurve of \(T\) of degree \(\geq 3\). Let \(S\subset T\) be a collection of points such that \(\#(S)\leq 2d-1\) and_
* \(\#(S\cap L)\leq\lceil d/2\rceil\) _for any line_ \(L\subseteq T\);
* \(\#(S\cap C)\leq d\) _for any conic_ \(C\subseteq T\);
* \(\#(S\cap D)\leq(3d+1)/2\) _for any rational normal cubic_ \(D\subseteq T\)._
_Let \(Z\subset T\) be a zero-dimensional scheme such that \(Z_{\operatorname{red}}=S\), any connected component of \(Z\) has degree \(\leq 2\), \(Z\) is contained in an integral quadric surface and \(Z\) is not contained in any reducible quadric. Then \(h^{1}(\mathcal{I}_{Z}(d))=0\)._
Proof.: Since \(h^{1}(\mathcal{I}_{T}(t))=0\) for all \(t\geq 5\), it is sufficient to prove that \(h^{1}(\mathcal{I}_{Z,T}(d))=0\). We already analyzed all cases with \(\deg(T)\leq 3\) and \(T\) connected. Thus we may assume that \(T\) is connected and \(\deg(T)=4\).
Consider a good ordering \(T_{1},\ldots,T_{s}\) of the irreducible components of \(T\) and set \(Y=T_{1}\cup\cdots\cup T_{s-1}\). The components \(T_{1}\) and \(T_{s}\) are final components, and for every final component \(T_{i}\) of \(T\) there is a good ordering with \(T_{i}\) as its first component. Thus, changing if necessary the good ordering, we may assume \(\deg(T_{1})\geq\deg(T_{s})\).
Thus \(\deg(T_{s})\leq 2\) and \(\deg(T_{s})=2\) if and only if \(s=2\) and \(\deg(T_{1})=2\). This case is excluded, because \(T\) would be contained in a reducible quadric.
Hence \(\deg(T_{1})\geq\deg(T_{s})=1\). Set \(E:=T_{s}\cap Y\) (scheme-theoretic intersection). Since \(T\) contains no plane subcurves of degree \(\geq 3\), we can assume, up to choosing a good ordering, that \(\deg(T_{s}\cap Y)\leq 2\). Set \(e:=\#(S\cap E)\) and \(z:=\deg(Z)\leq 2(\#S)\). Note that \(\#S=\#(S\cap T_{s})+\#(S\cap Y)-e\). We have the following Mayer-Vietoris type sequence on \(T\)
\[0\rightarrow{\mathcal{I}}_{Z,T}(d)\rightarrow{\mathcal{I}}_{Z\cap T_{s},T_{s} }(d)\oplus{\mathcal{I}}_{Z\cap Y,Y}(d)\rightarrow{\mathcal{I}}_{Z\cap E,E}(d) \to 0. \tag{8}\]
(a) Assume \(\#(S\cap T_{s})\leq\lceil d/2\rceil-1\). Thus \(h^{1}({\mathcal{I}}_{E\cup(Z\cap T_{s}),T_{s}}(d))=0\), since \(\deg(E\cup(Z\cap T_{s}))\leq 2+2(\lceil d/2\rceil-1)\). Then the restriction map \(H^{0}({\mathcal{I}}_{Z\cap T_{s},T_{s}}(d))\longrightarrow H^{0}({\mathcal{I }}_{Z\cap E,E}(d))\) is surjective. Thus the exact sequence (8) gives \(h^{1}({\mathcal{I}}_{Z,T}(d))=0\) and we conclude.
(b) Assume \(\#(S\cap T_{s})=\lceil d/2\rceil\). If \(S\cap Y\cap T_{s}=\emptyset\), then we have \({\mathcal{I}}_{Z\cap E,E}(d)={\mathcal{O}}_{E}(d)\) and we conclude as in step (a). Thus from now on we assume \(S\cap T_{s}\cap Y\neq\emptyset\). Let \(M\) be a plane containing \(T_{s}\) such that \(\deg(Z\cap M)\) is maximal.
(b1) Assume that \(M\) contains another irreducible component, \(T_{i}\), of \(T\). Since \(T\) contains no planar subcurve of degree \(\geq 3\), \(\deg(T_{i})=1\) and \(T_{i}\) is unique in \(M\). Since \(T_{s}\cup T_{i}\) is a conic, \(\#(S\cap(T_{s}\cup T_{i}))\leq d\). The closure \(A\) of \(T\setminus(T_{s}\cup T_{i})\) is either a reduced conic or the union of \(2\) disjoint lines. The first case is excluded, because \(T\) is not contained in a reducible quadric. Now assume that \(A\) is the union of \(2\) disjoint lines, say \(A=L\cup R\). The lines \(L\) and \(R\) are final components of \(T\). By step (a) we may assume \(\#(S\cap L)=\#(S\cap R)=\lceil d/2\rceil\). Thus \(L\cap T_{s}=L\cap R=\emptyset\). Let \(Q\) be the unique quadric containing \(L\cup R\cup T_{s}\). Since \(L\cap R=\emptyset\), \(Q\) is a smooth quadric. Changing if necessary the names of the \(2\) rulings of \(Q\) we may assume \(L\cup R\cup T_{s}\in|{\mathcal{O}}_{Q}(3,0)|\). Since \(T_{i}\) meets each connected component of \(L\cup R\cup T_{s}\), Bezout's theorem gives \(T_{i}\subset Q\) and \(T_{i}\in|{\mathcal{O}}_{Q}(0,1)|\). Let \(Z^{\prime}\subset Q\) be the residual of \(Z\) with respect to the divisor \(L\cup R\cup T_{s}\). It is sufficient to prove that \(h^{1}(Q,{\mathcal{I}}_{Z^{\prime}}(d-3,d))=0\). Since \(T_{i}\cup T_{s}\) is a reducible conic, \(\#(S\cap T_{i}\cup T_{s})\leq d\) and hence \(\#(S\cap T_{i})\leq d-\lceil d/2\rceil\) with strict inequality if \(S\cap T_{i}\cap T_{s}\neq\emptyset\). Thus \(\deg(Z^{\prime})\leq 4d-2-6\lceil d/2\rceil\leq d-2\) and hence \(h^{1}({\mathcal{I}}_{Z^{\prime},Q}(d-3,d))=0\), and we conclude that \(h^{1}({\mathcal{I}}_{Z,T}(d))=0\).
(b2) Assume that \(T_{s}\) is the unique connected component of \(T\) contained in \(M\). Thus \(\deg((Y\cap(M\setminus T_{s}))\leq 3\). Hence \(h^{1}({\mathcal{I}}_{Z\cap M}(d))=0\). By the residual exact sequence of \(M\) it is sufficient to prove that \(h^{1}({\mathcal{I}}_{{\rm Res}_{M}(Z)}(d-1))=0\). Assume by contradiction that \(h^{1}({\mathcal{I}}_{{\rm Res}_{M}(Z)}(d-1))>0\). Since \(\deg(M\cap Z)>\deg(Z\cap T_{s})\), we have \(\deg({\rm Res}_{M}(Z))\leq 4d-2-2\lceil d/2\rceil-1\leq 3(d-1)\). Since \(T\) contains no plane curve of degree \(\geq 3\), Proposition 6.1 gives that either there is a line \(L_{1}\) such that \(\deg(L_{1}\cap{\rm Res}_{M}(Z))\geq d+1\) or there is a conic \(D_{1}\) such that \(\deg(D_{1}\cap{\rm Res}_{M}(Z))\geq 2d\).
(b2.1) Assume first the existence of the line \(L_{1}\). Since \(\#(S\cap L_{1})\leq\lceil d/2\rceil\), we get \(d\) odd and \(\deg(Z\cap L_{1})=d+1\). Since \(\#(S\cap J)\leq d\) for all conics \(J\subset T\) and \(d\) is odd, \(L_{1}\cap T_{s}=\emptyset\). Let \(A_{1}\) denote the closure of \(T\setminus(L_{1}\cup T_{s})\). Either \(A_{1}\) is a reduced conic or it is the union of \(2\) disjoint lines. We have \(\#(S\cap(T\setminus(T_{s}\cup L_{1})))\leq d-2\). There is an integral quadric \(Q\) containing \(T_{s}\cup L_{1}\) and at least one point of \(S\cap(T\setminus T_{s}\cup R_{1}))\) for each component of \(A_{1}\). Thus \(h^{1}({\mathcal{I}}_{{\rm Res}_{Q}(Z)}(d-2))=0\). Thus it is sufficient to prove that \(h^{1}({\mathcal{I}}_{Z\cap Q,Q}(d))=0\). Since \(L_{1}\cap T_{s}=\emptyset\), \(Q\) is a smooth quadric. We get \(h^{1}({\mathcal{I}}_{Z\cap Q,Q}(d))=0\), unless \(Q\) contains another irreducible component of \(T\). First assume \(A_{1}\subset Q\). Since \(Q\) is a smooth quadric, we get (for a suitable choice
of the 2 rulings of \(Q\)) that either \(T\in|\mathcal{O}_{Q}(4,0)|\) (excluded, because \(T\) is reduced and connected) or \(T\in|\mathcal{O}_{Q}(3,1)|\) or \(T\in|\mathcal{O}_{Q}(2,2)|\), which are also excluded. Now assume that \(Q\) only contains one component, \(R\), of \(A_{1}\). Write \(A_{1}=R\cup R_{2}\) and \(A_{2}:=L_{1}\cup T_{s}\cup R\). Either \(A_{2}\in|\mathcal{O}_{Q}(3,0)|\) or \(A_{2}\in|\mathcal{O}_{Q}(2,1)|\). In both cases we get \(h^{1}(Q,\mathcal{I}_{Z\cap A_{2},Q}(d,d))=0\). To conclude the proof we need to consider \(R_{2}\cap Z\cap Q\). We have \(\deg(R_{2}\cap Z\cap Q)\leq 4\) and hence \(h^{1}(Q,\mathcal{I}_{Z\cap Q,Q}(d,d))=0\).
(b2.2) Assume the existence of the conic \(D_{1}\). Since \(\#(S\cap D_{1})\leq d\), we get \(\#(S\cap D_{1})=d\) and hence \(\deg(Z\cap D_{1})=2d\). By step (b1) we may assume that if \(D_{1}\) is reducible, then none of its component contains \(\lceil d/2\rceil\) points of \(S\). We get \(T=D_{1}\cup R\cup T_{s}\) with \(R\) a line and \(\#(S\cap(T\setminus D_{1}\cup T_{s}))\leq d-1-\lceil d/2\rceil\). If \(R\) is a final component of \(T\), then we use step (a) and that \(\#(R\cap S)<\lceil d/2\rceil\). Now assume that \(R\) is not a final component of \(T\). Assume for the moment \(T_{s}\cap D_{1}\neq\emptyset\). Since \(T\) contains no degree 3 planar subcurve, \(D_{1}\cup T_{s}\) is a reducible rational normal curve and we may find a quadric \(Q_{1}\) containing \(D_{1}\cup T_{s}\), but not \(R\). To conclude in this case we need \(\deg(\operatorname{Res}_{Q_{1}}(Z))\leq d-1\). We have \(\#(S\cap R)\leq 2d-1-d-\lceil d/2\rceil\), and we can conclude. Now assume \(D_{1}\cap T_{s}=\emptyset\). Since \(T\) is connected, \(R\) meet \(T_{s}\) and \(D_{1}\) at a different point. In this case \(T\) is contained in the reducible quadric \(\langle R\cup T_{s}\rangle\cup\langle D_{1}\rangle\), a contradiction.
We now give the proof of Theorem 1.5, which states that \(\mathbb{T}(3,d;x)^{\prime}\) is empty if \(1+\lceil 3d/2\rceil<x<2d\).
Proof of Theorem 1.5:.: Assume the existence of \(S\in\mathbb{T}(3,d;x)^{\prime}\) and fix a critical scheme \(Z\) of \(S\). Set \(z:=\deg(Z)\leq 4d-2\).
Set \(Z_{0}=Z\). For any \(i>0\), let \(Q_{i}\) be a quadric surface such that \(z_{i}:=\deg(Z_{i-1}\cap Q_{i})\) is maximal and set \(Z_{i}:=\operatorname{Res}_{Q_{i}}(Z_{i-1})\). The sequence \(\{z_{i}\}_{i\geq 1}\) is weakly decreasing. Let \(e\) be the maximal \(i\) such that \(z_{i}\neq 0\). Then \(z=z_{1}+\cdots+z_{e}\) and \(Z_{e}=\emptyset\). Since \(h^{0}(\mathcal{O}_{\mathbb{P}^{3}}(2))=10\), \(z_{i}\geq 9\) for all \(i<e\), hence we have \(e\leq(4d+6)/9\), for \(z\leq 4d-2\). By Lemma 2.12, since \(Z\) is critical and \(S\in\mathbb{T}(3,d;x)^{\prime}\), we have \(h^{1}(\mathcal{I}_{Z_{e-1}}(d-2e+2))>0\).
(I) Assume first \(e\geq 2\), i.e. \(Z\) is not contained in any quadric surface. Since \(h^{1}(\mathcal{I}_{Z_{e-1}}(d-2e+2))>0\), then Proposition 6.1 implies that either \(z_{e}\geq 3(d-2e+2)+1\) or there is a line \(L\) such that \(\deg(Z_{e-1}\cap L)\geq d-2e+4\) or there is a plane conic \(D\) such that \(\deg(Z_{e-1}\cap D)\geq 2d-4e+6\).
(I.a) First assume \(z_{e}\geq 3(d-2e+2)+1\). Since the sequence \(z_{i}\) is weakly decreasing, we get \(z\geq e(3d-6e+6)\). It is easy to check that \(e(3d-6e+6)>4d-2\) for any \(d\geq 13\) and \(2\leq e\leq(4d+6)/9\). This contradicts our hypothesis.
(I.b) Now assume the existence of a plane conic \(D\) such that \(\deg(Z_{e-1}\cap D)\geq 2d-4e+6\). Since \(h^{0}(\mathcal{I}_{D}(2))=5\), we get \(z_{i}\geq(2d-4e+6)+4\) for all \(i<e\). Thus \(z\geq e(2d-4e+10)-4\). It is easy to check that \(e(2d-4e+10)-4>4d-2\) for any \(2\leq e\leq(4d+6)/9\), and this gives again a contradiction.
(I.c) Finally assume the existence of a line \(L\) such that \(\deg(Z_{e-1}\cap L)\geq d-2e+4\). Since \(h^{0}(\mathcal{I}_{L}(2))=7\), we have \(z_{i}\geq(d-2e+4)+6\) for all \(i<e\). Hence \(z\geq e(d-2e+10)-6\). It is easy to check that \(e(d-2e+10)-6>4d-2\) for any \(4\leq e\leq(4d+6)/9\). Hence we get \(e\in\{2,3\}\).
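The elementary inequalities invoked in steps (I.a)–(I.c) can be spot-checked numerically. The following Python snippet is only an illustration and not part of the proof; the tested range of \(d\) is an arbitrary choice, and \(e\) runs over the integers in the admissible intervals.

```python
# Numerical spot check of the inequalities used in steps (I.a)-(I.c).
for d in range(13, 501):
    e_max = (4 * d + 6) // 9
    for e in range(2, e_max + 1):
        assert e * (3 * d - 6 * e + 6) > 4 * d - 2        # step (I.a)
        assert e * (2 * d - 4 * e + 10) - 4 > 4 * d - 2   # step (I.b)
        if e >= 4:
            assert e * (d - 2 * e + 10) - 6 > 4 * d - 2   # step (I.c)
print("all inequalities hold for 13 <= d <= 500")
```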
Let \(H\) be a general plane containing \(L\). Since each connected component of \(Z\) has degree \(\leq 2\), we may assume \(Z\cap L=Z\cap H\).
(I.c1) First assume \(e=3\). Since \(z_{1}\geq z_{2}\geq z_{3}\geq d-2\) and \(z_{1}+z_{2}\geq\lceil 2z/3\rceil\), we have \(\deg(\operatorname{Res}_{Q_{1}\cup Q_{2}\cup H}(Z))\leq z-\lceil 2z/3\rceil-(d-2)= \lfloor z/3\rfloor-d+2<d-3=(d-5)+2\)
since \(d\geq 7\). Since \(S\) is minimally Terracini, we get \(Z\subset Q_{1}\cup Q_{2}\cup L\). Since \(e>2\) and \(H\) is contained in a quadric surface, \(Z\nsubseteq Q_{1}\cup H\). Since \(S\) is minimally Terracini, \(h^{1}(\mathcal{I}_{\operatorname{Res}_{Q_{1}\cup H}(Z)}(d-3))>0\). We have: \(\deg(\operatorname{Res}_{Q_{1}\cup H}(Z))\leq(z-z_{1})-(d-2)\leq z-\lceil z/3 \rceil-d+2\leq\frac{5d+2}{3}\leq 2(d-3)+1\), for \(d\geq 17\). Hence there is a line \(R\) such that \(\deg(R\cap\operatorname{Res}_{Q_{1}\cup H}(Z))\geq d-1\). Taking a general plane containing \(R\) and taking again the residual, we get \(Z\subset Q_{1}\cup L\cup R\). But since \(h^{0}(\mathcal{I}_{R\cup L}(2))>0\) and \(e\leq 2\), we have a contradiction.
(I.c2) Now assume \(e=2\) and hence \(z_{1}\geq\lceil z/2\rceil\). We have \(\deg(\operatorname{Res}_{H}(Z))\leq z-d\) and \(h^{1}(\mathcal{I}_{\operatorname{Res}_{H}(Z)}(d-1))>0\).
First assume \(\langle\operatorname{Res}_{H}(Z)\rangle=\mathbb{P}^{3}\). Since \(z-d\leq 3(d-1)+1\), Proposition 6.1 implies that either there is a plane cubic \(T_{3}\) with \(T_{3}\cap\operatorname{Res}_{H}(Z)\) the complete intersection of \(T_{3}\) and a degree \(d-1\) plane curve or there is a conic \(T_{2}\) such that \(\deg(T_{2}\cap\operatorname{Res}_{H}(Z))\geq 2d\) or there is a line \(T_{1}\) such that \(\deg(T_{1}\cap\operatorname{Res}_{H}(Z))\geq d+1\).
First assume the existence of \(T_{3}\). Since \(\deg(\operatorname{Res}_{H\cup\langle T_{3}\rangle}(Z))\leq 1\), by minimality of \(S\) we get \(Z\subset H\cup\langle T_{3}\rangle\), contradicting the assumption \(e>1\).
Assume the existence of \(T_{2}\). Since \(\deg(\operatorname{Res}_{H\cup\langle T_{2}\rangle}(Z))\leq z-3d\leq d-1\), we get \(Z\subset H\cup\langle T_{2}\rangle\), again a contradiction.
Now assume the existence of \(T_{1}\). Take a general quadric \(U\in|\mathcal{I}_{L\cup T_{1}}(2)|\). Since \(\deg(\operatorname{Res}_{U}(Z))\leq z-2d-1\leq 2(d-2)+1\), by Remark 2.7 there is a line \(R_{1}\) such that \(\deg(R_{1}\cap\operatorname{Res}_{U}(Z))\geq d\). Take a general \(U^{\prime}\in|\mathcal{I}_{L\cup T_{1}\cup R_{1}}(2)|\). Since \(\deg(\operatorname{Res}_{U^{\prime}}(Z))\leq z-3d-1<d\) and \(S\) is minimally Terracini, \(Z\subset U^{\prime}\), contradicting the assumption \(e>1\).
Now assume \(\dim\langle\operatorname{Res}_{H}(Z)\rangle\leq 2\). The only new case is if \(\deg(\operatorname{Res}_{H}(Z))=3d-2\) and \(\operatorname{Res}_{H}(Z)\) is contained in a plane cubic \(C\). Since \(\deg(\operatorname{Res}_{\langle C\rangle}(Z))\leq d\), \(S\) is not minimally Terracini.
(II). Assume now \(e=1\), that is \(Z\) is contained in a quadric \(Q\).
If \(Q\) is reducible we argue as in step (b) of the proof of Theorem 1.4 and we get a contradiction. So we can assume that \(Z\) is not contained in any reducible quadric. In particular \(Q\) is irreducible and reduced.
Set \(W_{0}:=Z\). Take \(D_{1}\in|\mathcal{O}_{Q}(2)|\) such that \(w_{1}=\deg(W_{0}\cap D_{1})\) is maximal and set \(W_{1}:=\operatorname{Res}_{D_{1}}(W_{0})\). For \(i\geq 2\), we iterate the construction: choose divisors \(D_{i}\in|\mathcal{O}_{Q}(2)|\) such that \(w_{i}:=\deg(W_{i-1}\cap D_{i})\) is maximal and set \(W_{i}:=\operatorname{Res}_{D_{i}}(W_{i-1})\). The sequence \(\{w_{i}\}_{i\geq 1}\) is weakly decreasing. Let \(c\geq 1\) be the maximal \(i\) such that \(w_{i}\neq 0\), i.e. \(W_{c}=\emptyset\) and \(z=w_{1}+\ldots+w_{c}\).
By Lemma 2.12, since \(Z\) is critical for \(S\) minimal, we have \(h^{1}(\mathcal{I}_{W_{c-1}}(d-2c+2))>0\). Since \(\dim|\mathcal{O}_{Q}(2)|=8\), if \(w_{i}\leq 7\), then \(w_{i+1}=0\) and \(W_{i+1}=\emptyset\). Thus \(w_{i}\geq 8\) for \(1\leq i<c\), hence we get \(c\leq\frac{4d+5}{8}\), since \(z\leq 4d-2\).
(II.a) If \(c=1\), then we have \(Z\subset D_{1}=Q\cap Q^{\prime}\) where \(Q^{\prime}\) is an integral quadric. Hence \(D_{1}\) is a complete intersection of two quadrics. If \(D_{1}\) is integral, then by Remark 6.3 we have \(h^{1}(\mathcal{I}_{Z}(d))=0\), a contradiction. If \(D_{1}\) is reducible we have again a contradiction by Lemma 6.4 and by the minimality of \(S\).
(II.b) Now we assume \(c=\lceil d/2\rceil\). Hence either \(d\) is even and \(h^{1}(\mathcal{I}_{W_{c-1}}(2))>0\), or \(d\) is odd and \(h^{1}(\mathcal{I}_{W_{c-1}}(1))>0\).
First assume \(d\) odd and \(c=\lceil d/2\rceil\). Then we have \(8(\lceil d/2\rceil-1)+\deg(W_{c-1})\leq 4d-2\), hence \(\deg(W_{c-1})\leq 2\), which is a contradiction. Now assume \(d\) even and \(c=d/2\). Since \(8(d/2-1)+\deg(W_{c-1})\leq 4d-2\), we get \(\deg(W_{c-1})\leq 6\). Thus either there is a line \(L\) such that \(\deg(W_{c-1}\cap L)\geq 4\) or \(\deg(W_{c-1})=6\) and \(W_{c-1}\) is contained in a conic \(D\).
First assume the existence of the line \(L\) such that \(\deg((W_{c-1})\cap L)\geq 4\). Bezout's theorem implies \(L\subset Q\). Since \(h^{0}(\mathcal{I}_{L,Q}(2))=6\), the maximality of the integer \(w_{c-1}\) implies \(w_{c-1}\geq w_{c}+5\geq 9\). Thus \(4d-2\geq(d/2-1)9+4\), a contradiction, since \(d\geq 7\).
Now assume \(\deg(W_{c-1})=6\) and that \(W_{c-1}\) is contained in a conic \(D\). If \(D\) is reducible, we may assume that no irreducible component \(J\) of \(D\) satisfies \(\deg(J\cap W_{c-1})\geq 4\). With these assumptions Bezout's theorem implies \(D\subset Q\). Since \(h^{0}(\mathcal{I}_{D,Q}(2))=4\), the maximality of the integer \(w_{c-1}\) gives \(w_{c-1}\geq w_{c}+3=9\), which leads again to a contradiction.
(II.c) Now we may assume \(2\leq c<d/2\).
Assume for the moment \(w_{c}\geq 3(d-2c+2)\). Since the sequence \(\{w_{i}\}\) is weakly decreasing, \(4d-2\geq z\geq 3c(d-2c+2)\). Since \(c<d/2\), we get \(c=1\), a contradiction.
Now assume \(w_{c}<3(d-2c+2)\). By applying Proposition 6.1 we know that either there is a conic \(D\) such that \(\deg(D\cap W_{c-1})\geq 2(d-2c+2)+2=2d-4c+6\), or there is a line \(L\) such that \(\deg(L\cap W_{c-1})\geq d-2c+4\).
(II.c1) In the first case, since \(h^{0}(\mathcal{I}_{D,Q}(2))=4\), we have \(w_{i}\geq(2d-4c+6)+3\) for all \(i<c\). Hence \(z\geq c(2d-4c+9)-3\). Since \(z\leq 4d-2\), we have again a contradiction.
(II.c2) Assume now the existence of \(L\). Since \(h^{0}(\mathcal{I}_{L,Q}(2))=6\), we get \(w_{i}\geq(d-2c+4)+5\) for all \(i<c\). Thus \(z\geq c(d-2c+9)-5\). It is easy to check that \(2\leq c\leq 3\), hence \(\deg(L\cap Z)\geq d-2\).
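The analogous bounds used in steps (II.b), (II.c), (II.c1) and (II.c2) can be spot-checked in the same way; again this is only an illustration, with an arbitrary range of \(d\).

```python
# Numerical spot check of the bounds used in part (II).
for d in range(7, 501):
    if d % 2 == 0:                                        # step (II.b): d even, c = d/2
        assert 9 * (d // 2 - 1) + 4 > 4 * d - 2
    for c in range(2, (d - 1) // 2 + 1):                  # integers with 2 <= c < d/2
        assert 3 * c * (d - 2 * c + 2) > 4 * d - 2        # step (II.c)
        assert c * (2 * d - 4 * c + 9) - 3 > 4 * d - 2    # step (II.c1)
        if c >= 4:
            assert c * (d - 2 * c + 9) - 5 > 4 * d - 2    # step (II.c2)
print("all inequalities hold for 7 <= d <= 500")
```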
Take a quadric \(U\in|\mathcal{O}_{Q}(2)|\) containing \(L\) and such that \(\deg(Z\cap U)\) is maximal. Since \(h^{0}(\mathcal{I}_{L,Q}(2))=6\), we have \(\deg(Z\cap U)\geq(d-2)+5=d+3\). Thus \(\deg(\mathrm{Res}_{U}(Z))\leq 4d-2-d-3=3(d-2)+1\). By Proposition 6.1 either there is a plane cubic \(E\) such that \(\deg(E\cap\mathrm{Res}_{U}(Z))\geq 3(d-2)\) or there is a conic \(F\) such that \(\deg(\mathrm{Res}_{U}(Z)\cap F)\geq 2d-2\) or there is a line \(R\) such that \(\deg(\mathrm{Res}_{U}(Z)\cap R)\geq d\). In all cases (since \(d\geq 5\)) Bezout's theorem implies that \(R\), \(F\) and \(E\) are contained in \(Q\) (or at least all the components supporting \(Z\)). Since \(Q\) is an integral quadric, we exclude the plane cubic \(E\).
(II.c2.1) Assume the existence of a conic \(F\). Even if \(Q\) is not assumed to be smooth, \(F\) is a plane section of \(Q\) and \(F\cup L\) is a reducible rational normal curve.
Thus \(Z\nsubseteq F\cup L\).
Since \(\mathcal{I}_{F\cup L}(2)\) is globally generated, a general \(Q^{\prime}\in|\mathcal{I}_{F\cup L}(2)|\) has \(Q^{\prime}\cap Z=(F\cup L)\cap Z\) and hence \(\mathrm{Res}_{Q^{\prime}}(Z)\neq\emptyset\). Since \(h^{1}(\mathcal{I}_{\mathrm{Res}_{Q^{\prime}}(Z)}(d-2))>0\) and \(\deg(\mathrm{Res}_{Q^{\prime}}(Z))\leq 4d-2-3d+4\) and \(d\geq 7\), there is a line \(R^{\prime}\) such that \(\deg(\mathrm{Res}_{Q^{\prime}}(Z)\cap R^{\prime})\geq d\). Since \(\mathcal{I}_{F\cup L\cup R^{\prime}}(t)\) is globally generated for, say, \(t=4\), we get \(Z\subset F\cup L\cup R^{\prime}\). Hence we conclude by Lemma 6.4.
(II.c2.2) Assume finally the existence of the line \(R\). Since each connected component of \(Z\) has degree \(\leq 2\) and no line contains \(d-2\) points of \(S\), \(R\neq L\).
(II.c2.2.1) First assume \(R\cap L\neq\emptyset\). Thus \(H:=\langle R\cup L\rangle\) is a plane. Since \(\deg(\mathrm{Res}_{H}(Z))\leq 4d-2-2d+2\) and \(h^{1}(\mathcal{I}_{\mathrm{Res}_{H}(Z)}(d-1))>0\), either \(\deg(\mathrm{Res}_{H}(Z))=2d\) and \(\mathrm{Res}_{H}(Z)\) is contained in a conic \(F_{1}\) or there is a line \(R_{1}\) such that \(\deg(R_{1}\cap\mathrm{Res}_{H}(Z))\geq d+1\). In the first case we get \(Z\subset L\cup R\cup F_{1}\) and we conclude by Lemma 6.4.
(II.c2.2.2) Now assume \(R\cap L=\emptyset\). Take a general \(Q_{1}\in|\mathcal{I}_{R\cup L}(2)|\). Thus \(Q_{1}\cap Z=(R\cup L)\cap Z\). We get \(h^{1}(\mathcal{I}_{\mathrm{Res}_{Q_{1}}(Z)}(d-2))>0\) with \(\deg(\mathrm{Res}_{Q_{1}}(Z))\leq 2d\). We get that either there is a conic \(F_{2}\) with \(\deg(F_{2}\cap\mathrm{Res}_{Q_{1}}(Z))\geq 2d-2\) or a line \(R_{2}\) such that \(\deg(R_{2}\cap\mathrm{Res}_{Q_{1}}(Z))\geq d\). If \(F_{2}\) exist, we get \(Z\subset R\cup L\cup F_{2}\) and we
use Lemma 6.4. If \(R_{2}\) exists, we take a general \(U_{1}\in|\mathcal{I}_{R\cup L\cup R_{2}}(3)|\) and get that \(Z\) is contained in the union of \(4\) lines. Hence we conclude again by Lemma 6.4.
|
2305.11253 | Dipole Screening in Pure Shear Strain Protocols of Amorphous Solids | When amorphous solids are subjected to simple or pure strain, they exhibit
elastic increase in stress, punctuated by plastic events that become denser (in
strain) upon increasing the system size. It is customary to assume in
theoretical models that the stress released in each plastic event is
redistributed according to the linear Eshelby kernel, causing avalanches of
additional stress release. Here we demonstrate that contrary to the uniform
affine strain resulting from simple or pure strain, each plastic event is
associated with a non-uniform strain that gives rise to a displacement field
that contains quadrupolar and dipolar charges that typically screen the linear
elastic phenomenology and introduce anomalous length-scales and influence the
form of the stress redistribution. An important question that opens up is how
to take this into account in elasto-plastic models of shear induced phenomena
like shear-banding. | Chandana Mondal, Michael Moshe, Itamar Procaccia, Saikat Roy | 2023-05-18T18:37:10Z | http://arxiv.org/abs/2305.11253v1 | # Dipole Screening in Pure Shear Strain Protocols of Amorphous Solids
###### Abstract
When amorphous solids are subjected to simple or pure strain, they exhibit elastic increase in stress, punctuated by plastic events that become denser (in strain) upon increasing the system size. It is customary to assume in theoretical models that the stress released in each plastic event is redistributed according to the linear Eshelby kernel, causing avalanches of additional stress release. Here we demonstrate that contrary to the uniform affine strain resulting from simple or pure strain, each plastic event is associated with a non-uniform strain that gives rise to a displacement field that contains quadrupolar and dipolar charges that typically screen the linear elastic phenomenology and introduce anomalous length-scales and influence the form of the stress redistribution. An important question that opens up is how to take this into account in elasto-plastic models of shear induced phenomena like shear-banding.
**Introduction:** Amorphous solids, including a host of substances, from metallic and silica glasses to gels and powders, pose exciting theoretical challenges in understanding their mechanical properties and failure modes [1; 2]. Contrary to perfect elastic media, amorphous solids experience plastic events in response to any amount of external stress [3; 4]. For large external shear strain, accumulation of plastic responses can lead to mechanical failure of amorphous solids through shear-banding and the appearance of cracks [5; 6].
The phenomenon of shear banding is a limiting factor for the usefulness of amorphous solids in applications, and as such it has attracted an enormous amount of attention, especially in the context of failure under pure or simple shear. Both simulations and experiments abound, leading to an active development of models which are collectively known as 'elastoplastic' models [7; 8; 9]. While the available models differ in detail, elastoplastic models handle the material as a collection of 'mesoscopic' blocks alternating between elastic behavior and plastic relaxation when they are loaded above a threshold. Plastic relaxation events redistribute stresses in the system; the lost stress is distributed between all the other cells, such that the amount of stress that each cell receives is determined by the 'Eshelby kernel', a function that was computed by Eshelby in the 1950's for a quadrupolar strain perturbation in a perfectly elastic medium [10]. This protocol can induce avalanches of 'plastic events' and at a certain global strain the avalanche causes a shear band.
Even before the onset of shear banding, plastic responses can not only renormalize the elastic properties of the system, but can also induce a qualitative deviation from an elastic response. This casts doubt on the relevance of Eshelby's kernel as solved within linear elasticity theory. In fact, we have recently developed a geometric model of mechanical screening via quadrupole and dipole elastic charges, which predicted new phenomenology within linear response that was later fully observed in experimental and numerical systems [11; 12; 13; 14; 15]. In this theory the response to a local perturbation is screened by various geometric multipoles.
It therefore behooves us to examine the role of screening before the onset of shear banding, an issue which appears fundamental to elastoplastic models in general. If dipole screening is non-existent at small strains, then the common protocol of using the classical Eshelby kernel is justified. If, however, dipole screening exists at small strains, it suggests that a modified version of the classical Eshelby kernel should be developed. The aim of this Letter is to test the screening mode prior to shear banding. We provide theoretical and simulational evidence below that in fact every plastic event creates quadrupolar and dipolar effective charges in the displacement field that follows the event. We demonstrate these issues in the context of pure shear strain of a generic model of amorphous solids, but elastoplastic modeling of simple strain will suffer from very similar issues.
**Simulations**: To demonstrate the issues we chose as our example frictional granular matter, to be as close as possible to realizable experiments. Our simulations employed amorphous granular assemblies of 16000 disks, half of which have radius \(R_{1}=0.35\) and the other half radius \(R_{2}=0.49\). The details of the contact forces and the protocols for creating an equilibrated configuration at any desired pressure \(P_{0}\) are standard, and are presented in the appendix.
Having mechanically stable configurations at different pressure values \(P_{0}\), with box dimensions \(Lx_{0}\) and \(Ly_{0}\) along the x and y directions respectively, we apply volume-preserving pure shear on the samples, involving the following steps: (i) we reduce the box length along \(x\) by \(0.00002\%\) and expand it along \(y\) such that the volume of the system remains constant at \(Lx_{0}\times Ly_{0}\); (ii) we run a constant NVE simulation, until the force and torque on each and every particle are smaller than \(10^{-7}\) in reduced units. We repeat these two steps 2000 times for all
the pressures. We measure the instantaneous pressure \(P\) and the accumulated _affine_ strain
\[u_{\rm aff}\equiv\frac{1}{2}\big{(}\frac{Lx_{0}-Lx}{Lx_{0}}+\frac{Ly-Ly_{0}}{Ly_{ 0}}\big{)}\, \tag{1}\]
where Lx and Ly are the instantaneous box-lengths along x and y directions respectively. Typical shear stress vs. (affine) strain plots are shown in Fig. 1 for our lowest and highest initial pressures. As is usual in such simulations, we observe intervals of increase in stress when the strain increases, interrupted by sharp drops in stress due to plastic events. These are the events that we focus on next.
**Displacement fields associated with plasticity**: presently we focus on the displacement field that is triggered by the plastic drop. Denoting the positions of our \(N\) disks after and before the event as \(\mathbf{r}_{i}^{a}\) and \(\mathbf{r}_{i}^{b}\) respectively, we compute the displacement field as \(\mathbf{d}_{i}\equiv\mathbf{r}_{i}^{a}-\mathbf{r}_{i}^{b}\). Next we compute the total strain field as
\[u_{ij}=0.5(\nabla_{i}d_{j}+\nabla_{j}d_{i}) \tag{2}\]
The non-affine strain \(\mathbf{u}^{q}\) is obtained by subtracting the affine strain generated in the last step from the total strain of Eq. (2),
\[u_{11}^{q} \equiv u_{11}-\frac{1}{2}\big{(}\frac{Lx^{b}-Lx^{a}}{Lx^{b}}\big{)}\,\] \[u_{22}^{q} \equiv u_{22}-\frac{1}{2}\big{(}\frac{Ly^{a}-Ly^{b}}{Ly^{b}} \big{)}\,\] \[u_{12}^{q} \equiv u_{12}\,\quad u_{21}^{q}\equiv u_{21}. \tag{3}\]
where again 'a' and 'b' refer to after and before. Having the non-affine strain we decompose it into its trace and its traceless components (cf. Ref. [16] page 6):
\[\mathbf{u}^{q}=m\mathbf{I}+Q\mathbf{u}^{ts}\, \tag{4}\]
where \(\mathbf{I}\) is the identity tensor and \(\mathbf{u}^{ts}\) a traceless symmetric tensor. In the last equation \(m=0.5\,{\rm Tr}\,u_{q}\) and
\[Q^{2}=(u_{11}^{ts})^{2}+(u_{22}^{ts})^{2}. \tag{5}\]
The quadrupolar charge \(Q\) is obtained as the square root, and its orientation is computed from [16]:
\[\Theta=0.5\arctan((u_{12}^{ts})/(u_{11}^{ts})). \tag{6}\]
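As an illustration of the procedure of Eqs. (2)-(6), the following Python sketch computes the non-affine strain, the quadrupole magnitude \(Q\) and orientation \(\Theta\), and the dipole field \(\mathcal{P}^{\alpha}=\partial_{\beta}Q^{\alpha\beta}\) from a displacement field sampled on a regular grid. The grid spacing, the array layout, the per-step affine strains and the use of finite differences are assumptions of the sketch, not details of our actual analysis.

```python
import numpy as np

def strain_and_charges(dx, dy, h=1.0, aff_xx=0.0, aff_yy=0.0):
    """dx, dy: displacement components on an (Ny, Nx) grid indexed [y, x];
    h: grid spacing; aff_xx, aff_yy: affine strains of the last step."""
    ddx_dy, ddx_dx = np.gradient(dx, h)          # derivatives of d_x along y and x
    ddy_dy, ddy_dx = np.gradient(dy, h)          # derivatives of d_y along y and x
    u11 = ddx_dx - aff_xx                        # non-affine strain, Eqs. (2)-(3)
    u22 = ddy_dy - aff_yy
    u12 = 0.5 * (ddx_dy + ddy_dx)
    m = 0.5 * (u11 + u22)                        # isotropic part
    t11, t22 = u11 - m, u22 - m                  # traceless symmetric part
    Q = np.sqrt(t11 ** 2 + t22 ** 2)             # quadrupole magnitude, Eq. (5)
    Theta = 0.5 * np.arctan2(u12, t11)           # orientation, cf. Eq. (6)
    # dipole field P^a = d_b Q^{ab}: divergence of the traceless tensor
    Px = np.gradient(t11, h, axis=1) + np.gradient(u12, h, axis=0)
    Py = np.gradient(u12, h, axis=1) + np.gradient(t22, h, axis=0)
    return m, Q, Theta, Px, Py
```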
A typical map of the quadrupolar field computed in this fashion, with the arrows in the direction of the angle \(\Theta\), is shown in Fig. 2 for the low pressure exhibited in Fig. 1. The upper panel shows the map for the whole system and below it a zoom on the most active region. The map for the high pressure is similar, but with a difference in scale: the quadrupolar field is considerably more intense in the case of lower pressure. The arrows are pointing in the direction of the angle \(\Theta\); note that here there is no preferred angle with respect to the principal stress axis [5; 6].
Since the quadrupolar field is obviously non-uniform, we expect that its divergence would be quite important. Thus we swiftly proceed to compute the dipolar field \(\mathbf{\mathcal{P}}\), as the latter is expected to be crucial for the way stress is distributed as a result of the plastic event. The dipolar field is simply computed as \(\mathcal{P}^{\alpha}\equiv\partial_{\beta}Q^{\alpha\beta}\)[11; 12; 13; 14; 15]. In the upper panel of Fig. 3 we present the divergence of the quadrupolar field \(\mathbf{Q}\) that is shown in the lower panel of Fig. 2. At this point the important observation is that this field is not zero.
**Theoretical considerations**: examining the dipolar heat maps and the direction of the dipoles one gets the impression that this field is quite disordered, with arrows pointing in all directions. In fact, the theory presented in Refs. [11; 12; 13; 14; 15] predicts that the dipole field should be proportional to the displacement field, and the latter is indeed quite disordered. As a brief summary of the theory, we recall that classical elasticity in two dimensions can be derived from a Lagrangian by minimizing the energy \(F\),
\[F =\int\mathcal{L}\,\mathrm{d}x\mathrm{d}y-\oint t^{\beta}d_{\beta} \,\mathrm{d}S\,\] \[\mathcal{L} =\frac{1}{2}A^{\alpha\beta\gamma\delta}u_{\alpha\beta}u_{\gamma \delta}=\frac{1}{2}\sigma^{\alpha\beta}u_{\alpha\beta}\, \tag{7}\]
where \(\mathbf{A}\) is the usual elastic tensor, and \(\mathrm{d}S\) is the area element on the boundary. Minimizing the energy one derives the classical result \(\partial_{\alpha}\sigma^{\alpha\beta}=0.\) In Refs. [11; 12; 13; 14; 15] it was shown that in the presence of quadrupolar plastic response the elastic tensor is renormalized, yielding a new
Figure 1: Shear stress vs accumulated affine strain in pure shear. Shown are two initial pressures \(P_{0}=720\) (upper panel) and \(P_{0}=4.5\), our highest and lowest pressures. In both cases one sees elastic increase in stress punctuated by plastic events, that are denser and more violent when the pressure is smaller.
tensor \(\tilde{A}^{\alpha\beta\gamma\delta}\) and a renormalized stress field satisfying yet the same equation \(\partial_{\alpha}\tilde{\sigma}^{\alpha\beta}=0\). On the other hand, once there exist gradients of the quadrupolar field, generating dipoles, \(\mathcal{P}^{\alpha}\equiv\partial_{\beta}Q^{\alpha\beta}\), the appropriate Lagrangian takes into account the dipoles in the form
\[\mathcal{L}=\frac{1}{2}\tilde{\mathcal{A}}^{\mu\nu\rho\sigma}u_{ \mu\nu}u_{\rho\sigma}+\frac{1}{2}\Lambda_{\alpha\beta}\partial_{\mu}Q^{\mu \alpha}\partial_{\nu}Q^{\nu\beta}+\Gamma_{\alpha}^{\;\beta}\partial_{\mu}Q^{ \mu\alpha}d_{\beta}\, \tag{8}\]
where the tensors \(\mathbf{\Lambda}\) and \(\mathbf{\Gamma}\) are new coupling tensors that do not exist in classical elasticity theory. Minimizing the energy associated with this Lagrangian results in a new equation satisfied by the stress field,
\[\partial_{\alpha}\sigma^{\alpha\beta}=-\Gamma_{\alpha}^{\beta} \mathcal{P}^{\alpha}. \tag{9}\]
One should note that this equation breaks translational symmetry as explained in [11; 12; 13; 14; 15]. In isotropic homogeneous media the coupling tensors simplify, reading \(\Gamma_{\alpha}^{\beta}=\mu_{1}g_{\beta}^{\alpha}\), \(\Lambda^{\alpha\beta}=\mu_{2}g^{\alpha\beta}\), where \(\mathbf{g}\) is the Euclidean metric tensor, and \(\mu_{1},\mu_{2}\) are novel scalar moduli that do not exist in classical elasticity. Finally, and importantly for our purposes here, it was shown that the dipolar field satisfies the equation
\[\mathbf{\mathcal{P}}=-\kappa^{2}\mathbf{d}\, \tag{10}\]
where \(\kappa\) is an inverse scale that acts as a screening parameter. This is the reason that the dipole field appears as chaotic as the displacement field. To establish that the theory is relevant in the present context we test Eq. (10) in our simulations.
**Test of theory:** Equation (10) is an important constitutive relation that is predicted by the theory, but was never put to a direct test as we can do here. In the lower panel of Fig. 3 we show (minus) the displacement field
Figure 2: Heat map of the quadrupolar field for our system after a plastic event at a lower pressure \(P_{0}=4.5\). The darker region indicate high values of \(Q\) cf. Eq. (5), and light region low values. The arrows are in the direction of the angle \(\Theta\), cf. Eq. (6). In the upper panel we show the whole system and then a zoom into the most active region.
Figure 3: Upper panel: heat map of the dipole field \(\mathcal{P}^{\alpha}\equiv\partial_{\beta}Q^{\alpha\beta}\) for \(P_{0}=4.5\), in the window of the lower panel of Fig. 2. Lower panel: minus the displacement field in the same window. The arrows in both panels are in the local direction of the respective field.
from which the data of the upper panel of Fig. 3 was computed, following the recipe presented above. Indeed, to the eye it appears that the two fields are proportional to each other, as expected from the theory. To provide a quantitative test we can integrate Eq. (10) around any closed loop and test whether
\[\oint_{\partial\Omega}\mathbf{\mathcal{P}}(x,y)\cdot\mathbf{n}\,\mathrm{dl}=-\kappa^ {2}\oint_{\partial\Omega}\mathbf{d}(x,y)\cdot\mathbf{n}\,\mathrm{dl}\;, \tag{11}\]
where \(\mathbf{n}\) is the unit vector normal to the integration path, pointing outward. In the present case it is natural to choose square trajectories for the integrals, thus using the \(x\) component of the field for paths along \(y\) and the \(y\) components for paths along \(x\), with appropriate signs. We have chosen 20 central points on the grid that was used to digitize the displacement field, and for each such point we computed the two line integrals on squares of edge sizes 6-23. After taking the ratio of the two integrals in Eq. (11) we computed the square root and averaged \(\kappa\) over the twenty central points. One should point out that the protocol described in Eqs. (2)-(6), including the computation of the divergence of the quadrupolar field at the end, is not free of numerical noise (at each step). It is therefore quite remarkable that the resulting value of \(\kappa\) as shown in Fig. 4 is quite stable, \(\kappa\approx 0.68\pm 0.2\). A priori it is not even guaranteed that the ratio of the two integrals would be negative definite, resulting in a real value of \(\kappa\). We thus interpret the results of the calculation as a strong support for the constitutive relation Eq. (10).
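A minimal sketch of this test is given below. It assumes that the dipole field \((P_{x},P_{y})\) and the displacement field \((d_{x},d_{y})\) are already available on a square grid with unit spacing; the choice of central points and loop sizes mirrors the protocol described above.

```python
import numpy as np

def boundary_flux(fx, fy, i, j, r):
    """Discrete outward flux of (fx, fy) through the boundary of the square of
    half-edge r centred at grid point (i, j); arrays are indexed [y, x]."""
    top    =  fy[i + r, j - r:j + r + 1].sum()   # outward normal +y
    bottom = -fy[i - r, j - r:j + r + 1].sum()   # outward normal -y
    right  =  fx[i - r:i + r + 1, j + r].sum()   # outward normal +x
    left   = -fx[i - r:i + r + 1, j - r].sum()   # outward normal -x
    return top + bottom + right + left

def estimate_kappa(Px, Py, dx, dy, centers, radii):
    """Estimate kappa from Eq. (11): kappa^2 = -flux(P)/flux(d), averaged over
    the chosen central points and loop sizes (a sketch, not the full analysis)."""
    ratios = []
    for (i, j) in centers:
        for r in radii:
            num = boundary_flux(Px, Py, i, j, r)
            den = boundary_flux(dx, dy, i, j, r)
            if den != 0:
                ratios.append(-num / den)
    ratios = np.array(ratios)
    return np.sqrt(ratios[ratios > 0].mean())    # keep ratios with the predicted sign
```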
Having demonstrated that generic plastic drops induce a displacement field that typically exhibits effective dipoles, we must realize that the fundamental change in physics that is embodied in Eq. (9) requires reassessment of the redistribution of the stress that is lost in the plastic drop. It is no longer likely that the regular power law decay of the Eshelby kernel would properly describe this redistribution. It was amply demonstrated that the appearance of dipoles results in the introduction of a typical scale (which is actually of the order of \(\kappa^{-1}\)), and that it can even reverse the displacement field, which linear elasticity expects to decay monotonically. It is our proposition, on the basis of the analysis presented above, that the consequences of these results in the context of elastoplastic models should be carefully assessed.
In the future it would be important to seek similar clarification of the role of dipole charges also in three spatial dimensions. Contrary to the Hexatic [17] and the Kosterlitz-Thouless [18] phase transitions, which are relevant in two dimensions, the presence of dipoles as divergences of quadrupolar fields has been recently demonstrated in three dimensions [15]. The use of Eshelby kernels that were derived for purely elastic media must be reassessed.
**Appendix**
The contact forces, which include both normal and tangential components due to friction, are modeled according to the discrete element method developed by Cundall and Strack [19], combining a Hertzian normal force and a tangential Mindlin component. Full details of these forces and the equations of motion solved can be found in Refs. [20; 21; 22; 23]. Simulations are performed using the open source codes, LAMMPS [24] and LIGGGHTS [25] to properly keep track of both the normal and the history-dependent tangential force. Initially, the grains are placed randomly in a large two dimensional box while forbidding the existence of overlaps or contacts. The system is then isotropically compressed along \(x\) and \(y\) directions while integrating Newton's second law with total forces and (scalar) torques acting on particle \(i\) given by \(\mathbf{F}_{i}=\sum_{j}\mathbf{F}_{ij}^{(n)}+\mathbf{F}_{ij}^{(t)}\), and \(\tau_{i}=\sum_{j}\tau_{ij}\) with
\[\tau_{ij}\equiv-\frac{1}{2}\left(\mathbf{r}_{ij}\times\mathbf{F}_{ij}^{(t)}\right) \cdot\mathbf{e}_{z} \tag{12}\]
the torque exerted by \(j\) onto \(i\). Compression is performed using a series of steps which involve: (i) one MD step during which we reduce the box lengths along \(x\) and \(y\) directions by \(0.002\%\); (ii) a constant NVE run, until the force and torque on each and every particle are smaller than \(10^{-7}\) in reduced units. This guarantees that the cell remains square throughout the process. We repeat these compression and relaxation cycles until the system attains a jammed (mechanically balanced) configuration at the different final pressure, fixed to \(P_{0}=4.5,18,72.0,144,288,720\) (in reduced units) [23]. Of course, in the final _mechanically equilibrated states_ obtained at the end of compression the total forces and torques \(\mathbf{F}_{i}\) and \(\tau_{i}\) vanish with \(10^{-7}\) accuracy, as well as all the velocities.
**Acknowledgments**: This work has been supported in part by the joint grant between the Israel Science Foundation and the National Science Foundation of China, and by the Minerva Foundation, Munich, Germany.
Figure 4: The screening parameter \(\kappa\approx 0.68\pm 0.2\) computed by dividing the two integrals in Eq. (11) computed on square loops of different sizes and taking the square root. Results pertain to an average over 20 central grid points, error bars reflect statistical error. |
2307.00527 | Graph Neural Networks based Log Anomaly Detection and Explanation | Event logs are widely used to record the status of high-tech systems, making
log anomaly detection important for monitoring those systems. Most existing log
anomaly detection methods take a log event count matrix or log event sequences
as input, exploiting quantitative and/or sequential relationships between log
events to detect anomalies. Unfortunately, only considering quantitative or
sequential relationships may result in low detection accuracy. To alleviate
this problem, we propose a graph-based method for unsupervised log anomaly
detection, dubbed Logs2Graphs, which first converts event logs into attributed,
directed, and weighted graphs, and then leverages graph neural networks to
perform graph-level anomaly detection. Specifically, we introduce One-Class
Digraph Inception Convolutional Networks, abbreviated as OCDiGCN, a novel graph
neural network model for detecting graph-level anomalies in a collection of
attributed, directed, and weighted graphs. By coupling the graph representation
and anomaly detection steps, OCDiGCN can learn a representation that is
especially suited for anomaly detection, resulting in a high detection
accuracy. Importantly, for each identified anomaly, we additionally provide a
small subset of nodes that play a crucial role in OCDiGCN's prediction as
explanations, which can offer valuable cues for subsequent root cause
diagnosis. Experiments on five benchmark datasets show that Logs2Graphs
performs at least on par with state-of-the-art log anomaly detection methods on
simple datasets while largely outperforming state-of-the-art log anomaly
detection methods on complicated datasets. | Zhong Li, Jiayang Shi, Matthijs van Leeuwen | 2023-07-02T09:38:43Z | http://arxiv.org/abs/2307.00527v3 | # Graph Neural Network based Log Anomaly Detection and Explanation
###### Abstract.
Event logs are widely used to record the status of high-tech systems, making log anomaly detection important for monitoring those systems. Most existing log anomaly detection methods take a log event count matrix or log event sequences as input, exploiting quantitative and/or sequential relationships between log events to detect anomalies. Unfortunately, only considering quantitative or sequential relationships may result in many false positives and/or false negatives. To alleviate this problem, we propose a graph-based method for unsupervised log anomaly detection, dubbed _Log3Graphs_, which first converts event logs into attributed, directed, and weighted graphs, and then leverages graph neural networks to perform graph-level anomaly detection. Specifically, we introduce One-Class Digraph Inception Convolutional Networks, abbreviated as OCDiGCN, a novel graph neural network model for detecting graph-level anomalies in a collection of attributed, directed, and weighted graphs. By coupling the graph representation and anomaly detection steps, OCDiGCN can learn a representation that is especially suited for anomaly detection, resulting in a high detection accuracy. Importantly, for each identified anomaly, we additionally provide a small subset of nodes that play a crucial role in OCDiGCN's prediction as explanations, which can offer valuable cues for subsequent root cause diagnosis. Experiments on five benchmark datasets show that _Log3Graphs_ performs at least on par state-of-the-art log anomaly detection methods on simple datasets while largely outperforming state-of-the-art log anomaly detection methods on complicated datasets.
Log Analysis, Log Anomaly Detection, Graph Neural Networks
Most existing log anomaly detection methods focus exclusively on detection performance without giving any explanations.
To overcome these limitations, we propose _Logs2Graphs_, a graph-based unsupervised log anomaly detection approach by designing a novel one-class graph neural network. Specifically, _Logs2Graphs_ first utilises off-the-shelf methods to learn a semantic embedding for each log event, and then assigns log messages to different groups. Second, _Logs2Graphs_ converts each group of log messages into an attributed, directed, and weighted graph, with each node representing a log event, the node attributes containing its semantic embedding, a directed edge representing how an event is followed by another event, and the corresponding edge weight indicating the number of times the events follow each other. Third, by coupling the graph representation learning and anomaly detection objectives, we introduce One-Class Digraph Inception Convolutional Networks (OCDiGCN) as a novel method to detect anomalous graphs from a set of graphs. As a result, _Logs2Graphs_ leverages the rich and expressive power of attributed, directed and edge-weighted graphs to represent logs, followed by using graph neural networks to effectively detect graph-level anomalies, taking into account both semantic information of log events and structure information (including sequential information as a special case) among log events. Importantly, by decomposing the anomaly score of a graph into individual nodes and visualizing these nodes based on their contributions, we provide straightforward and understandable explanations for identified anomalies.
Overall, our contributions can be summarised as follows: (1) We introduce _Logs2Graphs_, which formalises log anomaly detection as a graph-level anomaly detection problem and represents log sequences as directed graphs to capture more structure information than previous approaches; (2) We introduce OCDiGCN, the first end-to-end unsupervised graph-level anomaly detection method for attributed, directed and edge-weighted graphs. By coupling the graph representation and anomaly detection objectives, we improve the potential for accurate anomaly detection over existing approaches; (3) For each detected anomaly, we identify important nodes as explanations, offering valuable cues for subsequent root cause diagnosis; (4) We empirically compare our approach to eight state-of-the-art log anomaly detection methods on five benchmark datasets, showing that _Logs2Graphs_ performs at least on par and often better than its competitors.
The remainder of this paper is organised as follows. Section 2 revisits related work, after which Section 3 formalises the problem. Section 4 describes Digraph Inception Convolutional Networks (Dosovitskiy et al., 2016), which are used for _Logs2Graphs_ in Section 5. We then evaluate _Logs2Graphs_ in Section 6 and conclude in Section 7.
## 2. Related Work
Graph-based log anomaly detection methods usually comprise five steps: log parsing, log grouping, graph construction, graph representation learning, and anomaly detection. In this paper we focus on graph representation learning, log anomaly detection and explanation, thus only revisiting related work in these fields.
### Graph Representation Learning
Graph-level representation learning methods, such as GIN (Shi et al., 2017) and Graph2Vec (Kipf and Welling, 2017), are able to learn a mapping from graphs to vectors. Further, graph kernel methods, including Weisfeiler-Lehman (WL) (Welker and Hinton, 2010) and Propagation Kernels (PK) (Kipf and Welling, 2017), can directly provide pairwise distances between graphs. Both types of methods can be combined with off-the-shelf anomaly detectors, such as OCSVM (Kipf and Welling, 2017) and iForest (Kipf and Welling, 2017), to perform graph-level anomaly detection.
To improve on these naive approaches, efforts have been made to develop graph representation learning methods especially for anomaly detection. For instance, OCGIN (Wang et al., 2017) and GLAM (Wang et al., 2017) combine the GIN (Shi et al., 2017) representation learning objective with the SVDD objective (Dosovitskiy et al., 2016) to perform graph-level representation learning and anomaly detection in an end-to-end manner. GLocalKD (Kipf and Welling, 2017) performs random distillation of graph and node representations to learn 'normal' graph patterns. Further, OCGTL (Shi et al., 2017) combines neural transformation learning and one-class classification to learn graph representations for anomaly detection. Although these methods are unsupervised or semi-supervised, they can only deal with attributed, undirected, and unweighted graphs.
iGAD (Wang et al., 2017) considers graph-level anomaly detection as a graph classification problem and combines attribute-aware graph convolution and substructure-aware deep random walks to learn graph representations. However, iGAD is a supervised method, and can only handle attributed, undirected, and unweighted graphs. CODEtect (Kipf and Welling, 2017) takes a pattern-based modelling approach using the minimum description length (MDL) principle and identifies anomalous graphs based on _motifs_. CODEtect can (only) deal with labelled, directed, and edge-weighted graphs, but is computationally very expensive. To our knowledge, we introduce the first unsupervised method for graph-level anomaly detection that can handle attributed, directed and edge-weighted graphs.
### Log Anomaly Detection and Explanation
Log anomaly detection methods can be roughly subdivided into three categories: 1) traditional,'shallow' methods, such as principal component analysis (PCA) (Shen et al., 2016), one-class SVM (OCSVM) (Kipf and Welling, 2017), isolation forest (iForest) (Kipf and Welling, 2017) and histogram-based outlier score (HOS) (Kipf and Welling, 2017), which take a log event count matrix as input and analyse quantitative relationships; 2) deep learning based methods, such as DeepLog (Chen et al., 2017), LogAnomaly (Kipf and Welling, 2017), and AutoEncoder (Chen et al., 2017), which employ sequences of log events (and sometimes their semantic embeddings) as input, analysing sequential information and possibly semantic information of log events to identify anomalies; and 3) graph-based methods, such as TCFG (Kipf and Welling, 2017) and GLAD-PAW (Wang et al., 2017), which first convert logs into graphs and then perform graph-level anomaly detection.
To our knowledge, only a few works (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) have capitalised on the powerful learning capabilities of graph neural networks for log anomaly detection. GLAD-PAW (Wang et al., 2017) first transforms logs into attributed and undirected graphs and then uses a Position Aware Weighted Graph Attention Network to identify anomalies. However, converting logs into undirected graphs may result in loss of important sequential information. Further, DeepTraLog (Wang et al., 2017) combines traces and logs to generate a so-called Trace Event Graph, which is an attributed and directed graph. On this basis, they train
a Gated Graph Neural Networks based Deep Support Vector Data Description model to identify anomalies. However, their approach requires the availability of both traces and logs, and is unable to handle edge weights. In contrast, like LogGD (Wang et al., 2017), our proposed _Logs2Graphs_ approach is applicable to generic logs by converting logs into attributed, directed, and edge-weighted graphs. However, LogGD is a supervised method that requires fully labelled training data, which is usually impractical and even impossible. In contrast, our proposed algorithm OCDiGCN is the first _unsupervised_ graph-level anomaly detection method for attributed, directed, and edge-weighted graphs.
Although anomaly explanation has received much attention in traditional anomaly detection (Kumar et al., 2017), only a few studies (Wang et al., 2017) considered log anomaly explanation. Specifically, PLELog (Wang et al., 2017) offers explanations by quantifying the significance of individual log events within an anomalous log sequence, thereby facilitating improved identification of relevant log events by operators. Similarly, our method provides straightforward explanations for anomalous log groups by identifying and visualising a small subset of important nodes.
## 3. Problem Statement
Before we state the log anomaly detection problem, we first introduce the necessary notations and definitions regarding event logs and graphs.
**Event logs**. _Logs_ are used to record system status and important events, and are usually collected and stored centrally as log files. A _log file_ typically consists of many _log messages_. Each _log message_ is composed of three components: a timestamp, an event type (_log event_ or _log template_), and additional information (_log parameter_). _Log parsers_ are used to extract log events from log messages.
Further, log messages can be grouped into _log groups_ (a.k.a. _log sequences_) using certain criteria. Specifically, if a _log identifier_ is available for each log message, one can group log messages based on such identifiers. Otherwise, one can use a _fixed_ or _sliding window_ to group log messages. The _window size_ can be determined according to timestamp or the number of observations. Besides, counting the occurrences of each log event within a log group results in a _event count vector_. Consequently, for a log file consisting of many log groups, one can obtain a _event count matrix_. The process of generating an _event count matrix_ (or other feature matrix) is known as _feature extraction_. Extracted features are often used as input to an anomaly detection algorithm to identify _log anomalies_, i.e., log messages or log groups that deviate from what is considered 'normal'.
**Graphs**. We consider an attributed, directed, and edge-weighted graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X},\mathbf{Y})\), where \(\mathcal{V}=\{v_{1},...,v_{|\mathcal{V}|}\}\) denotes the set of _nodes_ and \(\mathcal{E}=\{e_{1},...,e_{|\mathcal{E}|}\}\subseteq\mathcal{V}\times\mathcal{V}\) represents the set of edges between nodes. If \((v_{i},v_{j})\in\mathcal{E}\), then there is an edge from node \(v_{i}\) to node \(v_{j}\). Moreover, \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d}\) is the node attribute matrix, with the \(i\)-th row representing the attributes of node \(v_{i}\), and \(d\) is the number of attributes. Besides, \(\mathbf{Y}\in\mathbb{N}^{|\mathcal{V}|\times|\mathcal{V}|}\) is the edge-weight matrix, where \(\mathbf{Y}_{ij}\) represents the weight of the edge from node \(v_{i}\) to \(v_{j}\).
Equivalently, \(\mathcal{G}\) can be described as \((\mathbf{A},\mathbf{X},\mathbf{Y})\), with adjacency matrix \(\mathbf{A}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\), where \(\mathbf{A}_{ij}=\mathbb{I}[(v_{i},v_{j})\in\mathcal{E}]\) indicates whether there is an edge from node \(v_{i}\) to node \(v_{j}\), for \(i,j\in\{1,...,|\mathcal{V}|\}\).
### Graph-based Log Anomaly Detection
Given a set of log files, we let \(\mathcal{L}=\{L_{1},...,L_{|\mathcal{L}|}\}\) denote the set of unique log events. We divide the log messages into \(M\) log groups \(\mathbf{Q}=\{\mathbf{q}_{1},...,\mathbf{q}_{m},...,\mathbf{q}_{M}\}\), where \(\mathbf{q}_{m}=\{\mathbf{q}_{m1},...,\mathbf{q}_{mn},...,\mathbf{q}_{mN}\}\) is a log group and \(\mathbf{q}_{mn}\) a log message.
For each log group \(\mathbf{q}_{m}\), we construct an attributed, directed, and edge-weighted graph \(\mathcal{G}_{m}=(\mathcal{V}_{m},\mathcal{E}_{m},\mathbf{X}_{m},\mathbf{Y}_{m})\) to represent the log messages and their relationships. Specifically, each node \(v_{i}\in\mathcal{V}_{m}\) corresponds to exactly one log event \(L\in\mathcal{L}\) (and vice versa). Further, an edge \(e_{ij}\in\mathcal{E}_{m}\) indicates that log event \(i\) is at least once immediately followed by log event \(j\) in \(\mathbf{q}_{m}\). Attributes \(\mathbf{x}_{i}\in\mathbf{X}_{m}\) represent the semantic embedding of log event \(i\), and \(y_{ij}\in\mathbf{Y}_{m}\) is the weight of edge \(e_{ij}\), representing the number of times event \(i\) was immediately followed by event \(j\). In this manner, we construct a set of log graphs \(\{\mathcal{G}_{1},...,\mathcal{G}_{m},...,\mathcal{G}_{M}\}\).
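The construction just described can be sketched in a few lines of Python; the embedding function below is a placeholder for whichever semantic embedding method is used in practice.

```python
from collections import Counter

def log_group_to_graph(event_sequence, embed):
    """event_sequence: ordered list of log events of one group;
    embed: callable mapping a log event to its semantic embedding vector.
    Returns the nodes, their attributes, and the directed weighted edges."""
    nodes = sorted(set(event_sequence))
    attrs = {v: embed(v) for v in nodes}                      # node attribute = semantic embedding
    # count how often event i is immediately followed by event j
    transitions = Counter(zip(event_sequence, event_sequence[1:]))
    edges = {(i, j): w for (i, j), w in transitions.items()}  # directed, weighted edges
    return nodes, attrs, edges

# toy example with a dummy embedding (hypothetical event names)
seq = ["open", "read", "read", "close"]
nodes, attrs, edges = log_group_to_graph(seq, embed=lambda e: [float(len(e))])
print(edges)   # {('open', 'read'): 1, ('read', 'read'): 1, ('read', 'close'): 1}
```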
We can use these definitions to define graph-level anomaly detection:
**Problem 1 (Graph-based Log Anomaly Detection)**. _Given a set of attributed, directed, and weighted graphs that represent logs, find those graphs that are notably different from the majority of graphs._
What we mean by 'notably different' will have to be made more specific when we define our method, but we can already discuss what types of anomalies can potentially be detected. Most methods aim to detect two types of anomalies:
* A log group (namely a graph) is considered a _quantitative anomaly_ if the occurrence frequencies of some events in the group are higher or lower than expected from what is commonly observed. For example, if a file is opened (event \(A\)) twice, it should normally also be closed (event \(B\)) twice. In other words, the number of event occurrences \(\#A=\#B\) in a normal pattern and an anomaly is detected if \(\#A\neq\#B\).
* A log group (namely a graph) is considered to contain _sequential anomalies_ if the order of certain events violates the normal order pattern. For instance, a file can be closed only after it has been opened in a normal workflow. In other words, the order of event occurrences \(A\to B\) is considered normal while \(B\to A\) is considered anomalous.
An advantage of graph-based anomaly detection is that it can detect not only these two types of anomalies, but also anomalies reflected in the structure of the graphs. Moreover, no existing _unsupervised_ log anomaly detection approach represents event logs as attributed, directed, and weighted graphs, which allow for even higher expressiveness than undirected graphs (thus limiting the information loss resulting from the representation of the log files as graphs).
## 4. Preliminaries: Digraph Inception Convolutional Nets
To learn node representations for attributed, directed, and edge-weighted graphs, (Wang et al., 2017) proposed Digraph Inception Convolutional Networks (DiGCN).
Specifically, given a graph \(\mathcal{G}\) described by an adjacency matrix \(\mathbf{A}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\), a node attribute matrix \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d}\), and an edge-weight matrix \(\mathbf{Y}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\), DiGCN defines the \(k\)-th order digraph convolution as
\[\mathbf{Z}^{(k)}=\begin{cases}\mathbf{X}\Theta^{(0)}&k=0\\ \Psi\mathbf{X}\Theta^{(1)}&k=1\\ \Phi\mathbf{X}\Theta^{(k)}&k\geq 2,\end{cases} \tag{1}\]
where \(\Psi=\frac{1}{2}\left(\Pi^{(1)\frac{1}{2}}\mathbf{P}^{(1)}\Pi^{(1)-\frac{1}{2}}+\Pi^{(1)-\frac{1}{2}}\mathbf{P}^{(1)T}\Pi^{(1)\frac{1}{2}}\right)\) and \(\Phi=\mathbf{W}^{(k)-\frac{1}{2}}\mathbf{P}^{(k)}\mathbf{W}^{(k)-\frac{1}{2}}\). Particularly, \(\mathbf{Z}^{(k)}\in\mathcal{R}^{|\mathcal{V}|\times f}\) denotes the convolved output with output dimension \(f\), and \(\Theta^{(0)},\Theta^{(1)},...,\Theta^{(k)}\) represent the trainable parameter matrices.
Moreover, \(\mathbf{P}^{(k)}\) is the \(k\)-th order proximity matrix defined as
\[\mathbf{P}^{(k)}=\begin{cases}\mathbf{I}&k=0\\ \tilde{\mathbf{D}}^{-1}\tilde{\mathbf{A}}&k=1\\ Ins\left((\mathbf{P}^{(1)})^{(k-1)}(\mathbf{P}^{(1)T})^{(k-1)}\right)&k\geq 2,\end{cases} \tag{2}\]
where \(\mathbf{I}\in\mathcal{R}^{|\mathcal{V}|\times|\mathcal{V}|}\) is an identity matrix, \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\), and \(\tilde{\mathbf{D}}\) denotes the diagonal degree matrix with \(\tilde{\mathbf{D}}_{ii}=\sum_{j}\tilde{\mathbf{A}}_{ij}\). Besides, \(Ins\left((\mathbf{P}^{(1)})^{(k-1)}(\mathbf{P}^{(1)T})^{(k-1)}\right)\) is defined as
\[\frac{1}{2}Intersect\left((\mathbf{P}^{(1)})^{(k-1)}(\mathbf{P}^{(1)T})^{(k-1) },(\mathbf{P}^{(1)T})^{(k-1)}(\mathbf{P}^{(1)})^{(k-1)}\right) \tag{3}\]
with \(Intersect(\cdot)\) denoting the element-wise intersection of two matrices (see [33] for computation details). In addition, \(\mathbf{W}^{(k)}\) is the diagonalized weight matrix of \(\mathbf{P}^{(k)}\), and \(\Pi^{(1)}\) is the approximate diagonalized eigenvector of \(\mathbf{P}^{(1)}\). Particularly, the approximate diagonalized eigenvector is calculated based on personalised PageRank [2], with a parameter \(\alpha\) to control the degree of conversion from a digraph to an undirected graph. We omit the derivation to conserve space and refer to [33] for details.
After obtaining the multi-scale features \(\{\mathbf{Z}^{(0)},\mathbf{Z}^{(1)},...,\mathbf{Z}^{(k)}\}\), DiGCN defines an Inception block as
\[\mathbf{Z}=\sigma\left(\Gamma\left(\mathbf{Z}^{(0)},\mathbf{Z}^{(1)},..., \mathbf{Z}^{(k)}\right)\right), \tag{4}\]
where \(\sigma\) represents an activation function, and \(\Gamma(\cdot)\) denotes a fusion operation, which can be summation, normalisation, or concatenation. In practice, we often adopt a fusion operation that keeps the output dimension unchanged, namely \(\mathbf{Z}\in\mathcal{R}^{|\mathcal{V}|\times f}\). As a result, the \(i\)-th row of \(\mathbf{Z}\) (namely \(\mathbf{Z}_{i}\)) denotes the learned vector representation for node \(v_{i}\) in a certain layer.
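To make the preceding equations concrete, the following is a minimal dense-matrix sketch of one Inception block (Equations (1) and (4)); it assumes the normalised proximity matrices \(\Psi\) and \(\Phi^{(k)}\) have already been precomputed as described above, and it uses sum fusion with a ReLU activation.

```python
import torch

def digcn_inception_block(X, Psi, Phi_list, thetas):
    """One DiGCN Inception block (dense sketch, sum fusion).

    X: |V| x d node attributes; Psi: normalised first-order proximity matrix;
    Phi_list: normalised k-th order proximity matrices for k >= 2;
    thetas: trainable weight matrices Theta^(0), Theta^(1), ..., Theta^(k)."""
    Z = [X @ thetas[0], Psi @ X @ thetas[1]]         # k = 0 and k = 1 branches
    for Phi, theta in zip(Phi_list, thetas[2:]):     # k >= 2 branches
        Z.append(Phi @ X @ theta)
    return torch.relu(torch.stack(Z, dim=0).sum(dim=0))  # sum fusion keeps |V| x f shape
```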
## 5. Graph-Based Anomaly Detection for Event Logs
We propose _Logs2Graphs_, a graph-based log anomaly detection method tailored to event logs. The overall pipeline consists of the usual main steps, i.e., log parsing, log grouping, graph construction, graph representation learning, and anomaly detection, and is illustrated in Figure 1. Note that we couple the graph representation learning and anomaly detection steps to accomplish end-to-end learning once the graphs have been constructed.
First, after collecting logs from a system, the _log parsing_ step extracts log events and log parameters from raw log messages. Since log parsing is not the primary focus of this article, we use Drain [11] for this task. Drain is a log parsing technique based on a fixed-depth tree, and it has been shown to generally outperform its competitors [47]. We make the following assumptions on the log files:
* Log files are written in English;
* Each log message contains at least the following information: date, time, operation detail, and log identifier;
* The logs contain enough events to make the mined relationships (quantitative, sequential, structural) statistically meaningful, i.e., it must be possible to learn from the logs what the 'normal' behaviour of the system is.
Second, the _log grouping_ step uses the log identifiers to divide the parsed log messages into log groups. Third, for each resulting group of log messages, the _graph construction_ step builds an attributed, directed, and edge-weighted graph, as described in more detail in Subsection 5.1. Fourth and last, in an integrated step for _graph representation learning and anomaly detection_, we learn a One-Class Digraph Inception Convolutional Network (OCDiGCN) based on the obtained set of log graphs. The resulting model can be used for graph-level anomaly detection. This model couples the graph representation learning objective and the anomaly detection objective, and is thus trained in an end-to-end manner. The model, its training, and its use for graph-level anomaly detection are explained in detail in Subsection 5.2.
### Graph Construction
We next explain how to construct an attributed, directed, and edge-weighted graph given a group of parsed log messages, and illustrate this in Figure 2. The goal of the graph construction is to retain as much of the relevant information in the log data as possible.
First, we construct nodes to represent the different log events. That is, the number of nodes depends on the number of unique log events that occur in the log group. Second, starting from the first line of log messages in chronological order, we add a directed edge from log event \(L_{i}\) to \(L_{j}\) and set its edge-weight to \(1\) if the next event after \(L_{i}\) is \(L_{j}\). If the corresponding edge already exists, we increase its edge-weight by \(1\). In this manner, we obtain a labelled, directed, and edge-weighted graph.
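A minimal sketch of this step is given below; it assumes the log group has already been parsed into a chronological list of event ids, and the semantic node attributes described next are attached afterwards.

```python
from collections import defaultdict
import numpy as np

def build_log_graph(event_sequence):
    """Directed, edge-weighted graph from one log group.

    event_sequence: chronological list of log-event ids; returns the node list,
    a 2 x |E| edge index, and the per-edge succession counts (edge weights)."""
    nodes = sorted(set(event_sequence))
    idx = {e: i for i, e in enumerate(nodes)}
    weights = defaultdict(int)
    for src, dst in zip(event_sequence, event_sequence[1:]):
        weights[(idx[src], idx[dst])] += 1      # src immediately followed by dst
    edge_index = (np.array(list(weights), dtype=int).T
                  if weights else np.empty((2, 0), dtype=int))
    edge_weight = np.array(list(weights.values()), dtype=int)
    return nodes, edge_index, edge_weight
```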
However, using only the labels (e.g., _open_ or _write_) of log events for graph construction may lead to missing important information. That is, we can improve on this by explicitly taking the semantic information of log events into account, by which we mean that we should look at the text of the log event in entirety. Specifically, we generate a vector representation for each log event as follows:
1. _Preprocessing:_ for each log event, we first remove non-character words and stop words, and split compound words into separate words;
2. _Word embedding:_ we use Glove [25], a pre-trained word embedding model with \(200\) embedding dimensions to generate a vector representation for each English word in a log event;
3. _Sentence embedding:_ we generate a vector representation for each log event. Considering that the words in a sentence are usually not of equal importance, we use Term Frequency-Inverse Document frequency (TF-IDF) [27] to measure the importance of words. As a result, the weighted sum of word embedding vectors composes the vector representation of a log event.
By augmenting the nodes with the vector representations of the log events as attributes, we obtain an attributed, directed, and edge-weighted graph.
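For illustration, the semantic vector of a single log event template can be computed roughly as sketched below; the preprocessing and the IDF statistics over all templates are assumed to be available, and the exact TF normalisation used here is an assumption of this sketch.

```python
import numpy as np

def event_embedding(tokens, glove, idf, dim=200):
    """TF-IDF-weighted sum of GloVe word vectors for one preprocessed log event.

    tokens: words of the template after preprocessing; glove: word -> dim-d vector;
    idf: word -> inverse document frequency computed over all templates."""
    vec = np.zeros(dim)
    if not tokens:
        return vec
    for w in set(tokens):
        tf = tokens.count(w) / len(tokens)                     # term frequency
        vec += tf * idf.get(w, 0.0) * glove.get(w, np.zeros(dim))
    return vec
```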
### OCDiGCN: One-Class Digraph Inception Convolutional Nets
We next describe One-Class Digraph Inception Convolutional Networks, abbreviated as OCDiGCN, a novel method for end-to-end graph-level anomaly detection. We chose to build on Digraph Inception Convolutional Networks (DiGCN) (Shen et al., 2017) for their capability to handle directed graphs, which we argued previously is an advantage in graph-based log anomaly detection.
Considering that DiGCN was designed for node representation learning, we repurpose it for graph representation learning as follows:
\[\mathbf{z}=\text{Readout}(\mathbf{Z}_{i}\mid i\in\{1,2,...,|\mathcal{V}|\}). \tag{4}\]
That is, at the final iteration layer, we utilise a so-called Readout(\(\cdot\)) function to aggregate node vector representations to obtain a graph vector representation. Importantly, Readout(\(\cdot\)) can be a simple permutation-invariant function such as maximum, sum or mean, or a more advanced graph-level pooling function (Zhu et al., 2017).
Next, note that the original DiGCN work did not explicitly enable learning edge features (i.e., \(\mathbf{Y}\)). However, as DiGCN follows the Message Passing Neural Network (MPNN) framework (Dong et al., 2017), incorporating \(\mathbf{Y}\) into Equation (1) and conducting computations in Equations (2-4) analogously enables learning edge features.
Now, given a set of graphs \(\{\mathcal{G}_{1},...,\mathcal{G}_{m},...,\mathcal{G}_{M}\}\), we can use Equation (4) to obtain an explicit vector representation for each graph, respectively. We denote the vector presentation of \(\mathcal{G}_{m}\) learned by the DiGCN model as DiGCN(\(\mathcal{G}_{m}\);\(\mathcal{H}\)).
In graph anomaly detection, anomalies are typically identified based on a reconstruction or distance loss (Krizhevsky et al., 2014). In particular, the One-Class Deep SVDD objective (Wang et al., 2017) is commonly used for two reasons: it can be easily combined with other neural networks, and more importantly, it generally achieves a state-of-the-art performance (Wang et al., 2017). Therefore, to detect anomalies, we train a one-class classifier by optimising the following One-Class Deep SVDD objective:
\[\min_{\mathcal{H}}\frac{1}{M}\sum_{m=1}^{M}\lVert\text{DiGCN}(\mathcal{G}_{m} ;\mathcal{H})-\mathbf{o}\rVert_{2}^{2}+\frac{\lambda}{2}\sum_{l=1}^{L}\lVert \mathbf{H}^{(l)}\rVert_{\text{F}}^{2}, \tag{5}\]
where \(\mathbf{H}^{(l)}\) represents the trainable parameters of DiGCN at the \(l\)-th layer, namely \((\Theta^{(0)(l)},\Theta^{(1)(l)},...,\Theta^{(k)(l)})^{T},\mathcal{H}\) denotes \(\{\mathbf{H}^{(1)},...,\mathbf{H}^{(L)}\}\), \(\lambda>0\) represents the weight-decay hyperparameter, \(\lVert\cdot\rVert_{2}\) is the Euclidean norm, and \(\lVert\cdot\rVert_{F}\) denotes the Frobenius norm. Moreover, \(\mathbf{o}\) is the center of the hypersphere in the learned representation space. Ruff et al. (Ruff et al., 2017) found empirically that setting \(\mathbf{o}\) to the average of the
Figure 1. The Logs2Graphs pipeline. We use attributed, directed, and weighted graphs for representing the log files with high expressiveness, and integrate representation learning and anomaly detection for accurate anomaly detection. We use off-the-shelf methods for log parsing, log grouping, and graph construction.
Figure 2. The construction of an attributed, directed, and edge-weighted graph from a group of log messages.
network representations (i.e., graph representations in our case) obtained by performing an initial forward pass is a good strategy.
Ruff et al. (Ruff et al., 2017) also pointed out, however, that One-Class Deep SVDD classification may suffer from a hypersphere collapse, which will yield trivial solutions, namely mapping all graphs to a fixed center in the representation space. To avoid a hypersphere collapse, the hypersphere center \(\mathbf{o}\) is set to the average of the network representations, the bias terms in the neural networks are removed, and unbounded activation functions such as ReLU are preferred.
After training the model on a set of non-anomalous graphs (or with a very low proportion of anomalies), given a test graph \(\mathcal{G}_{m}\), we define its distance to the center in the representation space as its anomaly score, namely
\[score(\mathcal{G}_{m})=\|\mathrm{DiGCN}(\mathcal{G}_{m};\mathcal{H})-\mathbf{o }\|_{2}. \tag{6}\]
**Training and hyperparameters:** In summary, OCDiGCN is composed of an \(L\)-layer DiGCN architecture to learn node representations, plus a Readout(\(\cdot\)) function to obtain the graph representation. It is trained in an end-to-end manner by optimising the SVDD objective, using stochastic optimisation techniques such as Adam (Kingma and Ba, 2014). Overall, OCDiGCN takes a collection of non-anomalous graphs and a set of hyperparameters, which are outlined in Table 2, as inputs. The pseudo-code for Logs2Graphs is given in Algorithm 1.
```
Input: Training dataset \(D_{tr}\), testing dataset \(D_{ts}\), model \(\theta\)
Output: Predicted labels and explanations for \(D_{ts}\)
1: Parse \(D_{tr}\) and \(D_{ts}\) using Drain [11] \(\rightarrow\) Obtain parsed datasets \(\hat{D}_{tr}\) and \(\hat{D}_{ts}\)
2: Group \(\hat{D}_{tr}\) and \(\hat{D}_{ts}\) based on the log identifier \(\rightarrow\) Obtain grouped datasets \(\tilde{D}_{tr}\) and \(\tilde{D}_{ts}\)
3: Construct graphs from \(\tilde{D}_{tr}\) and \(\tilde{D}_{ts}\) \(\rightarrow\) Obtain graph sets \(\mathbf{Q}_{tr}\) and \(\mathbf{Q}_{ts}\)
4: Train the OCDiGCN model using Equation (5) with \(\mathbf{Q}_{tr}\) \(\rightarrow\) Obtain trained model \(\hat{\theta}\)
5: Use \(\hat{\theta}\) to predict anomalies in \(\mathbf{Q}_{ts}\) \(\rightarrow\) Obtain the set of detected anomalies \(\{Q_{1},...,Q_{n}\}\)
6: Generate explanations for each \(Q_{i}\in\{Q_{1},...,Q_{n}\}\)
```
**Algorithm 1** Pseudo-code of Logs2Graphs
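A compact PyTorch sketch of steps 4 and 5 of Algorithm 1 is shown below; `model` is assumed to map a log graph to its readout representation of Equation (4), the center \(\mathbf{o}\) is fixed by an initial forward pass, and the weight-decay term of Equation (5) is delegated to the optimiser.

```python
import torch

def fit_center(model, graphs):
    # Hypersphere center o: average graph representation from an initial forward pass.
    with torch.no_grad():
        return torch.stack([model(g) for g in graphs]).mean(dim=0)

def train_ocdigcn(model, train_graphs, epochs=100, lr=0.01, weight_decay=1e-4):
    opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=weight_decay)
    center = fit_center(model, train_graphs)
    for _ in range(epochs):
        opt.zero_grad()
        # One-Class Deep SVDD objective: mean squared distance to the fixed center.
        loss = torch.stack([(model(g) - center).pow(2).sum()
                            for g in train_graphs]).mean()
        loss.backward()
        opt.step()
    return center

def anomaly_score(model, graph, center):
    # Distance to the center in representation space, cf. Equation (6).
    with torch.no_grad():
        return torch.norm(model(graph) - center).item()
```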
### Anomaly Explanation
Our anomaly explanation method can be regarded as a decomposition method (Kingmaa and Ba, 2014), that is, we build a score decomposition rule to distribute the prediction anomaly score to the input space. Concretely, a graph \(\mathcal{G}_{m}\) is identified as anomalous if and only if its graph-level representation has a large distance to the hyper-sphere center (Equation 6). Further, the graph-level representation is obtained via a Readout(\(\cdot\)) function applied on the node-level representations (Equation 4). Therefore, if the Readout(\(\cdot\)) function is attributable (such as sum or mean), we can easily obtain a small subset of important nodes (in the penultimate layer) whose node embeddings contribute the most to the distance. Specifically, the importance score of node \(v_{j}\) (in the penultimate layer) in a graph \(\mathcal{G}_{m}\) is defined as:
\[\frac{|score(\mathcal{G}_{m})-score(\mathcal{G}_{m}/\{\mathbf{Z}_{j}\})|}{score (\mathcal{G}_{m})} \tag{7}\]
where \(score(\mathcal{G}_{m})\) is defined in Equation 6 and \(score(\mathcal{G}_{m}/\{\mathbf{Z}_{j}\})\) is the anomaly score obtained by removing the embedding vector of \(v_{j}\) (namely \(\mathbf{Z}_{j}\)) when applying the Readout function to obtain the graph-level representation.
Next, for each important node in the penultimate layer, we extend the LRP (Layerwise Relevance Propagation) algorithm (Bogorst and Welling, 2014) to obtain a small set of important nodes in the input layer (this is not a contribution of our paper; we simply follow the practice in (Bogorst and Welling, 2014; Lees and Vanhoucke, 2015)). If some of these nodes are connected by edges, the resulting subgraphs can provide more meaningful explanations. As the LRP method generates explanations utilizing the hidden features and model weights directly, its explanation outcomes are deemed reliable and trustworthy (Kingmaa and Ba, 2014).
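Assuming a mean readout, the score decomposition of Equation 7 amounts to the leave-one-node-out computation sketched below, where `Z` holds the penultimate-layer node embeddings of one detected anomalous graph.

```python
import torch

def node_importance(Z, center):
    """Per-node importance: relative change of the anomaly score when a node's
    embedding is left out of the (mean) readout, cf. Equation 7."""
    full = torch.norm(Z.mean(dim=0) - center)
    scores = []
    for j in range(Z.size(0)):
        reduced = torch.cat([Z[:j], Z[j + 1:]]).mean(dim=0)
        scores.append(torch.abs(full - torch.norm(reduced - center)) / full)
    return torch.stack(scores)
```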
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Name & \#Events & \#Graphs & \#Anomalies & \#Nodes & \#Edges \\ \hline HDFS & 48 & 575,061 & 16,838 & 7 & 20 \\ Hadoop & 683 & 978 & 811 & 34 & 120 \\ BGL & 1848 & 69,251 & 31,374 & 10 & 30 \\ Spirit & 834 & 10,155 & 4,432 & 6 & 24 \\ Thunderbird & 1013 & 52,160 & 6,814 & 16 & 52 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Summary of datasets. #Events refers to the number of log event templates obtained using the log parser Drain [11]. #Graphs denotes the number of generated graphs. #Anomalies represents the number of anomalous graphs. #Nodes denotes the average number of nodes in the generated graphs. #Edges indicates the average number of edges in the generated graphs.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Symbol** & **Meaning** & **Range** \\ \hline _Bs_ & Batch size & \{16, 32, 64, **128**, 256, 512, 1024, 1536, 2048, 2560\} \\ \hline _Op_ & optimisation method & Adam, **SGD** \\ \hline \(L\) & number of layers & \{\(1,2,3,4,5\)\} \\ \hline \(\lambda\) & weight decay parameter & \{**0.0001**, 0.001, 0.01, 0.1\} \\ \hline \(\eta\) & learning rate & \{0.0001, 0.001, **0.01**\} \\ \hline \(k\) & proximity parameter & \{\(1,2\)\} \\ \hline \(\alpha\) & teleport probability & \{0.05, **0.1**, 0.2\} \\ \hline \(\Gamma\) & fusion operation if \(k\geq 2\) & sum, concatenation \\ \hline _Re_ & readout function & **mean**, sum, max \\ \hline \(d\) & embedding dimension & \{32, 64, **128**, 256, 300\} \\ \hline _Ep_ & Epochs for training & range(100,1000,50) \\ \hline \hline \end{tabular}
\end{table}
Table 2. Description of hyperparameters involved in OCDiGCN. Range indicates the values that we have tested, and boldfaced values represent the values suggested to use in experiments. Particularly, for the embedding dimensions: 300 is suggested for BGL and 128 for others. For the batch sizes: 32 is suggested for HDFS and 128 for others. For the training epochs: 100 for BGL and Thunderbird, 200 for HDFS, 300 for Hadoop and 500 for Spirit are suggested.
## 6. Experiments
We perform extensive experiments to answer the following questions:
1. **Detection accuracy:** How effective is _Logs2Graphs_ at identifying log anomalies when compared to state-of-the-art methods?
2. **Directed vs. undirected graphs:** Is the directed log graph representation better than the undirected version for detecting log anomalies?
3. **Node Labels vs. Node Attributes**: How important is it to use semantic embedding of log event template as node attributes?
4. **Robustness analysis:** To what extent is Logs2Graphs robust to contamination in training data?
5. **Ability to detect structural anomalies:** Can Logs2Graphs better capture structural anomalies and identify structurally equivalent normal instances than other contenders?
6. **Explainability Analysis:** How understandable are the anomaly detection results delivered by Logs2Graphs?
7. **Sensitivity analysis:** How do the values of the hyperparameters influence the detection accuracy?
8. **Runtime analysis:** What are the runtimes of the different methods?
### Experiment Setup
#### 6.1.1. Datasets
The five datasets that we use, summarised in Table 1, were chosen for three reasons: 1) they are commonly used for the evaluation of log anomaly detection methods; 2) they contain ground truth labels that can be used to calculate evaluation metrics; and 3) they include log identifiers that can be used for partitioning log messages into groups. For each group of log messages in a dataset, we label the group as anomalous if it contains at least one anomaly. More details are given as follows:
* HDFS (Han et al., 2017) consists of Hadoop Distributed File System logs obtained by running 200 Amazon EC2 nodes. These logs contain _block_id_, which can be used to group log events into different groups. Moreover, these logs are manually labeled by Hadoop experts.
* Hadoop (Han et al., 2017) was collected from a Hadoop cluster consisting of 46 cores over 5 machines. The _ContainerID_ variable is used to divide log messages into different groups.
* BGL, Spirit and Thunderbird contain system logs collected from the BlueGene/L (BGL) supercomputing system, Spirit supercomputing system, and Thunderbird supercomputing system located at Sandia National Labs, respectively. For those datasets, each log message was manually inspected by engineers and labelled as normal or anomalous. For BGL, we use all log messages, and group log messages based on the _Node_ variable. For Spirit and Thunderbird, we only use the first 1 million and first 5 million log messages for evaluation, respectively. Furthermore, for these two datasets, the _User_ is used as log identifier to group log messages. However, considering that an ordinary user may generate hundreds of thousands of logs, we regard every 100 consecutive logs of each user as a group. If the number of logs is less than 100, we also consider it as a group.
#### 6.1.2. Baselines
To investigate the performance of _Logs2Graphs_, we compare it with the following seven state-of-the-art log anomaly detection methods: Principal Component Analysis (PCA) (Han et al., 2017), OneClass SVM (OCSVM) (Krishnan et al., 2016), Isolation Forest (iForest) (Han et al., 2017), HBOS (Han et al., 2017), DeepLog (Chen et al., 2017), LogAnomaly (Han et al., 2017) and AutoEncoder (Chen et al., 2017), and one state-of-the-art graph level anomaly detection method: GLAM (Zhou et al., 2017).
We choose these methods as baselines because they are often regarded as representative of traditional machine learning-based (PCA, OCSVM, iForest, HBOS) and deep learning-based approaches (DeepLog, LogAnomaly and AutoEncoder), respectively. All methods are unsupervised or semi-supervised and do not require labeled anomalous samples for training the models.
#### 6.1.3. Evaluation Metrics
The Area Under Receiver Operating Characteristics Curve (AUC ROC) and the Area Under the Precision-Recall Curve (AUC PRC) are widely used to quantify the detection accuracy of anomaly detection. Therefore, we employ both to evaluate and compare the different log anomaly detection methods. AUC PRC is also known as Average Precision (AP). For both AUC ROC and AUC PRC, values closer to 1 indicate better performance.
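Both metrics can be computed directly from the ground-truth graph labels and the predicted anomaly scores, for instance with scikit-learn:

```python
from sklearn.metrics import average_precision_score, roc_auc_score

def evaluate(y_true, scores):
    # y_true: 1 for anomalous log graphs, 0 for normal; scores: higher = more anomalous.
    return {"AUC ROC": roc_auc_score(y_true, scores),
            "AP": average_precision_score(y_true, scores)}
```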
### Model Implementation and Configuration
Traditional machine learning based approaches--such as PCA, OCSVM, iForest, and HBOS--usually first transform logs into log event count vectors, and then apply traditional anomaly detection techniques to identify anomalies. For these methods, we utilise their open-source implementations provided in PyOD (Zhou et al., 2017). Meanwhile, for the deep learning methods DeepLog, LogAnomaly, and AutoEncoder, we use their open-source implementations in Deep-Loglizer (Chen et al., 2017). For these competing methods, we use their default hyperparameter values.
For all deep learning based methods, the experimental design adopted in this study follows a train/validation/test strategy with a distribution of 70% : 5% : 25% for normal instances. Specifically, the model was trained using 70% of normal instances, while 5% of normal instances and an equal number of abnormal instances were employed for validation (i.e., hyperparameter tuning). The remaining 25% of normal instances and the remaining abnormal instances were used for testing. Specifically, Table 2 summarises the hyperparameters involved in OCDiGCN as well as their recommended values.
We implemented and ran all algorithms in Python 3.8 (using PyTorch (Paszasz et al., 2017) and PyTorch Geometric (Chen et al., 2017) libraries when applicable), on a computer with Apple M1 chip 8-core CPU and 16GB unified memory. For reproducibility, all code and datasets will be released on GitHub.
### Comparison to the state of the art
We first compare _Logs2Graphs_ to the state of the art. The results are shown in Table 3, based on which we make the following main observations:
* In terms of AUC ROC, _Logs2Graphs_ achieves the best performance against its competitors on four out of five datasets. Particularly, _Logs2Graphs_ outperforms the closest competitor on BGL by 9.6% and delivers remarkable results (i.e., an AUC ROC larger than 0.99) on Spirit and Thunderbird. Similar observations can be made for Average Precision.
* Deep learning based methods generally outperform the traditional machine learning based methods. One possible reason is that traditional machine learning based methods only leverage log event count vectors as input, which makes them unable to capture and exploit sequential relationships between log events and the semantics of the log templates.
* The performance of (non-graph-based) deep learning methods is often inferior to that of _Logs2Graphs_ on the more complex datasets, i.e., Hadoop, BGL, Spirit, and Thunderbird, which all contain hundreds or even thousands of log templates. This suggests that LSTM-based models may not be well suited for logs with a large number of log templates. One possible reason is that the test dataset contains many unprecedented log templates, namely log templates that are not present in the training dataset.
* In terms of ROC AUC score, all methods except for OCSVM and AutoEncoder achieve impressive results (with \(RC>0.91\)) on HDFS. One possible reason is that HDFS is a relatively simple log dataset that contains only 48 log templates. Concerning AP, PCA and LSTM-based DeepLog achieve impressive results (with \(AP>0.89\)) on HDFS. Meanwhile, _Logs2Graphs_ obtains a competitive performance (with \(AP=0.87\)) on HDFS.
### Directed vs. undirected graphs
To investigate the practical added value of using _directed_ log graphs as opposed to _undirected_ log graphs, we convert the logs to attributed, undirected, and edge-weighted graphs, and apply GLAM (Wang et al., 2019), a graph-level anomaly detection method for undirected graphs. We use the same graph construction method as for _Logs2Graphs_, except that we use undirected edges. Similar to our method, GLAM also couples the graph representation learning and anomaly detection objectives by optimising a single SVDD objective. The key difference with OCDiGCN is that GLAM leverages GIN (Wang et al., 2019), which can only tackle undirected graphs, while OCDiGCN utilises DiGCN (Wang et al., 2019) that is especially designed for directed graphs.
The results in Table 3 indicate that GLAM's detection performance is comparable to that of most competitors. However, it consistently underperforms on all datasets, except for Hadoop, when compared to _Logs2Graphs_. Given that the directed vs undirected representation of the log graphs is the key difference between the methods, a plausible explanation is that directed graphs have the capability to retain the temporal sequencing of log events, whereas undirected graphs lack this ability. Consequently, GLAM may encounter difficulties in detecting sequential anomalies and is outperformed by _Logs2Graphs_.
### Node Labels vs. Node Attributes
To investigate the importance of using semantic embedding of log event template as node attributes, we replace the node semantic attributes with one-hot-encoding of node labels (i.e., using an integer to represent a log event template). The performance comparisons in terms of ROC AUC for Logs2Graphs are depicted in Figure 3, which shows that using semantic embedding is always superior to using node labels. Particularly, it can lead to a substantial performance improvement on Hadoop, Spirit and HDFS datasets. The PR AUC results show a similar behaviour and thus are omitted.
### Robustness to Contamination
To investigate the robustness of Logs2Graphs when the training dataset is contaminated, we report its performance in terms of ROC AUC under a wide range of contamination levels. Figure 4 shows that the performance of Logs2Graphs decreases with an increase of contamination in the training data. The PR AUC results show a similar behaviour and thus are omitted. Hence, it is important to ensure that the training data contains only normal graphs (or with a very low proportion of anomalies).
### Ability to Detect Structural Anomalies and Recognise Unseen Normal Instances
To showcase the effectiveness of different neural networks in detecting structural anomalies, we synthetically generate normal and anomalous directed graphs as shown in Figure 5. As DeepLog, LogAnomaly and AutoEncoder require log sequences as inputs, we convert directed graphs into sequences by sequentially listing the endpoint pair of each edge. Moreover, for GLAM we convert directed graphs into undirected graphs by turning each directed edge into an undirected edge.
Figure 4. ROC results of Logs2Graphs w.r.t. a wide range of contamination levels. Results are averaged over 10 runs. Particularly, HDFS contains only 3% anomalies and thus results at 5% and 10% are not available.
Figure 3. The comparative performance analysis of Logs2Graphs, measured by ROC AUC, demonstrating the distinction between utilizing node semantic attributes and node labels.
Moreover, to investigate their capability of recognising unseen but structurally equivalent normal instances, we generate the following normal log sequences based on the synthetic normal graph as training data: \(A\to B\to C\to D\to A\) (1000), \(B\to C\to D\to A\to B\) (1000) and \(C\to D\to A\to B\) (1000), and the following as test dataset: \(D\to A\to B\to C\to D\) (1000).
Specifically, the results in Table 4 indicate that Logs2Graphs, DeepLog and LogAnomaly can effectively detect structural anomalies, while AutoEncoder and GLAM fail in some cases. However, log-sequence-based methods, namely DeepLog, LogAnomaly and AutoEncoder, can lead to high false positive rates due to their inability to recognise unseen but structurally equivalent normal instances.
### Anomaly Explanation
Particularly, Figure 6 provides an example of log anomaly explanation with the HDFS dataset. For each detected anomalous log graph (namely a group of logs), we first quantify the importance of nodes according to the description in Section 5.3. Next, we visualise the anomalous graph by assigning darker shade of red to more important nodes. In this example, the node "WriteBlock(WithException)" contributes the most to the anomaly score of an anomalous log group and thus is highlighted in red.
### Sensitivity Analysis
We examine the effects of three hyperparameters in OCDiGCN on the detection performance.
**The Number of Convolutional Layers:**\(L\) is a potentially important parameter as it determines how many convolutional layers to use in OCDiGCN. Figure 7 (top row) depicts PR AUC and ROC AUC for the five benchmark datasets when \(L\) is varied from 1 to 5. We found that \(L=1\) yields consistently good performance. As the value of \(L\) is increased, there is only a slight enhancement in the resulting performance or even degradation, while the associated
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Case** & DeepLog & LogAnomaly & AutoEncoder & GLAM & Ours \\ \hline S1 (ROC AUC) & 1.0 & 1.0 & 0.0 & 0.0 & 1.0 \\ S2 (ROC AUC) & 1.0 & 1.0 & 0.50 & 1.0 & 1.0 \\ S3 (ROC AUC) & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ S4 (ROC AUC) & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ \hline N1 (FPR) & 100\% & 100\% & 100\% & 0\% & 0\% \\ \hline \hline \end{tabular}
\end{table}
Table 4. ROC AUC Results (higher is better) of detecting structural anomalies and False Positive Rate (lower is better) of recognising unseen normal instances. S1: Reverse Edge Direction; S2: Change Edge Endpoint; S3: Delete Edge; S4: Add Edge; N1: Unseen normal instances.
Figure 5. Synthetic generation of normal (10000) and structurally anomalous (200 each) graphs.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**HDFS**} & \multicolumn{2}{c}{**Hadoop**} & \multicolumn{2}{c}{**BGL**} & \multicolumn{2}{c}{**Spirit**} & \multicolumn{2}{c}{**Thunderbird**} \\ \cline{2-11} Method & AP & RC & AP & RC & AP & RC & AP & RC & AP & RC \\ \hline PCA & 0.91\(\pm\)0.03 & **1.0**\(\pm\)0.00 & 0.84\(\pm\)0.00 & 0.52\(\pm\)0.00 & 0.73\(\pm\)0.01 & 0.82\(\pm\)0.00 & 0.31\(\pm\)0.00 & 0.19\(\pm\)0.00 & 0.11\(\pm\)0.00 & 0.34\(\pm\)0.01 \\ OCSVM & 0.18\(\pm\)0.01 & 0.88\(\pm\)0.01 & 0.83\(\pm\)0.00 & 0.45\(\pm\)0.00 & 0.47\(\pm\)0.00 & 0.47\(\pm\)0.01 & 0.34\(\pm\)0.00 & 0.29\(\pm\)0.00 & 0.12\(\pm\)0.00 & 0.45\(\pm\)0.01 \\ IForest & 0.73\(\pm\)0.04 & 0.97\(\pm\)0.01 & 0.85\(\pm\)0.01 & 0.55\(\pm\)0.01 & 0.79\(\pm\)0.01 & 0.83\(\pm\)0.01 & 0.32\(\pm\)0.03 & 0.23\(\pm\)0.02 & 0.11\(\pm\)0.01 & 0.24\(\pm\)0.10 \\ HBOS & 0.74\(\pm\)0.04 & 0.99\(\pm\)0.00 & 0.84\(\pm\)0.00 & 0.50\(\pm\)0.00 & 0.84\(\pm\)0.02 & 0.87\(\pm\)0.03 & 0.35\(\pm\)0.00 & 0.22\(\pm\)0.00 & 0.15\(\pm\)0.01 & 0.29\(\pm\)0.05 \\ \hline DeepLog & **0.92\(\pm\)**0.07 & 0.97\(\pm\)0.04 & **0.96\(\pm\)**0.00 & 0.47\(\pm\)0.00 & 0.89\(\pm\)0.00 & 0.72\(\pm\)0.00 & 0.99\(\pm\)0.00 & 0.97\(\pm\)0.00 & 0.91\(\pm\)0.01 & 0.96\(\pm\)0.00 \\ LogAnomaly & 0.89\(\pm\)0.09 & 0.95\(\pm\)0.05 & **0.96\(\pm\)**0.00 & 0.47\(\pm\)0.00 & 0.89\(\pm\)0.00 & 0.72\(\pm\)0.00 & 0.99\(\pm\)0.00 & 0.97\(\pm\)0.00 & 0.90\(\pm\)0.01 & 0.96\(\pm\)0.00 \\ AutoEncoder & 0.71\(\pm\)0.03 & 0.84\(\pm\)0.01 & **0.96\(\pm\)**0.00 & 0.52\(\pm\)0.00 & 0.91\(\pm\)0.01 & 0.79\(\pm\)0.02 & 0.96\(\pm\)0.00 & 0.92\(\pm\)0.01 & 0.44\(\pm\)0.02 & 0.46\(\pm\)0.05 \\ \hline GLAM & 0.78\(\pm\)0.08 & 0.89\(\pm\)0.04 & 0.95\(\pm\)0.00 & **0.61\(\pm\)**0.00 & 0.94\(\pm\)0.02 & 0.90\(\pm\)0.03 & 0.93\(\pm\)0.00 & 0.91\(\pm\)0.00 & 0.75\(\pm\)0.02 & 0.85\(\pm\)0.01 \\ Logs2Graphs & 0.87\(\pm\)0.04 & 0.91\(\pm\)0.02 & 0.95\(\pm\)0.00 & 0.59\(\pm\)0.00 & **0.96\(\pm\)**0.01 & **0.93\(\pm\)**0.01 & **1.0\(\pm\)**0.00 & **1.0\(\pm\)**0.00 & **0.99\(\pm\)**0.00 & **1.0\(\pm\)**0.00 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Anomaly detection accuracy on five benchmark datasets for _Logs2Graphs_ and its eight competitors. AP and RC denote Average Precision and AUC ROC, respectively. HDFS, BGL, and Thunderbird have been downsampled to 10,000 graphs each while maintaining the original anomaly rates. For each method on each dataset, to mitigate potential biases arising from randomness, we conducted ten experimental runs with varying random seeds and report the average values along with standard deviations of AP and RC. Moreover, we highlight the best results with bold and the runner-up with underline.
computational burden increases substantially. We thus recommend and use \(L=1\).
**The Embedding Dimension \(d\):** From Figure 7 (middle row), one can see that \(d=128\) yields good performance on Spirit, Hadoop, HDFS and Thunderbird, while further increasing \(d\) yields negligible performance improvement or even degradation. However, an increase of \(d\) on BGL leads to significantly better performance. One possible reason is that BGL is a complex dataset wherein anomalies and normal instances are not easily separable in lower dimensions.
**The Proximity Parameter \(k\):** As this parameter increases, a node can gain more information from its more distant neighbours. Figure 7 (bottom row) contrasts the detection performance when \(k\) is set to \(1\) and \(2\), respectively. Particularly, we construct one Inception block when \(k=2\), using concatenation to fuse the results.
We observe that there is no significant improvement in performance when using a value of \(k=2\) in comparison to \(k=1\). It is important to recognize that a node exhibits 0th-order proximity with itself and 1st-order proximity with its immediately connected neighbours. If \(k=2\), a node can directly aggregate information from its 2nd-order neighbours. As described in Table 1, graphs generated from logs usually contain a limited number of nodes, varying from \(6\) to \(34\). Therefore, there is little need to utilise the Inception block, which was originally designed to handle large graphs (Zhu et al., 2018).
### Runtime Analysis
Note that traditional machine learning methods, including PCA, OCSVM, IForest and HBOS, usually perform log anomaly detection in a transductive way. In other words, they require the complete dataset beforehand and do not follow a train-and-test strategy. In contrast, neural network based methods, such as DeepLog, LogAnomaly, AutoEncoder, and Logs2Graphs, perform log anomaly detection in an inductive manner, namely following a train-and-test strategy.
Figure 8 shows that most computational time demanded by _Logs2Graphs_ is allocated towards the graph generation phase. In contrast, the training and testing phases require a minimal time budget. The graph generation phase can be amenable to parallelisation though, thereby potentially reducing the overall processing time. As a result, _Logs2Graphs_ shows great promise in performing online log anomaly detection. Meanwhile, other neural networks based models--such as DeepLog, LogAnomaly, and AutoEncoder--demand considerably more time for the training and testing phases.
## 7. Threats to Validity
We have discerned several factors that may pose a threat to the validity of our findings.
**Limited Datasets.** Our experimental protocol entails utilizing five publicly available log datasets, which have been commonly employed in prior research on log-based anomaly detection. However, it is important to acknowledge that these datasets may not fully encapsulate the entirety of log data characteristics. To address this limitation, our future work will involve conducting experiments on additional datasets, particularly those derived from industrial settings, in order to encompass a broader range of real-world scenarios.
**Limited Competitors.** This study focuses solely on the experimental evaluation of eight competing models, which are considered representative and possess publicly accessible source code. However, it is worth noting that certain models such as GLAD-PAW did not disclose their source code and it requires non-trivial efforts
Figure 8. Runtimes for all eight methods on all datasets, wherein HDFS, BGL, and Thunderbird have been downsampled to 10,000 graphs. Runtimes are averaged over 10 repetitions. We report the training time per epoch for neural network based methods.
Figure 7. The effects of the number of layers (top row), the embedding dimensions (middle row) and the proximity parameter (bottom row) on AP (left column) and AUC ROC (right column).
to re-implement these models. Moreover, certain models such as CODEtect require several months to conduct the experiments on our limited computing resources. For these reasons, we exclude them from our present evaluation. In subsequent endeavors, we intend to re-implement certain models and attain more computing resources to test more models.
**Purity of Training Data.** The purity of training data is usually hard to guarantee in practical scenarios. Although Logs2Graphs is shown to be robust to very small contamination in the training data, it is critical to improve the model robustness by using techniques such as adversarial training (Bahmani et al., 2010) in the future.
**Graph Construction.** The graph construction process, especially regarding the establishment of edges and assigning edge weights, adheres to a rule based on connecting consecutive log events. However, this rule may be considered overly simplistic in certain scenarios. Therefore, more advanced techniques will be explored to construct graphs in the future.
## 8. Conclusions
We introduced _Logs2Graphs_, a new approach for unsupervised log anomaly detection. It first converts log files to attributed, directed, and edge-weighted graphs, translating the problem to an instance of graph-level anomaly detection. Next, this problem is solved by OCDiGCN, a novel method based on graph neural networks that performs graph representation learning and graph-level anomaly detection in an end-to-end manner. Important properties of OCDiGCN include that it can deal with directed graphs and can be trained in an unsupervised manner.
Extensive results on five benchmark datasets reveal that _Logs2Graphs_ is at least comparable to and often outperforms state-of-the-art log anomaly detection methods such as DeepLog and LogAnomaly. Furthermore, a comparison to a similar method for graph-level anomaly detection on _undirected_ graphs demonstrates that directed log graphs lead to better detection accuracy in practice.
## Acknowledgments
**Zhong Li and Matthijs van Leeuwen:** this publication is part of the project Digital Twin with project number P18-03 of the research programme TTV Perspective, which is (partly) financed by the Dutch Research Council (NWO). **Jiayang Shi:** This research is co-financed by the European Union H2020-MSCA-ITN-2020 under grant agreement no. 956172 (xCTing).
|
2302.03828 | Medium-Assisted Enhancement of $X(3872)$ Production from Small to Large
Colliding Systems | Studies of exotic hadrons such as the $\chi_{c1} (3872)$ state provide
crucial insights into the fundamental force governing the strong interaction
dynamics, with an emerging new frontier to investigate their production in high
energy collisions where a partonic medium is present. Latest experimental
measurements from the Large Hadron Collider show an intriguing evolution
pattern of the $\chi_{c1} (3872)$-to-$\psi(2S)$ yield ratio from proton-proton
collisions with increasing multiplicities toward proton-lead and lead-lead
collisions. Here we propose a novel mechanism of medium-assisted enhancement
for the $\chi_{c1} (3872)$ production, which competes with the more
conventional absorption-induced suppression and results in a non-monotonic
trend from small to large colliding systems. Realistic simulations from this
model offer the first quantitative description of all available data.
Predictions are made for the centrality dependence of this observable in PbPb
collisions as well as for its system size dependence from OO and ArAr to XeXe
and PbPb collisions. In both cases, a non-monotonic behavior emerges as the
imprint of the competition between enhancement and suppression and can be
readily tested by future data. | Yu Guo, Xingyu Guo, Jinfeng Liao, Enke Wang, Hongxi Xing | 2023-02-08T01:46:05Z | http://arxiv.org/abs/2302.03828v2 | # Medium-Assisted Enhancement of \(X(3872)\) Production
###### Abstract
Studies of exotic hadrons such as the famous \(X(3872)\) state provide crucial insights into the fundamental force governing the strong interaction dynamics, with an emerging new frontier to investigate their production in high energy collisions where a partonic medium is present. Latest experimental measurements from the Large Hadron Collider show an intriguing evolution pattern of the \(X(3872)\)-to-\(\Psi(2s)\) yield ratio from proton-proton collisions with increasing multiplicities toward proton-lead and lead-lead collisions. Here we propose a novel mechanism of medium-assisted enhancement for the \(X(3872)\) production, which competes with the more conventional absorption-induced suppression and results in a non-monotonic trend from small to large colliding systems. Realistic simulations from this model offer the first quantitative description of all available data. Predictions are made for the centrality dependence of this observable in PbPb collisions as well as for its system size dependence from OO and ArAr to XeXe and PbPb collisions. In both cases, a non-monotonic behavior emerges as the imprint of the competition between enhancement and suppression and can be readily tested by future data.
_Introduction._ The overwhelming majority of the energy and mass in the visible component of our universe comes from the strongly interacting elementary particles, or hadrons, such as protons and neutrons. According to the fundamental theory of elementary particles known as the Standard Model, these hadrons are themselves made from quarks and antiquarks whose interactions are governed by a basic theory of strong interaction -- the Quantum Chromodynamics (QCD). While the QCD equations are known, their full consequences are difficult to decipher. One of the outstanding challenges is about the so-called exotic hadrons, whose quark/antiquark configurations do not follow the established normal patterns like the protons and neutrons. A most notable example of such states is the X(3872) particle, first discovered by the Belle experiment[1] in 2003. Subsequently extensive efforts [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17] have been made to study it as well as many other candidates of exotic hadrons: see recent reviews in e.g. [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. What kinds of exotic hadrons could exist? What are the internal structures and properties of them? How could they be produced and detected experimentally? Exploration of these questions is an active frontier of physics research that helps advance our understanding of the strong interaction dynamics.
A new avenue of investigating exotic hadrons has recently emerged and is rapidly developing, namely to study their formation in high energy hadron and nuclear collisions where a partonic medium is present. In such collisions, a fireball with many thousands of light flavor quarks/antiquarks is created together with a considerable number of charm quarks/antiquarks. This provides an ideal environment for creating heavy flavor exotic states and probing their properties, as demonstrated in the latest theoretical works [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. Most importantly, experimental measurements of \(X(3872)\) production in these collisions have started to arrive in the last few years, including LHCb data from high multiplicity proton-proton (pp) collisions [50] and proton-lead (pPb) collisions [51] as well as CMS data from lead-lead (PbPb) collisions [8] at the Large Hadron Collider (LHC). Already this first batch of empirical information shows an unusual pattern of the partonic medium's influence on the \(X(3872)\) yield with respect to the yield of another particle called \(\Psi(2s)\) which is a normal hadronic state serving as a benchmark for comparison, by virtue of its similar heavy flavor content and decay channel (\(J/\Psi\pi\pi\)) as well as close mass value to the exotic \(X(3872)\). (These data points are shown in Fig. 2.) The LHCb pp results suggest the yield ratio of \(X(3872)\) relative to \(\Psi(2s)\) decreases with increasing event multiplicity, which would hint at a suppression effect due to the medium. On the other hand, the LHCb pPb results and the CMS PbPb results, for which the generated medium is expected to be larger in terms of parton density and system size as compared with pp collisions, show a strong increase in this observable, which would indicate an opposite trend to the pp results. So far, a consistent explanation that reconciles this intriguing behavior of \(X(3872)\) production from small to large colliding systems has been lacking.
In this Letter, we present a phenomenological model for the partonic medium attenuation effects on the production of \(X(3872)\) in high energy hadron and nuclear collisions. In particular, a novel mechanism of medium-assisted enhancement effect will be proposed which competes with the more conventional absorption-induced suppression effect. Based on this important feature, it will first be demonstrated qualitatively how the competition leads to a nontrivial pattern in the yield ratio of \(X(3872)\) relative to the \(\Psi(2s)\) while the partonic medium evolves from the smaller to the larger systems. We will then utilize realistic simulations to show how such a model offers the first quantitative description of all available experimental data. Further predictions will also be made for observables that can be verified in the future.
_Method._ In the high \(p_{T}\) region where recent CMS and LHCb measurements were made, the production of \(\Psi(2s)\) and \(X(3872)\) should dominantly come from virtual \(c\bar{c}\) pairs generated in the initial hard scatterings. Suppose the number of such pairs that would eventually turn into \(\Psi(2s)\) and \(X(3872)\), in the absence of any medium effect, would be \(N_{\Psi(2s)}\) and \(N_{X}\) respectively. However, in nucleus-nucleus (AA) or high-multiplicity pp and proton-nucleus (pA) collisions, these pairs will need to first travel through the created partonic medium before producing those final hadrons. The influence of the medium on the evolution of such \(c\bar{c}\) pairs is the focus of our analysis.
The first important effect is the medium absorption. Random collisions with quarks and gluons from the medium result in the dissociation of the correlated comoving \(c\bar{c}\) pair, which is akin to the well-known \(J/\Psi\) suppression effect as well as jet energy loss. We model this effect as the geometric absorption along the in-medium path of a \(c\bar{c}\) pair:
\[\frac{\mathrm{d}N_{i}}{\mathrm{d}x}=-\alpha_{i}n(x)N_{i}, \tag{1}\]
where \(i\to\Psi(2s),X\). The \(n(x)\) is the local parton density of the medium along the path of a surviving \(c\bar{c}\) pair. The coefficient \(\alpha_{i}\) describes the likelihood of a given state to be dissociated, with the dimension of a cross-section. It is plausible to expect that the absorption effect is stronger for less tightly bound states such that \(\alpha_{\Psi(2s)}<\alpha_{X}\). As is typically done in geometric models for jet energy loss or for charmonium suppression, one can evaluate the overall suppression by first integrating the above equation along any given path, then averaging over all possible in-medium paths, and finally averaging over collision events. This leads to the following expression for the suppression factor of \(\Psi(2s)\):
\[R^{\Psi(2s)}=\langle\langle e^{-\alpha_{\Psi(2s)}\int_{\mathrm{ path}}n(x)\mathrm{d}x}\rangle\rangle \tag{2}\]
where the notation \(\langle\langle...\rangle\rangle\) means \(\langle\langle...\rangle_{\mathrm{path}}\rangle_{\mathrm{event}}\). We note that such a suppression effect applies similarly to the \(X(3872)\) production.
For \(X(3872)\), however, there is another medium effect that can actually help enhance its production. In addition to the \(c\bar{c}\), the formation of \(X(3872)\) requires two light quarks/antiquarks. Scatterings with the partonic medium, which serves as a reservoir of numerous light quarks/antiquarks, could lead to "picking up" of light quarks/antiquarks which then co-move with the \(c\bar{c}\) pair. This enhances the probability to form the \(X(3872)\) state in the end. One could consider this as a two-step process, in which the \(c\bar{c}\) pair picks up the first needed light parton and subsequently a second needed light parton. Therefore one can model such a _medium-assisted enhancement_ effect as follows:
\[\frac{\mathrm{d}N_{X}}{\mathrm{d}x}=\beta_{X}n(x)\left[\int_{0}^{x}\beta_{X}n (y)\mathrm{d}y\right]N_{X}, \tag{3}\]
where \(\beta_{X}\) is a parameter characterizing the probability of picking up a single light parton, which also has the dimension of a cross-section. An important feature of this effect is that it scales as square power of the medium parton density. Combining this enhancement together with the previous suppression effect, one obtains:
\[R^{X}=\langle\langle e^{\int_{\mathrm{path}}[-\alpha_{X}n(x)+\beta_{X}^{2}n(x) \int_{0}^{x}n(y)\mathrm{d}y]\mathrm{d}x}\rangle\rangle\,. \tag{4}\]
Now we can compare the production of \(X(3872)\) relative to \(\Psi(2s)\). This is quantified by the ratio of their baseline pp production cross-section, modulated by their respective suppression/enhancement effects along the in-medium paths:
\[\frac{N^{X}}{N^{\Psi(2s)}} =\frac{\sigma_{pp}^{X}}{\sigma_{pp}^{\Psi(2s)}}\times\frac{R^{X} }{R^{\Psi(2s)}}\] \[\approx\frac{\sigma_{pp}^{X}}{\sigma_{pp}^{\Psi(2s)}}\times \mathcal{R}_{med.}, \tag{5}\] \[\mathcal{R}_{med.} \equiv\langle\langle e^{\int_{\mathrm{path}}[-(\alpha_{X}-\alpha_{ \Psi(2s)})n(x)+\beta_{X}^{2}n(x)\int_{0}^{x}n(y)\mathrm{d}y]\mathrm{d}x}\rangle\rangle. \tag{6}\]
In the above, the pp baseline \(\sigma_{pp}^{X}/\sigma_{pp}^{\Psi(2s)}\) could be inferred from experimental data. We will focus on analyzing the medium attenuation factor \(\mathcal{R}_{med.}\). Clearly, \(\mathcal{R}_{med.}>1\) implies an overall medium enhancement while \(\mathcal{R}_{med.}<1\) means an overall medium suppression for the \(X(3872)\) production relative to the \(\Psi(2s)\).
Let us first examine the qualitative feature of the partonic medium effect. Since \(\alpha_{X}>\alpha_{\Psi(2s)}\) and thus \(\alpha_{X}-\alpha_{\Psi(2s)}>0\), the first term in the exponential of \(\mathcal{R}_{med.}\) is a suppression term. Its contribution grows linearly with the medium parton density and the path length. The second term is an enhancement term and its contribution grows quadratically with the parton density and path length. As a result, for relatively low medium density and/or small medium size, the first term will dominate and therefore the overall medium effect would be a suppression of \(X(3872)\) relative to \(\Psi(2s)\). On the other hand, for relatively high medium density and large medium size, the second term will dominate and therefore the overall medium effect would be an enhancement, instead. This nonlinear feature, arising from the competition between suppression and enhancement, points to a non-monotonic evolution of medium effect that can help
explain and provide a unified interpretation of the recent measurements by both LHCb and CMS from small to large colliding systems. For a simple illustration, let us assume that the average medium effect can be approximated by an average parton density \(\bar{n}\) and average path length \(\bar{L}\). Further defining a medium thickness parameter \(\bar{W}\equiv\bar{n}\cdot\bar{L}\), the factor \(\mathcal{R}_{med.}\) can be simplified as
\[\mathcal{R}_{med.}=e^{\left[-\left(\alpha_{X}-\alpha_{\Psi(2s)}\right)\bar{W} +\frac{1}{2}\beta_{X}^{2}\bar{W}^{2}\right]}. \tag{7}\]
The above result clearly suggests that, going from pp collisions with increasing multiplicities through pA and eventually to AA collisions, the medium thickness \(\bar{W}\) will monotonically increase, so that the medium attenuation factor \(\mathcal{R}_{med.}\) should first decrease and then increase, with a minimum occurring at \(\bar{W}=\bar{n}\bar{L}=\frac{\alpha_{X}-\alpha_{\Psi(2s)}}{\beta_{X}^{2}}\).
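As a quick numerical illustration of this competition (not a result of the full simulation), one can evaluate Eq. (7) with representative parameter values; here the entropy-normalised central fit values reported in the Results section are used as stand-ins, with \(\bar{W}\) understood accordingly.

```python
import numpy as np

def r_med(W, d_alpha=7.0e-3, beta=6.14e-3):
    # Homogeneous-medium estimate, Eq. (7): linear suppression vs quadratic enhancement;
    # d_alpha ~ (alpha_X - alpha_Psi(2s)), beta ~ beta_X, both in fm^2 (illustrative values).
    return np.exp(-d_alpha * W + 0.5 * (beta * W) ** 2)

W_min = 7.0e-3 / 6.14e-3 ** 2     # exponent minimum at W = (alpha_X - alpha_Psi(2s)) / beta^2
for W in (0.5 * W_min, W_min, 2.0 * W_min):   # W = n*L, the medium thickness
    print(f"W = {W:6.1f}  R_med = {r_med(W):.2f}")
```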
To further demonstrate this feature, let us compute the \(\mathcal{R}_{med.}\) for such a simple partonic medium, essentially a QGP "brick" [52] with constant and homogeneous temperature which extends along a given direction with a fixed width. In Fig. 1, the individual contributions from suppression term (blue) and enhancement term (red) as well as the overall \(\mathcal{R}_{med.}\) (black) are plotted as functions of the QGP "brick" length, showing the nontrivial decrease-then-increase behavior of \(\mathcal{R}_{med.}\) due to the competition between suppression and enhancement. This behavior already qualitatively agrees with the trends seen in experimental data. Of course, the partonic medium created in those collisions is much more complicated than the simple approximation here, due to non-trivial spacetime-dependent initial conditions, event-by-event fluctuations, dynamical expansions, etc. To fully verify the feasibility of this idea, one needs to perform quantitative and realistic simulations, which we report next.
_Results._ To quantitatively evaluate the medium effect on the \(X(3872)\) production relative to the \(\Psi(2s)\), we utilize event-by-event simulations based on the iEBE-VISHNU hydrodynamic model [53] which has been well tested by experimental data from small to large colliding systems [54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68]. The iEBE-VISHNU provides event-wise time-dependent evolution information of the bulk medium as well as initial conditions for pp, pA, and AA collisions. We generate \(c\bar{c}\) pairs at different spots on the event plane according to the initial binary collision density profiles. The pairs then move along straight paths whose directions are randomly chosen, and along each path the bulk medium is evolving in time. While the local parton density is not directly available from the iEBE-VISHNU, one can reasonably assume it is proportional to the local entropy density \(s(x,\tau)\) available from the hydrodynamic code. In this work, we simulated a total of 500,000 events for pp collisions at \(\sqrt{s_{NN}}=8\) TeV, 200,000 events for pPb collisions at \(\sqrt{s_{NN}}=8.16\) TeV and 100,000 events for PbPb collisions at \(\sqrt{s_{NN}}=5.02\) TeV. For each event, we further simulate about 100 to 10000 in-medium paths depending on the medium size. Due to the substantial amount of needed computing time, we chose to simplify the calculations by first performing the average over the path integrations in each event and then computing the exponential for the ratio between \(X(3872)\) and \(\Psi(2s)\). That is:
\[\mathcal{R}_{med.}\approx\langle e^{-\alpha^{\prime}\cdot P_{1}+\beta^{\prime 2 }\cdot P_{2}}\rangle_{\text{event}}, \tag{8}\]
where
\[P_{1} =\left\langle\int_{\text{path}}s(x)\mathrm{d}x\right\rangle_{ \text{path}}, \tag{9}\] \[P_{2} =\left\langle\int_{\text{path}}s(x)\left(\int_{0}^{x}s(y)\mathrm{ d}y\right)\mathrm{d}x\right\rangle_{\text{path}}. \tag{10}\]
Here we introduce \(\alpha^{\prime}=\left(\alpha_{X}-\alpha_{\Psi(2s)}\right)\left(\frac{n}{s}\right)\) and \(\beta^{\prime}=\beta_{X}\left(\frac{n}{s}\right)\), whose definitions absorb the proportionality constant between the parton density and the entropy density.
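The per-event path averages can be estimated schematically as sketched below; the entropy-density profile is assumed to come from the hydrodynamic evolution, production points are sampled from the binary-collision profile, and the straight-line path length is used as the proper-time argument, which is a simplification of this sketch.

```python
import numpy as np

def path_averages(entropy, origin, n_paths=1000, step=0.1, max_len=15.0):
    """Monte-Carlo estimate of the path-averaged P1 and P2 (Eqs. 9-10) for one event.

    entropy(x, y, tau): local entropy density from the hydrodynamic profile;
    origin: (x0, y0) production point of the c-cbar pair."""
    ls = np.arange(step, max_len + step, step)
    P1, P2 = [], []
    for _ in range(n_paths):
        phi = np.random.uniform(0.0, 2.0 * np.pi)      # random in-plane path direction
        s = np.array([entropy(origin[0] + l * np.cos(phi),
                              origin[1] + l * np.sin(phi), l) for l in ls])
        inner = np.cumsum(s) * step                     # int_0^x s(y) dy
        P1.append(np.sum(s) * step)
        P2.append(np.sum(s * inner) * step)
    return np.mean(P1), np.mean(P2)

# R_med then follows from the event average of exp(-alpha' * P1 + beta'^2 * P2), Eq. (8).
```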
To determine the two key parameters \(\alpha^{\prime}\) and \(\beta^{\prime}\), we take the LHCb pp (at \(\sqrt{s}=8\) TeV) and preliminary pPb (at \(\sqrt{s}=8.16\)TeV) as well as the CMS PbPb (at \(\sqrt{s}=5.02\) TeV) data for a global fitting analysis, with results shown in Fig. 2. The best fit, with \(\chi^{2}/d.o.f=1.78\), gives the following numbers with \(1\sigma\) level uncertainty: \(\alpha^{\prime}=(7.0\pm 3.2)\times 10^{-3}fm^{2}\), \(\beta^{\prime}=(6.14\pm 0.97)\times 10^{-3}fm^{2}\) and \(\sigma_{pp}^{X}/\sigma_{pp}^{\Psi(2s)}=0.135\pm 0.038\). As one can see, our model with just two parameters characterizing a competition between suppression and enhancement can well describe the quantitative trends of all global data from small to large systems. The newly proposed medium-assisted enhancement is particularly important for understanding the rapid increase of \(X(3872)\) yield relative to \(\Psi(2s)\) in the pPb and PbPb collisions.
Figure 1: Individual contributions from absorption-induced suppression (blue) and medium-assisted enhancement (red) as well as the overall \(\mathcal{R}_{med.}\) (black) are plotted as functions of the QGP “brick” length (in fm units). The QGP is set at a temperature of 360 MeV, corresponding to an entropy density of 105 fm\({}^{-3}\). The dashed line represents a baseline of \(\mathcal{R}_{med.}=1\) in the absence of any medium effect.
A natural next step is to test our model predictions where experimental data are not yet available, which can serve as a future validation. To do that, we further investigate the centrality dependence of the medium effect in the PbPb collisions. In Fig. 3, we show the yield ratio of \(X(3872)\) to \(\Psi(2s)\) in three centrality bins: \(60-90\%\), \(30-60\%\), and \(0-30\%\). Interestingly, the results again show a non-monotonic behavior. In peripheral collisions the ratio is about 0.5, while in the mid-centrality class it drops to around 0.01, which is comparable to that in the pPb collisions. Finally, in the central collisions it rises steeply to values as high as 10. This finding also suggests that the most central collisions contribute most of the \(X(3872)\) particles observed in the minimum-bias measurements from CMS. We emphasize that the model parameters were already fixed in the aforementioned fitting analysis, so the highly non-trivial centrality trend predicted by the model here provides an important test for future measurements.
Finally, the system size scan for AA collisions could offer yet another independent validation of our model predictions. For that purpose, we have computed the \(X(3872)\) to \(\Psi(2s)\) yield ratio for the following systems: OO collisions at \(\sqrt{s_{NN}}=6.5\) TeV, ArAr collisions at \(\sqrt{s_{NN}}=5.85\) TeV, and XeXe collisions at \(\sqrt{s_{NN}}=5.44\) TeV. These results are shown in Fig. 4 in comparison with PbPb collisions at \(\sqrt{s_{NN}}=5.02\) TeV. Again, one observes a nontrivial trend that first decreases and then increases when moving from smaller to larger colliding systems. Such a prediction of the system size dependence in AA collisions, which is the consequence of competing suppression and enhancement effects, can be readily tested with future measurements.
_Conclusion._ To conclude, we present a phenomenological model for the partonic medium attenuation effects on the production of \(X(3872)\) and \(\Psi(2s)\) particles in high energy hadron and nuclear collisions. In particular, a novel mechanism of medium-assisted enhancement effect is proposed for the \(X(3872)\) production, which leads to a competition with the more conventional absorption-induced suppression effect and becomes more dominant for higher parton densities and larger medium size. As a consequence of this important feature, the yield ratio of \(X(3872)\) relative to the \(\Psi(2s)\) develops a nontrivial pattern, first decreasing then increasing, when the partonic medium evolves from small to large colliding systems. Utilizing realistic simulations, we show that this model
Figure 3: The predicted centrality dependence of the X(3872) yield relative to \(\Psi(2s)\) in PbPb collisions at \(\sqrt{s_{NN}}=5.02\)TeV collisions. The blue uncertainty band is from the same source as in Fig. 2.
Figure 2: A comparison of the \(X(3872)\) yield relative to \(\Psi(2s)\) between model simulation results (blue curve) and experimental data from LHCb pp collisions at \(\sqrt{s_{NN}}=8\) TeV (red circle), LHCb preliminary pPb collisions at \(\sqrt{s_{NN}}=8.16\) TeV (orange triangle) and CMS PbPb collisions at \(\sqrt{s_{NN}}=5.02\) TeV (green box) [50; 51; 8]. The model parameters are determined from the global fitting analysis with the blue band showing the \(1\sigma\) level uncertainty. (See text for details.)
Figure 4: The predicted trend of the X(3872) yield relative to \(\Psi(2s)\) in AA collisions from small to large systems, including: OO collisions at \(\sqrt{s_{NN}}=6.5\)TeV, ArAr collisions at \(\sqrt{s_{NN}}=5.85\)TeV, XeXe collisions at \(\sqrt{s_{NN}}=5.44\)TeV, as well as PbPb collisions at \(\sqrt{s_{NN}}=5.02\)TeV. The blue uncertainty band is from the same source as in Fig. 2.
offers the first quantitative description of all available experimental measurements, including the LHCb pp (at \(\sqrt{s}=8\) TeV) and preliminary pPb (at \(\sqrt{s}=8.16\) TeV) as well as the CMS PbPb (at \(\sqrt{s}=5.02\) TeV) data. We further make predictions for the centrality dependence of the \(X(3872)\)-to-\(\Psi(2s)\) yield ratio in PbPb collisions as well as for its system size dependence from OO and ArAr to XeXe and PbPb collisions. In both cases, a non-monotonic pattern emerges as the imprint of the competition between enhancement and suppression. Given the expected abundance of experimental data from planned runs as well as anticipated upgrades at the LHC, it would be exciting to test these predictions with future high precision measurements.
_Acknowledgments._ This research was supported in part by the National Natural Science Foundation of China (NSFC) under Grants No. 12035007, No. 12022512 and No. 11905066, by Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008, by the National Science Foundation in US under Grant No. PHY-2209183 (J.L.), and by the DOE through Exo-Had Topical Collaboration.
|
2305.01040 | CLIP-S$^4$: Language-Guided Self-Supervised Semantic Segmentation | Existing semantic segmentation approaches are often limited by costly
pixel-wise annotations and predefined classes. In this work, we present
CLIP-S$^4$ that leverages self-supervised pixel representation learning and
vision-language models to enable various semantic segmentation tasks (e.g.,
unsupervised, transfer learning, language-driven segmentation) without any
human annotations and unknown class information. We first learn pixel
embeddings with pixel-segment contrastive learning from different augmented
views of images. To further improve the pixel embeddings and enable
language-driven semantic segmentation, we design two types of consistency
guided by vision-language models: 1) embedding consistency, aligning our pixel
embeddings to the joint feature space of a pre-trained vision-language model,
CLIP; and 2) semantic consistency, forcing our model to make the same
predictions as CLIP over a set of carefully designed target classes with both
known and unknown prototypes. Thus, CLIP-S$^4$ enables a new task of class-free
semantic segmentation where no unknown class information is needed during
training. As a result, our approach shows consistent and substantial
performance improvement over four popular benchmarks compared with the
state-of-the-art unsupervised and language-driven semantic segmentation
methods. More importantly, our method outperforms these methods on unknown
class recognition by a large margin. | Wenbin He, Suphanut Jamonnak, Liang Gou, Liu Ren | 2023-05-01T19:01:01Z | http://arxiv.org/abs/2305.01040v1 | # CLIP-S\({}^{4}\): Language-Guided Self-Supervised Semantic Segmentation
###### Abstract
Existing semantic segmentation approaches are often limited by costly pixel-wise annotations and predefined classes. In this work, we present CLIP-S\({}^{4}\) that leverages self-supervised pixel representation learning and vision-language models to enable various semantic segmentation tasks (e.g., unsupervised, transfer learning, language-driven segmentation) without any human annotations and unknown class information. We first learn pixel embeddings with **pixel-segment contrastive learning** from different augmented views of images. To further improve the pixel embeddings and enable language-driven semantic segmentation, we design two types of consistency guided by vision-language models: 1) **embedding consistency**, aligning our pixel embeddings to the joint feature space of a pre-trained vision-language model, CLIP [34]; and 2) **semantic consistency**, forcing our model to make the same predictions as CLIP over a set of carefully designed target classes with both known and unknown prototypes. Thus, CLIP-S\({}^{4}\) enables a new task of class-free semantic segmentation where no unknown class information is needed during training. As a result, our approach shows consistent and substantial performance improvement over four popular benchmarks compared with the state-of-the-art unsupervised and language-driven semantic segmentation methods. More importantly, our method outperforms these methods on unknown class recognition by a large margin.
## 1 Introduction
Semantic segmentation aims to partition an input image into semantically meaningful regions and assign each region a semantic class label. Recent advances in semantic segmentation [6, 27, 48] heavily rely on pixel-wise human annotations, which have two limitations. First, acquiring pixel-wise annotations is extremely labor intensive and costly, which can take up to 1.5 hours to label one image [31]. Second, human annotations are often limited to a set of predefined semantic classes, with which the learned models lack the ability to recognize unknown classes [25].
Various approaches have been proposed to tackle these limitations, among which we are inspired by two lines of recent research in particular. First, for unsupervised semantic segmentation (i.e., without human annotations), self-supervised pixel representation learning approaches [14, 18, 19, 23, 40] have shown promising results on popular unsupervised benchmarks. The main idea is to extend self-supervised contrastive learning [7, 16] from images to pixels by attracting each pixel's embedding to its positive pairs and repelling it from negative pairs. The prior of pairs can be contours [18, 19], hierarchical groups [23], salience maps [40], and pre-trained models [14]. Although these approaches can group pixels into semantically meaningful clusters, human annotations are still needed to assign class labels to the clusters for semantic segmentation [37].
Second, for unknown classes in semantic segmentation, large-scale vision-language models such as CLIP [34]
Figure 1: (a) Pixel embeddings from different CLIP-based unsupervised methods: Our method, CLIP-S\({}^{4}\), generates sharper and more coherent pixel embeddings than MaskCLIP [49] and MaskCLIP+s [49]; (b) Language-driven semantic segmentation by different methods: CLIP-S\({}^{4}\) can recognize challenging unknown classes (e.g., moon); (c) The key idea behind CLIP-S\({}^{4}\): aligning the pixel embeddings and their semantics with CLIP feature space.
have shown great potential. This line of research, called _language-driven semantic segmentation_, aims to segment images with arbitrary classes defined by texts during testing time [25, 37, 46, 49]. Among these methods, most still need training-time annotations, such as pixel annotations [25] and captions [46]. Only a few recent works, MaskCLIP and MaskCLIP+ [49], attempt to address this without using additional supervision: MaskCLIP directly extracts pixel embeddings correlated with texts from CLIP, but these pixel embeddings are coarse and noisy (Fig. 1a). To address this issue, MaskCLIP+ [49] trains a segmentation model on the pseudo-labels generated by MaskCLIP for a set of predefined classes. However, the pixel embeddings of MaskCLIP+ are distorted by the predefined classes (Fig. 1a), which limits its ability to recognize unknowns (Fig. 1b). Also, it needs unknown class information during training, which hinders its real-world applications.
We propose a language-guided self-supervised semantic segmentation approach, CLIP-S\({}^{4}\), which takes advantage of the strengths from both lines of research and addresses their limitations accordingly. The key idea is to learn consistent pixel embeddings with respect to visual and conceptual semantics using self-supervised learning and the guidance of a vision-language model, CLIP.
Specifically, we first train pixel embeddings with _pixel-segment contrastive learning_ from different augmented image views [18, 19, 23] such that images can be partitioned into visually meaningful regions. To further improve pixel embedding quality and enable language-driven semantic segmentation, we introduce vision-language model guided consistency to regularize our model (Fig. 1c). The consistency is enforced from two aspects: _embedding consistency_ and _semantic consistency_. First, embedding consistency aims to align the pixel embeddings generated by our model with the joint feature space of texts and images of CLIP by minimizing the distance between the pixel embeddings generated by our model and CLIP. Second, semantic consistency forces our model to make the same prediction as CLIP over a set of carefully designed target classes with both _known_ and _unknown_ prototypes. Note that unlike the previous methods [25, 49] that use a predefined set of _known classes_, CLIP-S\({}^{4}\) also learns the representation of _unknown classes_ from images during training.
In the end, CLIP-S\({}^{4}\) also enables a new task, namely _class-free semantic segmentation_, as shown in Tab. 1. This new task does not need any human annotations and assumes that no unknown class names are given during training. This is a more challenging task than the recent work [49] that requires the class names of both known and unknown classes.
In summary, the contributions of this paper are threefold:
* We propose a self-supervised semantic segmentation approach that combines pixel-segment contrastive learning with the guidance of pre-trained vision language models. Our method can generate high-quality pixel embeddings without any human annotations and be applied to a variety of semantic segmentation tasks.
* We open up new research potentials for language-driven semantic segmentation without any human annotations by introducing and addressing a new task of _class-free semantic segmentation_ (Tab. 1). Unlike previous work that assumes all the class names are known during training, our method can discover unknown classes from unlabelled image data without even knowing unknown class names.
* Consistent and substantial gains are observed with our approach over the state-of-the-art unsupervised and language-driven semantic segmentation methods on four popular datasets. More importantly, our method significantly outperforms the state-of-the-art on the segmentation of unknown classes.
## 2 Related Work
**Unsupervised Semantic Segmentation.** There are two groups of recent unsupervised semantic segmentation methods. One group of methods learns to generate consistent pixel representations or predictions between different augmentation of images with the guidance of mutual information [22, 30], clusters [8], contours [18, 19], hierarchical groups [47, 23], and saliency masks [40]. The other group of methods extracts dense features from pre-trained models based on saliency maps [36], augmentations [39], spectral decomposition [28], and feature correspondences [14]. While these methods can generate pixel embeddings with semantically meaningful clusters, annotations are needed to assign class labels to the clusters (e.g., \(k\)-nearest neighbor search [19] and Hungarian algorithm [40]). Our work combines pixel-segment self-supervision with pre-trained vision-language models to enable semantic segmentation without any human annotations.
**Language-Driven Semantic Segmentation.** Recently, vision-language models (e.g., CLIP [34]) trained on large
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & & Known & Unknown & & \\ \cline{2-5} & & & & & \\ \hline Un/Self-supervised ([19] etc.) & ✗ & ✗ & ✗ & ✗ & Fine-Tuning \\ Supervised ([27] etc.) & ✓ & ✓ & N/A & N/A & N/A \\ Zero-shot ([3] etc.) & ✓ & ✓ & ✗ & ✓ & Word2Vec, etc. \\ \hline Language- & _MaskCLIP+_[49] & ✗ & ✓ & ✗ & ✓ & CLIP \\ Driven & _CLIP-S\({}^{4}\)_ & ✗ & ✓ & ✗ & ✗ & CLIP \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of information required for training over different tasks. CLIP-S\({}^{4}\) enables a new task called _class-free semantic segmentation_. Compared with MaskCLIP+ [49], the new task assumes unknown class names are NOT given during training.
scale image-text datasets have shown great potential on various downstream tasks such as image synthesis [21, 41], out-of-distribution detection [11], and object detection [13]. To extend vision-language models for semantic segmentation, one active research, _language-driven semantic segmentation_, aims to segment images with arbitrary unknown classes defined by texts during testing time [25, 37, 46, 49]. Some methods [25, 44] use pixel-wise annotations to train language-guided semantic segmentation models. Other methods [10, 46] perform large-scale pre-training on image-text pairs specifically for semantic segmentation.
By contrast, we directly use vision-language models that are pre-trained for classification tasks. Along this line of research, a few approaches have been proposed [35, 37, 49]. The most relevant approach to our method is MaskCLIP [49], which extends the embeddings generated by pre-trained vision-language models from image to pixel level, but these embeddings are often coarse and noisy. To address this issue, MaskCLIP+ [49] fine-tunes the pixel embeddings by the pseudo-labels of a specific set of classes on top of MaskCLIP. However, it needs unknown class names during training, which may not be possible in real-world cases. Compared with [49], our method can recognize unknown classes without knowing any unknown class information during training time, and also learns fine-grained and sharper pixel embeddings with self-supervision.
## 3 Method
Our method (Fig. 2) segments images by learning a pixel embedding function with self-supervised contrastive learning and the guidance of a pre-trained vision-language model, CLIP. We use self-supervised contrastive learning to force the consistency of pixel embeddings within visually coherent regions (e.g., superpixels) and among different augmented views of the same image (Sec. 3.1). We also introduce two vision-language model guided consistency (i.e., _embedding consistency_ and _semantic consistency_) to further regularize the model (Sec. 3.2). The two components are complementary to each other. On the one hand, contrastive learning mitigates the noise introduced by CLIP. On the other hand, with the knowledge extracted from CLIP, the quality of the pixel embeddings can be improved. More importantly, this approach enables us to perform language-driven semantic segmentation with our carefully designed _target class prototypes_ of both knowns and unknowns. In the following, we discuss the two components in detail.
### Pixel-Segment Contrastive Learning
We train a pixel embedding function to generate consistent pixel embeddings within visually coherent regions through pixel-segment contrastive learning [18, 23]. Specifically, the embedding function transforms each pixel \(p\) of an image to a unit-length embedding vector \(\mathbf{z}_{p}\) of dimension \(d\) via a deep neural network. The image is then partitioned into \(|\mathcal{S}|\) segments by clustering the pixel embeddings. The embedding \(\mathbf{v}_{s}\) of each segment \(s\) is calculated as the average of the pixel embeddings \(\mathbf{v}_{s}=\sum_{p\in s}\mathbf{z}_{p}/|s|\), which is also normalized into a unit-length vector \(\mathbf{v}_{s}=\mathbf{v}_{s}/\|\mathbf{v}_{s}\|\). For each pixel \(p\), the segments are grouped into two sets including a positive set \(\mathcal{S}^{+}\) and a negative set \(\mathcal{S}^{-}\). The positive set \(\mathcal{S}^{+}\) of a pixel contains segments within the same visually coherent region of the pixel. Following the prior work [18, 23], the visually coherent region can be derived from super-pixels [1] or contours [2]. We also use data augmentation (e.g., random cropping and color jitter
Figure 2: Framework of CLIP-S\({}^{4}\).
) to generate consistent pixel embeddings between different augmented views of the same image. Hence, segments within the same region of the pixel in any augmented views are considered as the positive set \(\mathcal{S}^{+}\). Other segments in the image and segments from other images in the same batch are included in the negative set \(\mathcal{S}^{-}\). The pixel embedding \(\mathbf{z}_{p}\) is then attracted to the segments in positive set \(\mathcal{S}^{+}\) and repelled from the segments in negative set \(\mathcal{S}^{-}\) with _contrastive loss_:
\[\mathcal{L}_{t}(p)=-log\frac{\sum_{s\in\mathcal{S}^{+}}exp(sim(\mathbf{z}_{p},\mathbf{v}_{s})\kappa)}{\sum_{s\in\mathcal{S}^{+}\cup\mathcal{S}^{-}}exp(sim( \mathbf{z}_{p},\mathbf{v}_{s})\kappa)}, \tag{1}\]
where \(\kappa\) is the concentration constant and \(sim(\mathbf{z}_{p},\mathbf{v}_{s})\) is the cosine similarity between the pixel embedding \(\mathbf{z}_{p}\) and the segment embedding \(\mathbf{v}_{s}\).
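A minimal PyTorch sketch of this loss is given below; the tensor shapes, the helper name, and the toy positive-set construction are illustrative assumptions rather than the authors' implementation (in practice the positive and negative sets come from the augmented views and the memory banks described in Sec. 4.2).

```python
import torch
import torch.nn.functional as F

def pixel_segment_contrastive_loss(z, v, pos_mask, kappa=10.0):
    """Sketch of the contrastive loss of Eq. (1).
    z        : (P, d) unit-length pixel embeddings
    v        : (S, d) unit-length segment embeddings (means of pixel embeddings)
    pos_mask : (P, S) boolean, True where segment s is in the positive set of pixel p
    kappa    : concentration constant (set to 10 in the paper)"""
    logits = kappa * z @ v.t()                   # cosine similarity scaled by kappa
    exp_logits = torch.exp(logits)
    pos = (exp_logits * pos_mask).sum(dim=1)     # numerator: positive segments only
    denom = exp_logits.sum(dim=1)                # denominator: positives and negatives
    return -(torch.log(pos / denom)).mean()

# toy usage: 6 pixels, 4 segments, 8-dimensional embeddings
z = F.normalize(torch.randn(6, 8), dim=1)
v = F.normalize(torch.randn(4, 8), dim=1)
pos_mask = torch.zeros(6, 4, dtype=torch.bool)
pos_mask[torch.arange(6), torch.randint(0, 4, (6,))] = True
print(pixel_segment_contrastive_loss(z, v, pos_mask))
```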
### Vision-Language Model Guided Consistency
To enable language-driven semantic segmentation and improve the quality of pixel embeddings, we use a pre-trained vision-language model such as CLIP [34] to guide the training of the pixel embedding function. The key idea is to align the output space of our pixel embedding function consistent with the feature space of CLIP (Fig. 1c). Specifically, two types of consistency are considered during training including _embedding consistency_ and _semantic consistency_, which are detailed as follows.
**Embedding Consistency.** Our goal is to align the pixel embeddings generated from our self-supervised method (the green contour in Fig. 1c) with CLIP's pixel embeddings (the orange contour in Fig. 1c). This is done by minimizing the distance between the two pixel embedding spaces.
We first obtain the pixel embeddings of an input image from CLIP by modifying the attention-based pooling layer of the CLIP image encoder following [49]. Specifically, we 1) remove the query and key projection layers and 2) reformulate the value projection layer and the last linear layer as two consecutive fully connected layers. In the following, we use \(clip\)-\(i(\cdot)\) as the modified CLIP image encoder and \(clip\)-\(t(\cdot)\) as CLIP text encoder.
Then we obtain the pixel embeddings of CLIP for different augmented views of the image. Note that we use the original image to generate the CLIP pixel embeddings and perform augmentation afterwards to make sure that the CLIP pixel embeddings are consistent among different augmented views. In the end, we minimize the distance of embeddings between **segments** instead of pixels from our self-supervised and CLIP embedding spaces. This is because the pixel embeddings of CLIP are noisy (Fig. 2), which can be mitigated by aggregating over segments. Hence, we use the pixel embeddings generated by our model to derive segments (clusters) and then apply them to the CLIP's pixel embeddings. In the end, for each segment \(s\), the _embedding consistent loss_ is defined as:
\[\mathcal{L}_{e}(s)=1-sim(\mathbf{v}_{s},\mathbf{i}_{s}), \tag{2}\]
where \(\mathbf{v}_{s}\) and \(\mathbf{i}_{s}\) are the segment embeddings derived from our embedding function and CLIP, respectively. Here, \(\mathbf{i}_{s}\) is the average of the CLIP pixel embeddings over segment \(s\), namely, \(\mathbf{i}_{s}=\sum_{p\in s}clip\text{-}i(p)/|s|\).
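The segment averaging and the loss of Eq. (2) are straightforward to write down; the sketch below is an illustrative PyTorch fragment, not the authors' code, and assumes segment assignments are given as an integer id per pixel.

```python
import torch
import torch.nn.functional as F

def segment_average(pixel_feats, seg_ids, num_segments):
    """Average pixel features within each segment and renormalise; used both for
    our pixel embeddings and for CLIP's (noisier) pixel embeddings."""
    d = pixel_feats.shape[1]
    sums = torch.zeros(num_segments, d).index_add_(0, seg_ids, pixel_feats)
    counts = torch.bincount(seg_ids, minlength=num_segments).clamp(min=1).unsqueeze(1)
    return F.normalize(sums / counts, dim=1)

def embedding_consistency_loss(v_ours, i_clip):
    """Sketch of Eq. (2): 1 - cosine similarity, averaged over segments."""
    return (1.0 - (F.normalize(v_ours, dim=1) * F.normalize(i_clip, dim=1)).sum(dim=1)).mean()

# toy usage: 50 pixels, 5 segments, 512-dimensional features
feats_ours, feats_clip = torch.randn(50, 512), torch.randn(50, 512)
seg_ids = torch.randint(0, 5, (50,))
print(embedding_consistency_loss(segment_average(feats_ours, seg_ids, 5),
                                 segment_average(feats_clip, seg_ids, 5)))
```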
**Semantic Consistency** In addition to embedding consistency, we introduce semantic consistency by forcing our model to make the same predictions of semantic classes as CLIP. The rationale is that we can generate better pixel embeddings if they can form distinctive clusters corresponding to different semantic classes, as the goal of semantic segmentation is to perform pixel-wise classification.
Semantic consistency is achieved via a similar idea of pseudo-labeling [38]. Again, we force the semantic consistency at the segment level (not the pixel level) to reduce the noise in pseudo-labels. Specifically, for each segment \(s\), we first use CLIP to generate its pseudo-label \(y_{s}\) over a set of target classes, which include both knowns and unknowns (we will introduce how to design these target classes later). The pseudo-label is generated based on the highest similarity between the segment embedding \(\mathbf{i}_{s}\) with a set of prototypes, \(C=\{\mathbf{c}_{l}\}_{0}^{L-1}\), of the target classes in the pixel embedding space of CLIP, namely, \(y_{s}=\mathbf{argmax}_{l\in L}(sim(\mathbf{i}_{s},\mathbf{c}_{l}))\).
Then we define the _semantic consistent loss_ as the cross entropy between our model's prediction \(\varphi(\mathbf{v}_{s})\) over the target
Figure 3: Computation of **target class prototypes** with both _knowns_ and _unknowns_, \(C=\{C_{k},C_{u}\}\). (a) For a set of known classes (e.g., bird, cat), we first obtain their CLIP text embeddings, \(T\), via a set of prompt templates [13, 49]; (b) we then calculate the normalized (via softmax) similarity between the text embeddings, \(T\), and all segments’ CLIP embeddings, \(I\), from training images, and average the top-\(m\) most similar segments’ CLIP embeddings as the embedding prototype for each class; (c) for each unknown class, we randomly select the CLIP embedding of a segment as the initial prototype.
classes and the pseudo-label \(y_{s}\):
\[\mathcal{L}_{s}(s)=\mathbf{H}(y_{s},\varphi(\mathbf{v}_{s})), \tag{3}\]
where \(\varphi(\mathbf{v}_{s})=\mathbf{softmax}(sim(\mathbf{v}_{s},C))\).
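A compact sketch of Eq. (3) follows; the scaling factor `kappa` applied before the softmax is an assumption made for numerical convenience and is not specified in the text.

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(v_ours, i_clip, prototypes, kappa=10.0):
    """Sketch of Eq. (3): cross-entropy between our segment predictions and
    pseudo-labels obtained from CLIP segment embeddings.
    v_ours     : (S, d) segment embeddings from our model
    i_clip     : (S, d) segment embeddings derived from CLIP
    prototypes : (L, d) target-class prototypes C = {C_known, C_unknown}"""
    prototypes = F.normalize(prototypes, dim=1)
    pseudo = (F.normalize(i_clip, dim=1) @ prototypes.t()).argmax(dim=1)   # y_s
    logits = kappa * F.normalize(v_ours, dim=1) @ prototypes.t()           # ~ sim(v_s, C)
    return F.cross_entropy(logits, pseudo)
```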
**Target Class Prototypes** The design of _target classes_ and associated _class prototypes_, \(C=\{\mathbf{c}_{l}\}_{0}^{L-1}\), is crucial to achieve the semantic consistency. Here, a class prototype, \(\mathbf{c}_{l}\), is an embedding vector that can represent a class in an embedding space. For example, it can be the mean vector of embeddings of all segments of a class "car". Currently, most existing methods [25, 49] assume that the target classes are already predefined, which is not feasible in real-world use cases without any human annotations. Thus, those methods cannot handle unknown classes hidden in the data. To address this issue, we introduce two sets of class prototypes of _known_, \(C_{k}=\{\mathbf{c}_{0},\ldots,\mathbf{c}_{k-1}\}\), and _unknown classes_, \(C_{u}=\{\mathbf{c}_{k},\ldots,\mathbf{c}_{k+u}\}\), where the known classes are predefined by leveraging CLIP and the unknown classes are learned from image data during training. Thus, we have \(C=\{\mathbf{c}_{l}\}_{0}^{L-1}=\{\mathbf{c}_{0},\ldots,\mathbf{c}_{k-1}, \mathbf{c}_{k},\ldots,\mathbf{c}_{k+u}\},L=k+u\).
For known classes, a natural choice is to use the text embeddings generated by CLIP as their class prototype embeddings [25, 49]. However, even though the text embeddings are trained to align with image/pixel embeddings [34], there is still a huge gap between the text and image/pixel embeddings in the joint space of CLIP (Fig. 1c). Therefore, it is challenging to learn meaningful unknown classes from image features when using text embeddings as class prototypes. Hence, in this work, we use the prototype of CLIP pixel embeddings to represent each known class.
To this end, for a set of known classes (e.g., bird, cat), \(K=\{0,\ldots,k-1\}\), we first obtain their CLIP text embeddings, \(T=\{\mathbf{t}_{k}\}=\{clip\text{-}t(k)\}\), via a set of prompt templates following [49, 13], as shown in Fig. 3a. We also get a set of CLIP segment embeddings, \(I=\{\mathbf{i}_{\hat{s}}\}\), for all training images by a) feeding training images into the modified image encoder of CLIP to get pixel embeddings; b) clustering the pixel embeddings as segments, \(\hat{\mathcal{S}}\); c) averaging the pixel embeddings in each segment, \(\hat{s}\). Hence, we have embeddings for each segment: \(\mathbf{i}_{\hat{s}}=\sum_{p\in\hat{s}}clip\text{-}i(p)/|\hat{s}|\). Then, we calculate the similarity between text embeddings of known classes, \(T\), and all CLIP segment embeddings \(I\), and normalize the similarities over all classes by softmax. Finally, we average the top-\(m\) similar segments' embedding as the embedding prototype for each class, \(C_{k}=\{\mathbf{c}_{k}\}=avg_{m}(top\text{-}m_{\hat{s}}(\mathbf{softmax}_{k} (sim(I,T))))\).
The prototype embeddings of the unknown classes, \(C_{u}\), are randomly initialized by sampling the CLIP embeddings of all segments, namely \(C_{u}=random(clip\text{-}i(\hat{\mathcal{S}}),u)\), where \(u\) is the number of unknown classes (Fig. 3c). During training, the embedding \(\mathbf{c}_{u}\) of each unknown class prototype is updated by minimizing its distance to all segments that are classified as this unknown class (similar to updating the centroids in \(k\)-means clustering):
\[\mathcal{L}_{u}=\sum_{s\in\mathcal{S}_{u}}(1-sim(\mathbf{c}_{u},clip\text{-}i (s)))/|S_{u}|, \tag{4}\]
where \(S_{u}\) are the segments classified as the unknown classes. In this way, our model can also learn the pixel representation of unknown classes.
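The prototype construction can be summarised as below; `known_prototypes` follows the top-\(m\) averaging described above, while `update_unknown_prototypes` realises the spirit of Eq. (4) as a \(k\)-means-style centroid update with an assumed step size `lr` (the actual optimisation scheme is not spelled out here).

```python
import torch
import torch.nn.functional as F

def known_prototypes(text_emb, seg_emb, m=32):
    """Known-class prototypes: average the CLIP embeddings of the top-m segments
    that are most similar (after a per-segment softmax over classes) to each class."""
    sims = F.normalize(seg_emb, dim=1) @ F.normalize(text_emb, dim=1).t()   # (S, K)
    probs = sims.softmax(dim=1)
    top = probs.topk(m, dim=0).indices                                      # (m, K)
    protos = torch.stack([seg_emb[top[:, k]].mean(dim=0)
                          for k in range(text_emb.shape[0])])
    return F.normalize(protos, dim=1)

def update_unknown_prototypes(unknown_protos, seg_emb_clip, assignments, n_known, lr=0.1):
    """One update step in the spirit of Eq. (4): pull each unknown prototype towards
    the mean CLIP embedding of the segments currently assigned to it."""
    for u in range(unknown_protos.shape[0]):
        mask = assignments == (n_known + u)
        if mask.any():
            target = F.normalize(seg_emb_clip[mask].mean(dim=0), dim=0)
            unknown_protos[u] = F.normalize((1 - lr) * unknown_protos[u] + lr * target, dim=0)
    return unknown_protos
```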
### Training and Inference
In summary, we train the pixel embedding function by combining the pixel-segment contrastive loss, embedding consistent loss, and semantic consistent loss:
\[\mathcal{L}=\mathcal{L}_{t}+\mathcal{L}_{e}+\mathcal{L}_{s}. \tag{5}\]
During training, we also update the embeddings for the unknown classes with \(\mathcal{L}_{u}\).
For inference, we use the trained model to generate pixel embeddings for each input image and use the pixel embeddings for different downstream tasks, including language-driven and unsupervised semantic segmentation. For language-driven semantic segmentation, we first obtain the text embeddings of arbitrary inference classes by feeding the prompt-engineered texts into the text encoder of CLIP. Then we assign each pixel with the class label whose text embedding is the closest to CLIP-S\({}^{4}\) pixel embedding. For unsupervised semantic segmentation, we follow previous work [19, 40] that uses \(k\) nearest neighbor search or linear classifier to perform semantic segmentation.
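Language-driven inference then amounts to a nearest-text-embedding assignment per pixel; a minimal sketch (tensor shapes are assumptions) is:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def language_driven_segmentation(pixel_emb, class_text_emb):
    """Assign every pixel to the class whose CLIP text embedding is closest
    (in cosine similarity) to the pixel embedding.
    pixel_emb      : (H, W, d) CLIP-S4 pixel embeddings
    class_text_emb : (L, d)    CLIP text embeddings of arbitrary class names"""
    H, W, d = pixel_emb.shape
    sims = F.normalize(pixel_emb.reshape(-1, d), dim=1) @ \
           F.normalize(class_text_emb, dim=1).t()
    return sims.argmax(dim=1).reshape(H, W)   # per-pixel class labels
```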
## 4 Experiments
We evaluate our model on three tasks: 1) language-driven semantic segmentation for both known and unknown classes; 2) unsupervised semantic segmentation with \(k\)-means clustering/linear classification; 3) transfer learning of generated pixel embeddings for instance mask tracking. We also conduct ablation studies to understand the components of our model.
### Datasets
**Pascal VOC 2012**[12] contains 20 object classes and a background class. It has 1,464 and 1,449 images for training and validation, respectively. Following common practice [27, 48], we augment the training data with additional annotations [15], resulting in 10,582 training images.
**Pascal Context**[29] extends Pascal VOC 2010 [12] with additional annotations on 4,998 training and 5,105 validation images. Following the prior work [49], we use the most common 59 classes for evaluation.
**COCO-Stuff**[5] labels MS COCO [26] with 171 object/stuff classes. It contains 118,287 and 5,000 images for training and validation, respectively.
**DAVIS 2017**[33] contains video sequences for instance mask tracking. Following the prior work [18, 47], we train pixel embeddings on Pascal VOC 2012 and evaluate the validation sequences without fine-tuning.
It is worth mentioning that **no ground truth** labels of any datasets are used during training. Instead, we perform self-supervised learning on pseudo segments generated by contour detectors and owt-ucm [2]. Two contour detectors are used, including HED [45] for Pascal VOC 2012 and Pascal Context and PMI [20] for COCO-Stuff.
### Implementation Details
For self-supervised contrastive learning, images are augmented with the same set of data augmentations as SimCLR [7], including random resizing, cropping, flipping, color jittering, and Gaussian blurring. The concentration constant \(\kappa\) is set to 10, and the number of segments is set to 36 for each augmented view.
For vision-language guidance, we use pre-trained CLIP models [34] with modified image encoders following [49]. We use prompt-engineered texts with 85 prompt templates to generate text embeddings following [49, 13]. We use the average embedding of the top 32 segments of high probabilities as the prototype of each known class. We set the number of unknown classes to \(u=64\).
Following the prior work [18, 19, 23], we use PSPNet [48] with a dilated ResNet-50 [17] backbone as the network architecture. The backbone is pre-trained on the ImageNet [9] dataset. We train our model on Pascal VOC 2012 and Pascal Context for 20k iterations and on COCO-Stuff for 80k iterations. We set the batch size to 8 with additional memory banks that cache the segment embeddings of the previous 2 batches. We set the initial learning rate to 0.001 and decay it with a polynomial learning rate policy. We use the CLIP model trained with ViT-B/16 backbone unless otherwise stated.
### Language-Driven Semantic Segmentation
For language-driven semantic segmentation, no human annotations are used for either training or inference. At inference time, each pixel is assigned the label, among an arbitrary set of given classes, whose CLIP text embedding is closest to that pixel's CLIP-S\({}^{4}\) embedding.
We first compare the performance of our method with the state-of-the-art language-driven semantic segmentation approaches [49, 37, 46] on the Pascal Context and COCO-Stuff datasets. The performance is evaluated with the mean Intersection over Union (mIoU). For MaskCLIP and MaskCLIP+ [49], we obtain the results using the same hyper-parameter setting as our approach with CLIP models of two different backbones, ResNet50 and ViT-B/16. Meanwhile, GroupViT [46], ReCo [37], and ReCo+ [37] use completely different training mechanisms compared with our method. GroupViT is trained on image-caption pairs, and ReCo/ReCo+ combines image retrieval and co-segmentation. For comparison, we take the best results from [46, 37] for GroupViT, ReCo, and ReCo+. Table 2 shows the benchmarking results of the aforementioned methods. Our method consistently outperforms the state-of-the-art on both datasets with CLIP models of different backbones.
To evaluate models' performance for _class-free semantic segmentation_ with both known and unknown classes, we split the 59 classes of Pascal Context into 4 folds, where each fold includes around 15 classes. For each experiment, classes from one fold are considered as unknown and **excluded during training**. The mIoUs of known and unknown classes, as well as their harmonic mean (hIoU) are reported in Tab. 3. The performance of our method is averaged across 5 runs with randomly initialized prototypes of unknown classes. Our method achieves significant gains over MaskCLIP+ on unknown classes, which are comparable to MaskCLIP. Also, our method outperforms both MaskCLIP and MaskCLIP+ on known classes, which leads to better overall performance.
Qualitatively, the visualization in Fig. 4(a) offers us some insights into why our approach can achieve better results: our model yields _consistent_ embeddings aligned with the pre-trained CLIP model. Fig. 4(a) visualizes the projection of segment embeddings generated by different methods on Pascal Context and COCO-Stuff. We observe that segment embeddings generated by MaskCLIP+ are distorted by the given text embeddings. Meanwhile, CLIP-S\({}^{4}\) generates segment embeddings that are well aligned with the segment embeddings derived from the pre-trained CLIP model. Hence, segment embeddings generated by CLIP-S\({}^{4}\) can better capture both known and unknown classes. Fig. 4(b) shows
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & CLIP & Pascal & COCO- \\ & Model & Context & Stuff \\ \hline \hline \multicolumn{4}{c}{mIoU} & mIoU \\ \hline GroupViT [46] & - & 22.4 & - \\ \hline ReCo [37] & ResNet50x16 + & 26.6 & - \\ ReCo+ [37]\(\dagger\) & ViT-L/14@336px & - & 18.4 \\ \hline \multirow{2}{*}{MaskCLIP [49]} & ResNet50 & 18.6 & 10.6 \\ & ViT-B/16 & 25.2 & 15.2 \\ \hline \multirow{2}{*}{MaskCLIP+ [49]\(\dagger\)} & ResNet50 & 23.4 & 13.9 \\ & ViT-B/16 & 32.2 & 20.7 \\ \hline \multirow{2}{*}{CLIP-S\({}^{4}\dagger\)} & ResNet50 & **28.5 (+5.1)** & **16.7 (+2.8)** \\ & ViT-B/16 & **33.6 (+1.4)** & **22.1 (+1.4)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Language-guided semantic segmentation benchmarks (mIoU). CLIP-S\({}^{4}\) consistently outperforms the state-of-the-art methods on both Pascal Context and COCO-Stuff datasets with CLIP models of different backbones. \(\dagger\) indicates the models are fine-tuned on target datasets.**
image segments retrieved from the COCO-Stuff validation set using MaskCLIP+ and CLIP-S\({}^{4}\) for a set of classes that are not used in training. For each class, we obtain its text embedding and compare it with segment embeddings to obtain top retrievals from both methods. Due to better alignment with the pre-trained CLIP model, CLIP-S\({}^{4}\) retrieves images that are more closely related to the unknown classes compared with MaskCLIP+.
### Unsupervised Semantic Segmentation
To study whether CLIP-S\({}^{4}\) can generate pixel embeddings that form distinctive clusters, we evaluate CLIP-S\({}^{4}\) on the unsupervised semantic segmentation task for Pascal VOC 2012. To derive semantic segmentation from pixel embeddings, we use and test two approaches, including \(k\) nearest neighbor (\(k\)-NN) search [19] and linear classification [40]. For \(k\)-NN search, we assign each segment a class label by the majority vote of its nearest neighbors from the training set following [19]. For linear classification, we train a linear classifier on the learned pixel embeddings following [40].
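For reference, the segment-level \(k\)-NN voting used in this evaluation can be sketched as follows (the value of \(k\) and the unit-normalised embeddings are assumptions):

```python
import torch

def knn_segment_labels(query_seg_emb, train_seg_emb, train_labels, k=20):
    """Each validation segment receives the majority label of its k nearest
    training segments in embedding space (k here is a placeholder value)."""
    sims = query_seg_emb @ train_seg_emb.t()      # assumes unit-normalised embeddings
    nn_idx = sims.topk(k, dim=1).indices          # (Q, k) nearest-neighbour indices
    nn_labels = train_labels[nn_idx]              # (Q, k) their class labels
    return torch.mode(nn_labels, dim=1).values    # majority vote per segment
```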
We compare our method with the state-of-the-art unsupervised and language-guided semantic segmentation approaches. We train the state-of-the-art models using the same hyper-parameter setting as our approach except for IIC [22] and Hierarchical Grouping [47] as they use different training mechanisms. For comparison, we take the best results for IIC and Hierarchical Grouping. The benchmark results are shown in Tab. 4. With the vision-language guidance, our method achieves significant gains compared with the previous non-CLIP-based approaches (i.e., +9% for both \(k\)-NN search and linear classification). Meanwhile, our method also outperforms the language-guided semantic segmentation approaches by a large margin.
### Instance Mask Tracking
We evaluate the transferability of pixel embeddings learned from the Pascal VOC 2012 dataset. We use the pixel embeddings to track instance masks in the DAVIS 2017 validation set, where the instance masks at the first frame are given for each video. Following the prior work [48], we use the similarity between pixel embeddings across frames to propagate the instance masks to the rest of the video frames. We evaluate the performance using the region similarity \(\mathcal{J}\)
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & mIoU & mIoU \\ & \(k\)-NN & Linear Classifier \\ \hline IIC [22] & - & 28.0 \\ SegSort [19] & 47.3 & 55.4 \\ Hierarchy. Group. [47] & - & 48.8 \\ MaskContrast (Sup.) [40] & 53.9 & 63.9 \\ ConceptContrast [18] & 58.8 & 60.4 \\ HSG [23] & 61.7 & - \\ \hline MaskCLIP [49] & 67.3 & 69.5 \\ MaskCLIP+ [49] & 65.1 & 70.0 \\ \hline CLIP-S\({}^{4}\) & **72.0** (**+4.7**) & **73.0** (**+3.0**) \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Unsupervised semantic segmentation benchmarks (mIoU) on Pascal VOC 2012.** CLIP-S\({}^{4}\) consistently outperforms the state-of-the-art methods on both \(k\)-NN search and linear classification.
Figure 4: (a) Projection of pixel embeddings generated by CLIP, MaskCLIP+, and CLIP-S\({}^{4}\) trained on Pascal Context with CLIP’s ViT-B/16 model (left) and COCO-Stuff with CLIP’s ResNet50 model (right). MaskCLIP+ distorts pixel embeddings with respect to the given text embeddings, while CLIP-S\({}^{4}\) aligns pixel embeddings with the pre-trained CLIP model. (b) Image segments retrieved for classes that are not used during training. Correct and incorrect retrievals are outlined in green and orange, respectively. Compared with MaskCLIP+, CLIP-S\({}^{4}\) retrieves images that are more closely related to the classes.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Method & \multicolumn{3}{c}{fold0} & \multicolumn{3}{c}{fold1} & \multicolumn{3}{c}{fold2} & \multicolumn{3}{c}{fold3} \\ & mIoU\({}_{u}\) & mIoU\({}_{k}\) & hIoU & mIoU\({}_{u}\) & mIoU\({}_{k}\) & hIoU & mIoU\({}_{u}\) & hIoU & mIoU\({}_{u}\) & mIoU\({}_{k}\) & hIoU \\ \hline MaskCLIP & 29.7 & 23.7 & 26.3 & **23.7** & 25.7 & 24.6 & **23.9** & 25.7 & 24.7 & 23.4 & 25.8 & 24.5 \\ MaskCLIP+ & 3.6 & 28.5 & 6.3 & 3.0 & 29.2 & 5.4 & 4.8 & 29.2 & 8.2 & 4.5 & 29.9 & 7.8 \\ CLIP-S\({}^{4}\) & **32.0\(\pm\)0.8** & **29.4\(\pm\)0.3** & **30.6\(\pm\)0.5** & 22.3\(\pm\)0.9 & **32.8\(\pm\)0.4** & **26.5\(\pm\)0.6** & 22.4\(\pm\)0.5 & **32.1\(\pm\)0.5** & **26.4\(\pm\)0.4** & **28.6\(\pm\)0.8** & **31.5\(\pm\)0.2** & **30.0\(\pm\)0.5** \\ _vs. MaskCLIP+_ & **+28.4** & **+0.9** & **+24.3** & **+19.3** & **+3.6** & **+21.1** & **+17.6** & **+2.9** & **+18.2** & **+24.1** & **+1.6** & **+22.2** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Language-guided semantic segmentation benchmarks (mIoU) for unknown classes.** The classes of Pascal Context are split into 4 folds with around 15 classes per fold. For each experiment, classes of one fold are considered as unknown. The performance of CLIP-S\({}^{4}\) is averaged over 5 runs with randomly initialized unknown class embeddings. CLIP-S\({}^{4}\) significantly outperforms MaskCLIP+ on unknown classes. Meanwhile, CLIP-S\({}^{4}\) achieves consistent gains on known classes over MaskCLIP and MaskCLIP+, and hence leads to better overall performance.
(IoU) and the contour-based accuracy \(\mathcal{F}\) defined by [33].
We compare our method with existing supervised [4, 32], unsupervised [18, 24, 40, 42, 43, 47], and language-guided [49] approaches (Tab. 5). Though not trained on any video sequences, our method outperforms the existing approaches by more than 1.9% and 2.9% in terms of the region similarity \(\mathcal{J}\) and contour accuracy \(\mathcal{F}\), respectively. Note that the pixel embeddings generated by MaskCLIP+ [49] are distorted by the classes from Pascal VOC 2012, which hinder their transferability.
### Ablation Study
We study the contribution of different losses of our method using the Pascal Context dataset and the language-guided semantic segmentation task. The performance is evaluated with pixel accuracy (pAcc) and mIoU. We also calculate the average cosine similarity (\(avgsim\)) between our segment embeddings and CLIP's segment embeddings to quantify the alignment. Tab. 6 shows the study results. We observe that by introducing embedding consistent loss \(\mathcal{L}_{e}\) the learned segment embeddings are well aligned with CLIP's embeddings with an average cosine similarity of 0.79. However, the learned segment embeddings do not perform well on the language-guided semantic segmentation task (24.3 vs. 33.6), because the segment embeddings are not optimized to classify target classes. Meanwhile, by using semantic consistent loss \(\mathcal{L}_{s}\) without embedding consistent loss, the learned segment embeddings have the discriminative power to classify different classes but are not aligned with CLIP's embeddings as the average cosine similarity is 0.33. As a result, the segment embeddings are limited to the target classes used during training. Hence, we combine \(\mathcal{L}_{e}\) and \(\mathcal{L}_{s}\) to balance the discriminative power over target classes and the alignment with CLIP. Meanwhile, we observe that with pixel-segment contrastive learning, the model can achieve better performance.
Also, we study the influence of the number of unknown class prototypes on the Pascal VOC dataset for the unsupervised semantic segmentation task. The results in Tab. 7 show that the semantic segmentation performance is robust to the tested number of unknown class prototypes, as the mIoU varies by only 0.6%. Furthermore, we investigate how the size of the top-\(m\) segment set impacts the embeddings of class prototypes. We compare the embeddings of class prototypes generated with different numbers of top-\(m\) segments on the Pascal VOC dataset. We use the embeddings of class prototypes generated with \(m=32\) segments as the reference, and compute the cosine similarity between the reference prototype embeddings and those generated with different top-\(m\) values. For each case, the cosine similarity is averaged over all class prototypes. We observe that the embeddings of class prototypes are relatively stable if a moderate number of top-\(m\) segments (e.g., \(m=32\) in this work) is used (Tab. 8).
## 5 Conclusion
We propose CLIP-S\({}^{4}\), a novel pixel representation learning approach for semantic segmentation. Our method combines self-supervised contrastive learning and guidance of CLIP to learn consistent pixel embeddings with respect to visual and conceptual semantics. Our experiments on popular semantic segmentation benchmarks demonstrate consistent gains over the state-of-the-art unsupervised semantic segmentation and language-driven semantic segmentation methods, especially for unknown classes.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \(\mathcal{L}_{t}\) & \(\mathcal{L}_{e}\) & \(\mathcal{L}_{s}\) & pAcc & mIoU & \(avgsim\) \\ \hline ✓ & - & - & 1.6 & 0.5 & -0.01 \\ ✓ & ✓ & - & 48.1 & 24.3 & 0.79 \\ ✓ & - & ✓ & 52.3 & 32.9 & 0.33 \\ - & ✓ & ✓ & 48.6 & 31.3 & - \\ ✓ & ✓ & ✓ & **53.7** & **33.6** & 0.66 \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Ablation study on the contribution of each loss of CLIP-S\({}^{4}\).** Experimented on language-guided semantic segmentation of Pascal Context. \(avgsim\) represents the average cosine similarity between segment embeddings generated by CLIP-S\({}^{4}\) and CLIP. By combining the embedding and semantic consistent losses \(\mathcal{L}_{e}\) and \(\mathcal{L}_{s}\), CLIP-S\({}^{4}\) achieves better semantic segmentation performance while maintaining the alignment with CLIP’s embeddings.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & \(\mathcal{J}\)(Mean)\(\uparrow\) & \(\mathcal{F}\)(Mean)\(\uparrow\) \\ \hline MaskTrack-B [32] & 35.3 & 36.4 \\ OSVOS-B [4] & 18.5 & 30.0 \\ \hline Video Colorization [42] & 34.6 & 32.7 \\ CycleTime [43] & 41.9 & 39.4 \\ mpFFF [24] & 42.2 & 46.9 \\ Hierarch. Group. [47] & 47.1 & 48.9 \\ MaskContrast (Sup.) [40] & 34.3 & 36.7 \\ ConceptContrast [18] & 50.4 & 53.9 \\ \hline MaskCLIP [49] & 48.1 & 49.2 \\ MaskCLIP+ [49] & 42.6 & 44.2 \\ \hline CLIP-S\({}^{4}\) & **52.3 (+1.9)** & **56.8 (+2.9)** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Instance mask tracking on the DAVIS 2017 validation set**, reported as region similarity \(\mathcal{J}\) and contour accuracy \(\mathcal{F}\). Although not trained on any video sequences, CLIP-S\({}^{4}\) outperforms the existing supervised, unsupervised, and language-guided approaches.
|
2305.19169 | Exploring interacting chiral spin chains in terms of black hole physics | In this paper we explore the properties of a 1-dimensional spin chain in the
presence of chiral interactions, focusing on the system's transition to
distinct chiral phases for various values of the chiral coupling. By employing
the mean field theory approximation we establish a connection between this
chiral system and a Dirac particle in the curved spacetime of a black hole.
Surprisingly, the black hole horizon coincides with the interface between
distinct chiral phases. We examine the chiral properties of the system for
homogeneous couplings and in scenarios involving position dependent couplings
that correspond to black hole geometries. To determine the significance of
interactions in the chiral chain we employ bosonization techniques and derive
the corresponding Luttinger liquid model. Furthermore, we investigate the
classical version of the model to understand the impact of the chiral operator
on the spins and gain insight into the observed chirality. Our findings shed
light on the behavior of the spin chain under the influence of the chiral
operator, elucidating the implications of chirality in various contexts,
including black hole physics. | Ewan Forbes, Matthew D. Horner, Andrew Hallam, Joseph Barker, Jiannis K. Pachos | 2023-05-30T16:15:55Z | http://arxiv.org/abs/2305.19169v2 | # Exploring interacting chiral spin chains in terms of black hole physics
###### Abstract
In this paper we explore the properties of a 1-dimensional spin chain in the presence of chiral interactions, focusing on the system's transition to distinct chiral phases for various values of the chiral coupling. By employing the mean field theory approximation we establish a connection between this chiral system and a Dirac particle in the curved spacetime of a black hole. Surprisingly, the black hole horizon coincides with the interface between distinct chiral phases. We examine the chiral properties of the system for homogeneous couplings and in scenarios involving position dependent couplings that correspond to black hole geometries. To determine the significance of interactions in the chiral chain we employ bosonization techniques and derive the corresponding Luttinger liquid model. Furthermore, we investigate the classical version of the model to understand the impact of the chiral operator on the spins and gain insight into the observed chirality. Our findings shed light on the behavior of the spin chain under the influence of the chiral operator, elucidating the implications of chirality in various contexts, including black hole physics.
## I Introduction
An intriguing family of lattice models can be described by relativistic physics in their continuum limit. One prominent illustration of this phenomenon is graphene, whose behavior at low energy can be effectively described by the renowned Dirac equation [1; 2]. Similar relativistic descriptions can be found in diverse examples such as Kitaev's honeycomb model [3; 4], superconductors [5; 6], and the XX model [7; 8]. These relativistic frameworks not only deepen our understanding of these systems but also pave the way for the simulation of high-energy physics with table-top experiments.
In this paper, we explore a chiral modification of the 1D spin-1/2 XX model [9]. The XX model can be expressed in terms of free fermions and thus it is analytically tractable and well understood. The introduction of a three-spin chiral term renders it interacting and thus hard to investigate analytically or numerically. It is noteworthy that such chiral systems exhibit a rich spectrum of quantum correlations [10] and they can give rise to skyrmions [11]. Remarkably, we demonstrate that these chiral systems can be effectively modeled by the Dirac equation on a curved spacetime. This intriguing connection offers a unique opportunity to realize a black hole background within the laboratory setting.
The emergent black hole physics is explicitly revealed by applying the mean field (MF) approximation, and its existence can be verified by investigating the Hawking effect. Hawking radiation, resulting from vacuum fluctuations of quantum fields near a black hole's horizon, leads to the evaporation of the black hole [12; 13]. The mechanism used to identify this effect [9], in which a wavepacket tunnels across the horizon and escapes with a thermal distribution, was originally derived in [14] and allows Hawking radiation to be simulated in fermionic lattice models [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. We test the reliability of this approximation through a detailed analysis of the bosonization of the full spin model. We find that the MF approximation faithfully predicts a phase transition between a chiral and non-chiral phase. Remarkably, the emergent event horizon aligns with the interface between chiral and non-chiral phases. In particular, we find that the inside of the black hole corresponds to a chiral region with a central charge of \(c=2\) where the chiral interaction is dominant. The outside corresponds to a non-chiral region where the XX model is dominant, with a central charge \(c=1\). Subsequently, we examine the MF approximation's validity by employing bosonization techniques, which allow us to map the fully interacting Hamiltonian onto a Luttinger liquid [31]. Additionally, we employ a classical version of the system to further analyze and understand these effects in a comprehensive manner. By doing so, we gain valuable insights into the impact of chirality on the spins along the chain and its consequential effects on the entire system. We envision that the presented geometric description provides an elegant formalism to model strongly interacting systems and their interfaces also in higher dimensions and thus predict their behaviour.
This article is organised as follows: In Section II we introduce our model, then diagonalize it to highlight the characteristics of the system, such as the transition in the dispersion relation due to the effect of the chiral order parameter. In Section III we give an effective description
of the chiral chain in terms of black hole geometry. In Section IV we investigate the chiral spin operator and its expectation value in the ground state of the system, both in the flat and curved space cases. In Section V we use bosonization to ascertain the significance of the interactions, and in Section VI we use a classical version of the system to gain a geometric intuition on chiral interactions. We give concluding remarks and an outlook in Section VII.
## II Chiral chain model
Here we introduce the chiral spin chain, transform it to interacting fermions and then apply mean field theory to determine its properties.
### The mean field approximation
The system we investigate here is the one-dimensional spin-\(\frac{1}{2}\) chain with the Hamiltonian
\[H=\sum_{n=0}^{N-1}\left[-\frac{u}{2}\left(\sigma_{n}^{x}\sigma_{n+1}^{x}+ \sigma_{n}^{y}\sigma_{n+1}^{y}\right)-\frac{v}{4}\chi_{n}\right], \tag{1}\]
where the spin chirality operator is [11; 32]
\[\chi_{n}=\vec{\sigma}_{n}\cdot\left(\vec{\sigma}_{n+1}\times\vec{\sigma}_{n+2 }\right), \tag{2}\]
where \(\vec{\sigma}_{n}=(\sigma_{n}^{x},\sigma_{n}^{y},\sigma_{n}^{z})\) is the spin vector of Pauli operators, and the \(u,v\) couplings are real numbers with dimensions of energy. This model is a modified XX model with an additional three-spin interaction term \(\chi\), as shown in Fig. 1(a). Here, we adopt periodic boundary conditions with \(\vec{\sigma}_{N}=\vec{\sigma}_{0}\). Introducing \(\sigma_{n}^{\pm}=(\sigma_{n}^{x}\pm i\sigma_{n}^{y})/2\) and employing the Jordan-Wigner transformation defined by \(\sigma_{n}^{+}=(-1)^{\Sigma_{n}}c_{n}\), where \(\Sigma_{n}=\sum_{m<n}c_{m}^{\dagger}c_{m}\) and \(\sigma_{n}^{z}=2c_{n}^{\dagger}c_{n}-1\)[33], we can map the Hamiltonian to
\[\begin{split} H=\sum_{n=0}^{N-1}\bigg{\{}&-uc_{n}^{ \dagger}c_{n+1}-\frac{iv}{2}c_{n}^{\dagger}c_{n+2}\\ &-\frac{iv}{2}\Big{[}c_{n}^{\dagger}c_{n+1}(2c_{n+2}^{\dagger}c_ {n+2}-1)\\ &-c_{n+1}^{\dagger}c_{n+2}(2c_{n}^{\dagger}c_{n}-1)\Big{]}\bigg{\}} +\text{H.c.},\end{split} \tag{3}\]
where \(c_{n}\) are a set of fermionic modes obeying the anti-commutation relations \(\{c_{n},c_{m}\}=\{c_{m}^{\dagger},c_{n}^{\dagger}\}=0\) and \(\{c_{n},c_{m}^{\dagger}\}=\delta_{mn}\). We see that the model is intrinsically interacting as the fermionic Hamiltonian contains quartic terms.
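For concreteness, a brute-force construction of the spin Hamiltonian of Eqs. (1)-(2) on a small periodic chain is sketched below (this is not part of the original work); the chain length and couplings are arbitrary example values, and exact diagonalization of the \(2^{N}\times 2^{N}\) matrix is only feasible for small \(N\).

```python
import numpy as np
from functools import reduce

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = {'x': sx, 'y': sy, 'z': sz}

def op(N, which):
    """Tensor product operator acting with pauli[which[n]] on the sites listed in `which`."""
    mats = [pauli[which[n]] if n in which else np.eye(2) for n in range(N)]
    return reduce(np.kron, mats)

def chiral_chain(N, u, v):
    """Hamiltonian of Eq. (1) with periodic boundary conditions."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for n in range(N):
        a, b, c = n, (n + 1) % N, (n + 2) % N
        H -= 0.5 * u * (op(N, {a: 'x', b: 'x'}) + op(N, {a: 'y', b: 'y'}))
        # chirality chi_n = sigma_n . (sigma_{n+1} x sigma_{n+2}), expanded via Levi-Civita
        for (i, j, k), sign in zip([('x', 'y', 'z'), ('y', 'z', 'x'), ('z', 'x', 'y'),
                                    ('x', 'z', 'y'), ('y', 'x', 'z'), ('z', 'y', 'x')],
                                   [1, 1, 1, -1, -1, -1]):
            H -= 0.25 * v * sign * op(N, {a: i, b: j, c: k})
    return H

E = np.linalg.eigvalsh(chiral_chain(N=8, u=1.0, v=0.5))
print("ground-state energy per site:", E[0] / 8)
```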
To analyse the behaviour of the interacting model, we apply mean field theory (MFT) to transform the Hamiltonian into an effective quadratic Hamiltonian which can be efficiently diagonalised. MFT defines the fluctuation of an operator \(A\) as \(\delta A=A-\langle A\rangle\), where \(\langle A\rangle\) is the expectation value of the operator \(A\) with respect to the mean field ground state \(|\Omega\rangle\). For a product of two operators we have
\[AB=\langle A\rangle B+A\langle B\rangle-\langle A\rangle\langle B\rangle+ \delta A\delta B, \tag{4}\]
where the second order in fluctuations can be ignored. Applying this to the interacting terms of Eq. (3), the Hamiltonian becomes
\[\begin{split} H_{\text{MF}}(\alpha,Z)&=\sum_{n=0}^{ N-1}\left[-(u-ivZ)c_{n}^{\dagger}c_{n+1}-\frac{iv}{2}c_{n}^{\dagger}c_{n+2} \right]\\ &+\mu\sum_{n=0}^{N-1}c_{n}^{\dagger}c_{n}+E_{0}+\text{H.c.}, \end{split} \tag{5}\]
where \(\mu=2v\text{Im}(\alpha)\) is an effective chemical potential controlling the number of particles in the ground state, \(E_{0}=v(Z-1)\text{Im}(\alpha)\) is a constant energy shift, and \(\langle\sigma_{n}^{z}\rangle=Z\), \(\langle c_{n}^{\dagger}c_{n+1}\rangle=\alpha\), where the expectation value is done with respect to the ground state of the mean field Hamiltonian, \(|\Omega(\alpha,Z)\rangle\), for given values of \(\alpha\) and \(Z\). Self consistency requires \(\langle\Omega(\alpha,Z)|\sigma_{n}^{z}|\Omega(\alpha,Z)\rangle=Z\) and \(\langle\Omega(\alpha,Z)|c_{n}^{\dagger}c_{n+1}|\Omega(\alpha,Z)\rangle=\alpha\) for all \(n\). While these two equations have many solutions, we can single one out on physical grounds: the original Hamiltonian of Eq. (3) has particle-hole symmetry, \([H,U]=0\), where \(U\) is the particle-hole transformation with \(Uc_{n}U^{\dagger}=(-1)^{n}c_{n}^{\dagger}\) and \(Uc_{n}^{\dagger}U^{\dagger}=(-1)^{n}c_{n}\). This symmetry implies that \(\langle c_{n}^{\dagger}c_{n}\rangle=1/2\) and \(\langle c_{n}^{\dagger}c_{n+1}\rangle\in\mathbb{R}\) in the ground state. If
Figure 1: (a) The interactions of the lattice diagrammatically portrayed, with the nearest neighbour interaction strength defined by \(u\) and next-to-nearest neighbour interactions by \(v\)[9], and with the separation of neighbouring spins into groups \(A\) and \(B\) representing the unit cell described in Eq. (18). The chirality operator calculates the interaction of the three spins in each triangular space [11]. (b) The dispersion relation of the Hamiltonian for various values of \(v\). We see that two additional Fermi points appear if \(v>u\), which divides the negative-energy portion of the Brillouin zone into two disconnected regions.
we require the MFT to retain the particle-hole symmetry, then these conditions imply that \(Z=\mu=0\), and the MFT Hamiltonian becomes
\[H_{\rm MF}=\sum_{n=0}^{N-1}\left(-uc_{n}^{\dagger}c_{n+1}-\frac{iv}{2}c_{n}^{\dagger}c_{n+2}\right)+{\rm H.c.}. \tag{6}\]
This Hamiltonian is quadratic and periodic, hence it can be diagonalised with a Fourier transform
\[c_{n}=\frac{1}{\sqrt{N}}\sum_{p\in{\rm BZ}}e^{ianp}c_{p}, \tag{7}\]
where \({\rm B.Z.}=[-\pi/a,\pi/a)\) is the Brillouin zone, \(p\) are the momenta quantised as \(p=2n\pi/Na\) for \(n\in\mathbb{Z}\), \(c_{p}\) are the momentum space fermionic modes, and \(a\) is the lattice spacing. This brings the Hamiltonian into the diagonal form
\[H_{\rm MF}=\sum_{p\in{\rm BZ}}E(p)c_{p}^{\dagger}c_{p}, \tag{8}\]
where the dispersion relation is given by
\[E(p)=-2u\cos(ap)+v\sin(2ap), \tag{9}\]
as shown in Fig. 1. The Fermi points of this model, defined as the points \(\{p_{i}\}\) such that \(E(p_{i})=0\), are given by \(p_{\pm}=\pm\pi/2a\) for \(|v|<|u|\), whilst for \(|v|\geq|u|\) we find two additional Fermi points located at
\[p_{1}=\frac{1}{a}\sin^{-1}\left(\frac{u}{v}\right),\quad p_{2}=\frac{\pi}{a}-p _{1}, \tag{10}\]
as shown in Fig. 1(b). These additional points arise due to the Nielsen-Ninomiya theorem.
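The dispersion of Eq. (9) and the Fermi points of Eq. (10) are easily checked numerically. The following is a minimal Python sketch; the values \(u=1\), \(v=1.5\) and \(a=1\) are arbitrary illustrative choices.

```python
import numpy as np

def dispersion(p, u=1.0, v=1.5, a=1.0):
    """Mean-field dispersion E(p) = -2u cos(ap) + v sin(2ap) of Eq. (9)."""
    return -2.0 * u * np.cos(a * p) + v * np.sin(2.0 * a * p)

u, v, a = 1.0, 1.5, 1.0

# p = +-pi/(2a) are always Fermi points; for |v| >= |u| two additional
# zero-energy crossings appear at p1 and pi/a - p1, Eq. (10).
fermi_points = [np.pi / (2 * a), -np.pi / (2 * a)]
if abs(v) >= abs(u):
    p1 = np.arcsin(u / v) / a
    fermi_points += [p1, np.pi / a - p1]

for pf in fermi_points:
    print(f"E({pf:+.4f}) = {dispersion(pf, u, v, a):+.2e}")
```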
### Phase transitions
To investigate the nature of quantum phases supported by Eq. (1), and the transitions between them, we consider the case of homogeneous couplings \(u\) and \(v\) along the chain. In this section, we focus on the predictions of the mean field Hamiltonian of Eq. (6) and compare it with the results obtained using matrix product state analysis of the spin Hamiltonian of Eq. (1) [9]. All analytic calculations of this section are done using the mean field theory.
#### ii.2.1 Correlations
The correlation matrix is defined as \(C_{nm}=\langle\Omega|c_{n}^{\dagger}c_{m}|\Omega\rangle\), where \(|\Omega\rangle\) is the ground state of Hamiltonian (6). Mapping to momentum space with a discrete Fourier transform as in Eq. (7), we can write
\[\begin{split} C_{nm}&=\frac{1}{N}\sum_{p,q\in{\rm BZ }}e^{-ipn}e^{iqm}\langle\Omega|c_{p}^{\dagger}c_{q}|\Omega\rangle\\ &=\frac{1}{2\pi}\sum_{p:E(p)<0}\Delta pe^{-ip(n-m)}\\ &=\frac{1}{2\pi}\int_{p:E(p)<0}{\rm d}pe^{-ip(n-m)},\end{split} \tag{11}\]
where in the second equality we used the fact that the ground state \(|\Omega\rangle\) has all negative energy states occupied, so \(\langle\Omega|c_{p}^{\dagger}c_{q}|\Omega\rangle=\delta_{pq}\theta(-E(p))\) and used the fact that eigenstates are separated in momentum space by \(\Delta p=2\pi/N\) for a lattice spacing \(a=1\) to rewrite the sum as a Riemann sum. In the third equality we took the thermodynamic limit \(N\to\infty\) mapping the sum to an integral which can now be solved analytically.
For \(|v|<|u|\) the correlation function is given by
\[C_{nm}=\frac{\sin\left[\frac{\pi}{2}(n-m)\right]}{\pi(n-m)}. \tag{12}\]
For \(|v|>|u|\), the negative energy portion of the Brillouin zone splits into two disconnected regions so the integral splits into two as
\[\begin{split} C_{nm}&=\frac{1}{2\pi}\left(\int_{- \frac{\pi}{2}}^{p_{1}}{\rm d}p+\int_{\frac{\pi}{2}}^{\pi-p_{1}}{\rm d}p\right) e^{-ipa(n-m)}\\ &=\frac{i}{2\pi(n-m)}\bigg{\{}-2\cos\left[(n-m)\frac{\pi}{2} \right]\\ &+(-1)^{n-m}e^{ip_{1}(n-m)}+e^{-ip_{1}(n-m)}\bigg{\}},\end{split} \tag{13}\]
which is now a function of \(v\) and is not smooth. The fact that the correlation matrix is not a smooth function of \(v\) is a consequence of the change in topology of the Fermi sea, shown by the grey portion of Fig. 1. As observables are derived from the correlation matrix, this behaviour is the root cause of the phase transition exhibited by the model.
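These closed forms can be checked against a direct lattice computation, obtained by diagonalising the single-particle matrix of the mean-field Hamiltonian (6) on a finite periodic chain and filling all negative-energy modes. The sketch below is illustrative only (the size \(N=400\) and the couplings are arbitrary choices), and agreement with Eq. (13) is expected only up to finite-size corrections of order \(1/N\).

```python
import numpy as np

def correlation_matrix(N=400, u=1.0, v=1.5):
    """Ground-state correlations C_nm = <c_n^dag c_m> of Hamiltonian (6)
    on a periodic chain, obtained by filling all negative-energy modes."""
    h = np.zeros((N, N), dtype=complex)
    for n in range(N):
        h[n, (n + 1) % N] += -u            # nearest-neighbour hopping
        h[n, (n + 2) % N] += -0.5j * v     # next-to-nearest (chiral) hopping
    h += h.conj().T                        # add the Hermitian conjugate
    E, U = np.linalg.eigh(h)
    occ = U[:, E < 0]                      # occupied single-particle modes
    return occ.conj() @ occ.T              # C[n, m] = sum_k U*_{nk} U_{mk}

def C_analytic(d, u=1.0, v=1.5):
    """Thermodynamic-limit correlations of Eq. (13), valid for |v| > |u|."""
    p1 = np.arcsin(u / v)
    return (1j / (2 * np.pi * d)) * (-2 * np.cos(np.pi * d / 2)
            + (-1) ** d * np.exp(1j * p1 * d) + np.exp(-1j * p1 * d))

C = correlation_matrix()
n = 200
for m in (n + 1, n + 2, n + 3):
    lattice, exact = complex(C[n, m]), complex(C_analytic(n - m))
    print(f"d = {n - m}: lattice {lattice:.4f}, analytic {exact:.4f}")
```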
#### ii.2.2 Energy density
The ground state \(|\Omega\rangle\) is the state for which the Fermi sea is fully occupied. Therefore, the ground state energy density is given by
\[\rho_{0}=\frac{1}{N}\langle\Omega|H|\Omega\rangle\to\frac{1}{2\pi}\int_{p:E(p) <0}{\rm d}pE(p), \tag{14}\]
where we took the thermodynamic limit \(N\to\infty\) by using the standard trick of moulding the sum into a Riemann
sum and taking the limit. We have
\[\rho_{0}=\begin{cases}-\frac{2u}{\pi}&|v|\leq|u|\\ -\frac{1}{\pi}\left(\frac{u^{2}}{v}+v\right)&|v|>|u|.\end{cases} \tag{15}\]
If we look at the derivatives of the energy density, we see that the model exhibits a second order phase transition as we change \(v\). The first derivative of \(\rho_{0}\) is continuous, but there exists a discontinuity in the second derivative as
\[\frac{\partial^{2}\rho_{0}}{\partial v^{2}}=\begin{cases}0&|v|\leq|u|\\ -\frac{2}{\pi}\frac{u^{2}}{v^{3}}&|v|>|u|\end{cases}, \tag{16}\]
revealing that the point \(|v|=|u|\) corresponds to the critical point of a second order phase transition.
In Fig. 2, we compare the ground state energy density vs. \(v\) for the MPS numerics of the spin model [9] and the mean field approximation for a system of \(N=200\). We see that the mean field agrees well with the spin model, accurately predicting the location of the critical point. Below the critical point, the two models agree exactly, which suggests that the interactions induced by the chiral term are irrelevant in the ground state. Nevertheless, interactions become significant above the critical point.
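A quick numerical cross-check of Eq. (15) can be obtained by filling the Fermi sea of the dispersion (9) on a finite chain; the sketch below uses illustrative values and agrees with the closed form up to discretisation errors of order \(1/N\).

```python
import numpy as np

def rho0_lattice(v, u=1.0, N=4000):
    """Ground-state energy density from filling all E(p) < 0 modes of Eq. (9)."""
    p = -np.pi + 2 * np.pi * np.arange(N) / N        # momentum grid, a = 1
    E = -2 * u * np.cos(p) + v * np.sin(2 * p)
    return E[E < 0].sum() / N

def rho0_exact(v, u=1.0):
    """Closed-form energy density of Eq. (15)."""
    return -2 * u / np.pi if abs(v) <= abs(u) else -(u**2 / v + v) / np.pi

for v in (0.5, 1.0, 1.5, 2.5):
    print(f"v = {v}: lattice {rho0_lattice(v):+.5f}, exact {rho0_exact(v):+.5f}")
```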
#### ii.2.3 Central charge
To gain further insight into the nature of the chiral phase transition, we consider the behaviour of the ground state bipartite entanglement entropy as a function of \(v\). Consider partitioning the system into two subsystems, \(\mathcal{A}\) and \(\mathcal{B}\), where \(\mathcal{A}\) contains \(L\ll N\) adjacent spins. We define the reduced density matrix of \(\mathcal{A}\) as the partial trace over the remaining \(N-L\) spins of \(\mathcal{B}\) as \(\rho_{\mathcal{A}}=\mathrm{Tr}_{\mathcal{B}}(\rho)\), where \(\rho\) is the state of the whole system. As we are interested in the ground state only, we have \(\rho=|\Omega\rangle\langle\Omega|\), where \(|\Omega\rangle\) is the (pure) ground state of the total system. The entanglement entropy is defined as \(S_{\mathcal{A}}=-\mathrm{Tr}(\rho_{\mathcal{A}}\ln\rho_{\mathcal{A}})\). As discussed above, the model is gapless for all \(v\) so it can be described by a conformal field theory (CFT) [34]. In this case we expect the ground state entanglement entropy of a partition of spins to obey the Cardy formula
\[S_{\mathcal{A}}(L)=\frac{c}{3}\ln L+S_{0}, \tag{17}\]
where \(c\) is the central charge of the CFT and \(S_{0}\) is a constant [35; 10], which applies to both the original spin model and the mean field approximation. We can measure the entanglement entropy of the mean field model quite simply by using the correlation matrix. We find that the scaling behaviour of the entanglement entropy follows this formula, as shown in Fig. 3(a), allowing us to extract the central charge \(c\) for various values of \(v\).
Using the MPS results we compare the spin model and the mean field approximation. In Fig. 3(b) we see that \(c\approx 1\) in the XX phase which jumps to \(c\approx 2\) in the chiral phase, with good agreement between the spin and mean field results. We can clearly interpret this in the mean field model: the additional Fermi points appearing when \(|v|>|u|\) cause the model to transition from a \(c=1\) CFT with a single Dirac fermion to a \(c=2=1+1\) CFT with two Dirac fermions, as seen by the additional Fermi points of the dispersion in Fig. 1(b). This can also be understood from the lattice structure of the MF model, as seen in Fig. 1(a), where for \(|v|\ll|u|\) a single zig-zag fermionic chain dominates (\(c=1\)) while for \(|v|\gg|u|\) two fermionic chains dominate, corresponding to the edges of the ladder, thus effectively doubling the degrees of freedom (\(c=2\)).
Figure 2: A comparison of the ground state energy density vs. \(v\) obtained from MPS simulation of the spin model from Ref. [9] and the mean field (MF) approximation for \(N=200\) spins.
Figure 3: (a) The entanglement entropy \(S_{L}\) of the mean field (MF) model vs. \(L\) for a system of size \(N=200\). We see the entanglement entropy follows Eq. (17), allowing us to extract the central charge. (b) A comparison of the central charge \(c\) of the mean field model and spin model vs. \(v\) for the same system. We see that the central charge jumps from \(c=1\) to \(c=2\) across the phase transition for the mean field, suggesting that the degrees of freedom of the model have changed.
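For the mean field model the entanglement entropy can be obtained from the eigenvalues of the correlation matrix restricted to the partition, the standard free-fermion construction. The following sketch is illustrative only; the sizes, couplings and fit range are arbitrary choices, and the extracted value should come out close to \(c\approx 2\) for the chosen \(v>u\) (and close to \(c\approx 1\) for \(v<u\)).

```python
import numpy as np

def correlations(N, u, v):
    """<c_n^dag c_m> in the half-filled ground state of Eq. (6), periodic chain."""
    h = np.zeros((N, N), dtype=complex)
    for n in range(N):
        h[n, (n + 1) % N] += -u
        h[n, (n + 2) % N] += -0.5j * v
    h += h.conj().T
    E, U = np.linalg.eigh(h)
    occ = U[:, E < 0]
    return occ.conj() @ occ.T

def entanglement_entropy(C, L):
    """Entropy of L adjacent sites from the restricted correlation matrix."""
    nu = np.linalg.eigvalsh(C[:L, :L])
    nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]
    return -np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu))

N, u, v = 400, 1.0, 1.5
C = correlations(N, u, v)
Ls = np.arange(5, 51)
S = np.array([entanglement_entropy(C, L) for L in Ls])
slope, _ = np.polyfit(np.log(Ls), S, 1)       # S_L ~ (c/3) ln L + S_0, Eq. (17)
print("estimated central charge c ~", 3 * slope)
```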
## III Emergent black hole background
### Diatomic model
To make the link with relativity, the lattice sites are now labelled as alternating between sub-lattices \(A\) and \(B\) by introducing a two-site unit cell, as shown in Fig. 4. The mean field Hamiltonian of Eq. (6) can be reparameterised as
\[H_{\rm MF}=\sum_{n}-ua_{n}^{\dagger}(b_{n}+b_{n-1})-\frac{iv}{2}(a_{n}^{ \dagger}a_{n+1}+b_{n}^{\dagger}b_{n+1})+\text{H.c.}, \tag{18}\]
where the fermionic modes \(a_{n}\) and \(b_{n}\) belong to sublattice \(A\) and \(B\), respectively, of the unit cell located at site \(n\). These modes obey the anti-commutation relations \(\{a_{n},a_{m}^{\dagger}\}=\{b_{n},b_{m}^{\dagger}\}=\delta_{nm}\), while all mixed anti-commutators vanish. The index \(n\) now labels the unit cells. A Fourier transform is performed on the fermions with the definition
\[a_{n}=\frac{1}{\sqrt{N_{c}}}\sum_{p\in\text{B.Z.}}e^{ipa_{c}n}a_{p}, \tag{19}\]
and similarly for \(b_{n}\), where \(N_{c}=N/2\) is the number of unit cells in the system, \(a_{c}=2a\) is the unit cell spacing for a given lattice spacing \(a\), and B.Z. \(=[0,2\pi/a_{c})\) is the Brillouin zone. The Fourier transformed Hamiltonian becomes
\[H_{\rm MF}=\sum_{p\in\text{B.Z.}}\chi_{p}^{\dagger}h(p)\chi_{p},\quad h(p)= \begin{pmatrix}g(p)&f(p)\\ f^{*}(p)&g(p)\end{pmatrix}, \tag{20}\]
where the two-component spinor is defined as \(\chi_{p}=(a_{p},b_{p})^{\rm T}\) and the functions are given by
\[f(p)=-u(1+e^{-ia_{c}p}),\quad g(p)=v\sin(a_{c}p). \tag{21}\]
As usual, the dispersion relation is given by the eigenvalues of the single-particle Hamiltonian \(h(p)\) which yields
\[\begin{split} E(p)&=g(p)\pm|f(p)|\\ &=v\sin(a_{c}p)\pm u\sqrt{2+2\cos(a_{c}p)}.\end{split} \tag{22}\]
In Fig. 4, it is found that the parameter \(v\) has the effect of tilting the cones as it increases. The Fermi points \(\{p_{i}\}\), defined as the points for which \(E(p_{i})=0\), are found at
\[p_{0}=\frac{\pi}{a_{\rm c}},\quad p_{\pm}=\pm\frac{1}{a_{\rm c}}\arccos\left( 1-\frac{2u^{2}}{v^{2}}\right). \tag{23}\]
The roots \(p_{\pm}\) only exist if the argument of the arccos lies in the range \([-1,1]\), which requires \(|v|\geq|u|\) for them to appear in the dispersion. Therefore, if \(|v|\leq|u|\), the only Fermi point is located at \(p_{0}=\pi/a_{\rm c}\), which is where the Dirac cone is located, as shown in Fig. 4. When the cone over-tilts, i.e. when \(|v|\geq|u|\), the additional zero-energy crossings at \(p_{\pm}\) appear, consistent with the Nielsen-Ninomiya theorem, which states that the number of left- and right-movers must be equal [36; 37].
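A quick numerical check of Eqs. (22) and (23) is sketched below; the values \(u=1\), \(v=1.5\) and \(a_{c}=1\) are illustrative, and the printout confirms that one band vanishes at each of \(p_{0}\) and \(p_{\pm}\) once \(|v|\geq|u|\).

```python
import numpy as np

def bands(p, u=1.0, v=1.5, ac=1.0):
    """Two-band dispersion of the diatomic model, Eq. (22)."""
    g = v * np.sin(ac * p)
    f = u * np.sqrt(2.0 + 2.0 * np.cos(ac * p))
    return g - f, g + f

u, v, ac = 1.0, 1.5, 1.0
points = {"p0": np.pi / ac}
if abs(v) >= abs(u):
    pp = np.arccos(1.0 - 2.0 * u**2 / v**2) / ac   # Eq. (23)
    points["p+"], points["p-"] = pp, -pp

for name, p in points.items():
    Em, Ep = bands(p, u, v, ac)
    print(f"{name}: E_-(p) = {Em:+.2e}, E_+(p) = {Ep:+.2e}")
```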
### Continuum limit
The continuum limit is obtained by Taylor expanding the single-particle Hamiltonian \(h(p)\) about the Fermi point \(p_{0}\) to first order in momentum which yields
\[h(p_{0}+p)=u\sigma^{y}p-v\mathbb{I}p\equiv e_{a}^{\ i}\alpha^{a}p_{i}, \tag{24}\]
where we have set \(a_{\rm c}=1\), the coefficients are defined as \(e_{0}^{\ \ x}=-v,e_{1}^{\ \ x}=u\) and the Dirac matrices \(\alpha^{0}=\mathbb{I},\alpha^{1}=\sigma^{y}\). Therefore, the continuum limit Hamiltonian after an inverse Fourier transform to real space is given by
\[H=\int_{\mathbb{R}}\mathrm{d}x\chi^{\dagger}(x)\left(-ie_{a}^{\ i}\alpha^{a} \overleftrightarrow{\partial_{i}}\right)\chi(x), \tag{25}\]
where the Dirac matrices are \(\alpha^{a}=(\mathbb{I},\sigma^{y})\) and \(\beta=\sigma^{z}\). Note that the position is now measured in terms of unit cells with two lattice sites rather than single sites.
Comparing this Hamiltonian to the general one of Dirac particles in curved space [38], the continuum limit of the lattice model can be interpreted as a curved space field theory with zweibein
\[e_{a}^{\ \mu}=\begin{pmatrix}1&-v\\ 0&u\end{pmatrix},\quad e_{\ \mu}^{\ a}=\begin{pmatrix}1&v/u\\ 0&1/u\end{pmatrix} \tag{26}\]
and Dirac gamma matrices \(\gamma^{0}=\sigma^{z}\) and \(\gamma^{1}=-i\sigma^{x}\) which obey the anti-commutation relations \(\{\gamma^{a},\gamma^{b}\}=2\eta^{ab}\), with \(\eta^{ab}=\text{diag}(1,-1)\). The zweibein corresponds to the metric \(g_{\mu\nu}=e_{\ \mu}^{a}e_{\ \nu}^{b}\eta_{ab}\) which gives
\[\mathrm{d}s^{2}=\left(1-\frac{v^{2}}{u^{2}}\right)\mathrm{d}t^{2}-\frac{2v}{u^ {2}}\mathrm{d}t\mathrm{d}x-\frac{1}{u^{2}}\mathrm{d}x^{2}. \tag{27}\]
This is the Gullstrand-Painleve metric [39], also known as the _acoustic metric_, which is the Schwarzschild metric of a \((1+1)\)D black hole expressed in Gullstrand-Painleve
Figure 4: The tilting of the Dirac cones as \(v\) increases and \(u=1\). The blue and orange sections show the dispersions of the two operators \(a\) and \(b\) respectively in the diatomic unit cell for \(a_{c}=1\).
coordinates. This metric is referred to here as an _internal metric_ of the model as it depends upon the internal couplings of the Hamiltonian and not the physical geometry of the lattice. In addition, this is a fixed classical background metric and the quantum fields have no back-reaction on the metric.
In order to bring the metric Eq. (27) into standard form, a coordinate transformation defined as \((t,x)\mapsto(\tau,x)\) is used, where
\[\tau(t,x)=t-\int_{x_{0}}^{x}\mathrm{d}z\frac{v(z)}{u^{2}-v^{2}(z)}, \tag{28}\]
that maps the metric to
\[\mathrm{d}s^{2}=\left(1-\frac{v^{2}}{u^{2}}\right)\mathrm{d}\tau^{2}-\frac{1} {u^{2}\left(1-\frac{v^{2}}{u^{2}}\right)}\mathrm{d}x^{2}, \tag{29}\]
which is the Schwarzschild metric. If the variables \(u\) and \(v\) are upgraded to slowly-varying functions of space, then the preceding calculation is still valid and the event horizon is located at the point \(x_{\mathrm{h}}\), where \(|v(x_{\mathrm{h}})|=|u(x_{\mathrm{h}})|\). In the following we take \(u(x)=1\) so it aligns with the standard Schwarzschild metric in natural units. Quite remarkably, the location of the event horizon coincides with the location of the phase boundaries from the previous analysis of the phase diagram of the model.
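The coordinate change of Eq. (28) can also be verified symbolically. The sketch below treats \(u\) and \(v\) as constants, which is sufficient for a pointwise check of the line element, and confirms that substituting \(\mathrm{d}\tau=\mathrm{d}t-\frac{v}{u^{2}-v^{2}}\mathrm{d}x\) into Eq. (29) reproduces Eq. (27).

```python
import sympy as sp

u, v, dt, dx = sp.symbols('u v dt dx')

# Differential of the coordinate change (28), with u and v held constant:
dtau = dt - v / (u**2 - v**2) * dx

A = 1 - v**2 / u**2
ds2_schwarzschild = A * dtau**2 - dx**2 / (u**2 * A)              # Eq. (29)
ds2_painleve = A * dt**2 - 2 * v / u**2 * dt * dx - dx**2 / u**2  # Eq. (27)

print(sp.simplify(sp.expand(ds2_schwarzschild - ds2_painleve)))   # prints 0
```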
## IV Chirality of the model
In this section we investigate the spin-chirality operator from Eq. (2) in detail. From the previous section, we see that the parameter \(v\) has the effect of tilting the Dirac cones in the mean field description, as shown in Fig. 4. Referring back to the original Hamiltonian of Eq. (1), we conclude that the chirality of the system is responsible for this tilting. This tilting emulates the over-tilting of a Dirac cone near a black hole, therefore it is of interest to study this operator and find what it can show about the black hole system, especially in the case of the spins inside the black hole horizon, corresponding to the transition into a chiral phase in a homogeneous lattice.
Applying the Jordan-Wigner transformation to the chirality operator of Eq. (2), we arrive at the chirality operator in terms of fermionic modes given by
\[\chi_{n}= -2i(c_{n}^{\dagger}c_{n+1}+c_{n+1}^{\dagger}c_{n+2}-c_{n}^{ \dagger}c_{n+2})\] \[+4i(c_{n}^{\dagger}c_{n+1}c_{n+2}^{\dagger}c_{n+2}+c_{n+1}^{ \dagger}c_{n+2}c_{n}^{\dagger}c_{n})+\mathrm{H.c.}. \tag{30}\]
It can be seen in Fig. 5(c) that the expectation of the chirality operator has a point after the transition \(|v|>|u|\) where it is equal to \(0\). If this operator is to be viewed as an order parameter, giving the transition at the point where \(\chi\) is non-zero, it is unusual for it to return to this value. However, there is a choice now to be made of exactly which chiral operator to analyse. As the original spin Hamiltonian of Eq. (1) contains the chirality operator itself, applying MFT yields a non-interacting version of the chiral operator if we were to interpret this as the coefficient of \(v/4\) in the mean field Hamiltonian of Eq. (6). The MFT version of the chiral operator in fermionic form is therefore given by
\[\chi_{n}^{\mathrm{MF}}=2ic_{n}^{\dagger}c_{n+2}+\mathrm{H.c.}, \tag{31}\]
with ground state expectation given in Fig. 5(d). In the following we will consider both versions of the chirality operator as they give complementary information. We refer to Eq. (30) as the full chirality, whilst Eq. (31) as the mean field chirality.
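As an illustration, the mean field chirality can be evaluated directly from the lattice correlations of Eq. (11). The sketch below (illustrative parameters, with finite-size corrections of order \(1/N\)) computes \(\langle\chi_{n}^{\rm MF}\rangle\) for several values of \(v\) and compares it with the thermodynamic-limit closed form given later in Eq. (39).

```python
import numpy as np

def correlations(N, u, v):
    """<c_n^dag c_m> in the half-filled ground state of Eq. (6), periodic chain."""
    h = np.zeros((N, N), dtype=complex)
    for n in range(N):
        h[n, (n + 1) % N] += -u
        h[n, (n + 2) % N] += -0.5j * v
    h += h.conj().T
    E, U = np.linalg.eigh(h)
    occ = U[:, E < 0]
    return occ.conj() @ occ.T

N, u, n = 400, 1.0, 200
for v in (0.5, 1.0, 1.5, 3.0):
    C = correlations(N, u, v)
    chi_mf = 2 * np.real(2j * C[n, n + 2])        # <2i c_n^dag c_{n+2} + H.c.>
    closed = 0.0 if abs(v) <= abs(u) else (4 / np.pi) * (1 - u**2 / v**2)
    print(f"v = {v}: lattice {chi_mf:+.4f}, closed form {closed:+.4f}")
```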
### Discrete stepping of the chirality
In Fig. 5 it is shown how the chirality of a homogeneous chain system (corresponding to flat space) changes as the next-to-nearest neighbour terms in the Hamiltonian become more dominant. Clear discrete jumps in the value
Figure 5: The dispersion relations, with coloured Fermi points corresponding to those in Fig. 1(b), when (a) \(v=1.5\), and (b) \(v=5\), showing the difference in the number of discrete momentum states (equally spaced momenta from \(-\pi\) to \(\pi\)) in the spaces \(\zeta_{1}\) and \(\zeta_{2}\). (c) The chirality of any single site in a 100-site homogeneous (‘flat space’) lattice as \(v\) changes for the fully interacting chiral operator, with (d) giving the same for the mean field chiral operator. (c) and (d) also give the analytical solutions of Eqs. (38) and (39), respectively.
of chirality are found as the parameter \(v\) is increased, corresponding to a momentum state leaving the left-hand Fermi sea (denoted as \(\zeta_{1}\)) as a momentum state enters the right-hand Fermi sea (denoted as \(\zeta_{2}\)). This is due to the fact that the number of discrete momentum states in the total Fermi sea is equal to \(N/2\), i.e., \(|\zeta_{1}|+|\zeta_{2}|=N/2\), where \(|\zeta_{i}|\) is the number of momentum states in \(\zeta_{i}\). As \(v\) changes, these two disconnected regions of the Fermi sea change size and hence exchange states to keep the total fixed at \(N/2\).
We can determine analytically the behaviour of the chirality jumps shown in Fig. 7. The total chirality in this instance can be diagonalised by the Fourier transform that also diagonalises the Hamiltonian of the system
\[c_{n}=\frac{1}{\sqrt{N}}\sum_{p\in\text{B.Z.}}e^{ipan}c_{p}, \tag{32}\]
to give
\[\begin{split}\langle\chi\rangle=&-4\sum_{p\in \text{B.Z.}}\sin(2ap)\\ &-\frac{8}{N}\sum_{p,k\in\text{B.Z.}}\left[\sin(a(k-2p))+\sin(a(p- 2k))\right],\end{split} \tag{33}\]
where the summed momenta \(p\) and \(k\) satisfy \(E(p)\leq 0\) and \(E(k)\leq 0\). The chirality is seen to jump in discrete steps as the chiral coupling \(v\) is increased. This diagonalised total chirality can be used to find an analytical formula for the size of the jumps. We have
\[\begin{split}&\lim_{\epsilon\to 0}\langle\chi(v+\epsilon) \rangle-\langle\chi(v)\rangle=\sqrt{1-\frac{u^{2}}{v^{2}}}\bigg{[}\frac{16u}{v} \\ &-\frac{32}{N}\sum_{p\in\text{B.Z.}}\left(\sin(2p)-\frac{2u}{v} \cos(p)\right)\bigg{]}.\end{split} \tag{34}\]
In the large \(v\) limit where the chiral operator becomes dominant we have
\[\lim_{v\rightarrow\infty}\Delta\langle\chi\rangle=-\frac{32}{N}\sum_{p}\sin(2p), \tag{35}\]
which gives a value of about 10.2 for the limit the jumps tend towards as \(v\) is increased. Additionally, we find that when \(N\) becomes significantly large, then
\[\lim_{N\rightarrow\infty}\Delta\langle\chi\rangle=\sqrt{1-\frac{u^{2}}{v^{2}} }\left[\frac{16u}{v}+\frac{32}{\pi}\left(1+\frac{u^{2}}{v^{2}}\right)\right]. \tag{36}\]
Note that the order of the limits allows us to take \(N\rightarrow\infty\) without forming a continuum version of the lattice model, as we already assumed the existence of the discrete stepping feature in Eq. (34). This analytically predicted behaviour of chirality jumps is in agreement with the numerical findings, as shown in Fig. 6.
The frequency of the jumps is controlled by the rate at which the momentum space covered by \(\zeta_{1}\) shrinks while \(\zeta_{2}\) grows, as shown in Fig. 5. This can be expressed in terms of the proportion of the momentum space in the Brillouin zone that is spanned by \(\zeta_{1}=[-\frac{\pi}{2a},p_{1})\) for Fermi point \(p_{1}=\frac{1}{a}\sin^{-1}\left(\frac{u}{v}\right)\). The number of states in the left-hand Fermi sea is given by
\[N_{1}=N\cdot\frac{p_{1}+\frac{\pi}{2a}}{2\pi}=\frac{N\sin^{-1}(u/v)+N\pi/2}{2 \pi a}, \tag{37}\]
which, as shown in Fig. 7, exhibits a correspondence between \(N_{1}\) dropping by an integer amount and the number of discrete steps taken by the chirality.
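A small sketch of this counting is given below; the choices \(N=100\), \(u=1\), \(a=1\) and the range of \(v\) are arbitrary, and each integer drop of \(N_{1}\) marks one chirality jump.

```python
import numpy as np

N, u = 100, 1.0
v = np.linspace(1.001, 5.0, 4000)
# Number of momentum states in the left-hand Fermi sea zeta_1, Eq. (37) with a = 1.
N1 = N * (np.arcsin(u / v) + np.pi / 2) / (2 * np.pi)
drops = np.flatnonzero(np.diff(np.floor(N1)) < 0)
print("integer drops of N_1 (each one marks a chirality jump):", drops.size)
print("approximate v values of the jumps:", np.round(v[drops + 1], 3))
```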
### Chirality in the thermodynamic limit
In this section we analyse the chirality in the thermodynamic limit for \(N\rightarrow\infty\). Using the expression for
Figure 7: (a) The jumping rate of the chirality as found from Eq. (37), (b) the discrete jumping found from applying Eq. (33). It is seen that as the number of states in \(\zeta_{1}\) decreases, each integer change corresponds to a sudden increase in the chirality, with 7 total jumps in this interval.
Figure 6: Measurements of the discrete jumping of total chirality as chiral coefficient \(v\) is increased. The numerical solutions are found by taking the expectation of the chirality operator on the ground state at \(v\), then subtracting that from the same operator for some small change \(v+\epsilon\). The analytic solution is found from Eq. (33), and the thermodynamic solution \(N\rightarrow\infty\) from Eq. (36).
the correlation matrix derived in Eqs. (12) and (13), the ground state chirality with respect to the full chiral operator of Eq. (30) is given by the simple expression
\[\begin{split}\langle\chi_{n}\rangle&=8\text{Im}\big{(} C_{n,n+1}C_{n+2,n+2}-C_{n,n+2}C_{n+2,n+1}\\ &+C_{n+1,n+2}C_{n,n}-C_{n+1,n}C_{n,n+2}\\ &-(C_{n,n+1}+C_{n+1,n+2}-C_{n,n+2})/2\big{)}\\ &=\begin{cases}0&|v|<|u|\\ \frac{4}{\pi}\left(1-\frac{u^{2}}{v^{2}}\right)\left(\frac{4u}{\pi v}-1\right) &|v|\geq|u|\end{cases},\end{split} \tag{38}\]
and for the mean field chirality
\[\langle\chi_{n}^{\text{MF}}\rangle=4\text{Im}\left(C_{n,n+2}\right)=\begin{cases} 0&|v|<|u|\\ \frac{4}{\pi}\left(1-\frac{u^{2}}{v^{2}}\right)&|v|\geq|u|\end{cases}. \tag{39}\]
By Taylor expanding just above the critical point, we find the chirality goes as
\[\langle\chi_{n}\rangle\sim(v-v_{\text{c}})^{\gamma}, \tag{40}\]
where \(v_{\text{c}}=u\) is the critical point and \(\gamma=1\) is the critical exponent. On the other hand, it was shown in Ref. [9] by studying the full spin model of Eq. (1) using finite DMRG [40] that the phase transition of the full model is located at \(v_{\text{c}}\approx 1.12u\) with a critical exponent of \(\gamma\approx 0.39\). A comparison between the chirality of this MPS spin model simulation and the mean field approximation can be seen in Fig. 8. The mean field faithfully captures the important information about the phase transition. In particular, just like for the energy density, the two models agree exactly below the critical point where the chirality is zero. The behaviour suggests the chirality is an order parameter for the model and emphasises again that, below the critical point, the interactions are irrelevant in the ground state. We see that the free fermion mean field approximation of Eq. (6) accurately reveals that for small \(v\) the system is in a disordered, gapless, XX phase, while as \(v\) increases it passes through a second-order phase transition into a gapless chiral phase, corresponding to a non-zero ground state chirality \(\langle\chi_{n}\rangle\).
From Eqs. (38) and (39), we see that the chirality is non-zero if and only if we have complex next-to-nearest-neighbour correlations \(C_{n,n+2}\). We ask under what conditions is this the case. Consider a general tight-binding model with discrete translational symmetry and periodic boundary conditions. Suppose we had a model with inversion symmetry under the transformation \(n\rightarrow-n\). This implies that the dispersion relation is an even function obeying \(E(p)=E(-p)\), so our Fermi points come in \(\pm\) pairs. Referring back to the definition of the correlation matrix in Eq. (11), we see that \(C_{nm}^{*}=C_{nm}\) for an even dispersion relation: complex conjugation is equivalent to the transformation \(p\rightarrow-p\) in the integral and, as the range of integral is symmetric under this transformation due to the Fermi points being \(\pm\) symmetric as the dispersion relation is an even function, the integral, and hence correlation matrix, is invariant and hence real. In fact, this condition for inversion symmetry can be relaxed slightly: as long as the Fermi points come in \(\pm\) pairs, even if the dispersion \(E(p)\) itself is not an even function, the correlation matrix is real. This is the case for this model in the range \(|v|\leq|u|\) as in this phase the Fermi points are fixed at \(p_{0}=\pm\pi/2a\) despite the dispersion itself not being even, as shown in Fig. 1. However, this is broken when \(|v|>|u|\) as new Fermi points appear and the correlation matrix is complex.
Let us now break inversion symmetry. A simple model that breaks inversion symmetry is a model with nearest-neighbour hoppings and complex couplings, with Hamiltonian
\[H=-ue^{-i\theta}\sum_{n}c_{n}^{\dagger}c_{n+1}+\text{H.c.}, \tag{41}\]
where \(u\in\mathbb{R}\) and \(\theta\in[0,2\pi)\). The breaking of inversion symmetry is apparent from the dispersion relation \(E(p)=-2u\cos(p-\theta)\) as it is no longer an even function. The Fermi points of this model are at \(p_{0}=\theta\pm\pi/2\), therefore the correlations of this model are given by
\[\begin{split} C_{nm}&=\frac{1}{2\pi}\int_{\theta -\frac{\pi}{2}}^{\theta+\frac{\pi}{2}}\text{d}pe^{-ip(n-m)}\\ &=\frac{\sin\big{[}(n-m)\frac{\pi}{2}\big{]}}{\pi(n-m)}e^{-i \theta(n-m)},\end{split} \tag{42}\]
which are complex, but notice that correlations between next-to-nearest-neighbours, where \(|n-m|=2\), are zero, therefore the chirality of this model will be zero too.
The simplest way to achieve complex next-to-nearest neighbour correlations is to include a term in the Hamiltonian which couples next-to-nearest neighbour sites and
Figure 8: A comparison of the ground state chirality obtained from the mean field ground state \(|\Omega\rangle\) using the two operators \(\chi\) and \(\chi^{\text{MF}}\) (defined in Eqs. (30) and (31), respectively), and the results obtained from exact diagonalisation of the spin model.
breaks inversion symmetry. A simple example of this is nothing but our mean field Hamiltonian of Eq. (6). The interesting feature of this model is that for \(|v|<|u|\), the dispersion relation retains its symmetric Fermi points at \(p_{\pm}=\pm\pi/2\) despite the dispersion not being symmetric. Therefore, all correlators in this phase will be real as seen in Eq. (12) and hence the chirality will be zero. On the other hand, for \(|v|>|u|\) the dispersion relation changes resulting in complex correlations which yields a non-zero chirality, giving the chirality its order parameter behaviour.
### Black hole profile chiralities
The above analysis was conducted for homogeneous systems where \(u\) and \(v\) are constants. However, we still expect this to hold when we upgrade \(v\) to a slowly varying function. We now consider profiles where \(v(x)\) changes slowly and investigate the behaviour of the system around \(v=u\). In Fig. 9 we present the chirality distribution across the system for a given coupling profile \(v(x)\) at constant \(u\). We observe that the system is chiral where \(|v|>|u|\), whereas for \(|v|<|u|\) the system is non-chiral; we therefore have an interface between two phases.
The chirality expectation can be found for different black hole backgrounds by choosing an appropriate relation for \(v\). If a collapsing dust metric for a black hole is considered, the coupling becomes
\[v=\sqrt{1-M(|x|-x_{h}/2)}, \tag{43}\]
where \(M\) is the mass of the black hole and \(x_{h}\) is the position of its horizon [41]. Another useful metric is a hyperbolic tanh profile [42], with coupling
\[v=\alpha[\tanh(\beta(x-x_{h})+\delta)+1],\ \ \delta=\tanh^{-1}\left(\frac{1}{ \alpha}-1\right). \tag{44}\]
In Fig. 10 we present the chiralities across the lattice, where the position on the lattice \(x\) corresponds to the position in space. Moreover, we present the total chiralities of these black hole profiles as their parameters are altered, giving similar results to those in the homogeneous case.
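Within the mean field description such profiles are straightforward to explore numerically. The sketch below builds the position-dependent Hamiltonian with the tanh profile of Eq. (44) on an open chain at half filling and evaluates the mean field chirality of Eq. (31) on each site; all parameter values are illustrative and differ from those used in Fig. 10.

```python
import numpy as np

def site_chirality(N=300, u=1.0, alpha=2.0, beta=0.2, xh=250):
    """Site-resolved <chi_n^MF> for the tanh coupling profile of Eq. (44),
    open boundary conditions, half filling (mean-field sketch only)."""
    x = np.arange(N)
    delta = np.arctanh(1.0 / alpha - 1.0)
    v = alpha * (np.tanh(beta * (x - xh) + delta) + 1.0)
    h = np.zeros((N, N), dtype=complex)
    for n in range(N - 1):
        h[n, n + 1] += -u
    for n in range(N - 2):
        h[n, n + 2] += -0.5j * v[n]
    h += h.conj().T
    E, U = np.linalg.eigh(h)
    occ = U[:, :N // 2]                                # fill the lowest N/2 modes
    C = occ.conj() @ occ.T                             # <c_n^dag c_m>
    chi = 2 * np.real(2j * np.diagonal(C, offset=2))   # Eq. (31) on each site
    return v, chi

v, chi = site_chirality()
inside = v[:-2] >= 1.0          # u = 1 in this sketch; horizon where v(x) = u
print("mean |chi| outside the horizon:", np.abs(chi[~inside]).mean())
print("mean |chi| inside the horizon :", np.abs(chi[inside]).mean())
```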
## V Bosonization
We now want to quantify the effect of interactions in our system introduced via the chirality term and analyse the validity of the mean field results. In higher dimensions, most interacting fermionic models can be studied using Fermi liquid theory. One dimensional systems can differ dramatically. The breakdown of Fermi liquid theory is intuitively explained by the nature of excitations near the Fermi surface [31]. In one dimension, the Fermi surface consists of two points, \(k_{F}\) and \(-k_{F}\). For inversion symmetric Hamiltonians, the dispersion in the vicinity of the Fermi surface is typically \(\omega_{1}(k)=v_{F}(k-k_{F})\) and \(\omega_{2}(k)=-v_{F}(k-k_{F})\). The _nesting condition_\(\omega_{1}(k)=-\omega_{2}(k)\) leads to a breakdown in perturbation theory and indicates that the interact
Figure 9: (a) An example of an inhomogeneous distribution for the couplings \(v\). (b) The corresponding chirality obtained from the spin model MPS [9] and the mean field model with the full operator \(\chi\). We see that the distribution of \(v\) describes a phase boundary between a chiral (\(v>u\)) and non-chiral (\(v<u\)) phase.
Figure 10: (a, b) Measurements of total chirality for 100-site lattices with the horizon \(x_{h}\) at site 95, for the collapsing dust profile as the mass \(M\) is increased and for the hyperbolic \(\tanh\) profile as \(\alpha\) is increased, respectively. (c) Chirality expectations on each site of a 500-site lattice for spatial functions \(v(x)\) with the horizon \(x_{h}\) positioned at site 450, for the collapsing dust metric with \(M=1\) (blue) and for the hyperbolic profile of \(v\) couplings as in Eq. (44) with \(\alpha=2,\beta=0.2\) (green).
ing model differs dramatically from the noninteracting model. In fact, the low energy behaviour is typically described by collective, bosonic excitations using Luttinger liquid theory.
For the model discussed here, bosonization has been previously employed when \(|v|<|u|\) in which case there are only two Fermi points [9]. This resulted in the bosonized Luttinger liquid Hamiltonian
\[H=u\int dx\left[\Pi^{2}+(\partial_{x}\Phi)^{2}\right] \tag{45}\]
for a bosonic field \(\Phi\) with canonical momentum \(\Pi\). This corresponds to a Luttinger coefficient [31] \(K=1\), i.e. a free fermion model throughout this regime, with the interactions simply renormalizing the Fermi velocities. This can be simply understood by noting that the Fermi velocities differ at the two Fermi points, therefore the nesting condition does not apply.
When \(|v|>|u|\) there are two additional Fermi points with equal Fermi velocities so the nesting condition becomes relevant. The bosonized system is now described by a four-component Hamiltonian given, neglecting terms with minimal contribution, by
\[H=\sum_{\mu,\nu}\partial_{x}\phi_{\mu}h_{\mu\nu}\partial_{x}\phi_{\nu} \tag{46}\]
where \(\phi_{\mu}\) represents the bosonic fields [43] centred at each Fermi point in the sum of \(\mu\) and \(\nu\) and
\[h_{\mu\nu}=\frac{1}{\pi}\begin{pmatrix}\frac{\pi v_{L_{1}}}{2}-2v&0&v-u&v-u\\ 0&\frac{\pi v_{L_{2}}}{2}+2v&u-v&u-v\\ v-u&u-v&\frac{\pi v_{R}}{2}+2u&2u\\ v-u&u-v&2u&\frac{\pi v_{R}}{2}+2u\end{pmatrix}, \tag{47}\]
with Fermi velocities \(v_{L_{1,2}}=2(\mp u-v)\) and \(v_{R}=2v\left(1-\frac{u^{2}}{v^{2}}\right)\). After a coordinate transformation we find, near the transition point \(v\approx u\), the Hamiltonian takes the form
\[\begin{split} H&=u\int dx\left[\Pi_{1}^{2}+(\partial_{x} \Phi_{1})^{2}\right]\\ &+\sqrt{v_{R}v_{R}^{\prime}}\int dx\left[\sqrt{\frac{v_{R}}{v_{R}^ {\prime}}}\Pi_{2}^{2}+\sqrt{\frac{v_{R}^{\prime}}{v_{R}}}(\partial_{x}\Phi_{2 })^{2}\right],\end{split} \tag{48}\]
for bosonic fields \(\Phi_{1},\Phi_{2}\) and corresponding canonical momenta \(\Pi_{1},\Pi_{2}\), giving Luttinger coefficients of \(K_{1}=1\) and \(K_{2}=\sqrt{v_{R}/v_{R}^{\prime}}=\sqrt{v_{R}/(v_{R}+\frac{8u}{\pi})}\). This implies that the Luttinger coefficient \(K_{2}\neq 1\), suggesting the interactions after the transition into the chiral phase have a significant influence on the model.
## VI Classical analysis
To gain further insight into the behaviour of the chiral spin chain we analyse the classical version of the model. Classically, the spins are unit vectors that can take arbitrary orientations. The dispersion is found by minimising the energy of the spin vectors in a classically equivalent energy function. The Hamiltonian is now given as
\[H=\sum_{n}\left[-\frac{u}{2}(S_{n}^{x}S_{n+1}^{x}+S_{n}^{y}S_{n+1}^{y})-\frac {v}{4}\vec{S}_{n}\cdot(\vec{S}_{n+1}\times\vec{S}_{n+2})\right], \tag{49}\]
for spin \(\vec{S}_{n}=(\sin\phi_{n}\cos\theta_{n},\sin\phi_{n}\sin\theta_{n},\cos\phi_{n})\). We adopt open boundary conditions, so the summation in Eq. (49) ends at \(N-2\) for the chiral operator. The \(u\)-controlled XX portion of the energy, with its nearest neighbour couplings, tends to align all the spins, whilst the chiral coupling \(v\) of the three-spin interaction tends to make neighbouring spins orthogonal. The overall spin configuration that minimises the energy was determined numerically, where the first site was set as spin up and all other sites were free, and the classical chirality was found using DMRG.
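A minimal sketch of this classical minimisation is given below. It parameterises the unit spins by polar and azimuthal angles, pins the first spin up, and minimises the energy of Eq. (49) with several random restarts; the chain length, couplings and number of restarts are arbitrary illustrative choices, and for large \(v/u\) the reported per-triple chirality should approach 1.

```python
import numpy as np
from scipy.optimize import minimize

def spins_from_angles(angles):
    """Unit spin vectors with the first spin pinned to +z."""
    N = angles.size // 2 + 1
    phi = np.concatenate(([0.0], angles[:N - 1]))      # polar angles
    theta = np.concatenate(([0.0], angles[N - 1:]))    # azimuthal angles
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def classical_energy(angles, u, v):
    """Classical energy of Eq. (49) with open boundary conditions."""
    S = spins_from_angles(angles)
    xx = -0.5 * u * np.sum(S[:-1, 0] * S[1:, 0] + S[:-1, 1] * S[1:, 1])
    chi = np.einsum('ij,ij->i', S[:-2], np.cross(S[1:-1], S[2:]))
    return xx - 0.25 * v * np.sum(chi)

N, u, v = 12, 1.0, 8.0
rng = np.random.default_rng(0)
best = min((minimize(classical_energy, rng.uniform(0, 2 * np.pi, 2 * (N - 1)),
                     args=(u, v), method='L-BFGS-B') for _ in range(30)),
           key=lambda r: r.fun)
S = spins_from_angles(best.x)
chi = np.einsum('ij,ij->i', S[:-2], np.cross(S[1:-1], S[2:]))
print("per-triple classical chirality:", chi.mean())
```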
Figure 11: (a) Average chirality, when \(u=1\), \(v=20\), with the strong chiral contribution from large \(v\) gives almost \(1\) as lattice size \(N\) is increased for the classical chain, compared to the quantum case in (b) for values up to \(N=300\), which shows similar, locally oscillatory behaviour with a tendency toward \(\sim 1.22\) as lattice size is increased. (c) Values for the classical/quantum chirality operator, given in blue/black, as \(v\) is increased for a periodic lattice of \(N=300\). The classical value slowly approaches \(1\), whilst the quantum chirality grows larger towards \(\sim 1.22\). It can also be seen that the classical chirality begins to grow before the transition point of \(u=v\).
The average chirality has the form
\[\langle\chi\rangle=\frac{1}{N}\sum_{n}\vec{S}_{n}\cdot(\vec{S}_{n+1}\times\vec{S} _{n+2}). \tag{50}\]
The value of the chirality for the spin configurations that minimise the classical energy is given in Fig. 11. By showing the spins along the chain in Bloch space it can be seen that, when \(v>u\), the spin structure of the lattice is a repeating three-spin sequence with these three spins almost orthogonal, because the chiral term is minimised when the spins are orthogonal, whereas the XX portion of the energy is minimised when the spins are parallel. This sequence then repeats along the lattice whilst slowly precessing, as is shown in Fig. 12, where the effect of increasing the chiral coupling strength is given.
The results from Fig. 11 show a similarity between the classical chirality calculations and those done in the quantum case. This figure also gives the changes in average chirality \(\chi\) when the system size is increased, which tends toward 1 in the classical regime as, for every extension of the chain, the added spins arrange themselves to maximise the chirality. From observing the spins in Fig. 12 when \(v\) is large, it is found that the spins take on a repeating 3-spin pattern in which they attempt to stay orthogonal to maximise \(\chi\), which may increase by a maximum of 1 for every chiral operator acting along the chain. In contrast, in the quantum chain the chirality takes a maximal value of approximately 1.22, as shown in Fig. 11. This indicates that chirality receives contributions from genuine quantum correlations that cause its value to become larger than the maximum possible classical value of 1 [11].
## VII Conclusion
While the 1-dimensional XX model supports the relativistic 1-dimensional Dirac equation, adding a chiral interaction causes the Dirac cone to tilt, an effect that is controlled by the chiral coupling. Surprisingly, this emulates the effect of a gravitational background on Dirac fermions [9]. When the chiral coupling varies appropriately as a function of position, the chiral spin chain simulates the behaviour of Dirac particles in a black hole background.
In particular, we introduced a chiral spin model and simplified it with MFT in order to investigate its properties analytically. This included the dispersion relation, which exhibits a transition signalled by a splitting of the Fermi sea of the half-filled ground state, and the central charge, which identifies the kind of CFT the system realises after the transition into the chiral phase. Results were then compared to an MPS simulation to give an idea of the accuracy of this mean field Hamiltonian.
Subsequently, we assessed the field theory obtained by splitting the spin chain into a diatomic unit cell, and compared it to the field theory of Dirac particles on the curved spacetime background of a black hole, with the curvature determined by the couplings of the model. It is seen, from the dispersion relation of the diatomic model, that increasing the relative strength of the chiral coupling tilts the energy spectrum, analogous to the over-tilting of the Dirac cone as it enters the horizon of a black hole.
To determine the contribution of the chiral interactions to the behaviour of the system we employed the bosonization method. This method determines the importance of the interactions, in turn giving an insight into the accuracy of the non-interacting approach that was taken in applying MFT to the Hamiltonian. Before the splitting of the Fermi sea, bosonization gave a Luttinger coefficient of \(K=1\), implying the interactions are insignificant. After the sea splits, i.e. for \(v>u\), two separate coefficients emerge, corresponding to the two portions of the Fermi sea. One of them has a value different from 1, suggesting that the interactions after the transition into the chiral phase have a significant influence on the model.
Finally, the classical version of the model is investigated. An energy function analogous to the quantum chain Hamiltonian was minimised in order to assess the behaviour of chirality in the classical limit. We established that for large chiral coupling the spin vectors tend to be orthogonal, with a three-spin pattern precessing along the chain. Importantly, the quantum chain gives a value of chirality larger than the maximum possible classical value, thus demonstrating that quantum correlations contribute significantly to the behaviour of the system.
We envision that our work can build the bridge between chiral systems and black holes, thus facilitating the quantum simulation of Hawking radiation, e.g. with
Figure 12: The precession of spins in a 10-spin lattice as found by the classical model for (a) \(v=0.8\) and (b) \(v=8\). In (b) the spin states are found in sets of 3 almost orthogonal spins that repeat and precess along the chain.
cold atom technology. Moreover, our investigation opens the way for modelling certain strongly correlated systems by effective geometric theories with extreme curvature, thus providing an intuitive tool for their analytical investigation. As the bosonisation of the system in the chiral phase appears to indicate that the interactions are important in this regime, a comparison between this model and that of a solvable quantum gravity could be a future focus of research, e.g. via measuring the scrambling of our model.
## Author contributions
Ewan Forbes: conceptualization, methodology, investigation, software, writing - original draft. Matthew Horner: conceptualization, methodology, investigation, software, writing - review and editing. Andrew Hallam: methodology, investigation, software, data curation, writing - review and editing. Joseph Barker: methodology, software, data curation, writing - review and editing. Jiannis Pachos: conceptualization, methodology, writing - review and editing.
###### Acknowledgements.
We thank Patricio Salgado-Rebolledo for insightful discussions. E.F., M.D.H., A.H. and J.K.P. acknowledge support by EPSRC (Grant No. EP/R020612/1). JB acknowledges funding from a Royal Society University Research Fellowship. Statement of compliance with EPSRC policy framework on research data: This publication is theoretical work that does not require supporting research data.
|
2305.01726 | Slow Kill for Big Data Learning | Big-data applications often involve a vast number of observations and
features, creating new challenges for variable selection and parameter
estimation. This paper presents a novel technique called ``slow kill,'' which
utilizes nonconvex constrained optimization, adaptive $\ell_2$-shrinkage, and
increasing learning rates. The fact that the problem size can decrease during
the slow kill iterations makes it particularly effective for large-scale
variable screening. The interaction between statistics and optimization
provides valuable insights into controlling quantiles, stepsize, and shrinkage
parameters in order to relax the regularity conditions required to achieve the
desired level of statistical accuracy. Experimental results on real and
synthetic data show that slow kill outperforms state-of-the-art algorithms in
various situations while being computationally efficient for large-scale data. | Yiyuan She, Jianhui Shen, Adrian Barbu | 2023-05-02T18:51:35Z | http://arxiv.org/abs/2305.01726v1 | # Slow Kill for Big Data Learning
###### Abstract
Big-data applications often involve a vast number of observations and features, creating new challenges for variable selection and parameter estimation. This paper presents a novel technique called "slow kill," which utilizes nonconvex constrained optimization, adaptive \(\ell_{2}\)-shrinkage, and increasing learning rates. The fact that the problem size can decrease during the slow kill iterations makes it particularly effective for large-scale variable screening. The interaction between statistics and optimization provides valuable insights into controlling quantiles, stepsize, and shrinkage parameters in order to relax the regularity conditions required to achieve the desired level of statistical accuracy. Experimental results on real and synthetic data show that slow kill outperforms state-of-the-art algorithms in various situations while being computationally efficient for large-scale data.
Top-down algorithms, sparsity, nonconvex optimization, nonasymptotic analysis, sub-Nyquist spectrum sensing
## I Introduction
This paper studies how to build a parsimonious and predictive model in big data applications, where both the number of predictors and the number of observations can be extremely large. Let \(y\in\mathbb{R}^{n}\) be a response vector with \(n\) samples and \(X=[x_{1},\ldots,x_{p}]\in\mathbb{R}^{n\times p}\) be a design matrix consisting of \(p\) features or predictors. Consider a general learning problem with loss \(l_{0}(X\beta;y)\) to measure the discrepancy between \(X\beta\) and \(y\). As \(p\) can be much larger than \(n\), a sparsity-promoting regularizer is often used to capture model parsimony
\[\min_{\beta\in\mathbb{R}^{p}}l_{0}(X\beta;y)+P(\beta;\lambda), \tag{1}\]
where \(\lambda\) is a regularization parameter. There are numerous options for \(l_{0}\) and \(P\), neither of which are necessarily convex. In many cases, \(l_{0}\) may be a negative log-likelihood function, but we will consider a more general setup that may not be based on likelihood.
Over the past decade, there have been significant advancements in statistical theory for the minimizers of the penalized problem (1). However, modern scientists often encounter challenges with big data, making it impractical to obtain globally optimal estimators even when convexity is present. This paper aims to incorporate computational considerations into statistical modeling, resulting in a new big-data learning framework with theoretical guarantees. When tackling these challenges in large-scale variable selection, the desired algorithms should possess the following traits:
(a) Ease in tuning. It is common in practice to seek a solution with a _prescribed_ cardinality (or a specific number of variables, denoted by \(q\)). However, using an algorithm designed for the penalized problem (1) may require excessive computation, and the regularization parameter \(\lambda\) may not be as intuitive when attempting to achieve this objective. Many practitioners perform a grid search for \(\lambda\). However, when dealing with big data, the grid must be fine enough to encompass potentially useful candidate models, resulting in a substantial computational burden.
(b) Scalability. In addition to being efficient, an ideal algorithm should be easy to implement. Since ad-hoc procedures can be unreliable, it is preferable to employ an algorithm based on _optimization_ rather than relying on heuristics. It would also be advantageous if the algorithm could adapt its parameters according to the available computational resources, which necessitates an understanding of the algorithm's iteration complexity and per-iteration cost.
(c) Statistical guarantee. It is widely recognized that the lasso is effective for variable selection when the design matrix exhibits low coherence and the signal is sufficiently strong [1, 2]. Some simpler and faster methods, such as those for variable screening [3], are based on the assumption of independent (or only mildly correlated) features. While these weak-correlation assumptions allow for aggressive feature elimination, they are often restrictive for real-world high-dimensional data. Evaluating a globally optimal solution to (1) with an \(\ell_{0}\)-type penalty [4] does
have a statistically sound guarantee regardless of coherence, but is only computationally feasible for small datasets. Therefore, a more pressing challenge is to design an iterative process that can relax the stringent regularity conditions required for attaining optimal statistical accuracy.
This work proposes a new approach called _slow kill_ to tackle the aforementioned challenges. The main features of the algorithm are as follows.
* Interestingly, slow kill works in the opposite direction of forward pathwise methods and boosting algorithms, which all build up a model from the null [5, 6, 7, 8, 9].
* Slow kill incorporates adaptive \(\ell_{2}\)-shrinkage and growing learning rates to handle coherent designs and reduce computational burden. Its roots in optimization make it computationally scalable and easy to tune parameters.
* Theoretically, slow kill enjoys rigorous, provable guarantees of accuracy and linear convergence in a statistical sense. In particular, our theory supports backward quantile control and fast learning.
The rest of the paper is organized as follows. Section II investigates a hybrid regularized estimation in the regression setting to motivate some basic elements of slow kill and compares it to related works. Section III introduces the general slow kill procedure for a differentiable loss function and analyzes how the statistical error changes as the cycles progress. Section IV performs extensive simulations and real data experiments to compare slow kill to some state-of-the-art methods in terms of both efficiency and accuracy. We summarize our findings in Section V. More technical details are provided in the appendix.
_Notations and symbols._ The following notations and symbols will be used. Let \([n]=\{1,\ldots,n\}\) and \(\lfloor x\rfloor\) be the largest integer smaller than or equal to \(x\). Define \(a\lor b=\max(a,b)\) and \(a\wedge b=\min(a,b)\). We use \(a\lesssim b\) to denote \(a\leq cb\) for some positive constant \(c\), and the constants denoted by \(c\) or \(C\) may not be the same at each occurrence. Given any \(\beta\in\mathbb{R}^{p}\), we use \(\mathcal{J}(\beta)\subset[p]\) to denote its support, i.e., \(\mathcal{J}(\beta)=\{j:\beta_{j}\neq 0\}\), and \(J(\beta)=|\mathcal{J}(\beta)|=\|\beta\|_{0}=\sum_{j=1}^{p}1_{\beta_{j}\neq 0}\). Given \(I\subset[p]\), we use \(X_{I}\) to denote the sub-matrix of \(X\) formed with the columns in \(I\), and \(\beta_{I}\) the subvector associated with \(I\). In particular, \(x_{j}\) denotes the \(j\)th column of \(X\) for any \(j\in[p]\). When \(A\) is a symmetric matrix, we use \(A_{I}\) to denote the sub-matrix of \(A\) formed with the columns and rows indexed by \(I\), and \(\lambda_{\max}(A)\), \(\lambda_{\min}(A)\) to denote its largest and smallest eigenvalues, respectively.
Given \(X\in\mathbb{R}^{n\times p}\), the restricted isometry numbers \(\rho_{+}(s)\), \(\rho_{-}(s)\)[10] are the smallest and largest numbers, respectively, that satisfy
\[\rho_{-}(s)\|\beta\|_{2}^{2}\leq\|X\beta\|_{2}^{2}\leq\rho_{+}(s)\|\beta\|_{2 }^{2},\ \forall\beta\in\mathbb{R}^{p}:\|\beta\|_{0}\leq s, \tag{2}\]
and their dependence on \(X\) is omitted. Obviously, \(0\leq\rho_{-}(s)\leq\rho_{+}(s)\leq\rho_{+}(p)=\|X\|_{2}^{2}\), where \(\|X\|_{2}\) denotes the spectral norm of \(X\).
For ease of presentation, we introduce a quantile-thresholding operator \(\Theta^{\#}\) which performs simultaneous thresholding and \(\ell_{2}\)-shrinkage [11]. Given any \(s=[s_{1},\ldots,s_{p}]^{T}\in\mathbb{R}^{p}\), \(\Theta^{\#}(s;q,\eta)\) is defined as \([t_{1},\ldots,t_{p}]^{T}\) satisfying \(t_{(j)}=s_{(j)}/(1+\eta)\) if \(1\leq j\leq q\), and \(0\) otherwise, where \(s_{(1)},\ldots,s_{(p)}\) are the order statistics of \(s_{1},\ldots,s_{p}\) satisfying \(|s_{(1)}|\geq\cdots\geq|s_{(p)}|\), and \(t_{(1)},\ldots,t_{(p)}\) are defined similarly. To avoid ambiguity, we make a \(\Theta^{\#}\)-uniqueness assumption in performing \(\Theta^{\#}(s;q,\eta)\) throughout the paper: either \(|s_{(q)}|>|s_{(q+1)}|\) or \(s_{(q)}=s_{(q+1)}=0\) occurs. The multivariate quantile thresholding function \(\widetilde{\Theta}^{\#}(S;q,\eta)\) for any \(S=[s_{1},\ldots,s_{p}]^{T}\in\mathbb{R}^{p\times n}\) is defined as a \(p\times n\) matrix \(T=[t_{1},\ldots,t_{p}]^{T}\) with \(t_{j}=s_{j}/(1+\eta)\) if \(\|s_{j}\|_{2}\) is among the \(q\) largest elements in \(\{\|s_{j}\|_{2}:1\leq j\leq p\}\), and \(0\) otherwise.
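For concreteness, a minimal sketch of the scalar operator \(\Theta^{\#}\) is given below; ties in the magnitudes are broken arbitrarily by the sort, whereas the \(\Theta^{\#}\)-uniqueness assumption above rules such ties out.

```python
import numpy as np

def quantile_threshold(s, q, eta):
    """Theta#(s; q, eta): keep the q largest entries of s in magnitude,
    shrink them by a factor 1/(1 + eta), and set the rest to zero."""
    t = np.zeros_like(s, dtype=float)
    keep = np.argsort(-np.abs(s))[:q]        # indices of the q largest |s_j|
    t[keep] = s[keep] / (1.0 + eta)
    return t

s = np.array([0.3, -2.0, 1.1, 0.05, -0.7])
print(quantile_threshold(s, q=2, eta=0.5))   # keeps only the two largest-magnitude entries
```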
## II Why Backward Selection?
This section is to motivate a "top-down" algorithm design in the fundamental regression setting. The quadratic loss is an important case of strongly convex losses and examining this case will provide a foundation for more general studies under restricted strong convexity.
Assume \(y=X\beta^{*}+\epsilon\), where \(\beta^{*}\in\mathbb{R}^{p}\), \(\|\beta^{*}\|_{0}\leq s\) with \(s\leq p\wedge n\). To begin with, we consider an \(\ell_{0}\)-constrained, \(\ell_{2}\)-penalized optimization problem to estimate the coefficient vector in high dimensions,
\[\min_{\beta}\frac{1}{2}\|y-X\beta\|_{2}^{2}+\frac{\eta_{0}}{2}\|\beta\|_{2}^{2} \equiv f(\beta)\ \ \text{s.t. }\|\beta\|_{0}\leq q. \tag{3}\]
When \(X,y\) are not centered, an intercept term \(1\alpha\) should be added to the loss, and \(\alpha\) is subject to no regularization. The hybrid regularization in (3) differs from the commonly used linear combination of \(\ell_{1}\) and \(\ell_{2}\) penalties in the
elastic net [12]. Compared to the regular \(\ell_{1}\) penalty and other nonconvex penalties, \(\|\cdot\|_{0}\) is arguably an ideal choice for enforcing sparsity and does not incur any unwanted bias. The constraint parameter \(q\,(\leq p)\) directly controls the number of variables in the resulting model, making it more convenient to use than a penalty parameter \(\lambda\). The simultaneous \(\ell_{2}\)-penalty is to compensate for collinearity and large noise, and is later used to overcome some obstacles in backward elimination. The associated regularization parameter \(\eta_{0}\) can be easily tuned and is not highly sensitive in experiments. Our theoretical analysis will reveal the benefits of a carefully designed shrinkage sequence for both numerical stability and statistical accuracy.
Problem (3) is nonconvex and includes a discrete constraint. While it can be challenging to computationally solve problems of this nature, it is possible to find a local minimum using a scalable iterative optimization algorithm. Moreover, in the era of big data, it may not be necessary to fully solve (3) in order to achieve good statistical performance for "regular" problems and analyzing algorithm-driven non-global estimators is crucial to discovering new and cost-effective methods for improving the statistical performance of nonconvex optimization. Concretely, to introduce a prototype algorithm, we first construct a surrogate function \(g(\beta,\beta^{-})\) for (3),
\[g(\beta,\beta^{-})=\frac{1}{2}\|y-X\beta^{-}\|_{2}^{2}+\langle X^{T}(X\beta^{ -}-y),\beta-\beta^{-}\rangle+\frac{\rho}{2}\|\beta-\beta^{-}\|_{2}^{2}+\frac{ \eta_{0}}{2}\|\beta\|_{2}^{2},\]
with \(\rho>0\) to be chosen later, and then define a sequence of iterates by
\[\beta^{(t+1)}=\arg\min_{\beta:\|\beta\|_{0}\leq q}g(\beta,\beta^{(t)}). \tag{4}\]
Recall the quantile-thresholding operator \(\Theta^{\#}\) defined at the end of Section I. With some simple algebra (details omitted), we obtain an iterative quantile-thresholding algorithm
\[\beta^{(t+1)}=\Theta^{\#}\Big{\{}\beta^{(t)}-\frac{1}{\rho}X^{T}(X\beta^{(t)} -y);q,\frac{\eta_{0}}{\rho}\Big{\}}. \tag{5}\]
The first step amounts to the sure independence screening [3] when \(\beta^{(0)}=0\). However, (5) iterates to lessen greediness with a low per-iteration cost.
The update rule in (5) possesses some desirable computational properties. For instance, if \(\rho\) is large enough (more specifically, \(\rho\geq\rho_{+}(2q)\) with \(\rho_{+}(\cdot)\) defined in (2)), then the algorithm shows a worst-case sublinear convergence rate, regardless of the problem's dimensions, coherence, and signal strength. The obtained solutions (though not necessarily optimal) can be characterized as _fixed points_ of the algorithm mapping defined in (4). For more results and technical details, please refer to Theorem A.1.
This class of procedures has been used in signal and information processing [11, 13], and in the special case of \(\eta_{0}=0\), the plain update rule of (5) falls under the category of iterative hard-thresholding (IHT) algorithms [14, 15] which only exhibit mediocre performance (cf. Remark 3 and Section IV). In fact, there is much potential for improvement by adaptively adjusting the three key parameters \(\rho,\eta_{0},q\) in (5), which has not been systematically explored in the literature.
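The following is a minimal sketch of the update rule (5) on a synthetic regression problem. It keeps \(\rho\), \(\eta_{0}\) and \(q\) fixed across iterations (whereas the adaptive adjustments just mentioned are explored in later sections), uses the conservative choice \(\rho=\|X\|_{2}^{2}\geq\rho_{+}(2q)\), and all problem sizes, signal values and the random seed are arbitrary illustrative choices.

```python
import numpy as np

def iterative_quantile_threshold(X, y, q, eta0=0.1, rho=None, iters=300):
    """Prototype iterations of Eq. (5) for the l0-constrained, l2-penalized
    least-squares problem (3), with fixed rho, eta0 and q."""
    n, p = X.shape
    if rho is None:
        rho = np.linalg.norm(X, 2) ** 2            # conservative: rho >= rho_+(2q)
    beta = np.zeros(p)
    for _ in range(iters):
        s = beta - X.T @ (X @ beta - y) / rho      # gradient step on the loss
        new = np.zeros(p)
        keep = np.argsort(-np.abs(s))[:q]          # quantile thresholding ...
        new[keep] = s[keep] / (1.0 + eta0 / rho)   # ... plus l2 shrinkage
        if np.allclose(new, beta):
            break
        beta = new
    return beta

# Toy example: n samples, p features, a handful of strong true signals.
rng = np.random.default_rng(0)
n, p, s_true = 200, 1000, 5
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:s_true] = [3.0, -3.0, 3.0, -3.0, 3.0]
y = X @ beta_star + 0.1 * rng.standard_normal(n)

beta_hat = iterative_quantile_threshold(X, y, q=s_true)
print("selected support:", np.flatnonzero(beta_hat))
```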
### _Statistical error analysis: power and limitations_
While optimization error is important for analyzing an algorithm, our main focus is on _statistical error_. This subsection investigates the prototype algorithm (5) to motivate new techniques in later sections. In order to obtain sharp nonasymptotic results for this algorithm, it is important to note that the thresholds vary from iteration to iteration and the final estimator may not be globally optimal.
Recall \(y=X\beta^{*}+\epsilon\) with \(\|\beta^{*}\|_{0}\leq s\). Let
\[\vartheta:=q/s\]
with \(\vartheta>1\) throughout the paper. A fixed point \(\hat{\beta}\) associated with (5) that satisfies the following equation is called a \(\Theta^{\#}\)-estimator,
\[\hat{\beta}=\Theta^{\#}\big{\{}\hat{\beta}-\frac{1}{\rho}X^{T}(X\hat{\beta}-y );q,\bar{\eta}_{0}\big{\}},\text{ with }\bar{\eta}_{0}=\eta_{0}/\rho. \tag{6}\]
Theorem 1 studies the statistical accuracy of these estimators.
**Theorem 1**.: _Assume that \(\epsilon\) is a sub-Gaussian random vector with mean zero and scale bounded by \(\sigma\) (cf. Definition A.1 in the appendix). Let \(\hat{\beta}\) be any estimator satisfying (6) for some \(\eta_{0}\geq 0\) with \(\|\hat{\beta}\|_{0}=q\), and \(\rho>0\) be chosen such that_
\[\frac{\rho-\{(2-\varepsilon)\sqrt{\vartheta}-1\}\eta_{0}}{\sqrt{\vartheta}}\| \beta\|_{2}^{2}\leq(2-\delta)\|X\beta\|_{2}^{2}\quad\forall\beta:\|\beta\|_{0 }\leq(1+\vartheta)s \tag{7}\]
_for some \(\varepsilon,\delta>0\). Then with probability at least \(1-Cp^{-c},\)_
\[\|X(\hat{\beta}-\beta^{*})\|_{2}^{2}\vee\frac{\eta_{0}\varepsilon}{\delta}\| \hat{\beta}-\beta^{*}\|_{2}^{2}\lesssim\frac{1}{\delta^{2}}\sigma^{2}\vartheta s \log\frac{ep}{\vartheta s}+\frac{\eta_{0}}{\delta\varepsilon}\|\beta^{*}\|_{2 }^{2}, \tag{8}\]
_where \(C,c>0\) are constants._
From the error bound, (5) can achieve the minimax optimal error rate of \(\mathcal{O}(\sigma^{2}s\log(ep/s))\)[16], under the assumption of (7) and when \(\vartheta,\delta,\varepsilon\) are treated as constants. The result does not need \(\eta_{0}\) to be exactly zero. In fact, a positive \(\eta_{0}\) can actually be beneficial in satisfying the condition of (7) (e.g., \(\rho=(1.9\sqrt{\vartheta}-1)\eta_{0}+1.9\sqrt{\vartheta}\rho_{-}(q+s)\) and \(\varepsilon=\delta=0.1\), applicable to \(q>n\)). Another interesting observation is that \(\rho\) should be chosen to be properly small to achieve good statistical accuracy, which is in contrast to the bound \(\rho\geq\rho_{+}(2q)\) mentioned earlier for numerical convergence. The remarks below make some further extensions and comparisons.
**Remark 1** (Estimation error bounds and faithful variable selection).: _The \(\ell_{2}\)-recovery result of Theorem 1 is fundamental, and can be used to derive estimation error bounds in other norms under proper regularity conditions._
**Theorem 2**.: _In the setup of Theorem 1, suppose the regularity condition (7) is replaced by_
\[\big{\{}\frac{\rho-(2\sqrt{\vartheta}-1)\eta_{0}}{\sqrt{\vartheta}}+\delta \rho_{+}((1+\vartheta)s)\big{\}}\|\beta\|_{2}^{2}\leq 2\|X\beta\|_{2}^{2},\ \forall\beta:\|\beta\|_{0}\leq(1+ \vartheta)s \tag{9}\]
_for some \(\delta>0\). Then_
\[\|\hat{\beta}-\beta^{*}\|_{2}^{2}\lesssim\frac{1}{\delta^{2}\rho_{+}((1+ \vartheta)s)}\sigma^{2}\vartheta s\log\frac{ep}{\vartheta s}+\frac{\eta_{0}^{ 2}}{\delta^{2}\rho_{+}((1+\vartheta)s)}\|\beta^{*}\|_{2}^{2} \tag{10}\]
_holds with probability at least \(1-Cp^{-c}\), for some \(C,c>0\). Moreover, under_
\[\nu\|\beta\|_{\infty}\leq\|(X^{T}X+\eta_{0}I)\beta\|_{\infty}/n,\ \ \beta:\|\beta\|_{0}\leq(1+\vartheta)s \tag{11}\]
_for some \(\nu>0\), any fixed-point \(\hat{\beta}\) satisfies_
\[\|\hat{\beta}-\beta^{*}\|_{\infty}\leq\frac{(\rho+\eta_{0})}{n\nu\sqrt{\vartheta-1}}\frac{\|\hat{\beta}-\beta^{*}\|_{2}}{\sqrt{s}}+\frac{\|X^{T}\epsilon\|_{\infty}}{n\nu}+\frac{\eta_{0}}{n\nu}\|\beta^{*}\|_{\infty}, \tag{12}\]
_and_
\[\|(\hat{\beta}-\beta^{*})_{\mathcal{J}^{*}}\|_{\infty}+(1-\frac{\rho+\eta_{ 0}}{n\nu})\|(\hat{\beta}-\beta^{*})_{\hat{\mathcal{J}}\setminus\mathcal{J}^{* }}\|_{\infty}\leq\frac{\|X^{T}\epsilon\|_{\infty}}{n\nu}+\frac{\eta_{0}}{n\nu} \|\beta^{*}\|_{\infty}, \tag{13}\]
_where \(\mathcal{J}^{*}=\mathcal{J}(\beta^{*})\), \(\hat{\mathcal{J}}=\mathcal{J}(\hat{\beta})\)._
_Compared with (7), the condition of (9) replaces \(\delta\|X\beta\|_{2}^{2}\) by \(\delta\rho_{+}((1+\vartheta)s)\|\beta\|_{2}^{2}\). When \(q\) and \(s\) are small, \(\rho_{+}((1+\vartheta)s)\) is of the order \(\mathcal{O}(n)\). Therefore, (10) becomes \(\|\hat{\beta}-\beta^{*}\|_{2}^{2}\lesssim\{\sigma^{2}s\log(ep/s)\}/n,\) assuming \(\delta,\vartheta\) are constants and \(\eta_{0}\) is properly small._
_Moreover, the element-wise error bound (12) implies faithful variable selection under regularity condition (11) (which, like previous regularity conditions, favors low coherence, i.e., the off-diagonal entries of \(X^{T}X/n\) should be relatively small in magnitude). Specifically, assuming \(\vartheta,\nu,\delta\) are constants, \(\|x_{j}\|_{2}\lesssim\sqrt{n}\), \(\rho+\eta_{0}\lesssim n\) and the beta-min condition \(\min_{j\in\mathcal{J}^{*}}|\beta_{j}^{*}|>c\sigma\{\log(ep)/n\}^{1/2}\) with a sufficiently large constant \(c\), (12) indicates that the \(s\) largest elements in \(|\hat{\beta}_{j}|\) correspond to \(\mathcal{J}^{*}=\{j:\beta_{j}^{*}\neq 0\}\) with high probability._
**Remark 2** (Fixed points vs. globally optimal solutions).: _The statistical accuracy results (8), (10), and (12) are proved for all nonglobal fixed-point estimators defined by (6). Our proof can be slightly modified to show that if a globally optimal solution can be computed, the statistical error rate remains unchanged but the left-hand side of
(7) becomes 0, indicating that the regularity condition always holds for any \(\delta\leq 2\). However, relying on multiple starting points to obtain a globally optimal solution and thus improve statistical performance can be inefficient for large datasets._
**Remark 3** (Comparison with some theoretical works).: _The aforementioned class of IHT algorithms may refer to the use of hard-thresholding \(\Theta_{H}(s;\lambda)=[s_{i}1_{|s_{i}|\geq\lambda}]\) with a fixed threshold \(\lambda\), or a varying threshold as the \(q/p\)-th quantile of \(|s_{i}|\)\((1\leq i\leq p)\) by fixing \(q\)[14, 15]. In comparison, the \(\ell_{2}\) component in (5) should not be ignored, and it may result in a different sparsity pattern in the presence of high coherence and large \(p\). To be fair, the performance of IHT is not on par with some standard statistical methods and packages (such as the lasso). This is why we performed theoretical analysis in the hopes of discovering and developing new techniques._
_In a theoretical study, [17] obtained a convergence result in terms of function value under_
\[\vartheta>\rho_{+}^{2}(2q)/\rho_{-}^{2}(2q),\]
_which improves the condition in [18]_
\[\vartheta>32\rho_{+}^{2}(2q)/\rho_{-}^{2}(2q).\]
_Our condition in Theorem 1 is even less restrictive. For example, a sufficient condition for (7) is_
\[\vartheta>\{\rho_{+}(2q)+\eta_{0}\}^{2}/[4\{\rho_{-}(q+s)+\eta_{0}\}^{2}],\]
_or_
\[\vartheta>\{\rho_{+}(2q)+\eta_{0}\}^{2}/[4\{\rho_{-}(2q)+\eta_{0}\}^{2}]\]
_since \(\rho_{-}(q+s)\geq\rho_{-}(2q)\), which becomes \(\vartheta>\rho_{+}^{2}(2q)/\{4\rho_{-}^{2}(2q)\}\) in the worst case of \(\eta_{0}=0\). In conclusion, \(32\rho_{+}^{2}(2q)/\rho_{-}^{2}(2q)\geq\rho_{+}^{2}(2q)/\rho_{-}^{2}(2q)\geq\rho_{+}^{2}(2q)/\{4\rho_{-}^{2}(2q)\}\geq\rho_{+}^{2}(2q)/[4\{\rho_{-}(q+s)\}^{2}]\geq\{\rho_{+}(2q)+\eta_{0}\}^{2}/[4\{\rho_{-}(q+s)+\eta_{0}\}^{2}]\), and our obtained error rate of \(\sigma^{2}s\log(ep/s)\) is minimax optimal._
_Interested readers may also refer to [19, 20, 21, 9, 17, 22], for example, for the analyses of various penalties and mixed thresholding rules, with an error rate of \(\sigma^{2}s\log(ep)\). Since our purpose is to design a new backward selection algorithm for problems with a predetermined number of features, we will not discuss their technical assumptions. The experiments in Section IV make a comprehensive comparison of different methods in various scenarios._
### _New means of improvement for large-scale data_
Providing provable guarantees for prediction, estimation, and variable selection is reassuring. But the real challenge lies in finding innovative techniques that can _relax_ the required regularity conditions to ensure good statistical accuracy, while being more cost-effective than using multiple random starts. To gain further insights, we can use the restricted isometry numbers (as defined in (2)) to provide a sufficient condition for (7):
\[\rho<2\sqrt{\vartheta}\rho_{-}(q+s)+(2\sqrt{\vartheta}-1)\eta_{0}\ \ \text{or}\ \ 4 \vartheta>\frac{(\rho+\eta_{0})^{2}}{(\rho_{-}(q+s)+\eta_{0})^{2}}. \tag{14}\]
#### II-B1 "Fast" learning
One key takeaway from the results presented in Section II-A is the importance of the inverse learning rate, \(\rho\). In the field of machine learning, it is commonly advised to use a "slow" learning rate when training a nonconvex model. This can ensure good computational performance, as evidenced by the lower bound of \(\rho\) in Theorem A.1. However, it is important to note that according to (14), using an excessively large value for \(\rho\) may compromise the statistical guarantee of the model.
In fact, (7) suggests that smaller values of \(\rho\) are preferred, and combining statistical and numerical analysis leads to the following range for \(\rho\):
\[\rho_{+}(2q)\leq\rho\leq 2\sqrt{\vartheta}\rho_{-}(q+s)+(2\sqrt{\vartheta}-1) \eta_{0}. \tag{15}\]
In convex programming, the choice of stepsize does not affect the optimality of the solution as long as the algorithm converges. However, in our case of nonconvex constrained optimization, it is important to choose a large enough value for \(1/\rho\) not only to gain fast convergence, but also to ensure statistical accuracy. To the best of our knowledge, this is a novel finding. Since it may not be easy to determine the theoretical restricted isometry numbers in practice, a routine line search for the step size can be used. Specifically, according to the proof in Appendix A, one can use the majorization condition \(f(\beta^{(t+1)})\leq g(\beta^{(t+1)},\beta^{(t)})\) or \(\|X(\beta^{(t+1)}-\beta^{(t)})\|_{2}^{2}\leq\rho\|\beta^{(t+1)}-\beta^{(t)}\|_ {2}^{2}\) to prevent \(\rho\) from becoming too large while still preserving the convergence properties stated in Theorem A.1. The concept of using an iteration-varying sequence \(\rho_{t}\) will be important in the next section.
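A rough sketch of such a line search is given below; it reuses `quantile_threshold` from the earlier sketch, and the starting value and growth factor are arbitrary illustrations. Starting from an optimistic (small) \(\rho\), the step is recomputed with a larger \(\rho\) until the check \(\|X(\beta^{(t+1)}-\beta^{(t)})\|_{2}^{2}\leq\rho\|\beta^{(t+1)}-\beta^{(t)}\|_{2}^{2}\) passes.

```python
import numpy as np

def step_with_line_search(X, y, beta, q, eta0, rho0=1.0, growth=2.0, max_tries=50):
    # One update of (5) where rho is kept as small as the majorization-type
    # check allows: ||X(beta_new - beta)||^2 <= rho * ||beta_new - beta||^2.
    grad = X.T @ (X @ beta - y)
    rho = rho0
    for _ in range(max_tries):
        beta_new = quantile_threshold(beta - grad / rho, q, eta0 / rho)
        diff = beta_new - beta
        if np.sum((X @ diff) ** 2) <= rho * np.sum(diff ** 2) + 1e-12:
            break
        rho *= growth  # the step was too aggressive; damp it and retry
    return beta_new, rho
```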
2 "Backward" selection
Another important discovery is the influence of cardinality control. If we use a conservative inverse learning rate of \(\rho=\rho_{+}(2q)\), then (14) imposes a limit on the restricted condition number of the design matrix:
\[\vartheta>[\rho_{+}(2q)+\eta_{0}]^{2}/\{4[\rho_{-}(q+s)+\eta_{0}]^{2}\}. \tag{16}\]
This suggests a promising approach to relax the regularity condition by increasing the value of \(\vartheta\).
Figure 1 confirms the point assuming random designs: the larger the value of \(q\) is, the more likely it is for (14) to hold on large-scale data. Random matrix theory also supports this idea.
**Theorem 3**.: _Assume that the rows of the random matrix \(X\in\mathbb{R}^{n\times p}\) are independent and identically distributed as \(N(0,\Sigma)\), where \(\Sigma_{ii}\leq 1\). Let \(\lambda_{\max}^{(2q)}\) be the largest eigenvalue of \(\Sigma_{I}\) for all \(I\subset[p]\) with \(|I|\leq 2q\), and \(\lambda_{\min}^{(q+s)}\) be the smallest eigenvalue of \(\Sigma_{I}\) for all \(I\subset[p]\) with \(|I|\leq q+s\). Then for any \(0<c<1\),_
\[\frac{\rho_{+}(2q)}{\rho_{-}(q+s)}\leq\left\{\frac{(1+c)\sqrt{\lambda_{\max}^{(2q)}}+\sqrt{\{2\lambda_{\max}^{(2q)}q\log(ep/q)\}/n}+\sqrt{2q/n}}{(1-c)\sqrt{\lambda_{\min}^{(q+s)}}-\sqrt{\{\lambda_{\min}^{(q+s)}(q+s)\log(ep/q)\}/n}-\sqrt{(q+s)/n}}\right\}^{2} \tag{17}\]
_with probability at least \(1-2\exp(-nc^{2}/2)\), assuming \(n\geq\{2(q+s)/(1-c)^{2}\}\{1/\lambda_{\min}^{(q+s)}+\log(ep/q)\}\)._
The results can be extended to sub-Gaussian designs (by using, for example, Theorem 6.2 of [23] and Weyl's theorem). Let us consider the Toeplitz design \(\Sigma=[\tau^{|i-j|}]\) with \(0\leq\tau<1\). By the interlacing theorem,
\[(1-\tau)/(1+\tau)=\lambda_{\min}(\Sigma)\leq\lambda_{\min}^{(q+s)}\leq\lambda_ {\max}^{(2q)}\leq\lambda_{\max}(\Sigma)=(1+\tau)/(1-\tau),\]
and so the right-hand side of (17) is bounded by a constant with high probability as \(n\gg q\log{(ep/q)}\). Accordingly, the regularity condition can be satisfied with a properly large \(\vartheta\).
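The spectral bounds just quoted are easy to check numerically; the short snippet below (the sizes are arbitrary) verifies that all eigenvalues of a finite Toeplitz \(\Sigma=[\tau^{|i-j|}]\) indeed fall strictly inside \(((1-\tau)/(1+\tau),(1+\tau)/(1-\tau))\).

```python
import numpy as np

tau, p = 0.5, 300
idx = np.arange(p)
Sigma = tau ** np.abs(np.subtract.outer(idx, idx))  # Toeplitz design covariance
eigs = np.linalg.eigvalsh(Sigma)
print(eigs.min(), ">", (1 - tau) / (1 + tau))       # smallest eigenvalue vs. lower bound
print(eigs.max(), "<", (1 + tau) / (1 - tau))       # largest eigenvalue vs. upper bound
```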
Of course, the error bound in (8) also increases with larger values of \(q\). To address this issue, we propose employing a _decreasing_ sequence of \(q_{t}\) to progressively tighten the cardinality constraint. Based on previous discussions, it is thus advisable to use _increasing_ learning rates \(1/\rho_{t}\) (such as \(1/\rho_{+}(2q_{t})\)) in the iterative process. It may also be beneficial to adjust the shrinkage parameter to a sequence \(\eta_{t}\), particularly when \(q_{t}>n\). This resulting algorithm, which combines progressive quantiles, \(\ell_{2}\)-shrinkage, and learning rates, will be referred to as "slow kill." It differs from the pure optimization algorithm (5) with a fixed \(q\) and from various bottom-up boosting and greedy algorithms that are commonly used in the literature.
The purpose of this section is to provide a compelling rationale for certain aspects of slow kill techniques. We will present results in a more general setting, including fast convergence of the iterates and how slow kill improves the quality of the initial estimate as \(q_{t}\) approaches \(q\), further relaxing the regularity conditions.
Fig. 1: An illustration of how \(4\vartheta\rho_{-}^{2}(q+s)/\rho_{+}^{2}(2q)\) varies as \(\vartheta\) increases. Here, the rows of \(X\) are independently drawn from a multivariate Gaussian distribution with zero mean and the covariance \(\Sigma=[0.5^{|i-j|}]\), \(n=2{,}000\), \(p=4{,}000\), \(s=4\). To determine \(\rho_{\pm}\) for a given matrix \(X\), we perform a random sampling. The results are averaged over 100 independent \(X\)’s that are generated from the same distribution.
## III Adaptive Control of Quantiles, Learning Rates, and \(\ell_{2}\)-shrinkage
Given a general loss, based on the discussions in the last section, we pursue sparsity in \(\beta\) via
\[\min_{\beta\in\mathbb{R}^{p}}l_{0}(X\beta;y)+\frac{\eta_{0}}{2}\|\beta\|_{2}^{2} \equiv l(\beta)+\frac{\eta_{0}}{2}\|\beta\|_{2}^{2}\equiv f(\beta)\text{ s.t. }\|\beta\|_{0}\leq q, \tag{18}\]
where for notational ease, \(l_{0}(X\beta;y)\) is often abbreviated as \(l(\beta)\). Again, the use of hybrid regularization is intended to address collinearity and large \(p\). We assume that the regularization parameters \(q,\eta_{0}\) are given in the algorithm design and theoretical analysis. (Of course, given \(q\), one can easily tune the value of \(\eta_{0}\) using methods such as AIC; as for the selection of \(q\), an information criterion is provided in Appendix H.) The generalized Bregman function for a differentiable \(l\) is one of the main tools we use to handle a variety of losses:
\[\mathbf{\Delta}_{l}(\beta_{1},\beta_{2}):=l(\beta_{1})-l(\beta_{2})-\langle\nabla l (\beta_{2}),\beta_{1}-\beta_{2}\rangle, \tag{19}\]
where the differentiability can be replaced by directional differentiability to analyze a wide range of algorithms in statistical computation [22]. If \(l\) is also strictly convex, \(\mathbf{\Delta}_{l}\) becomes the standard Bregman divergence [24, 25]. When \(l(\cdot)=\|\cdot\|_{2}^{2}/2\), \(\mathbf{\Delta}_{l}(\beta_{1},\beta_{2})=\|\beta_{1}-\beta_{2}\|_{2}^{2}/2\), which is symmetric, and we abbreviate it to \(\mathbf{D}_{2}(\beta_{1},\beta_{2})\). Define the symmetrized version of \(\mathbf{\Delta}_{l}(\beta_{1},\beta_{2})\) by \(\mathbf{\tilde{\Delta}}_{l}(\beta_{1},\beta_{2}):=\{\mathbf{\Delta}_{l}(\beta_{1},\beta_{2})+\mathbf{\Delta}_{l}(\beta_{2},\beta_{1})\}/2\). As an extension of (2), we introduce two generalized restricted isometry numbers \(\rho_{+}^{l}(s_{1},s_{2})\), \(\rho_{-}^{l}(s_{1},s_{2})\) that satisfy
\[\mathbf{\Delta}_{l}(\beta_{1},\beta_{2}) \leq\rho_{+}^{l}(s_{1},s_{2})\mathbf{D}_{2}(\beta_{1},\beta_{2}), \ \forall\beta_{i}:\|\beta_{i}\|_{0}\leq s_{i},i=1,2 \tag{20}\] \[\mathbf{\Delta}_{l}(\beta_{1},\beta_{2}) \geq\rho_{-}^{l}(s_{1},s_{2})\mathbf{D}_{2}(\beta_{1},\beta_{2}), \ \forall\beta_{i}:\|\beta_{i}\|_{0}\leq s_{i},i=1,2. \tag{21}\]
We differentiate \(s_{1},s_{2}\) because \(\mathbf{\Delta}_{l}\) may not be symmetric. These numbers will be convenient and useful for theoretical purposes; for example, Theorem 4 and Theorem 5 will use positive \(\rho_{+}^{l}(q,q)\) and \(\rho_{+}^{l}(q,s)\), respectively, while Theorem 6 will use nonnegative \(\rho_{-}^{l}\). When \(l(\beta)=\|X\beta-y\|_{2}^{2}/2\), \(\mathbf{\Delta}_{l}(\beta_{1},\beta_{2})=\|X\beta_{1}-X\beta_{2}\|_{2}^{2}/2\) and \(\rho_{+}^{l}(s_{1},s_{2})=\rho_{+}(s_{1}+s_{2})\). More generally, if the gradient of \(l_{0}(\cdot;y)\) is \(L\)-Lipschitz continuous, as is the case in regression or logistic regression,
\[\|\nabla l_{0}(\xi_{1};y)-\nabla l_{0}(\xi_{2};y)\|_{2}\leq L\|\xi_{1}-\xi_{2 }\|_{2}, \tag{22}\]
for all \(\xi_{1},\xi_{2}\in\mathbb{R}^{n}\), then it is easy to show that
\[\rho_{+}^{l}(s_{1},s_{2})\leq L\rho_{+}(s_{1}+s_{2})\ (\leq L\|X\|_{2}^{2}). \tag{23}\]
### _Numerical convergence and statistical accuracy for the general optimization algorithm_
First, we extend the previous iterative quantile-thresholding algorithm to handle losses that may not be quadratic. Construct the following surrogate function
\[g(\beta,\beta^{-})=l_{0}(X\beta;y)+\frac{\eta_{0}}{2}\|\beta\|_{2}^{2}+(\rho \mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta,\beta^{-}), \tag{24}\]
which is by linearizing the loss (only). Then, similar to the derivation in Section II, (24) leads to an algorithm
\[\beta^{(t+1)}=\Theta^{\#}\Big{\{}\beta^{(t)}-\frac{1}{\rho}X^{T}\nabla l_{0}(X \beta^{(t)};y);q,\frac{\eta_{0}}{\rho}\Big{\}}. \tag{25}\]
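As an illustration, (25) only requires a gradient oracle for \(l_{0}\); a sketch for the logistic deviance (with \(y\in\{0,1\}\)) is shown below, reusing `quantile_threshold` from the sketch in Section II. The function names are illustrative.

```python
import numpy as np

def logistic_grad(xi, y):
    # Gradient of l0(xi; y) = -<y, xi> + <1, log(1 + exp(xi))>, i.e., sigmoid(xi) - y.
    return 1.0 / (1.0 + np.exp(-xi)) - y

def general_update(X, y, beta, q, eta0, rho, grad_l0=logistic_grad):
    # One step of (25): linearize the loss only, then quantile-threshold.
    z = beta - X.T @ grad_l0(X @ beta, y) / rho
    return quantile_threshold(z, q, eta0 / rho)
```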
Some basic numerical properties are summarized as follows.
**Theorem 4**.: _Assume that \(\inf_{\xi,y}l_{0}(\xi;y)>-\infty\). Consider (25) starting from an arbitrary feasible \(\beta^{(0)}\). Then \(\rho\geq\rho_{+}^{l}(q,q)\) guarantees that for all \(t\geq 0\), \(f(\beta^{(t+1)})\leq g(\beta^{(t+1)},\beta^{(t)})\) and \((\rho-\rho_{+}^{l}(q,q))\mathbf{D}_{2}(\beta^{(t+1)},\beta^{(t)})\leq f(\beta^ {(t)})-f(\beta^{(t+1)})\), and so the objective function values converge as \(t\to\infty\). Assume \(\rho>\rho_{+}^{l}(q,q)\), \(\eta_{0}>0\) and \(\nabla l_{0}\) is continuous. Then every accumulation point \(\hat{\beta}\) of \(\beta^{(t)}\) satisfies the fixed-point equation_
\[\hat{\beta}=\Theta^{\#}\{\hat{\beta}-X^{T}\nabla l_{0}(X\hat{\beta};y)/\rho;q, \eta_{0}/\rho\}. \tag{26}\]
_Furthermore, if \(l_{0}(\cdot;y)\) is convex, \(\lim_{t\to\infty}\beta^{(t)}=\hat{\beta}\), and under \(\|\hat{\beta}\|_{0}=q\), \(\hat{\beta}\) is a local minimizer to problem (18) and the support of \(\beta^{(t)}\) stabilizes in finitely many iterations._
Next, we turn to the statistical accuracy of the estimators that are defined by (26). To overcome the obstacle that the loss is not necessarily associated with a probability density function, we define the concept of _effective noise_ with respect to the statistical truth \(\beta^{*}\) as
\[\epsilon=-\nabla l_{0}(X\beta^{*};y), \tag{27}\]
where we treat \(X\) as fixed and \(y\) as random in this section. The definition of effective noise in (27) does not depend on the regularizer. In the special case of a generalized linear model with cumulant function \(b\) and canonical link function \(g=(b^{\prime})^{-1}\), the loss is \(l(\beta)=l_{0}(X\beta;y)=-\langle y,X\beta\rangle+\langle 1,b(X\beta)\rangle\), and so \(\epsilon=y-g^{-1}(X\beta^{*})=y-\,\mathbb{E}y\). For regression, the effective noise term \(\epsilon\) is equivalent to the raw noise, which is usually assumed to be Gaussian. In the case of classification using the logistic deviance, \(\epsilon\) is bounded, making it sub-Gaussian. In fact, any loss function with a bounded derivative, such as Huber's loss, Hampel's loss, or the hinge loss, will always result in a sub-Gaussian \(\epsilon\), regardless of the distribution of \(y\). In this section, we assume that the effective noise is a sub-Gaussian random vector with mean zero and scale bounded by \(\sigma\). However, our proof techniques can be applied more generally. The following theorem provides a risk bound for the estimators obtained by (25), and also demonstrates the impact of the quality of the starting point on the regularity condition.
**Theorem 5**.: _Let \(\hat{\beta}:\|\hat{\beta}\|_{0}=q\) be an estimate obtained from (25) with a feasible starting point \(\beta^{(0)}\), namely, \(\hat{\beta}\in\min_{\|\beta\|_{0}\leq q}g(\beta,\hat{\beta})\) and \(f(\hat{\beta})\leq f(\beta^{(0)})\) with \(\|\beta^{(0)}\|_{0}\leq q\). Define_
\[P_{o}(q)=q\log(ep/q). \tag{28}\]
_Suppose that \(\beta^{(0)}\) satisfies_
\[\mathbb{E}\mathbf{D}_{2}(\beta^{(0)},\beta^{*})=\mathcal{O}(M)\frac{\sigma^{2 }P_{o}(q)+\sigma^{2}}{n}\text{ for some }\ M:1\leq M\leq+\infty. \tag{29}\]
_Let \(Q=\{\rho_{+}(q+s)M/n\}^{1/2}+\{\rho_{+}^{l}(q,s)+\eta_{0}\}M/n\). Assume for some \(\delta>0,0<\varepsilon\leq 1\) and large \(K\geq 0\),_
\[\begin{split}& K\sigma^{2}P_{o}(\vartheta s)+\Big{\{}2(1-\frac{1}{M}) \mathbf{\tilde{\Delta}}_{l_{0}}+\frac{C}{M(Q\delta\lor 1)}\mathbf{\Delta}_{l_{0}}- \delta\mathbf{D}_{2}\Big{\}}(X\beta,X\beta^{\prime})\\ &\geq\frac{1-1/M}{\sqrt{\vartheta}}\big{[}\rho-\{(2-\varepsilon) \sqrt{\vartheta}-1\}\eta_{0}\big{]}\mathbf{D}_{2}(\beta,\beta^{\prime}), \forall\beta,\beta^{\prime}:\|\beta\|_{0}\leq\vartheta s,\|\beta^{\prime}\|_{ 0}\leq s,\end{split} \tag{30}\]
_where \(C\) is some positive constant. Then_
\[\mathbb{E}\big{\{}\mathbf{D}_{2}(X\hat{\beta},X\beta^{*})\vee\frac{\eta_{0} \varepsilon}{\delta}\mathbf{D}_{2}(\hat{\beta},\beta^{*})\big{\}}\lesssim \frac{K\delta\lor 1}{\delta^{2}}\Big{\{}\sigma^{2}\vartheta s\log\big{(}\frac{ep}{ \vartheta s}\big{)}+\sigma^{2}\Big{\}}+\frac{\eta_{0}}{\delta\varepsilon}\| \beta^{*}\|_{2}^{2}. \tag{31}\]
Therefore, we can achieve the desired level of statistical accuracy as long as \(K,\delta,\vartheta\) are constants and \(\eta_{0}\) is not excessively large. When \(M=+\infty\) (no requirement on \(\beta^{(0)}\)), the regularity condition (30) becomes
\[\frac{\rho-\{(2-\varepsilon)\sqrt{\vartheta}-1\}\eta_{0}}{\sqrt{\vartheta}} \mathbf{D}_{2}(\beta,\beta^{\prime})\leq\big{(}2\mathbf{\tilde{\Delta}}_{l_{ 0}}-\delta\mathbf{D}_{2}\big{)}(X\beta,X\beta^{\prime})+K\sigma^{2}P_{o}( \vartheta s),\forall\beta,\beta^{\prime}:\|\beta\|_{0}\leq\vartheta s,\|\beta^ {\prime}\|_{0}\leq s,\]
which includes (7) as a special case. But when one uses a decent starting point, (30) is much more relaxed. In the extreme case where \(M=1\), the right-hand side of (30) becomes \(0\), and so with \(\mu\)-restricted strong convexity \((\mathbf{\tilde{\Delta}}_{l_{0}}-\mu\mathbf{D}_{2})(X\beta,X\beta^{\prime})\geq 0\) for \(\|\beta\|_{0}\leq\vartheta s,\|\beta^{\prime}\|_{0}\leq s\), (30) is always satisfied.
### _Slow kill: algorithm design & sequential analysis_
Using a multi-start strategy to select a high-quality initial value for \(\beta^{(0)}\) may be computationally infeasible for large-scale data. Fortunately, we will see that designing iteration-varying thresholding and shrinkage can effectively relax the statistical regularity conditions and improve the statistical accuracy of the sequence of iterates.
More concretely, slow kill modifies the optimization algorithm (25) by introducing three auxiliary sequences \(\rho_{t+1},q_{t+1},\eta_{t+1}\)
\[\beta^{(t+1)}=\Theta^{\#}\Big{\{}\beta^{(t)}-\rho_{t+1}^{-1}X^{T}\nabla l_{0}( X\beta^{(t)};y);q_{t+1},\bar{\eta}_{t+1}\Big{\}},\text{ with }\bar{\eta}_{t+1}=\eta_{t+1}/\rho_{t+1} \tag{32}\]
where \(q_{t}\to q\), \(\eta_{t}\to\eta_{0}\). The scaled shrinkage sequence \(\bar{\eta}_{t}\) will be more convenient to use than the raw sequence \(\eta_{t}\) in later analysis. We want to understand whether adapting the inverse learning rate, cardinality, and \(\ell_{2}\)-shrinkage
parameters during the iteration can lead to improved performance. Specifically, we aim to investigate how the statistical accuracy of \(\beta^{(t)}\) changes as \(t\) increases, and under what conditions the statistical error converges geometrically fast. The focus of Theorem 6 is on the statistical error of \(\beta^{(t)}\) with respect to the statistical truth \(\beta^{*}\), rather than on their optimization errors relative to a specific minimizer \(\beta^{o}\). We will see that in principle, slow kill benefits from decreasing \(q_{t}\) and \(\rho_{t}\). It is also worth noting that the error bound in (35) places no requirements on \(\vartheta_{t},\rho_{t},\eta_{t}\).
**Theorem 6**.: _Let the sequence of iterates \(\beta^{(t)}:\|\beta^{(t)}\|_{0}=q_{t}\) be generated from (32) with a feasible \(\beta^{(0)}\). Given any \(t\geq 1\), define_
\[h_{t}^{-1} =(1-1/\sqrt{\vartheta_{t}})(\rho_{t}+\eta_{t})+(1-\varepsilon)( \rho_{-}^{l}(q_{t},s)+\eta_{t}), \tag{33}\] \[\kappa_{t} =(\rho_{t}-\rho_{-}^{l}(s,q_{t}))h_{t}, \tag{34}\]
_where \(\varepsilon\) is an arbitrary number in \((0,1]\). Then the following recursive statistical error bound_
\[\mathbf{D}_{2}(\beta^{*},\beta^{(T+1)})+\sum_{t=0}^{T}\big{(}\Pi_{\tau=t}^{T}h_{\tau+1}\big{)}(\rho_{t+1}\mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta^{(t+1)},\beta^{(t)})\] \[\leq\sum_{t=0}^{T}\big{(}\kappa_{t+1}\cdots\kappa_{T+1}\big{)}\bigg{\{}\frac{A\sigma^{2}}{\varepsilon}\frac{\rho_{+}(q_{t+1}+s)}{\big{(}\frac{\rho_{-}^{l}(q_{t+1},s)}{\rho_{t+1}}\vee\bar{\eta}_{t+1}\big{)}\big{(}1-\frac{\rho_{-}^{l}(s,q_{t+1})}{\rho_{t+1}}\big{)}\rho_{t+1}^{2}}\cdot\vartheta_{t+1}s\log\big{(}\frac{ep}{\vartheta_{t+1}s}\big{)}\] \[\qquad\qquad\qquad+\frac{\bar{\eta}_{t+1}}{\big{(}1-\frac{\rho_{-}^{l}(s,q_{t+1})}{\rho_{t+1}}\big{)}\varepsilon}\|\beta^{*}\|_{2}^{2}\bigg{\}}+\bigg{(}\Pi_{t=0}^{T}\kappa_{t+1}\bigg{)}\mathbf{D}_{2}(\beta^{*},\beta^{(0)}). \tag{35}\]
_holds for all \(T\geq 0\), with probability at least \(1-Cp^{-cA}\), where \(C,c\) are positive constants._
The corollary below showcases the usefulness of the theorem on algorithm configuration.
**Corollary 1**.: _In the setup of Theorem 6, given any \(\varepsilon\in(0,1]\), if \(\rho_{t}\) and \(\eta_{t}\) are chosen to satisfy_
\[\rho_{t+1}\geq\rho_{+}^{l}(q_{t+1},q_{t}) \tag{36}\] \[\bar{\eta}_{t}\geq 0\vee\frac{(1/\sqrt{\vartheta_{t}}+ \varepsilon)-2(\rho_{-}^{l}(s,q_{t})\wedge\rho_{-}^{l}(q_{t},s))/\rho_{t}}{2-1 /\sqrt{\vartheta_{t}}-\varepsilon} \tag{37}\]
_so that \((\rho_{t+1}\mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta^{(t+1)},\beta^{(t)})\geq 0\) and \(\kappa_{t}\leq(1+\varepsilon)^{-1}\), then with probability at least \(1-Cp^{-cA}\) the statistical error of \(\{\beta^{(t)}\}\) decays geometrically fast,_
\[\mathbf{D}_{2}(\beta^{*},\beta^{(T+1)})\leq\bigg{(}\frac{1}{1+ \varepsilon}\bigg{)}^{T+1}\mathbf{D}_{2}(\beta^{*},\beta^{(0)})+\frac{1}{ \varepsilon}\sum_{t=0}^{T}\bigg{(}\frac{1}{1+\varepsilon}\bigg{)}^{T-t+1}E_{ t+1} \tag{38}\]
_for all \(T\geq 0\), where_
\[E_{t+1}=\big{\{}1-\frac{\rho_{-}^{l}(s,q_{t+1})}{\rho_{+}^{l}(q_ {t+1},q_{t})}\big{\}}^{-1}\bigg{\{}\frac{A\sigma^{2}}{\frac{\rho_{-}^{l}(q_{t+ 1},s)}{\rho_{+}^{l}(q_{t+1},q_{t})}\vee\bar{\eta}_{t+1}}\frac{\rho_{+}(q_{t+1}+ s)}{(\rho_{+}^{l}(q_{t+1},q_{t}))^{2}}\vartheta_{t+1}s\log\big{(}\frac{ep}{ \vartheta_{t+1}s}\big{)}+\bar{\eta}_{t+1}\|\beta^{*}\|_{2}^{2}\bigg{\}}. \tag{39}\]
The theoretical results provide valuable insights into the design of the three main elements of slow kill. Let's first apply Theorem 6 to analyze the basic optimization algorithm with fixed quantiles \(q_{t}\equiv q\) and universal values \(\rho_{t}\equiv\rho,\bar{\eta}_{t}\equiv\bar{\eta}\). (38) then shows linear convergence of the statistical error, with the first term on the right-hand side indicating the impact of the initial point. Because \(\Sigma_{t=0}^{T}\{1/(1+\varepsilon)\}^{T-t+1}\leq 1/\varepsilon\), the final error is of the order
\[\frac{\rho_{+}(q+s)}{(\rho_{+}^{l}(q,q))^{2}}\,\sigma^{2}\vartheta s\log\big{(}\frac{ep}{\vartheta s}\big{)}+\bar{\eta}\|\beta^{*}\|_{2}^{2}, \tag{40}\]
where the restricted condition number \(\rho_{+}^{l}(q,q)/\{\rho_{-}^{l}(s,q)\wedge\rho_{-}^{l}(q,s)\}\) and \(\varepsilon\) are assumed to be constants. The lower bound derived in (37) can help reduce the bias, and suggests the benefit of using a large quantile in this regard.
On the other hand, large quantiles can lead to an inflated variance term \(\vartheta_{t+1}s\log\{ep/(\vartheta_{t+1}s)\}\) in (39), which motivates the use of decreasing quantiles, the most distinctive feature of slow kill. Indeed, a more careful examination of (38) shows that the factor \(1/(1+\varepsilon)^{T-t+1}\) allows for much larger \(q_{t}\) to be used in earlier iteration steps. This is
because for small \(t\), the associated error \(E_{t+1}\) will be more heavily shrunk in the final bound. Although it can be difficult to theoretically derive the optimal cooling scheme for the sequence \(q_{t}\), various schemes seem to perform well in practice, such as \(q_{t+1}=\lfloor q+(T-t)/(aTt+bT)\rfloor\) (inverse) or \(q_{t+1}=\lfloor q+(p-q)/\{1+a\exp(bt/T)^{c}\}\rfloor\) (sigmoidal), among others.
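For instance, the specific inverse schedule used later in the experiments of Section IV-A (i.e., \(a=1/(p-q)\) and \(b=2/(p-2q)\), so that \(q_{1}=p/2\) and \(q_{T}=q\)) can be generated as follows; the code is only a sketch of that schedule.

```python
import numpy as np

def inverse_cooling(q, p, T):
    # q_{t+1} = floor(q + (T - t) / (tT/(p - q) + 2T/(p - 2q))), 0 <= t <= T,
    # which starts at q_1 = p/2 and decays to q_T = q (cf. Section IV-A).
    return [int(np.floor(q + (T - t) / (t * T / (p - q) + 2 * T / (p - 2 * q))))
            for t in range(T + 1)]

print(inverse_cooling(q=15, p=10_000, T=100)[:5])  # starts at p/2 = 5000, then decreases
```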
After \(q_{t}\) is given, the choice of \(\rho_{t+1}\) can be determined theoretically using (36): \(\rho_{t+1}\geq\rho_{+}^{l}(q_{t+1},q_{t})\), which gives an upper bound of the stepsize to prevent slow kill from diverging. In implementation, \(\rho_{+}^{l}(q_{t+1},q_{t})\) is often unknown. With regular design matrices (such as Toeplitz), a constant multiple of \(L\{n+q_{t+1}\log(ep/q_{t+1})\}\) can be employed based on (A.17) in the proof of Appendix D, assuming that \(\nabla l_{0}\) is \(L\)-Lipschitz continuous. More generally, seen from the second term on the left-hand side of (35), we can use a line search with criterion
\[(\rho_{t+1}\mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta^{(t+1)},\beta^{(t)})\geq 0. \tag{41}\]
See Appendix I for some implementation details of the line search. (41) enforces the majorization condition at \((\beta^{(t+1)}\), \(\beta^{(t)})\), and so the resulting \(\rho_{t+1}\) can be even smaller than \(\rho_{+}^{l}(q_{t+1},q_{t})\). The importance of limiting the size of \(\rho_{t}\) was previously discussed in Section II-B for \(\ell_{0}\)-constrained regression. Similarly, having a smaller \(\rho_{t+1}\) can help achieve a larger \(\varepsilon\), which in turn leads to faster convergence and smaller error, as demonstrated in (33) and (37).
The lower bound for the scaled \(\ell_{2}\)-shrinkage sequence \(\bar{\eta}_{t}\) in Corollary 1 can be rewritten as
\[2\sqrt{\vartheta_{t}}>\frac{\rho_{t+1}+\bar{\eta}_{t}\rho_{t}}{\rho_{-}^{l}(s,q_{t})\wedge\rho_{-}^{l}(q_{t},s)+\bar{\eta}_{t}\rho_{t}}. \tag{42}\]
It is similar to a restricted condition number condition, and extends (16) to a general loss. Specifically, when \(2q_{t}>n\), (37) implies \(\bar{\eta}_{t}>(1/\sqrt{\vartheta_{t}})/(2-1/\sqrt{\vartheta_{t}})=1/(2\sqrt {\vartheta_{t}}-1)\), and as a result, we recommend using a scaled shrinkage sequence defined by
\[\bar{\eta}_{t}=1/(2\sqrt{q_{t}/\bar{s}}-1), \tag{43}\]
where \(\bar{s}=q\wedge nL^{2}/\log(ep)\) (a surrogate for \(s\), according to Appendix F) and \(L\) is the Lipschitz parameter of \(\nabla l_{0}\). (43) plays an important role in early slow kill iterations and is independent of the learning rate.
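Putting the pieces together, a bare-bones sketch of a slow-kill pass is given below, reusing `quantile_threshold`, `logistic_grad`, and `inverse_cooling` from the earlier sketches. The unit constant in front of \(L\{n+q_{t}\log(ep/q_{t})\}\) and the use of (43) at every step are simplifying assumptions made only for illustration.

```python
import numpy as np

def slow_kill(X, y, q, T=100, L=0.25, grad_l0=logistic_grad):
    # Iteration (32) with decreasing quantiles q_t, inverse learning rates
    # rho_t ~ L*(n + q_t*log(e*p/q_t)), and the scaled l2-shrinkage (43).
    # L = 0.25 is the Lipschitz constant of the logistic-deviance gradient;
    # use L = 1 for the quadratic loss.
    n, p = X.shape
    beta = np.zeros(p)
    s_bar = max(1, min(q, int(n * L ** 2 / np.log(np.e * p))))  # surrogate for s in (43)
    for q_t in inverse_cooling(q, p, T):
        rho_t = L * (n + q_t * np.log(np.e * p / q_t))          # crude bound on rho_+^l
        eta_bar_t = 1.0 / (2.0 * np.sqrt(q_t / s_bar) - 1.0)    # scaled shrinkage (43)
        z = beta - X.T @ grad_l0(X @ beta, y) / rho_t
        beta = quantile_threshold(z, q_t, eta_bar_t)
    return beta
```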
Our analyses support the use of the \(\ell_{2}\)-assisted backward quantile control to gradually tighten the constraint. The update formula (32) used in slow kill has a strong foundation in optimization, which gives it an advantage over heuristics-based multi-stage procedures. The fast geometric convergence established in Theorem 6, together with a strong signal strength, indicates that the zeros in \(\beta^{(t)}\) represent irrelevant predictors with high probability (cf. Remark 1 and Appendix G). This allows us to occasionally squeeze the design matrix using \(\mathcal{J}(\beta^{(t+1)})\) (e.g., when \(q_{t+1}\) reaches \(p/2^{k}\)) to reduce the problem size (Appendix I). The apparent junk features are thus removed at an early stage, saving computational cost, while the more difficult-to-identify irrelevant features are addressed only when we are close to finding an optimal solution. This trait makes slow kill particularly well-suited for big data learning. Slow kill offers similar advantages in group variable selection [11] and low-rank matrix estimation [26].
In contrast, forward pathwise and boosting algorithms [5, 27, 6, 7, 8, 19, 9] grow a model from the null in a _bottom-up_ fashion. Such algorithms must consider almost all features at each iteration, making them computationally intensive, as they often require hundreds or thousands of boosting iterations. Motivated by the \(\ell_{0}\)-optimization perspective, we can also investigate a class of "steady grow" procedures in which \(q_{t}\) increases from \(0\) to \(q\) in (32). Compared with boosting, the update and selection would incorporate the effect of the previous estimate in addition to the gradient. A retaining option can be introduced in steady grow that works in the opposite way to the squeezing operation in slow kill. The investigation of retaining and squeezing, as well as a combination of slow kill and steady grow, is left for future research.
Finally, how to obtain a sparse model with a prescribed cardinality is the problem of interest throughout the paper. But if one wants to determine the best value for \(q\), we suggest using a predictive information criterion [28] that can guarantee the optimal prediction error rate in a nonasymptotic sense (which is presented in Appendix H).
## IV Experiments
### _Simulations_
In this part, we conduct simulation studies to compare the performance of slow kill (abbreviated as SK in tables and figures below) with some popular sparse learning methods in terms of prediction accuracy, selection
consistency, and computational efficiency. Unless otherwise mentioned, the rows \(\tilde{x}_{i}^{T}\) of the predictor matrix \(X=[\tilde{x}_{1},\ldots,\tilde{x}_{n}]^{T}\in\mathbb{R}^{n\times p}\) are independently generated from a multivariate normal distribution with covariance matrix \(\Sigma\), where \(\Sigma\) either has a Toeplitz structure \([\tau^{|i-j|}]\) or has equal correlations \([\tau{1}_{i\neq j}]\). High correlation strengths such as \(\tau=0.9\) will be included in our experiments. We consider both regression and classification with a sparse \(\beta^{*}\): \(\beta^{*}_{j}=1,\) if \(j=10k+1,0\leq k<s\) and so \(s=\|\beta^{*}\|_{0}\). In the regression experiments, \(y=X\beta^{*}+\epsilon\) with \(\epsilon_{i}\sim N(0,1)\), and for the classification experiments, \(y_{i}=1\) if \(\tilde{x}_{i}^{T}\beta^{*}>0\) and 0 otherwise.
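For concreteness, the synthetic data just described can be generated along the following lines; the helper name, the random seed, and the Cholesky-based sampler are illustrative choices rather than part of the experimental protocol.

```python
import numpy as np

def make_data(n, p, s, tau, structure="toeplitz", task="regression", seed=0):
    # Rows of X ~ N(0, Sigma) with Sigma = [tau^|i-j|] or [tau * 1_{i != j}];
    # beta*_j = 1 for j = 10k + 1 (1-based indexing), 0 <= k < s.
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    if structure == "toeplitz":
        Sigma = tau ** np.abs(np.subtract.outer(idx, idx))
    else:
        Sigma = np.full((p, p), tau)
        np.fill_diagonal(Sigma, 1.0)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n, method="cholesky")
    beta_star = np.zeros(p)
    beta_star[10 * np.arange(s)] = 1.0
    if task == "regression":
        y = X @ beta_star + rng.standard_normal(n)
    else:
        y = (X @ beta_star > 0).astype(float)
    return X, y, beta_star, Sigma
```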
In addition to slow kill, the following methods are included for comparison: lasso [29], elastic net (ENET) [12], MCP [4], SCAD [30], and IHT and NIHT ([15, 31], for regression only). (We also evaluated the performance of picasso [9] in simulations as an improved version of [19]. However, its pathwise computation resulted in worse error rates and missing rates than standard nonconvex optimization on the synthetic data. Therefore, we did not present the results. We will include the algorithm in our experiments with real data in later sections.) The quadratic loss is used in regression and the logistic deviance is used in classification. For slow kill, we take a simple single starting point \(\beta^{(0)}=0\) and \(\eta_{0}=50\); an inverse cooling schedule \(q_{t+1}=\lfloor q+(T-t)/\{tT/(p-q)+2T/(p-2q)\}\rfloor\)\((0\leq t\leq T)\) is used so that \(q_{T}=q\) and \(q_{1}=p/2\), and we set \(T=100\) in all experiments for convenience and efficiency. We use the R package glmnet to implement lasso and elastic net, the package ncpen [32] for the aforementioned nonconvex penalties, and the package sparsify for IHT methods. (The core of glmnet is implemented using Fortran subroutines, while ncpen is mainly based on C++. Our implementation of slow kill could potentially be made more efficient and require less memory by using C or Fortran, but it already performs comparably or better than the other methods, as shown in later tables and figures.) To ensure a fair comparison and eliminate the influence of different parameter tuning schemes, we select the estimate with 1.5s nonzeros for each method. To calibrate the bias, we refit each obtained model using only the selected variables. All other algorithmic parameters are set to their default values.
Given each simulation setup, we repeat the experiment 50 times and evaluate the performance of each algorithm according to the measures defined below: the missing rate \(\times 100\%\) and the prediction error. Concretely, the missing rate is the fraction of undetected true variables, and in regression, the prediction error is calculated by \(10\) times \((\hat{\beta}-\beta^{*})^{T}\Sigma(\hat{\beta}-\beta^{*})\) using the true signal, while in classification, it refers to the misclassification error rate \(\times 100\%\) on a separate test set containing the same number of observations as the training dataset. The total computational time (in seconds) is also included to describe the computational cost. Since the implementation of a penalized method often uses warm starts, we terminate the algorithm once it reaches an estimate with the prescribed cardinality.
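In code, the two measures amount to the following (a sketch for the regression case; `Sigma` denotes the covariance used to generate \(X\), e.g., as returned by the `make_data` sketch above):

```python
import numpy as np

def missing_rate(beta_hat, beta_star):
    # Fraction of true variables that the estimate fails to select, times 100.
    truth = np.flatnonzero(beta_star)
    selected = np.flatnonzero(beta_hat)
    return 100.0 * np.setdiff1d(truth, selected).size / truth.size

def prediction_error(beta_hat, beta_star, Sigma):
    # 10 * (beta_hat - beta_star)^T Sigma (beta_hat - beta_star), as described above.
    d = beta_hat - beta_star
    return 10.0 * d @ Sigma @ d
```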
Table I shows some experiment results in the regression setup. Figure 2 plots more results of some representative methods when varying the sparsity level \(s\) and the correlation strength \(\tau\) (excluding elastic net and IHT, because their performance is similar to that of lasso and poor, respectively). It can be seen that slow kill outperforms the other methods in terms of both statistical accuracy and computational time, particularly in more challenging situations with more relevant features and coherent designs.
For classification, Table II and Figure 3 make a comparison between different methods with various correlation structures and problem dimensions, and similar conclusions can be drawn. It is important to note that the excellent statistical accuracy of slow kill is _not_ accompanied by a sacrifice in computational time compared to other methods. In fact, as seen in Figure 3, slow kill offers substantial time savings especially when \(n\) is large, while being very successful at selection and prediction.
\begin{table}
\begin{tabular}{l r r r r r r} & \multicolumn{3}{c}{Toeplitz structure} & \multicolumn{3}{c}{Equal correlation} \\ \cline{2-4}\cline{5-7} & Error & Miss & Time & Error & Miss & Time \\ LASSO & 16 & 32 & 5 & 15 & 83 & 13 \\ ENET & 16 & 31 & 13 & 14 & 82 & 34 \\ IHT & 85 & 68 & 55 & 16 & 88 & 57 \\ NIHT & 12 & 22 & 4 & 17 & 80 & 18 \\ MCP & 12 & 23 & 34 & 18 & 78 & 24 \\ SCAD & 12 & 23 & 13 & 16 & 85 & 6 \\ SK & 2 & 2 & 1 & 12 & 50 & 1 \\ \hline \end{tabular}
\end{table} TABLE I: Regression: performance comparison in terms of prediction error, missing rate and computational time with different correlation structures. In more detail, \(p=5,\!000,n=150,s=10\) and \(\Sigma=[\tau^{|i-j|}]\) or \([\tau 1_{i\neq j}]\) with \(\tau=0.9\)
Next, we present some experiments in which the signal strength is varied. Recall that in the regression setup, we set \(\beta_{j}^{*}=1\) for \(j\in\mathcal{J}(\beta^{*})\). For \(n=100,p=5000,\sigma=1\), the minimax optimal rate is approximately \(\sigma\sqrt{(\log p)/n}(\approx 0.292)\) (ignoring the constant factor for which a sharp value may be difficult to derive). We conducted additional experiments by setting \(\beta_{j}^{*}=0.8,0.6,0.4,0.2\). The comparison results for different methods are shown in Figure 4. When the signal strength was low (e.g., \(\beta_{j}^{*}=0.2,0.4\)), all methods performed poorly. For higher values, slow kill outperformed the other methods by a large margin.
We conducted another experiment to explore larger values of \(\|\beta^{*}\|_{2}^{2}\). (As a reminder, in the previous setting where \(s=10\) and \(\beta_{j}^{*}=1\), \(\forall j\in\mathcal{J}(\beta^{*})\), we had \(\|\beta^{*}\|_{2}^{2}=10.\)) We tested \(\|\beta^{*}\|_{2}^{2}=50,100,150,200\) by scaling up each \(\beta_{j}^{*}\) by a corresponding factor. The results of this experiment are shown in Figure 5. As \(\|\beta^{*}\|_{2}^{2}\) increases, NIHT, MCP, and slow kill exhibit clear advantages, with the latter two showing similar prediction errors and missing rates.
\begin{table}
\begin{tabular}{l c c c c c c} & \multicolumn{3}{c}{Toeplitz structure} & \multicolumn{3}{c}{Equal correlation} \\ \cline{2-4}\cline{5-7} & Error & Miss & Time & Error & Miss & Time \\ LASSO & 8.0 & 24 & 10 & 5.1 & 95 & 49 \\ ENET & 8.0 & 25 & 31 & 4.7 & 95 & 135 \\ MCP & 6.9 & 23 & 15 & 5.0 & 93 & 20 \\ SCAD & 7.0 & 22 & 22 & 5.1 & 94 & 16 \\ SK & 2.2 & 2 & 4 & 3.9 & 78 & 4 \\ \hline \end{tabular}
\end{table} TABLE II: Classification: performance comparison in terms of prediction error, missing rate and computational time with different correlation structures. In more detail, \(p=2{,}000\), \(n=500\), \(s=10\) and \(\Sigma=[\tau^{|i-j|}]\) or \([\tau 1_{i\neq j}]\) with \(\tau=0.9\)
Fig. 2: Regression: performance comparison in terms of prediction error, missing rate and computational time when varying the sparsity and the correlation strength of the model. In more detail, \(p=10{,}000\), \(n=150\), \(s=6,8,10,12\) and \(\Sigma=[\tau 1_{i\neq j}]\) with \(\tau=0.5,0.7,0.9\).
### _Handwritten digits classification_
The Gisette dataset [33] was created to classify the highly confusing digits 4 and 9 for handwritten digit recognition. There are 5,000 predictors, including various pixel-constructed features as well as some 'probes' with little predictive power. Because the exact number of relevant features is unknown, we assess the performance of different methods given the same model cardinality to make a fair comparison. We randomly split the 7,000
Fig. 4: Comparison of prediction errors (left) and missing rates (right) of different methods under different signal strengths. The details of the regression setup are given in Section IV-A, and we set \(p=5{,}000,n=100,s=10\), \(\tau=0.8\), and \(\beta_{j}^{*}=0.2,0.4,0.6,0.8\) for \(j\in\mathcal{J}(\beta^{*})\).
Fig. 3: Classification: performance comparison in terms of prediction error, missing rate and computational time with different correlation structures and sample sizes. In more detail, \(p=10{,}000\), \(n=600,800,1000,1200\), \(s=15\) and \(\Sigma=[\tau 1_{i\neq j}]\) with \(\tau=0.5,0.7,0.9\).
samples into a training subset with 3,000 samples and a test subset with 4,000 samples 20 times to report the average misclassification error rate and total computational time.
Due to the relatively large size of the data, computational efficiency is a major concern. Many statistical packages were unable to deliver meaningful results in a reasonable amount of time. Here, we compare glmnet [34], logitboost [35, 36], picasso with the MCP option [37], and slow kill with different numbers of selected features.
According to Figure 6, logitboost and picasso achieved better misclassification error rates on the dataset than glmnet, but slow kill consistently performed the best. In terms of computational cost, glmnet and slow kill were extremely scalable; logitboost was quite expensive even for just \(q=40\), and picasso suffered a similar issue when \(q\geq 60\).
### _Breast cancer microarray data_
The breast-cancer microarray dataset [38] from the Curated Microarray Database contains 35,981 gene expression levels of 143 tumor samples of patients with breast cancer and 146 paired adjacent normal breast tissue samples. The goal is to identify some differentially expressed genes to help the classification of normal and tumor tissues. We randomly split the dataset into a training subset (60%) and a test subset (40%) 20 times and report the misclassification error rates and total computational time of different methods in Table III.
According to Table III, logitboost has the highest computational complexity, and picasso shows the worst overall classification performance on this dataset. In contrast, glmnet and slow kill can achieve lower misclassification error rates, and the latter is much more cost-effective according to our experiments.
Fig. 5: Comparison of prediction errors (left) and missing rates (right) of different methods for large signals. The details of the regression setup are given in Section IV-A, and we set \(p=5,000,n=100,s=10,\tau=0.8\), and \(\|\beta^{*}\|_{2}^{2}=50,100,150,200\) (by scaling up each \(\beta^{*}_{j}\)).
Fig. 6: Gisette data. Left panel: mean misclassification error rate, right panel: total computational time, with different numbers of selected features. Picasso is too costly compared with the other methods and only part of its cost curve is shown.
### _Sub-Nyquist spectrum sensing and learning_
Sub-Nyquist sampling-based wideband spectrum sensing for millimeter wave is an important topic for next-generation wireless communication systems. With a multi-coset sampler [39], a multiple-measurement-vector model in signal processing can be formulated as \(Y=XB^{*}+\mathcal{E}\), where the goal is to exploit the joint (row-wise) weak sparsity of \(B^{*}\) to reconstruct the spectrum. Here, all the matrices are complex (e.g., \(Y\in\mathbb{C}^{n\times m}\), \(X\in\mathbb{C}^{n\times p}\)), and the size of the predictor matrix \(X\) is determined by the number of cosets and the number of channels; interested readers may refer to [40] for more detail. Conveniently, with the Hermitian inner product \(\langle A,B\rangle\triangleq\text{tr}\{A^{\text{H}}B\}\) in place of the real inner product, and the generalized Bregman function redefined as \(\boldsymbol{\Delta}_{l}(B_{1},B_{2})=l(B_{1})-l(B_{2})-\langle\nabla l(B_{2}),B_{1}-B_{2}\rangle/2-\langle B_{1}-B_{2},\nabla l(B_{2})\rangle/2\), all of our theorems and algorithms can be extended to the complex group sparsity pursuit.
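A sketch of the row-wise thresholding step \(\vec{\Theta}^{\#}\) for complex matrices is given below; paired with the gradient \(X^{\mathrm{H}}(XB-Y)\) of the quadratic loss under the Hermitian inner product, it plugs into the same slow-kill loop. The function names and the quadratic-loss choice are illustrative assumptions.

```python
import numpy as np

def group_quantile_threshold(Z, q, eta_bar):
    # Row-wise Theta# for a (possibly complex) matrix Z: keep the q rows with
    # the largest l2 norms, scale them by 1/(1 + eta_bar), zero the rest
    # (cf. Lemma A.1).
    B = np.zeros_like(Z)
    keep = np.argsort(np.linalg.norm(Z, axis=1))[-q:]
    B[keep, :] = Z[keep, :] / (1.0 + eta_bar)
    return B

def mmv_update(X, Y, B, q, eta_bar, rho):
    # One quantile-thresholding step for the model Y = X B* + E with quadratic loss.
    grad = X.conj().T @ (X @ B - Y)  # gradient under the Hermitian inner product
    return group_quantile_threshold(B - grad / rho, q, eta_bar)
```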
We compared our method with two popular methods, SOMP [41] and JB-HTP [42], on a benchmark time-domain dataset in [43]. Table IV shows the normalized mean square error \(\|\hat{B}-B^{*}\|_{F}/\|B^{*}\|_{F}\) of each method as we vary \(q\) (the number of selected channels). A demonstration of spectral recovery is plotted in Figure 7, where the predictive information criterion in Appendix H was used for model selection in slow kill.
## V Summary
This paper proposed a new slow kill method for large-scale variable selection. It is a scalable optimization-based algorithm that uses three carefully designed and theoretically justified sequences of thresholds, shrinkage, and learning rates.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \(q=3\) & \(q=4\) & \(q=5\) & \(q=6\) & \(q=7\) & \(q=8\) \\ \hline SOMP & 0.83 & 0.93 & 0.82 & 0.91 & 0.92 & 0.94 \\ JB-HTP & 0.94 & 1.00 & 0.99 & 0.95 & 1.07 & 0.96 \\ SK & 0.74 & 0.65 & 0.53 & 0.38 & 0.42 & 0.50 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Spectrum reconstruction error in terms of normalized mean square error
Fig. 7: Spectrum sensing results by different methods.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{\(q=60\)} & \multicolumn{2}{c}{\(q=80\)} & \multicolumn{2}{c}{\(q=100\)} & \multicolumn{2}{c}{\(q=120\)} & \multicolumn{2}{c}{\(q=140\)} \\ \cline{2-11} & Error & Time & Error & Time & Error & Time & Error & Time & Error & Time \\ \hline GLMNET & 10.9 & 19 & 10.7 & 19 & 10.5 & 50 & 10.2 & 50 & 10.2 & 50 \\ PICASSO & 11.4 & 43 & 11.3 & 43 & 11.1 & 48 & 11.3 & 48 & 11.2 & 42 \\ LogitBoost & 11.2 & 500 & 11.2 & 680 & 10.9 & 860 & 10.6 & 1080 & 10.8 & 1220 \\ SK & 10.8 & 10 & 10.2 & 11 & 10.2 & 11 & 10.1 & 11 & 9.8 & 11 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Breast cancer microarray data: misclassification error rate (\(\times 100\%\)) and total computational time (in seconds)
Intuitively, slow kill uses a novel backward quantile control with adaptive \(\ell_{2}\) shrinkage and increasing learning rates to relax regularity conditions and overcome obstacles in backward elimination. This method is significantly different from boosting and many forward stagewise procedures in the existing literature. Our theoretical studies led to insights on how to design a progressive hybrid regularization to achieve the optimal error rate and fast convergence. The technique is applicable to a general loss that is not necessarily a negative log-likelihood function, and its ability to reduce the problem size throughout the iteration makes it attractive for big data.
## Appendix

The definition of a sub-Gaussian random variable or a sub-Gaussian random vector is standard in the literature.
**Definition A.1**.: _We call \(\xi\) a sub-Gaussian random variable if it has mean zero and the scale (\(\psi_{2}\)-norm) for \(\xi\), defined as \(\inf\{\sigma>0:\,\mathbb{E}[\exp(\xi^{2}/\sigma^{2})]\leq 2\}\), is finite. We call \(\xi\in\mathbb{R}^{p}\) a sub-Gaussian random vector with scale bounded by \(\sigma\) if all one-dimensional marginals \(\langle\xi,\alpha\rangle\) are sub-Gaussian satisfying \(\|\langle\xi,\alpha\rangle\|_{\psi_{2}}\leq\sigma\|\alpha\|_{2}\), for any \(\alpha\in\mathbb{R}^{p}\). Similarly, a random matrix \(\xi\) is called sub-Gaussian if \(\text{vec}\,\left(\xi\right)\) is sub-Gaussian._
### _Theorem A.1 and Theorem 4_
First, for the algorithm (5) defined in the setup of Section II, we have the following numerical properties.
**Theorem A.1**.: _Given any \(X,y\) and \(\beta^{(0)}\), the sequence of iterates \(\beta^{(t)}\) generated by (5) satisfies \(f(\beta^{(t)})-f(\beta^{(t+1)})\geq\rho\|\beta^{(t+1)}-\beta^{(t)}\|_{2}^{2}/2 -\|X(\beta^{(t+1)}-\beta^{(t)})\|_{2}^{2}/2,\ \forall t\geq 0\) and so when \(\rho\geq\rho_{+}(2q)\), \(f(\beta^{(t)})\) converges, and \(\beta^{(t)}\) satisfies_
\[\min_{0\leq t\leq T}\|\beta^{(t+1)}-\beta^{(t)}\|_{2}^{2}\leq\frac{1}{T+1}\frac {2f(\beta^{(0)})}{\rho-\rho_{+}(2q)}.\]
_Moreover, as long as \(\rho>\rho_{+}(2q)\) and \(\eta_{0}>0\), \(\beta^{(t)}\) has a unique limit point \(\hat{\beta}\) that satisfies the "fixed-point" equation_
\[\beta=\Theta^{\#}\{\beta-X^{T}(X\beta-y)/\rho;q,\eta_{0}/\rho\},\]
_and when \(\|\hat{\beta}\|_{0}=q\), \(\hat{\beta}\) is also a local minimizer of problem (3)._
To prove the first conclusion in Theorem A.1, notice that in the regression setting,
\[g(\beta^{(t+1)},\beta^{(t)})-f(\beta^{(t+1)})=\rho\|\beta^{(t+1)}-\beta^{(t)} \|_{2}^{2}/2-\|X(\beta^{(t+1)}-\beta^{(t)})\|_{2}^{2}/2,\]
and thus
\[f(\beta^{(t)})-f(\beta^{(t+1)})\geq\frac{\rho}{2}\|\beta^{(t+1)}-\beta^{(t)} \|_{2}^{2}-\frac{1}{2}\|X(\beta^{(t+1)}-\beta^{(t)})\|_{2}^{2},\ \ \forall t\geq 0.\]
Taking the summation from \(t=0\) to \(t=T\) and using the fact that \(\|X(\beta^{(t+1)}-\beta^{(t)})\|_{2}^{2}\leq\rho_{+}(2q)\|\beta^{(t+1)}-\beta^{ (t)}\|_{2}^{2}\), we have
\[\frac{(\rho-\rho_{+}(2q))}{2}\sum_{t=0}^{T}\|\beta^{(t+1)}-\beta^{(t)}\|_{2}^{ 2}\leq f(\beta^{(0)})-f(\beta^{(T+1)}),\]
which leads to
\[\min_{0\leq t\leq T}\|\beta^{(t+1)}-\beta^{(t)}\|_{2}^{2}\leq\frac{2}{(T+1)( \rho-\rho_{+}(2q))}f(\beta^{(0)}).\]
Next, we consider the general problem and prove Theorem 4, which implies the second part of Theorem A.1. From \(\inf_{\xi,y}l_{0}(\xi;y)>-\infty\), we assume without loss of generality that \(l_{0}(\xi;y)\geq 0\). Recall \(l_{0}(X\beta;y)\) is abbreviated as \(l(\beta)\) and thus \(\nabla l(\beta)=X^{T}\nabla l_{0}(X\beta)\) by the chain rule.
From the construction \(g(\beta,\beta^{(t)})=f(\beta)+(\rho\mathbf{D}_{2}-\mathbf{\Delta}_{l_{0}})( \beta,\beta^{(t)})\), we get
\[(\rho\mathbf{D}_{2}-\mathbf{\Delta}_{l_{0}})(\beta^{(t+1)},\beta^{(t)})+f( \beta^{(t+1)})\leq g(\beta^{(t)},\beta^{(t)})=f(\beta^{(t)}).\]
When \(\rho\geq\rho_{+}^{l}(q,q)\), \((\rho\mathbf{D}_{2}-\mathbf{\Delta}_{l_{0}})(\beta^{(t+1)},\beta^{(t)})\geq 0\), from which it follows that the sequence of \(f(\beta^{(t)})\) is non-increasing and convergent. In fact, one just needs
\[f(\beta^{(t+1)})\leq g(\beta^{(t+1)},\beta^{(t)})\] (A.1)
to enjoy the function value convergence, which can be used for line search.
In addition, we obtain
\[(\rho-\rho_{+}^{l}(q,q))\mathbf{D}_{2}(\beta^{(t+1)},\beta^{(t)})\leq f(\beta^{(t) })-f(\beta^{(t+1)}).\]
Finally, let us study the limit points of the sequence of iterates. We first notice that \(\{\beta^{(t)}\}_{t=0}^{\infty}\) is uniformly bounded under \(\eta_{0}>0\), since
\[\eta_{0}\|\beta^{(t)}\|_{2}^{2}/2\leq f(\beta^{(t)})\leq f(\beta^{(0)}).\]
From \(\lim_{t\to\infty}\{f(\beta^{(t)})-f(\beta^{(t+1)})\}=0\), \(\lim_{t\to\infty}(\rho\mathbf{D}_{2}-\mathbf{\Delta}_{l_{0}})(\beta^{(t+1)}, \beta^{(t)})=0\), and because \(\rho>\rho_{+}^{l}(q,q)\),
\[\lim_{t\to\infty}(\beta^{(t+1)}-\beta^{(t)})=0.\]
Let \(\hat{\beta}\) be any limit point of \(\beta^{(t)}\) satisfying \(\hat{\beta}=\lim_{k\to\infty}\beta^{(j_{k})}\) for some sequence \(j_{k}\). Then
\[0=\lim_{k\to\infty}(\beta^{(j_{k}+1)}-\beta^{(j_{k})}) =\lim_{k\to\infty}\Theta^{\#}\{\beta^{(j_{k})}-\nabla l(\beta^{(j _{k})})/\rho,q,\eta_{0}/\rho\}-\hat{\beta}\] \[=\Theta^{\#}\{\hat{\beta}-\nabla l(\hat{\beta})/\rho;q,\eta_{0}/ \rho\}-\hat{\beta},\]
where the second equality is due to the continuity of \(\nabla l(\beta)\) and the \(\Theta^{\#}\)-uniqueness assumption.
Define \(\hat{\mathcal{J}}=\{j:\hat{\beta}_{j}\neq 0\}\). Then we get
\[\hat{\beta}_{\hat{\mathcal{J}}}=\hat{\beta}_{\hat{\mathcal{J}}}/(1+\eta_{0}/ \rho)-X_{\hat{\mathcal{J}}}^{T}\nabla l_{0}(X_{\hat{\mathcal{J}}}\hat{\beta}_{ \hat{\mathcal{J}}};y)/(\rho+\eta_{0}),\]
or equivalently,
\[\eta_{0}\hat{\beta}_{\hat{\mathcal{J}}}+X_{\hat{\mathcal{J}}}^{T}\nabla l_{0 }(X_{\hat{\mathcal{J}}}\hat{\beta}_{\hat{\mathcal{J}}};y)=0.\]
Therefore, given \(\hat{\mathcal{J}}\), \(\hat{\beta}_{\hat{\mathcal{J}}}\) is a stationary point of
\[\min_{\gamma}l_{0}(X_{\hat{\mathcal{J}}}\gamma;y)+\eta_{0}\|\gamma\|_{2}^{2}/2.\] (A.2)
When \(l_{0}(\cdot;y)\) is convex and \(\eta_{0}>0\), (A.2) is strongly convex and thus \(\hat{\beta}_{\hat{\mathcal{J}}}\) is the unique minimizer.
By Ostrowski's convergence theorem, the set of limit points of \(\beta^{(t)}\) must be connected. On the other hand, the set of all restricted optimal solutions \(\{\hat{\beta}_{\hat{\mathcal{J}}}\}\) is finite, and so
\[\lim_{t\to\infty}\beta^{(t)}=\hat{\beta}.\]
Under \(|\hat{\mathcal{J}}|=q\), it is easy to see that the neighborhood \(\{\beta:\|\beta-\hat{\beta}\|_{\infty}<\epsilon,\;J(\beta)\leq q\}\) with \(0<\epsilon<\min_{j\in\hat{\mathcal{J}}}|\hat{\beta}_{j}|\) is just \(\{\beta:\mathcal{J}(\beta)=\hat{\mathcal{J}},|\beta_{j}-\hat{\beta}_{j}|<\epsilon,\forall j\in\hat{\mathcal{J}}\}\). The local optimality of \(\hat{\beta}\) and support stability of \(\beta^{(t)}\) thus follow.
### _Proof of Theorem 1_
We first introduce some lemmas that are helpful in proving the theorem. The first is a generalization of Lemma 9 in [44].
**Lemma A.1**.: _Let \(\mathcal{J}(B)\) denote the row support of matrix \(B\) and define \(J(B)=\|B\|_{2,0}=|\mathcal{J}(B)|\). Consider the following problem with \(0\leq q\leq p,\eta\geq 0\):_
\[\min_{B\in\mathbb{R}^{p\times m}}\frac{1}{2}\|Y-B\|_{F}^{2}+\frac{\eta}{2}\|B\| _{F}^{2}=l(B)\quad\text{subject to }\|B\|_{2,0}\leq q.\]
_Then \(\hat{B}=\vec{\Theta}^{\#}(Y;q,\eta)\) (recall \(\vec{\Theta}^{\#}\) defined in Section I) gives a globally optimal solution, and for any \(B\) satisfying \(J(B)\leq s\), we have_
\[l(B)-l(\hat{B})\geq(1-\mathcal{L}(\mathcal{J},\hat{\mathcal{J}}))(1+\eta)\frac{ \|\hat{B}-B\|_{F}^{2}}{2}\] (A.3)
_where \(\mathcal{J}=\mathcal{J}(B)\), \(\hat{\mathcal{J}}=\mathcal{J}(\hat{B})\), and \(\mathcal{L}(\mathcal{J},\hat{\mathcal{J}})=\sqrt{|\mathcal{J}\setminus\hat{ \mathcal{J}}|/|\hat{\mathcal{J}}\setminus\mathcal{J}|}\). When \(J(\hat{B})=q\) with \(\vartheta(\equiv q/s)\geq 1\), \(\mathcal{L}(\mathcal{J},\hat{\mathcal{J}})\leq\sqrt{|\mathcal{J}|/|\hat{ \mathcal{J}}|}\leq 1/\sqrt{\vartheta}\). In the above statement, \(0/0\) is understood as \(1\)._
**Lemma A.2**.: _There exist universal constants \(A,C,c>0\) such that for any \(a>0\), the following event_
\[\sup_{\beta_{1},\beta_{2}}\langle\epsilon,X(\beta_{1}-\beta_{2}) \rangle-\frac{1}{2a}\|X(\beta_{1}-\beta_{2})\|_{2}^{2}-\frac{a}{2}A\sigma^{2} \{J(\beta_{1})\lor J(\beta_{2})\}\log\Big{\{}\frac{ep}{J(\beta_{1})\lor J( \beta_{2})}\Big{\}}\geq\frac{a}{2}\sigma^{2}t\] (A.4)
_occurs with probability at most \(C\exp{(-ct)}p^{-cA}\), where \(t\geq 0\)._
First, by definition, it is easy to show that \(\hat{\beta}\) satisfies
\[\hat{\beta}\in\operatorname*{argmin}_{\beta}g(\beta,\hat{\beta}),\]
where \(g(\beta,\beta^{-})=\|y-X\beta^{-}\|_{2}^{2}/2+\langle X^{T}(X\beta^{-}-y), \beta-\beta^{-}\rangle+\rho\|\beta-\beta^{-}\|_{2}^{2}/2+\eta_{0}\|\beta\|_{2 }^{2}/2\). By \(g(\hat{\beta},\hat{\beta})\leq g(\beta^{*},\hat{\beta})\) and Lemma A.1,
\[\frac{1}{2}\|\beta^{*}-\hat{\beta}+\frac{1}{\rho}X^{T}(X\hat{ \beta}-y)\|_{2}^{2}-\frac{1}{2}\|\frac{1}{\rho}X^{T}(X\hat{\beta}-y)\|_{2}^{2} +\frac{\eta_{0}}{2\rho}\|\beta^{*}\|_{2}^{2}-\frac{\eta_{0}}{2\rho}\|\hat{ \beta}\|_{2}^{2}\] \[\geq(1+\frac{\eta_{0}}{\rho})\frac{1-\mathcal{L}(\mathcal{J}^{*},\hat{\mathcal{J}})}{2}\|\hat{\beta}-\beta^{*}\|_{2}^{2},\]
where \(\mathcal{J}^{*}=\mathcal{J}(\beta^{*})\), \(\hat{\mathcal{J}}=\mathcal{J}(\hat{\beta})\), and \(\mathcal{L}(\mathcal{J}^{*},\hat{\mathcal{J}})\leq 1/\sqrt{\vartheta}\).
It follows from the model \(y=X\beta^{*}+\epsilon\) that
\[\|X\hat{\beta}-X\beta^{*}\|_{2}^{2}+\frac{\eta_{0}}{2}\|\hat{ \beta}\|_{2}^{2}\leq\frac{\rho-(\sqrt{\vartheta}-1)\eta_{0}}{2\sqrt{\vartheta }}\|\hat{\beta}-\beta^{*}\|_{2}^{2}+\frac{\eta_{0}}{2}\|\beta^{*}\|_{2}^{2}+ \langle X\hat{\beta}-X\beta^{*},\epsilon\rangle,\]
which gives
\[\|X\hat{\beta}-X\beta^{*}\|_{2}^{2}+\frac{\eta_{0}}{2}\|\hat{ \beta}-\beta^{*}\|_{2}^{2}\] \[\leq \frac{\rho-(\sqrt{\vartheta}-1)\eta_{0}}{2\sqrt{\vartheta}}\| \hat{\beta}-\beta^{*}\|_{2}^{2}+\eta_{0}\langle\hat{\beta}-\beta^{*},-\beta^{ *}\rangle+\langle X\hat{\beta}-X\beta^{*},\epsilon\rangle\] \[\leq \frac{\rho-(\sqrt{\vartheta}-1)\eta_{0}}{2\sqrt{\vartheta}}\| \hat{\beta}-\beta^{*}\|_{2}^{2}+\frac{b\eta_{0}}{2}\|\hat{\beta}-\beta^{*}\|_ {2}^{2}+\frac{\eta_{0}}{2b}\|\beta^{*}\|_{2}^{2}+\langle X\hat{\beta}-X\beta^ {*},\epsilon\rangle\] (A.5)
for any \(b>0\). Applying Lemma A.2 with \(t=0\), we can show that for any \(a>0\), the following event
\[\langle X\hat{\beta}-X\beta^{*},\epsilon\rangle\leq\frac{1}{2a} \|X\hat{\beta}-X\beta^{*}\|_{2}^{2}+\frac{a}{2}A\sigma^{2}\vartheta s\log\frac {ep}{\vartheta s}\] (A.6)
occurs with probability at least \(1-Cp^{-c}\), where \(A,C,c>0\) are some universal constants.
Combining (A.5), (A.6) and the regularity condition (7) yields
\[\frac{\eta_{0}(\varepsilon-b)}{2}\|\hat{\beta}-\beta^{*}\|_{2}^{2 }+\Big{(}\frac{\delta}{2}-\frac{1}{2a}\Big{)}\|X\hat{\beta}-X\beta^{*}\|_{2}^{ 2}\leq\frac{\eta_{0}}{2b}\|\beta^{*}\|_{2}^{2}+\frac{a}{2}A\sigma^{2} \vartheta s\log\frac{ep}{\vartheta s}\]
with probability at least \(1-Cp^{-c}\). By choosing \(a=2/\delta\) and \(b=\varepsilon/2\), we have the bound for the prediction error as
\[\|X\hat{\beta}-X\beta^{*}\|_{2}^{2}+\frac{\eta_{0}\varepsilon}{ \delta}\|\hat{\beta}-\beta^{*}\|_{2}^{2} \leq\frac{4\eta_{0}}{\delta\varepsilon}\|\beta^{*}\|_{2}^{2}+ \frac{4}{\delta^{2}}A\sigma^{2}\vartheta s\log\frac{ep}{\vartheta s}\] \[\lesssim\frac{\eta_{0}}{\delta\varepsilon}\|\beta^{*}\|_{2}^{2}+ \frac{1}{\delta^{2}}\sigma^{2}\vartheta s\log\frac{ep}{\vartheta s},\]
which holds with probability at least \(1-Cp^{-c}\).
**Proof of Lemma A.1** In this proof, given a matrix \(B\in\mathbb{R}^{p\times m}\) and an index set \(\mathcal{I}\subset[p]\), we use \(B_{\mathcal{I}}\) to denote the submatrix of \(B\) by extracting its rows indexed by \(\mathcal{I}\). Let \(\mathcal{J}_{1}=\mathcal{J}\cap\hat{\mathcal{J}}\), \(\mathcal{J}_{2}=\hat{\mathcal{J}}\setminus\mathcal{J}\) and \(\mathcal{J}_{3}=\mathcal{J}\setminus\hat{\mathcal{J}}\). Then \(\mathcal{J}=\mathcal{J}_{1}\cup\mathcal{J}_{3}\) and \(\hat{\mathcal{J}}=\mathcal{J}_{1}\cup\mathcal{J}_{2}\).
It can be easily shown that \(\hat{B}_{\mathcal{J}_{1}}=Y_{\mathcal{J}_{1}}/(1+\eta)\) and \(\hat{B}_{\mathcal{J}_{2}}=Y_{\mathcal{J}_{2}}/(1+\eta)\). By writing \(B_{\mathcal{J}_{1}}=Y_{\mathcal{J}_{1}}/(1+\eta)+\Delta_{\mathcal{J}_{1}}\) and \(B_{\mathcal{J}_{3}}=Y_{\mathcal{J}_{3}}/(1+\eta)+\Delta_{\mathcal{J}_{3}}\), we have
\[l(B)-l(\hat{B}) =\frac{1+\eta}{2}\|\Delta_{\mathcal{J}_{1}}\|_{F}^{2}+\frac{1}{2( 1+\eta)}\|Y_{\mathcal{J}_{2}}\|_{F}^{2}+\frac{1+\eta}{2}\|\Delta_{\mathcal{J }_{3}}\|_{F}^{2}-\frac{1}{2(1+\eta)}\|Y_{\mathcal{J}_{3}}\|_{F}^{2},\] \[\frac{1+\eta}{2}\|\hat{B}-B\|_{F}^{2} =\frac{1+\eta}{2}\|\Delta_{\mathcal{J}_{1}}\|_{F}^{2}+\frac{1}{2( 1+\eta)}\|Y_{\mathcal{J}_{2}}\|_{F}^{2}+\frac{1+\eta}{2}\|\frac{1}{1+\eta}Y_{ \mathcal{J}_{3}}+\Delta_{\mathcal{J}_{3}}\|_{F}^{2}.\]
Let \(K\leq 1\) satisfy
\[l(B)-l(\hat{B})\geq\frac{K}{2}(1+\eta)\|\hat{B}-B\|_{F}^{2},\]
which is implied by
\[\frac{1}{2(1+\eta)}\|Y_{\mathcal{J}_{2}}\|_{F}^{2}+\frac{1+\eta}{ 2}\|\Delta_{\mathcal{J}_{3}}\|_{F}^{2}-\frac{1}{2(1+\eta)}\|Y_{\mathcal{J}_{3 }}\|_{F}^{2}\] (A.7) \[\geq\frac{K}{2(1+\eta)}\|Y_{\mathcal{J}_{2}}\|_{F}^{2}+\frac{K(1+ \eta)}{2}\|\frac{1}{1+\eta}Y_{\mathcal{J}_{3}}+\Delta_{\mathcal{J}_{3}}\|_{F} ^{2}.\]
(A.7) is equivalent to
\[(1-K)\|Y_{\mathcal{J}_{2}}\|_{F}^{2}+(1+\eta)^{2}\|\Delta_{\mathcal{J}_{3}}\|_ {F}^{2}\geq(1+\eta)^{2}K\|\frac{1}{1+\eta}Y_{\mathcal{J}_{3}}+\Delta_{ \mathcal{J}_{3}}\|_{F}^{2}+\|Y_{\mathcal{J}_{3}}\|_{F}^{2}.\] (A.8)
By construction, \(\|y_{i}\|_{2}\geq\|y_{j}\|_{2}\) for any \(i\in\mathcal{J}_{2}\) and \(j\in\mathcal{J}_{3}\). Thus \(\|Y_{\mathcal{J}_{2}}\|_{F}^{2}\geq J_{2}\|Y_{\mathcal{J}_{3}}\|_{F}^{2}/J_{3}\), from which it follows that (A.8) is implied by
\[\{(1-K)(J_{2}/J_{3})-(1+K)\}\|Y_{\mathcal{J}_{3}}\|_{F}^{2}+(1-K)(1+\eta)^{2} \|\Delta_{\mathcal{J}_{3}}\|_{F}^{2}\geq 2K(1+\eta)\langle Y_{\mathcal{J}_{3}}, \Delta_{\mathcal{J}_{3}}\rangle.\]
Therefore, restricting \(K\) to \((1+K)/(1-K)\leq J_{2}/J_{3}\) or \(K\leq(J_{2}-J_{3})/(J_{2}+J_{3})\leq 1\), the largest possible \(K\) should satisfy
\[\{(1-K)(J_{2}/J_{3})-(1+K)\}\cdot(1-K)=|K|^{2}\]
or \((1-K)^{2}=J_{3}/J_{2}\), or \(K=1-\sqrt{J_{3}/J_{2}}(\leq(J_{2}-J_{3})/(J_{2}+J_{3}))\). This gives
\[\mathcal{L}=1-K=(J_{3}/J_{2})^{1/2}.\]
Note that when \(\mathcal{J}_{2}=\emptyset\), \(K\) can take \(-\infty\) for \(\mathcal{J}_{3}\neq\emptyset\) and \(0\) for \(\mathcal{J}_{3}=\emptyset\) to ensure (A.8).
Now assume \(J(\hat{B})=q\) with \(\vartheta\geq 1\). If \(\mathcal{J}_{2}\neq\emptyset\), \(\mathcal{L}\leq\sqrt{(J_{3}+J_{1})/(J_{2}+J_{1})}=\sqrt{J/\hat{J}}\leq 1/ \sqrt{\vartheta}\). Otherwise, we must have \(\mathcal{J}_{3}=\emptyset\), \(\mathcal{J}=\hat{\mathcal{J}}\) and \(\vartheta=1\). The proof is complete.
The lemma can be used in the analysis of \(\ell_{0}\)-constrained (elementwise) sparsity pursuit, as well as group variable selection (cf. Section IV-D).
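To make the operator concrete, the following sketch (a hypothetical NumPy illustration, not code accompanying the paper) implements the row-wise quantile-thresholding map \(\vec{\Theta}^{\#}(Y;q,\eta)\) as described in the proof above — keep the \(q\) rows of \(Y\) with the largest \(\ell_{2}\) norms and shrink them by \(1/(1+\eta)\) — and numerically checks inequality (A.3) against randomly drawn feasible \(B\).

```python
import numpy as np

def theta_sharp(Y, q, eta):
    """Row-wise quantile thresholding: keep the q rows of Y with the largest
    l2 norms, scale them by 1/(1+eta), and zero out the remaining rows."""
    B = np.zeros_like(Y)
    keep = np.argsort(-np.linalg.norm(Y, axis=1))[:q]
    B[keep] = Y[keep] / (1.0 + eta)
    return B

def objective(B, Y, eta):
    # l(B) = ||Y - B||_F^2 / 2 + eta * ||B||_F^2 / 2
    return 0.5 * np.linalg.norm(Y - B) ** 2 + 0.5 * eta * np.linalg.norm(B) ** 2

rng = np.random.default_rng(0)
p, m, q, s, eta = 50, 3, 10, 5, 0.3
Y = rng.normal(size=(p, m))
B_hat = theta_sharp(Y, q, eta)
J_hat = set(np.flatnonzero(np.linalg.norm(B_hat, axis=1)).tolist())

# Check (A.3) against random B with at most s nonzero rows.
for _ in range(1000):
    B = np.zeros((p, m))
    rows = rng.choice(p, size=s, replace=False)
    B[rows] = rng.normal(size=(s, m))
    J = set(rows.tolist())
    d3, d2 = len(J - J_hat), len(J_hat - J)            # |J \ J_hat|, |J_hat \ J|
    L = 1.0 if d3 == d2 == 0 else (np.inf if d2 == 0 else np.sqrt(d3 / d2))
    lhs = objective(B, Y, eta) - objective(B_hat, Y, eta)
    rhs = (1.0 - L) * (1.0 + eta) * 0.5 * np.linalg.norm(B_hat - B) ** 2
    assert lhs >= rhs - 1e-9
print("inequality (A.3) held for all sampled B")
```

The same routine applied with \(m=1\) is the elementwise quantile-thresholding operator used in the iterations discussed below.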
**Proof of Lemma A.2** Given a matrix \(A\), denote by \(\mathcal{P}_{A}\) the orthogonal projection onto its range, and \(\mathcal{P}_{A}^{\perp}\) its orthogonal complement. Throughout the proof, \(\mathcal{P}_{\mathcal{J}}\) is used as a short notation for \(\mathcal{P}_{X_{\mathcal{J}}}\) for any \(\mathcal{J}\subset[p]\). Let \(\mathcal{J}_{1}=\mathcal{J}(\beta_{1}),\mathcal{J}_{2}=\mathcal{J}(\beta_{2}),J_{1}=|\mathcal{J}_{1}|,J_{2}=|\mathcal{J}_{2}|\).
First, note that the term \(\{J(\beta_{1})\lor J(\beta_{2})\}\log[ep/\{J(\beta_{1})\lor J(\beta_{2})\}]\) is used in (A.4), instead of \(J(\beta_{1}-\beta_{2})\log\{ep/J(\beta_{1}-\beta_{2})\}\), and although \(J(\beta_{1}-\beta_{2})\leq J(\beta_{1})+J(\beta_{2})\), \(J(\beta_{1})+J(\beta_{2})\) can be larger than \(p\). To tackle the issue, we employ a decomposition trick
\[X\beta_{1}-X\beta_{2} =\mathcal{P}_{\mathcal{J}_{1}}X(\beta_{1}-\beta_{2})+\mathcal{P}_{ \mathcal{J}_{1}}^{\perp}X(\beta_{1}-\beta_{2})\] \[=\mathcal{P}_{\mathcal{J}_{1}}X(\beta_{1}-\beta_{2})+\mathcal{P}_{ \mathcal{J}_{1}}^{\perp}\mathcal{P}_{\mathcal{J}_{2}}X(\beta_{1}-\beta_{2}).\]
Let \(\Delta=\beta_{1}-\beta_{2}\). Then
\[\langle\epsilon,X\Delta\rangle=\langle\epsilon,P_{\mathcal{J}_{1}}X\Delta \rangle+\langle\epsilon,\mathcal{P}_{\mathcal{J}_{1}}^{\perp}\mathcal{P}_{ \mathcal{J}_{2}}X\Delta\rangle.\] (A.9)
Let us bound the first term on the right-hand side of (A.9). Define \(P_{o}(J)=\sigma^{2}J\log(ep/J)\) for \(0\leq J\leq p\), which is an increasing function, and \(\Gamma_{J}=\{\alpha\in\mathbb{R}^{p}:\|\alpha\|_{2}\leq 1,\alpha\in\mathcal{P}_{ \mathcal{J}}\) for some \(\mathcal{J}\subset[p],|\mathcal{J}|\leq J\}\). Then for any \(a,b>0\)
\[\langle\epsilon,\mathcal{P}_{\mathcal{J}_{1}}X\Delta\rangle- \frac{1}{a}\|\mathcal{P}_{\mathcal{J}_{1}}X\Delta\|_{2}^{2}-bLP_{o}(J_{1})\] \[\leq \|\mathcal{P}_{\mathcal{J}_{1}}X\Delta\|_{2}\langle\epsilon,\frac {\mathcal{P}_{\mathcal{J}_{1}}X\Delta}{\|\mathcal{P}_{\mathcal{J}_{1}}X\Delta \|_{2}}\rangle-2\|\mathcal{P}_{\mathcal{J}_{1}}X\Delta\|_{2}\sqrt{\frac{b}{a} LP_{o}(J_{1})}\] \[\leq \frac{1}{a}\|\mathcal{P}_{\mathcal{J}_{1}}X\Delta\|_{2}^{2}+ \frac{a}{4}\sup_{J_{1}\leq p}\sup_{\Delta\in\Gamma_{J_{1}}}\big{\{}\langle \epsilon,\Delta\rangle-2\sqrt{(b/a)LP_{o}(J_{1})}\big{\}}_{+}^{2}\] \[\equiv \frac{1}{a}\|\mathcal{P}_{\mathcal{J}_{1}}X\Delta\|_{2}^{2}+ \frac{a}{4}\sup_{J_{1}\leq p}R_{J_{1}}^{2},\]
where \(R_{J_{1}}:=\sup_{\Delta\in\Gamma_{J_{1}}}\big{\{}\langle\epsilon,\Delta \rangle-2\sqrt{(b/a)LP_{o}(J_{1})}\big{\}}_{+}\) with \(L\) a sufficiently large constant. When \(J_{1}=0\), \(R_{J_{1}}=0\). When \(J_{1}\geq 1\), for any \(t\geq 0\), if \(4b/a\) is a constant greater than \(1\), we have
\[\mathbb{P}(\sup_{1\leq J_{1}\leq p}R_{J_{1}}\geq t\sigma)\] \[\leq \sum_{J_{1}=1}^{p}\mathbb{P}\bigg{(}\sup_{\Delta\in\Gamma_{J_{1}} }\langle\epsilon,\Delta\rangle-\sqrt{LP_{o}(J_{1})}\geq t\sigma+2\sqrt{\frac{b }{a}LP_{o}(J_{1})}-\sqrt{LP_{o}(J_{1})}\bigg{)}\] \[\leq C\exp(-ct^{2})\sum_{J_{1}=1}^{p}\exp[-c(2\sqrt{b/a}-1)^{2}LP_{o} (J_{1})/\sigma^{2}]\] (A.10) \[\leq C\exp(-ct^{2})\exp(-cL\log p)\sum_{J_{1}=1}^{p}\exp(-cLJ_{1})\] \[\leq C\exp(-ct^{2})p^{-cL}.\]
The second inequality is due to Lemma 6 of [21], and we used \(J\log(ep/J)\geq(J+\log p)/2\) for any \(J\in[p]\) in the third inequality (the factor of \(1/2\) is absorbed into the constant \(c\)).
\[\mathbb{P}\Big{\{}\langle\epsilon,\mathcal{P}_{\mathcal{J}_{1}}X\Delta\rangle -\frac{2}{a}\|\mathcal{P}_{\mathcal{J}_{1}}X\Delta\|_{2}^{2}-bLP_{o}(J_{1}) \geq\frac{a}{4}t\sigma^{2}\Big{\}}\leq C\exp(-ct)p^{-Lc}.\] (A.11)
Similarly, for the second term in (A.9), we can use Lemma 7 of [13] to prove that for any \(t\geq 0\),
\[\mathbb{P}\Big{[}\langle\epsilon,\mathcal{P}_{\mathcal{J}_{1}}^{\perp} \mathcal{P}_{\mathcal{J}_{2}}X\Delta\rangle-\frac{2}{a}\|\mathcal{P}_{\mathcal{ J}_{1}}^{\perp}\mathcal{P}_{\mathcal{J}_{2}}X\Delta\|_{2}^{2}-bL\{P_{o}(J_{1})+P_{o}(J_{ 2})\}\geq\frac{a}{4}t\sigma^{2}\Big{]}\leq C\exp(-ct)p^{-Lc}.\] (A.12)
Combining (A.11), (A.12) and using the fact that \(\|\mathcal{P}_{\mathcal{J}_{1}}X\Delta\|_{2}^{2}+\|\mathcal{P}_{\mathcal{J}_{1 }}^{\perp}P_{\mathcal{J}_{2}}X\Delta\|_{2}^{2}=\|X\Delta\|_{2}^{2}\), we get for any \(a,b>0,\,4b>a\) and \(t\geq 0\),
\[\mathbb{P}\Big{[}\langle\epsilon,X\Delta\rangle-\frac{4}{a}\|X\Delta\|_{2}^{ 2}-3bL\{P_{o}(J_{1})\lor P_{o}(J_{2})\}\geq\frac{a}{2}t\sigma^{2}\Big{]}\leq C \exp(-ct)p^{-Lc}.\] (A.13)
Finally, using the increasing property of \(P_{o}(J)\) for \(J\in[0,p]\), we have \(P_{o}(J_{1})\lor P_{o}(J_{2})\leq\sigma^{2}(J_{1}\lor J_{2})\log\{ep/(J_{1}\lor J_{2})\}\). A reparameterization of (A.13) gives the conclusion.
### _Proof of Theorem 2_
From the proof of Theorem 1, we get with probability \(1-Cp^{-c}\),
\[\|X\hat{\beta}-X\beta^{*}\|_{2}^{2}+\frac{\eta_{0}(1-b)}{2}\|\hat {\beta}-\beta^{*}\|_{2}^{2} \leq \frac{\rho-(\sqrt{\vartheta}-1)\eta_{0}}{2\sqrt{\vartheta}}\|\hat {\beta}-\beta^{*}\|_{2}^{2}+\frac{\eta_{0}}{2b}\|\beta^{*}\|_{2}^{2}+\] \[\frac{1}{2a}\|X\hat{\beta}-X\beta^{*}\|_{2}^{2}+\frac{a}{2}A\sigma ^{2}\vartheta s\log\frac{ep}{\vartheta s},\]
which gives
\[\|X\hat{\beta}-X\beta^{*}\|_{2}^{2}-\frac{\eta_{0}b}{2}\|\hat{\beta}- \beta^{*}\|_{2}^{2} \leq\frac{\rho-(2\sqrt{\vartheta}-1)\eta_{0}}{2\sqrt{\vartheta}}\| \hat{\beta}-\beta^{*}\|_{2}^{2}+\frac{\eta_{0}}{2b}\|\beta^{*}\|_{2}^{2}+\] \[\frac{\rho_{+}((1+\vartheta)s)}{2a}\|\hat{\beta}-\beta^{*}\|_{2} ^{2}+\frac{a}{2}A\sigma^{2}\vartheta s\log\frac{ep}{\vartheta s}.\]
Under the regularity condition (9), choosing \(a=2/\delta\) and \(b=\delta\rho_{+}((1+\vartheta)s)/(4\eta_{0})\) gives (10). (The result applies to \(\eta_{0}=0\) as well.)
To show the second result, note that from Theorem 1, the fixed-point solution \(\hat{\beta}\) must satisfy \(\hat{\beta}=\Theta^{\#}\{\hat{\beta}-X^{T}\nabla l_{0}(X\hat{\beta};y)/\rho;q, \eta_{0}/\rho\}\), which means
\[\left\|\hat{\beta}(1+\eta_{0}/\rho)-\hat{\beta}+\frac{1}{\rho}X^{T}\nabla l_{0}(X\hat{\beta})\right\|_{\infty}\leq(1+\eta_{0}/\rho)\min_{j\in\hat{\mathcal{J}}}|\hat{\beta}_{j}|\] \[\implies \left\|\eta_{0}\hat{\beta}+X^{T}(\nabla l_{0}(X\hat{\beta})-\nabla l_{0}(X\beta^{*}))-X^{T}\epsilon\right\|_{\infty}\leq(\rho+\eta_{0})\min_{j\in\hat{\mathcal{J}}}|\hat{\beta}_{j}|\] \[\implies \left\|X^{T}(\nabla l_{0}(X\hat{\beta})-\nabla l_{0}(X\beta^{*}))+\eta_{0}(\hat{\beta}-\beta^{*})\right\|_{\infty}\leq\|X^{T}\epsilon\|_{\infty}+\eta_{0}\|\beta^{*}\|_{\infty}+(\rho+\eta_{0})\min_{j\in\hat{\mathcal{J}}}|\hat{\beta}_{j}|.\]
Next, we introduce a lemma.
**Lemma A.3**.: _Let \(\tilde{\beta},\beta\in\mathbb{R}^{p}\) satisfy \(\|\tilde{\beta}\|_{0}=q>s\geq\|\beta\|_{0}\), and for short, denote \(\mathcal{J}(\tilde{\beta})\) and \(\mathcal{J}(\beta)\) by \(\tilde{\mathcal{J}}\) and \(\mathcal{J}\), respectively. Then_
\[\min_{j\in\tilde{\mathcal{J}}}|\tilde{\beta}_{j}|\leq\min_{j\in \tilde{\mathcal{J}}\setminus\mathcal{J}}|\tilde{\beta}_{j}|\leq\frac{\|( \tilde{\beta}-\beta)_{\tilde{\mathcal{J}}\setminus\mathcal{J}}\|_{2}}{\sqrt{| \tilde{\mathcal{J}}\setminus\mathcal{J}|}}\leq\frac{\|(\tilde{\beta}-\beta)_{ \tilde{\mathcal{J}}\setminus\mathcal{J}}\|_{2}}{\sqrt{q-s}}\leq\frac{\|\tilde {\beta}-\beta\|_{2}}{\sqrt{q-s}}\] (A.14) \[\min_{j\in\tilde{\mathcal{J}}}|\tilde{\beta}_{j}|\leq\max_{j\in \tilde{\mathcal{J}}\setminus\mathcal{J}}|\tilde{\beta}_{j}|=\|(\tilde{\beta}- \beta)_{\tilde{\mathcal{J}}\setminus\mathcal{J}}\|_{\infty}\leq\|\tilde{ \beta}-\beta\|_{\infty}.\] (A.15)
The proof is simple and omitted. Now, combining the regularity condition (11) and (A.14) or (A.15) gives the desired result.
### _Proof of Theorem 3_
By definition, we have
\[\rho_{+}(2q)=\sup_{I\subset[p]:|I|=2q}\lambda_{\max}(X_{I}^{T}X_{I}),\]
and under \(q+s\leq n,\)
\[\rho_{-}(q+s)=\inf_{I\subset[p]:|I|=q+s}\lambda_{\min}(X_{I}^{T}X_{I}).\]
By Theorem 6.1 of [23], we have
\[\mathbb{P}\left\{\sqrt{\frac{\lambda_{\max}(X_{I}^{T}X_{I})}{n}}\geq(1+c_{0}) \sqrt{\lambda_{\max}(\Sigma_{I})}+\sqrt{\frac{\operatorname{tr}(\Sigma_{I})}{ n}}\right\}\leq\exp(-nc_{0}^{2}/2),\ \ \forall I:|I|=2q\]
and
\[\mathbb{P}\left\{\sqrt{\frac{\lambda_{\min}(X_{I}^{T}X_{I})}{n}}\leq(1-c_{0}) \sqrt{\lambda_{\min}(\Sigma_{I})}-\sqrt{\frac{\operatorname{tr}(\Sigma_{I})}{ n}}\right\}\leq\exp(-nc_{0}^{2}/2),\ \ \forall I:|I|=q+s\]
for all \(c_{0}>0\). Applying the union bound gives
\[\mathbb{P}\left\{\sqrt{\frac{\rho_{+}(2q)}{n}}\geq(1+c_{0})\sqrt{\lambda_{ \max}^{(2q)}}+\sqrt{\frac{2q}{n}}\right\}\leq\binom{p}{2q}\exp(-nc_{0}^{2}/2).\] (A.16)
Let \(nc^{2}=nc_{0}^{2}-\log\binom{p}{2q}\). Then using \(\log\binom{p}{2q}\leq 2q\log{(ep/q)}\), \(c_{0}\leq c+\sqrt{2q\log(ep/q)/n}\). Therefore for any \(c>0\),
\[\mathbb{P}\left\{\sqrt{\frac{\rho_{+}(2q)}{n}}\geq(1+c)\sqrt{\lambda_{\max}^{( 2q)}}+\sqrt{\frac{2q\log(ep/q)}{n}}\sqrt{\lambda_{\max}^{(2q)}}+\sqrt{\frac{2q} {n}}\right\}\leq\exp(-nc^{2}/2).\] (A.17)
Similarly,
\[\mathbb{P}\left\{\sqrt{\frac{\rho_{-}(q+s)}{n}}\leq(1-c)\sqrt{\lambda_{\min}^{(q+ s)}}-\sqrt{\frac{(q+s)\log(ep/q)}{n}}\sqrt{\lambda_{\min}^{(q+s)}}-\sqrt{\frac{q+s}{n }}\right\}\leq\exp(-nc^{2}/2).\]
Let \(c\in(0,1)\) and assume \(n\geq\{2(q+s)/(1-c)^{2}\}\{1/\lambda_{\min}^{(q+s)}+\log(ep/q)\}\). Then
\[\frac{\rho_{+}(2q)}{\rho_{-}(q+s)}\leq\left\{\frac{(1+c)\sqrt{\lambda_{\max}^{(2q)}}+\sqrt{\{2\lambda_{\max}^{(2q)}q\log(ep/q)\}/n}+\sqrt{2q/n}}{(1-c)\sqrt{\lambda_{\min}^{(q+s)}}-\sqrt{\{\lambda_{\min}^{(q+s)}(q+s)\log(ep/q)\}/n}-\sqrt{(q+s)/n}}\right\}^{2}\]
holds with probability at least \(1-2\exp(-nc^{2}/2)\).
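Since \(\rho_{+}(2q)\) and \(\rho_{-}(q+s)\) involve a supremum and an infimum over all supports of the given sizes, computing them exactly is combinatorial. The snippet below (an illustrative NumPy sketch, not part of the paper; the identity-covariance Gaussian design and the number of sampled supports are arbitrary choices) approximates the two quantities by sampling random supports. The sampled maximum only lower-bounds \(\rho_{+}(2q)\) and the sampled minimum only upper-bounds \(\rho_{-}(q+s)\), so the printed ratio is an indication rather than a certificate.

```python
import numpy as np

def sampled_restricted_eigs(X, k, n_draws=200, seed=0):
    """Monte Carlo proxies over size-k supports: the largest sampled
    lambda_max(X_I^T X_I) and the smallest sampled lambda_min(X_I^T X_I)."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    hi, lo = -np.inf, np.inf
    for _ in range(n_draws):
        I = rng.choice(p, size=k, replace=False)
        w = np.linalg.eigvalsh(X[:, I].T @ X[:, I])
        hi, lo = max(hi, w[-1]), min(lo, w[0])
    return hi, lo

rng = np.random.default_rng(1)
n, p, s, q = 200, 500, 5, 20                  # q = theta * s with theta = 4
X = rng.normal(size=(n, p)) / np.sqrt(n)      # scaled so X_I^T X_I concentrates around the identity
rho_plus_lb, _ = sampled_restricted_eigs(X, 2 * q)
_, rho_minus_ub = sampled_restricted_eigs(X, q + s)
print(f"rho_+(2q)  >= {rho_plus_lb:.3f}  (sampled lower bound)")
print(f"rho_-(q+s) <= {rho_minus_ub:.3f}  (sampled upper bound)")
print(f"indicative ratio: {rho_plus_lb / rho_minus_ub:.3f}")
```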
### _Proof of Theorem 5_
Let \(E:=\sigma^{2}P_{o}(q)+\sigma^{2}\). Similar to the proof of Theorem 1, from the construction of \(g\) and Lemma A.1, we have
\[\rho(1-1/\sqrt{\vartheta})(1+\eta_{0}/\rho)\mathbf{D}_{2}(\beta^{*},\hat{ \beta})+g(\hat{\beta},\hat{\beta})\leq g(\beta^{*},\hat{\beta}),\]
and thus
\[2\bar{\mathbf{\Delta}}_{l_{0}}(X\hat{\beta},X\beta^{*})+\frac{\eta_{0}}{2}\| \hat{\beta}\|_{2}^{2}\leq\frac{\rho-(\sqrt{\vartheta}-1)\eta_{0}}{\sqrt{ \vartheta}}\mathbf{D}_{2}(\hat{\beta},\beta^{*})+\frac{\eta_{0}}{2}\|\beta^{* }\|_{2}^{2}+\langle\epsilon,X\hat{\beta}-X\beta^{*}\rangle.\] (A.18)
Applying Lemma A.2 gives
\[\langle\epsilon,X\hat{\beta}-X\beta^{*}\rangle\leq\delta\mathbf{D}_{2}(X\hat{ \beta},X\beta^{*})+\frac{1}{\delta}A\sigma^{2}P_{o}(q)+R\] (A.19)
for any \(\delta>0\), where \(R:=\sup_{\beta_{1},\beta_{2}}\{\langle\epsilon,X\beta_{1}-X\beta_{2}\rangle- \delta\mathbf{D}_{2}(X\beta_{1},X\beta_{2})-A\sigma^{2}P_{o}(q)/\delta\}_{+}\) and
\[\mathbb{P}(\delta R>\sigma^{2}t)\leq C\exp(-ct)p^{-cA},\]
where \(A,C,c>0\) are some constants. Therefore,
\[\mathbb{E}\langle\epsilon,X\hat{\beta}-X\beta^{*}\rangle\leq\,\mathbb{E}\{ \delta\mathbf{D}_{2}(X\hat{\beta},X\beta^{*})\}+\frac{C}{\delta}(\sigma^{2}P _{o}(q)+\sigma^{2}).\] (A.20)
Combining (A.18) and (A.20) gives
\[\mathbb{E}\{(2\bar{\mathbf{\Delta}}_{l_{0}}-\delta\mathbf{D}_{2}) (X\hat{\beta},X\beta^{*})+\eta_{0}\mathbf{D}_{2}(\hat{\beta},\beta^{*})\}\] (A.21) \[\leq \,\mathbb{E}\Big{\{}\frac{\rho-(\sqrt{\vartheta}-1)\eta_{0}}{ \sqrt{\vartheta}}\ \mathbf{D}_{2}(\hat{\beta},\beta^{*})+\eta_{0}\langle-\beta^{*},\hat{\beta}- \beta^{*}\rangle\Big{\}}+\frac{C}{\delta}E,\]
and so
\[\mathbb{E}\Big{[}(2\bar{\mathbf{\Delta}}_{l_{0}}-\delta\mathbf{D}_{2})(X\hat{ \beta},X\beta^{*})-\frac{\rho-\{(2-\varepsilon)\sqrt{\vartheta}-1\}\eta_{0}}{ \sqrt{\vartheta}}\ \mathbf{D}_{2}(\hat{\beta},\beta^{*})\Big{]}\leq\frac{C}{\delta}E+\frac{\eta_ {0}}{2\varepsilon}\|\beta^{*}\|_{2}^{2}\] (A.22)
for any \(\varepsilon,\delta>0\).
Next, from \(l_{0}(X\hat{\beta})+\eta_{0}\|\hat{\beta}\|_{2}^{2}/2\leq l_{0}(X\beta^{(0)})+ \eta_{0}\|\beta^{(0)}\|_{2}^{2}/2\), we have
\[\mathbf{\Delta}_{l_{0}}(X\hat{\beta},X\beta^{*})+\eta_{0}\mathbf{ D}_{2}(\hat{\beta},\beta^{*})\] (A.23) \[\leq \,\mathbf{\Delta}_{l_{0}}(X\beta^{(0)},X\beta^{*})+\eta_{0} \mathbf{D}_{2}(\beta^{(0)},\beta^{*})+\eta_{0}\langle-\beta^{*},\hat{\beta}- \beta^{*}\rangle-\eta_{0}\langle-\beta^{*},\beta^{(0)}-\beta^{*}\rangle\] \[+\langle\epsilon,X\hat{\beta}-X\beta^{*}\rangle-\langle\epsilon,X \beta^{(0)}-X\beta^{*}\rangle.\]
Therefore, for any \(\delta^{\prime},\delta^{\prime\prime},\varepsilon^{\prime}>0\)
\[\mathbb{E}\{(\mathbf{\Delta}_{l_{0}}-\delta^{\prime}\mathbf{D}_{2}) (X\hat{\beta},X\beta^{*})+\eta_{0}\mathbf{D}_{2}(\hat{\beta},\beta^{*})\}\] \[\leq \,\mathbb{E}\big{\{}(\mathbf{\Delta}_{l_{0}}+\delta^{\prime \prime}\mathbf{D}_{2})(X\beta^{(0)},X\beta^{*})+\eta_{0}\mathbf{D}_{2}(\beta^{(0 )},\beta^{*})+\frac{\eta_{0}}{2\varepsilon}\|\beta^{*}\|_{2}^{2}+\eta_{0} \varepsilon\mathbf{D}_{2}(\hat{\beta},\beta^{*})\] \[+\frac{\eta_{0}}{2\varepsilon^{\prime}}\|\beta^{*}\|_{2}^{2}+\eta_ {0}\varepsilon^{\prime}\mathbf{D}_{2}(\beta^{(0)},\beta^{*})\big{\}}+CE\big{(} \frac{1}{\delta^{\prime}}+\frac{1}{\delta^{\prime\prime}}\big{)}.\]
By the assumption of the starting point \(\mathbb{E}\{{\bf D}_{2}(\beta^{(0)},\beta^{*})\}\leq CME/n\), we have
\[\mathbb{E}\{{\bf D}_{2}(X\beta^{(0)},X\beta^{*})\}\leq C\rho_{+}(q+s)ME/n,\, \mathbb{E}\{{\bf\Delta}_{l_{0}}(X\beta^{(0)},X\beta^{*})\}\leq C\rho_{+}^{l}(q,s )ME/n.\]
Taking \(1/\delta^{\prime\prime}=\sqrt{\rho_{+}(q+s)M/n}\), we obtain
\[\mathbb{E}\{({\bf\Delta}_{l_{0}}-\delta^{\prime}{\bf D}_{2})(X \hat{\beta},X\beta^{*})+\eta_{0}(1-\varepsilon){\bf D}_{2}(\hat{\beta},\beta^ {*})\}\] \[\leq CE\big{(}\frac{1}{\delta^{\prime}}+\sqrt{\frac{\rho_{+}(q+s)M}{n} }+\frac{\rho_{+}^{l}(q,s)}{n}M+\frac{\eta_{0}(1+\varepsilon^{\prime})}{n}M \big{)}+\eta_{0}\big{(}\frac{1}{\varepsilon}+\frac{1}{\varepsilon^{\prime}} \big{)}\frac{\|\beta^{*}\|_{2}^{2}}{2}.\]
Let \(Q_{0}:=\sqrt{\rho_{+}(q+s)M/n}+\rho_{+}^{l}(q,s)M/n+\eta_{0}(1+\varepsilon^{ \prime})M/n\). Then
\[CE\big{(}\frac{1}{\delta^{\prime}}+Q_{0}\big{)}\leq\frac{C}{c_{1}\wedge c_{2}} E\big{(}\frac{c_{1}}{\delta^{\prime}}+c_{2}Q_{0}\big{)}\]
for any \(c_{1},c_{2}>0\). Taking \(\delta^{\prime}:\delta^{2}=\delta^{\prime 2}/(c_{1}+c_{2}Q_{0}\delta^{\prime})\) and \(\varepsilon^{\prime}:1/\varepsilon+1/\varepsilon^{\prime}=(1/\delta^{\prime}+ Q_{0})c_{3}\delta/\varepsilon\) for some large constant \(c_{3}>0\), we get
\[\mathbb{E}\{(\frac{\delta}{\delta^{\prime}}{\bf\Delta}_{l_{0}}-\delta{\bf D}_{ 2})(X\hat{\beta},X\beta^{*})+\frac{\delta}{\delta^{\prime}}\eta_{0}(1- \varepsilon){\bf D}_{2}(\hat{\beta},\beta^{*})\}\leq\frac{CE}{c_{1}\wedge c_{ 2}}\frac{1}{\delta}+c_{3}\frac{\eta_{0}}{2\varepsilon}\|\beta^{*}\|_{2}^{2}.\] (A.24)
Multiplying (A.22) by \((1-1/M)\) and (A.24) by \(1/M\) and adding the two inequalities yield
\[\mathbb{E}\Big{[}(1-\frac{1}{M})\big{\{}2\bar{\bf\Delta}_{l_{0}} (X\hat{\beta},X\beta^{*})-\frac{\rho-\{(2-\varepsilon)\sqrt{\vartheta}-1\} \eta_{0}}{\sqrt{\vartheta}}{\bf D}_{2}(\hat{\beta},\beta^{*})\big{\}}\] (A.25) \[+(\frac{\delta}{M\delta^{\prime}}{\bf\Delta}_{l_{0}}-\delta{\bf D }_{2})(X\hat{\beta},X\beta^{*})+\frac{\delta}{M\delta^{\prime}}\eta_{0}(1- \varepsilon){\bf D}_{2}(\hat{\beta},\beta^{*})\Big{]}\] \[\leq C\big{(}\frac{E}{\delta}+\frac{\eta_{0}}{\varepsilon}\|\beta^{*} \|_{2}^{2}\big{)}.\]
Simple calculation shows
\[\frac{\delta^{\prime}}{\delta}=\frac{c_{2}Q_{0}\delta+\sqrt{c_{2}^{2}Q_{0}^{2} \delta^{2}+4c_{1}}}{2}\leq\frac{\sqrt{2}+1}{2}\{c_{2}Q_{0}\delta\lor\sqrt{4c_ {1}}\}\leq C(Q_{0}\delta\lor 1).\]
It follows that
\[\varepsilon^{\prime}\leq\frac{\varepsilon}{C(Q_{0}\delta\lor 1)+\delta Q_{0}-1} \leq C\frac{\varepsilon}{Q_{0}\delta\lor 1}\leq C\varepsilon\]
for some large constant \(C\), and so \(Q_{0}\lesssim Q\). Under the condition that
\[K\sigma^{2}P_{o}(\vartheta s)+\Big{\{}2(1-\frac{1}{M})\bar{\bf \Delta}_{l_{0}}+\frac{C}{M(Q\delta\lor 1)}{\bf\Delta}_{l_{0}}-2\delta{\bf D}_{2} \Big{\}}(X\hat{\beta},X\beta^{*})\] (A.26) \[\geq \frac{1-1/M}{\sqrt{\vartheta}}\big{[}\rho-\{(2-\varepsilon)\sqrt{ \vartheta}-1\}\eta_{0}\big{]}{\bf D}_{2}(\hat{\beta},\beta^{*})-\frac{C}{M(Q \delta\lor 1)}\eta_{0}(1-\varepsilon){\bf D}_{2}(\hat{\beta},\beta^{*}),\]
(A.25) yields
\[\mathbb{E}[{\bf D}_{2}(X\hat{\beta},X\beta^{*})]\leq \frac{K}{\delta}\sigma^{2}P_{o}(\vartheta s)+\frac{CE}{\delta^{2} }+C\frac{\eta_{0}}{\varepsilon}\|\beta^{*}\|_{2}^{2}\] (A.27) \[\lesssim \frac{K\delta\lor 1}{\delta^{2}}E+\frac{\eta_{0}}{\delta\varepsilon}\| \beta^{*}\|_{2}^{2}.\]
With a reparameterization, the regularity condition (30) implies (A.26).
### _Proof of Theorem 6_
For convenience, denote \(\mathbf{D}_{2}(X\beta,X\beta^{\prime})\) by \(\mathbf{D}_{2,X}(\beta,\beta^{\prime})\). From Lemma A.1, we have
\[g(\beta^{*},\beta^{(t)})-g(\beta^{(t+1)};\beta^{(t)})\geq\rho_{t+1}(1-\mathcal{ L}_{t+1})(1+\bar{\eta}_{t+1})\mathbf{D}_{2}(\beta^{(t+1)},\beta^{*}),\] (A.28)
where \(\mathcal{L}_{t+1}=\mathcal{L}(\mathcal{J}(\beta^{*}),\mathcal{J}(\beta^{(t+1) }))\leq 1/\sqrt{\vartheta_{t+1}}\). (Recall \(\vartheta_{t+1}=q_{t+1}/s>1\), and \(s\geq\|\beta^{*}\|_{0}\).)
Substituting \(g(\beta,\beta^{(t)})=l(\beta)+\eta_{t+1}\mathbf{D}_{2}(\beta,0)+(\rho_{t+1} \mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta,\beta^{(t)})\) and \(l(\beta^{*})-l(\beta^{(t+1)})=\langle\epsilon,X\beta^{(t+1)}-X\beta^{*}\rangle- \mathbf{\hat{\Delta}}_{l}\;(\beta^{*},\beta^{(t+1)})\) into (A.28) gives
\[\begin{split}&\{\rho_{t+1}(1-\mathcal{L}_{t+1})(1+\bar{\eta}_{t+1} )\mathbf{D}_{2}+\mathbf{\hat{\Delta}}_{l}\}(\beta^{*},\beta^{(t+1)})+\eta_{t +1}\mathbf{D}_{2}(\beta^{*},\beta^{(t+1)})\\ &+(\rho_{t+1}\mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta^{(t+1)}, \beta^{(t)})\\ \leq&(\rho_{t+1}\mathbf{D}_{2}-\mathbf{\Delta}_{l}) (\beta^{*},\beta^{(t)})+\langle\epsilon,X\beta^{(t+1)}-X\beta^{*}\rangle+\eta _{t+1}\langle-\beta^{*},\beta^{(t+1)}-\beta^{*}\rangle.\end{split}\] (A.29)
From Lemma A.2, with probability at least \(1-Cp^{-cA}\)
\[\langle\epsilon,X\beta^{(t+1)}-X\beta^{*}\rangle\leq\delta_{t+1}\mathbf{D}_{2,X}(\beta^{*},\beta^{(t+1)})+\delta_{t+1}^{-1}A\sigma^{2}P_{o}(q_{t+1}),\text{ for all }t\geq 0\] (A.30)
given any \(\delta_{t+1}>0\), where \(A\) is a constant. Moreover, for any \(\varepsilon_{t+1}>0\),
\[\langle-\beta^{*},\beta^{(t+1)}-\beta^{*}\rangle\leq\varepsilon_{t+1}\mathbf{ D}_{2}(\beta^{*},\beta^{(t+1)})+\varepsilon_{t+1}^{-1}\mathbf{D}_{2}(\beta^{*},0).\] (A.31)
Plugging these bounds into (A.29) gives
\[\big{\{}\rho_{t+1}(1-\mathcal{L}_{t+1})(1+\bar{\eta}_{t+1}) \mathbf{D}_{2}+\mathbf{\hat{\Delta}}_{l}\;+(1-\varepsilon_{t+1})\eta_{t+1} \mathbf{D}_{2}-\delta_{t+1}\mathbf{D}_{2,X}\big{\}}(\beta^{*},\beta^{(t+1)})\] \[\quad+(\rho_{t+1}\mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta^{(t+1) },\beta^{(t)})\] \[\leq \,(\rho_{t+1}\mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta^{*},\beta ^{(t)})+\delta_{t+1}^{-1}A\sigma^{2}P_{o}(q_{t+1})+\varepsilon_{t+1}^{-1} \eta_{t+1}\mathbf{D}_{2}(\beta^{*},0).\] (A.32)
By the definition of (generalized) isometry numbers and using \(\mathcal{L}_{t+1}\leq 1/\sqrt{\vartheta_{t+1}}\), we have
\[\Big{\{}\rho_{t+1}\big{(}1-\frac{1}{\sqrt{\vartheta_{t+1}}}\big{)} (1+\bar{\eta}_{t+1})+\rho_{-}^{l}(q_{t+1},s)+(1-\varepsilon_{t+1})\eta_{t+1}- \delta_{t+1}\rho_{+}(q_{t+1}+s)\Big{\}}\mathbf{D}_{2}(\beta^{*},\beta^{(t+1)})\] \[\quad+(\rho_{t+1}\mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta^{(t+1) },\beta^{(t)})\] \[\leq \,\big{\{}\rho_{t+1}-\rho_{-}^{l}(s,q_{t+1})\big{\}}\mathbf{D}_{2 }(\beta^{*},\beta^{(t)})+\delta_{t+1}^{-1}A\sigma^{2}P_{o}(q_{t+1})+ \varepsilon_{t+1}^{-1}\eta_{t+1}\mathbf{D}_{2}(\beta^{*},0).\] (A.33)
Let \(\varepsilon_{0}\) be any number \(\in(0,1]\). Taking \(\varepsilon_{t+1}=\varepsilon_{0}/2,\delta_{t+1}=(\varepsilon_{0}\rho_{-}^{l}(q _{t+1},s)+\varepsilon_{0}\eta_{t+1}/2)/\rho_{+}(q_{t+1}+s)\), we have
\[(1-1/\sqrt{\vartheta_{t+1}})(1+\bar{\eta}_{t+1})\rho_{t+1}+\rho_{- }^{l}(q_{t+1},s)+(1-\varepsilon_{t+1})\eta_{t+1}-\delta_{t+1}\rho_{+}(q_{t+1}+s)\] \[= \,(1-1/\sqrt{\vartheta_{t+1}})(1+\bar{\eta}_{t+1})\rho_{t+1}+(1- \varepsilon_{0})\rho_{-}^{l}(q_{t+1},s)+(1-\varepsilon_{0})\eta_{t+1}.\]
Let
\[E_{t+1}= \,\frac{1}{\rho_{t+1}-\rho_{-}^{l}(s,q_{t+1})}\Big{\{}\frac{A\sigma ^{2}}{\varepsilon_{0}}\frac{\rho_{+}(q_{t+1}+s)}{\rho_{-}^{l}(q_{t+1},s)+\eta_{t+ 1}/2}P_{o}(q_{t+1})+\frac{\eta_{t+1}}{\varepsilon_{0}}\|\beta^{*}\|_{2}^{2} \Big{\}}\] \[\leq \,\frac{A\sigma^{2}}{\varepsilon_{0}}\frac{\rho_{+}(q_{t+1}+s)}{( \rho_{-}^{l}(q_{t+1},s)/\rho_{t+1}\vee\bar{\eta}_{t+1})(1-\rho_{-}^{l}(s,q_{t+ 1})/\rho_{t+1})\rho_{t+1}^{2}}P_{o}(q_{t+1})\] \[\quad+\frac{\bar{\eta}_{t+1}}{\varepsilon_{0}(1-\rho_{-}^{l}(s,q_{ t+1})/\rho_{t+1})}\|\beta^{*}\|_{2}^{2}\]
for any \(t\geq 0\). By the definitions of \(\kappa_{t},h_{t}\), we can obtain
\[\mathbf{D}_{2}(\beta^{*},\beta^{(t+1)})+h_{t+1}(\rho_{t+1}\mathbf{D}_{2}- \mathbf{\Delta}_{l})(\beta^{(t+1)},\beta^{(t)})\leq\kappa_{t+1}\mathbf{D}_{2}( \beta^{*},\beta^{(t)})+\kappa_{t+1}E_{t+1}.\] (A.34)
Applying a recursive argument with \(t=T,\ldots,0\) gives
\[\mathbf{D}_{2}(\beta^{*},\beta^{(T+1)})+\sum_{t=0}^{T}\big{(}\Pi_{ \tau=t}^{T}h_{\tau+1}\big{)}(\rho_{t+1}\mathbf{D}_{2}-\mathbf{\Delta}_{l})( \beta^{(t+1)},\beta^{(t)})\] \[\leq \,\bigg{(}\Pi_{t=0}^{T}\kappa_{t+1}\bigg{)}\mathbf{D}_{2}(\beta^ {*},\beta^{(0)})+\sum_{t=0}^{T}\Big{(}\Pi_{\tau=t}^{T}\kappa_{\tau+1}\Big{)}E_{t +1},\]
and thus the bound (35) follows.
To ensure
\[\frac{\rho_{t}-\rho_{-}^{l}(s,q_{t})}{(1-1/\sqrt{\vartheta_{t}})(1+\bar{\eta}_{t} )\rho_{t}+(1-\varepsilon)(\rho_{-}^{l}(q_{t},s)+\eta_{t})}\leq\frac{1}{1+\alpha}\] (A.35)
for some \(\alpha>0\), we need
\[\bar{\eta}_{t}\geq\frac{(\alpha+1/\sqrt{\vartheta_{t}})-(2+\alpha-\varepsilon )\{\rho_{-}^{l}(s,q_{t})\wedge\rho_{-}^{l}(q_{t},s)\}/\rho_{t}}{2-1/\sqrt{ \vartheta_{t}}-\varepsilon}.\] (A.36)
The result in the corollary follows by taking \(\alpha=\varepsilon\) and noticing that \(\rho_{t+1}\geq\rho_{+}^{l}(q_{t+1},q_{t})\) implies \((\rho_{t+1}\mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta^{(t+1)},\beta^{(t)})\geq 0\).
### _A recursive coordinatewise error bound under restricted isometry_
Recall the general procedure defined in (32),
\[\beta^{(t+1)}=\Theta^{\#}\Big{\{}\beta^{(t)}-\rho_{t+1}^{-1}X^{T}\nabla l_{0} (X\beta^{(t)};y);q_{t+1},\bar{\eta}_{t+1}\Big{\}},\text{ with }\bar{\eta}_{t+1}=\eta_{t+1}/\rho_{t+1}.\] (A.37)
Following a similar approach to Theorem 2 for the set of fixed points, an error bound for \(\beta^{(t+1)}\) in the \(\infty\)-norm can be established under appropriate regularity conditions.
To facilitate the proof, we first recall the definition of \(\rho_{-}^{l}(s_{1},s_{2})\) as given in (21). In particular, in the regression setup, \(\rho_{-}(s_{1},s_{2})\) satisfies
\[\|X(\beta_{1}-\beta_{2})\|_{2}^{2}\geq\rho_{-}(s_{1},s_{2})\|\beta _{1}-\beta_{2}\|_{2}^{2},\forall\beta_{i}:\|\beta_{i}\|_{0}\leq s_{i}\] \[\Longleftrightarrow (\beta_{1}-\beta_{2})^{T}(\rho I-X^{T}X)(\beta_{1}-\beta_{2})\leq (\rho-\rho_{-}(s_{1},s_{2}))\|\beta_{1}-\beta_{2}\|_{2}^{2},\forall\beta_{i}: \|\beta_{i}\|_{0}\leq s_{i}.\]
The presence of positive restricted eigenvalues in the Gram matrix \(X^{T}X\) implies the existence of proper upper bounds on the restricted eigenvalues of the matrix \(\rho I-X^{T}X\). So when considering the \(\infty\)-norm error for \(\beta^{(t+1)}\), it appears more manageable to work with the matrix \(\rho I-X^{T}X\) than with \(X^{T}X\).
Motivated by this, given \(l\), \(X\), and \(s_{i}\), we introduce a generalized restricted isometry number \(\upsilon(s_{1},s_{2})\) that satisfies
\[\|\rho(\beta_{1}-\beta_{2})-X^{T}\{\nabla l_{0}(X\beta_{1})-\nabla l_{0}(X \beta_{2})\}\|_{\infty}\leq(\rho-\upsilon)\|\beta_{1}-\beta_{2}\|_{\infty}, \text{ for all }\beta_{i}:\|\beta_{i}\|_{0}\leq s_{i},\rho\geq\upsilon.\] (A.38)
In the case where \(l_{0}(X\beta)=\|X\beta-y\|_{2}^{2}/2\), we have \(\nabla l_{0}(X\beta_{1})-\nabla l_{0}(X\beta_{2})=X(\beta_{1}-\beta_{2})\) and \(\rho(\beta_{1}-\beta_{2})-X^{T}(\nabla l_{0}(X\beta_{1})-\nabla l_{0}(X\beta_ {2}))=(\rho I-X^{T}X)(\beta_{1}-\beta_{2})\). Therefore, (A.38) can be understood as a variant of low coherence for the design matrix in the context of the \(\infty\)-norm.
**Theorem A.2**.: _For the sequence of iterates generated by procedure (A.37) and \(\upsilon_{t}\) denoting \(\upsilon(q_{t},s)\) as defined by (A.38), the following recursive coordinatewise error bound on \(\beta^{(t+1)}\) holds for any \(t\geq 0\):_
\[\|\beta^{(t+1)}-\beta^{*}\|_{\infty}\leq(1-\frac{\upsilon_{t}+\eta_{t+1}}{\rho _{t+1}+\eta_{t+1}})\|\beta^{(t)}-\beta^{*}\|_{\infty}+\frac{\|X^{T}\epsilon\| _{\infty}}{\rho_{t+1}+\eta_{t+1}}+\frac{\eta_{t+1}\|\beta^{*}\|_{\infty}}{\rho _{t+1}+\eta_{t+1}}+\frac{1}{\sqrt{\vartheta_{t+1}-1}}\frac{\|\beta^{(t+1)}- \beta^{*}\|_{2}}{\sqrt{s}}.\]
Proof.: The proof follows similar lines of the proof of Theorem 2. First, by the definition of \(\Theta^{\#}\),
\[\Big{\|}(1+\bar{\eta}_{t+1})\beta^{(t+1)}-\beta^{(t)}+\frac{1}{\rho_{t+1}}X^{T}\nabla l_{0}(X\beta^{(t)})\Big{\|}_{\infty}\leq(1+\bar{\eta}_{t+1})\min_{j\in\mathcal{J}(\beta^{(t+1)})}|\beta_{j}^{(t+1)}|\]
and so
\[\|(\rho_{t+1}+\eta_{t+1})\beta^{(t+1)}-\rho_{t+1}\beta^{(t)}+X^{T}(\nabla l_{0}(X\beta^{(t)})-\nabla l_{0}(X\beta^{*}))-X^{T}\epsilon\|_{\infty}\] \[\leq(\rho_{t+1}+\eta_{t+1})\min_{j\in\mathcal{J}(\beta^{(t+1)})}|\beta_{j}^{(t+1)}|.\]
Writing
\[(\rho_{t+1}+\eta_{t+1})\beta^{(t+1)}-\rho_{t+1}\beta^{(t)}=(\rho_{t+1}+\eta_{t+ 1})(\beta^{(t+1)}-\beta^{*})-\rho_{t+1}(\beta^{(t)}-\beta^{*})+\eta_{t+1}\beta^ {*}\]
and using the sub-additivity of the \(\infty\)-norm, we get
\[(\rho_{t+1}+\eta_{t+1})\|\beta^{(t+1)}-\beta^{*}\|_{\infty} \leq\|\rho_{t+1}(\beta^{(t)}-\beta^{*})-X^{T}(\nabla l_{0}(X\beta^{(t)})-\nabla l_{0}(X\beta^{*}))\|_{\infty}\] \[\quad+\|X^{T}\epsilon\|_{\infty}+\eta_{t+1}\|\beta^{*}\|_{\infty}+(\rho_{t+1}+\eta_{t+1})\min_{j\in\mathcal{J}(\beta^{(t+1)})}|\beta_{j}^{(t+1)}|.\]
By (A.14) of Lemma A.3 and the definition of \(\upsilon_{t}\), we get
\[(\rho_{t+1}+\eta_{t+1})\|\beta^{(t+1)}-\beta^{*}\|_{\infty} \leq(\rho_{t+1}-\upsilon_{t})\|\beta^{(t)}-\beta^{*}\|_{\infty}\] \[\quad+\|X^{T}\epsilon\|_{\infty}+\eta_{t+1}\|\beta^{*}\|_{\infty }+(\rho_{t+1}+\eta_{t+1})\frac{\|\beta^{(t+1)}-\beta^{*}\|_{2}}{\sqrt{q_{t+1} -s}}.\]
Additionally, we can obtain \((\rho_{t+1}+\eta_{t+1})\|(\beta^{(t+1)}-\beta^{*})_{\mathcal{J}^{*}}\|_{ \infty}\leq(\rho_{t+1}-\upsilon_{t})\|\beta^{(t)}-\beta^{*}\|_{\infty}+\|X^{ T}\epsilon\|_{\infty}+\eta_{t+1}\|\beta^{*}\|_{\infty}\) or
\[\|(\beta^{(t+1)}-\beta^{*})_{\mathcal{J}^{*}}\|_{\infty}\leq(1-\frac{\upsilon _{t}+\eta_{t+1}}{\rho_{t+1}+\eta_{t+1}})\|\beta^{(t)}-\beta^{*}\|_{\infty}+ \frac{\|X^{T}\epsilon\|_{\infty}}{\rho_{t+1}+\eta_{t+1}}+\frac{\eta_{t+1}\| \beta^{*}\|_{\infty}}{\rho_{t+1}+\eta_{t+1}},\]
by applying (A.15).
### _Model selection by predictive information criterion_
Although parameter \(q\) as an upper bound of the true model support size can often be directly specified based on domain knowledge, this section develops a new information criterion for the tuning of \(q\) to achieve the best prediction performance in finite samples. We assume _multiple responses_ to cover the application in Section IV-D. Let \(Y\in\mathbb{R}^{n\times m}\), \(X\in\mathbb{R}^{n\times p}\) be the response matrix and predictor matrix, respectively, and \(l_{0}(XB;Y)\) be the given loss. We use \(\mathcal{J}(B)\) to denote the row support of \(B\) and define \(J(B)=|\mathcal{J}(B)|\). Assume the true \(B^{*}\in\mathbb{R}^{p\times m}\) is row-sparse and let \(s^{*}=J(B^{*})\). The problem considered in the main sections corresponds to the special case \(m=1\). To choose the best (row) support size, we advocate the following complexity penalty to be added to the loss in the predictive information criterion:
\[P(B)=J(B)m+J(B)\log\{ep/J(B)\}.\] (A.39)
Recall \(\mathbf{D}_{2}(A_{1},A_{2})=\|A_{1}-A_{2}\|_{F}^{2}/2\) in the matrix context.
**Theorem A.3**.: _Let the effective noise \(\mathcal{E}=-\nabla l_{0}(XB^{*})\) be sub-Gaussian with mean zero and scale bounded by a constant, and let \(B^{*}\in\mathcal{M}\) with \(B^{*}\neq 0\). Assume that there exist constants \(\delta>0\) and \(A_{0}\geq 0\) such that \((\mathbf{\Delta}_{l_{0}}-\delta\mathbf{D}_{2})(XB,XB^{\prime})+A_{0}(P(B)+P(B^{\prime}))\geq 0\), for all \(B,B^{\prime}\in\mathcal{M}\). Then for a sufficiently large constant \(A\), any \(\hat{B}\) that minimizes_
\[l_{0}(XB;Y)+AP(B)\] (A.40)
_subject to \(B\in\mathcal{M}\) must satisfy_
\[\mathbb{E}\{\|X\hat{B}-XB^{*}\|_{F}^{2}\lor P(\hat{B})\}\lesssim ms^{*}+s^{*} \log(ep/s^{*}).\] (A.41)
Theorem A.3 does not involve any regularization parameters (like \(q,\lambda\)), but it achieves the minimax optimal error rate (A.41). Moreover, the justification of (A.40) does not require an infinite sample size, design coherence, or signal-to-noise-ratio conditions.
When the noise distribution has a dispersion parameter \(\sigma^{2}\), Theorem A.3 still applies, but the penalty in (A.40) becomes \(A\sigma^{2}P(B)\) with an unknown factor. A preliminary scale estimate could be used. An appealing result for regression, however, is that the estimation of \(\sigma\) can be bypassed altogether. We give a scale-free form of the predictive information criterion by
\[mn\log\{\|Y-XB\|_{F}^{2}\}+AP(B),\] (A.42)
where \(A\) is an absolute constant.
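As a concrete illustration of how (A.42) can be deployed, the snippet below (a hypothetical single-response NumPy example; the ranking of candidate supports by marginal correlation and the value \(A=2\) are illustrative assumptions, not prescriptions from the text) scores a nested family of supports by \(mn\log\|Y-XB\|_{F}^{2}+AP(B)\) and selects the size with the smallest value.

```python
import numpy as np

def pic_scale_free(rss, J, n, m, p, A=2.0):
    """Scale-free predictive information criterion (A.42); A is an absolute
    constant, and A = 2.0 is only an illustrative choice."""
    penalty = J * m + J * np.log(np.e * p / J) if J > 0 else 0.0
    return m * n * np.log(rss) + A * penalty

rng = np.random.default_rng(2)
n, p, m, s_true = 150, 300, 1, 6
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s_true] = 2.0
y = X @ beta + rng.normal(size=n)

# Nested candidate supports: features most correlated with y (one simple choice).
order = np.argsort(-np.abs(X.T @ y))
scores = {}
for J in range(1, 31):
    S = order[:J]
    coef, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    rss = np.sum((y - X[:, S] @ coef) ** 2)
    scores[J] = pic_scale_free(rss, J, n, m, p)
print("selected support size:", min(scores, key=scores.get))
```

No estimate of \(\sigma\) enters the computation, which is the point of the scale-free form.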
**Theorem A.4**.: _Let \(Y=XB^{*}+\mathcal{E}\), where \(\mathcal{E}=[\epsilon_{i,k}]\) has independent centered sub-Gaussian\((\sigma^{2})\) entries and \(\mathbb{E}\epsilon_{i,k}^{2}\gtrsim\sigma^{2}\) with \(\sigma^{2}\) unknown. Define \(l_{0}(XB;Y)=\|XB-Y\|_{F}^{2}\). Assume the true model is not over-complex in the sense that \(P(B^{*})\leq mn/A_{0}\) for some constant \(A_{0}>0\). Let \(\delta(B)=AP(B)/(mn)\), where \(A\) is a positive
constant satisfying \(A<A_{0}\), and so \(\delta(B^{*})<1\). Then, for sufficiently large values of \(A_{0}\) and \(A\), any \(\hat{B}\) that minimizes \(\log l_{0}(XB;Y)+\delta(B)\) subject to \(\delta(B)<1\) must satisfy \(\mathbf{D}_{2}(X\hat{B},XB^{*})\lesssim\sigma^{2}\{s^{*}m+s^{*}\log(ep/s^{*})\}\) with probability at least \(1-Cp^{-c}\exp\{-cm\}-C\exp(-cmn)\) for some constants \(C,c>0\)._
A more general form of \(AP(B)\) can be expressed as "\(\alpha_{1}\times\text{degrees-of-freedom}+\alpha_{2}\times\text{inflation}\)" with \(\alpha_{1},\alpha_{2}\) as absolute constants. The two theorems can be proved by modifying the proofs of Theorems 2 and 3 in [28]. For completeness, we present some details below. Note that although the logarithmic form of the scale-free predictive information criterion is widely used, other non-asymptotic forms exist [28]. In fact, a key trick in the proof is to convert these forms into a fractional scale-free predictive information criterion, which is essential for establishing the desired properties.
Proof.: We first prove Theorem A.3 under the assumption that \(\text{vec}\left(\mathcal{E}\right)\) is subGaussian with mean 0 and scale \(\sigma\). From the definition of \(\hat{B}\), \(\mathbf{\Delta}_{l_{0}}(X\hat{B},XB^{*})+A\sigma^{2}P(\hat{B})\leq A\sigma^{2 }P(B^{*})+\langle\mathcal{E},X\hat{B}-XB^{*}\rangle\). Similar to the proof of Lemma A.2, we can show that for any \(a,b,a^{\prime}>0\), \(4b>a\), and \(t>0\),
\[\langle\mathcal{E},XB-XB^{*}\rangle\leq(\frac{2}{a}+\frac{2}{a^{ \prime}})\mathbf{D}_{2}(XB,XB^{*})+a^{\prime}\sigma^{2}t+4bL\sigma^{2}\{P(B^{ *})+P(B)\},\forall B\in\mathbb{R}^{p\times m}\] (A.43)
occurs with probability at least \(1-Cp^{-c}\exp(-cm)\exp(-ct)\), where \(L,c,C\) are positive constants. (The probability bound can be derived by setting \(L\) to a sufficiently large constant and observing that \(Jm+J\log(ep/J)\geq m+\log(ep)\) holds for \(J\geq 1\), and the union bound calculation, as in (A.10), does not need to cover the case \(J=0\).)
Now, substituting \(\hat{B}\) for \(B\) in (A.43) and taking the expectation, we have for any \(a,b,a^{\prime}>0\), \(4b>a\),
\[\mathbb{E}\{\mathbf{\Delta}_{l_{0}}(X\hat{B},XB^{*})+A\sigma^{2}P (\hat{B})\}\] \[\leq \mathbb{E}\Big{\{}A\sigma^{2}P(B^{*})+(\frac{2}{a}+\frac{2}{a^{ \prime}})\mathbf{D}_{2}(X\hat{B},XB^{*})+ca^{\prime}\sigma^{2}+4bL\sigma^{2}[P (B^{*})+P(\hat{B})]\Big{\}}.\]
Combining it with the regularity condition gives
\[\mathbb{E}\big{\{}(\delta-\frac{2}{a}-\frac{2}{a^{\prime}})\mathbf{D}_{2}(X \hat{B},XB^{*})+(A-4bL-C)P(\hat{B})\big{\}}\leq(A+4bL+C)\sigma^{2}P(B^{*})+ca ^{\prime}\sigma^{2}.\]
Since \(P(B^{*})\geq c>0\), choosing the constants satisfying \(2/a+2/a^{\prime}<\delta\), \(4b>a\), and \(A>4bL+C\) yields the conclusion.
Next, we prove Theorem A.4. We begin with a proof for \(\hat{B}\) selected by a fractional form of the scale-free predictive information criterion: minimize \(l_{0}(XB;Y)/(1-\delta(B))\) subject to \(\delta(B)<1\). Let \(h(B;A)=1/\{mn-AP(B)\}\). From the optimality of \(\hat{B}\), \(l_{0}(X\hat{B};Y)h(\hat{B};A)\leq l_{0}(XB^{*};Y)h(B^{*};A)\) or
\[l_{0}(X\hat{B};Y)-l_{0}(XB^{*};Y)\leq l_{0}(XB^{*};Y)\Big{(}\frac{h(B^{*};A)}{ h(\hat{B};A)}-1\Big{)},\]
where we used \(h(\hat{B};A)>0\). Using the Bregman divergence for the quadratic function, we get
\[\mathbf{D}_{2}(X\hat{B},XB^{*})\leq l_{0}(XB^{*};Y)\Big{(}\frac{h(B^{*};A)}{ h(\hat{B};A)}-1\Big{)}+\langle\mathcal{E},X\hat{B}-XB^{*}\rangle.\] (A.44)
From the definition of \(h\) and the model parsimony assumption, (A.44) becomes
\[\mathbf{D}_{2}(X\hat{B},XB^{*})\] \[\leq l_{0}(XB^{*};Y)\ \frac{AP(B^{*})-AP(\hat{B})}{mn-AP(B^{*})}+ \langle\mathcal{E},X\hat{B}-XB^{*}\rangle\] \[= \frac{1}{2}\frac{A\|\mathcal{E}\|_{F}^{2}}{mn\sigma^{2}-A\sigma^{ 2}P(B^{*})}\sigma^{2}P(B^{*})-\frac{1}{2}\frac{A\|\mathcal{E}\|_{F}^{2}}{mn- AP(B^{*})}\sigma^{2}P(\hat{B})+\langle\mathcal{E},X\hat{B}-XB^{*}\rangle\] \[\leq \frac{1}{2}\frac{A\|\mathcal{E}\|_{F}^{2}}{(1-A/A_{0})mn\sigma^{2 }}\sigma^{2}P(B^{*})-\frac{1}{2}\frac{A\|\mathcal{E}\|_{F}^{2}}{mn\sigma^{2}} \sigma^{2}P(\hat{B})+\langle\mathcal{E},X\hat{B}-XB^{*}\rangle.\] (A.45)
The stochastic term \(\langle\mathcal{E},X\hat{B}-XB^{*}\rangle\) can be bounded similarly by (A.43): for any \(a_{1},b_{1},a_{2}>0\) satisfying \(4b_{1}>a_{1}\),
\[\langle\mathcal{E},X\hat{B}-XB^{*}\rangle\leq 2(1/a_{1}+1/a_{2})\mathbf{D}_{2}(X\hat{B},XB^{*})+b_{1}L_{1}\sigma^{2}\{P(\hat{B})+P(B^{*})\},\]
with probability at least \(1-Cp^{-c}\exp\{-cm\}\) for some \(c,C,L_{1}>0\). Plugging it into (A.45) gives
\[\big{(}1-\frac{2}{a_{1}}-\frac{2}{a_{2}}\big{)}\mathbf{D}_{2}(X \hat{B},XB^{*})\] \[\leq\frac{1}{2}\Big{\{}\frac{A\|\mathcal{E}\|_{F}^{2}}{(1-A/A_{0} )mn\sigma^{2}}+2b_{1}L_{1}\Big{\}}\sigma^{2}P(B^{*})-\frac{1}{2}\Big{\{}\frac{ A\|\mathcal{E}\|_{F}^{2}}{mn\sigma^{2}}-2b_{1}L_{1}\Big{\}}\sigma^{2}P(\hat{B}).\]
Since \(\epsilon_{i,k}\) are independent and non-degenerate, \(c_{1}mn\sigma^{2}\leq\mathbb{E}\|\mathcal{E}\|_{F}^{2}\leq c_{2}mn\sigma^{2}\) for some constants \(c_{1},c_{2}>0\). Let \(\gamma\) be some constant satisfying \(0<\gamma<1\). On the event \(\mathcal{E}_{0}=\{c_{1}(1-\gamma)mn\sigma^{2}\leq\|\mathcal{E}\|_{F}^{2}\leq c_{2}(1+\gamma)mn\sigma^{2}\}\), we have
\[\frac{A\|\mathcal{E}\|_{F}^{2}}{(1-A/A_{0})mn\sigma^{2}}\leq\frac{c_{2}(1+ \gamma)A_{0}A}{A_{0}-A}\ \ \text{ and }\ \frac{A\|\mathcal{E}\|_{F}^{2}}{mn\sigma^{2}}\geq c_{1}(1-\gamma)A.\]
Regarding the probability of the event, we write \(\|\mathcal{E}\|_{F}^{2}=\text{vec}\left(\mathcal{E}\right)^{T}M\,\text{vec}\left(\mathcal{E}\right)\) with \(M=I\in\mathbb{R}^{nm\times nm}\) and bound it with the Hanson-Wright inequality. In fact, from \(\text{Tr}(M)=mn,\|M\|_{2}=1,\|M\|_{F}=\sqrt{mn}\), the complement of \(\mathcal{E}_{0}\) occurs with probability at most \(C^{\prime}\exp\{-c^{\prime}mn\}\).
Now, with \(A_{0},A,a_{1},a_{2},b_{1}\) large enough such that \((1/a_{1}+1/a_{2})<1/2\), \(4b_{1}>a_{1}\), \(A>2b_{1}L_{1}/\{c_{1}(1-\gamma)\}\) and \(A_{0}>A\), we can obtain the desired prediction error rate for the fractional form. Finally, based on the fact that \(1/(1-\delta)\geq\exp(\delta)\geq 1/(1-\delta/2)\) for any \(0\leq\delta<1\), the same error rate holds for the logarithmic form (see [28] for more details).
### _More implementation details_
Slow kill is extremely simple to implement and a summary is given below. For ease of presentation, we define an \(\bar{\eta}\) function based on Theorem 6 and its discussions,
\[\bar{\eta}(q_{+},\rho_{+})=\begin{cases}\frac{1}{2\sqrt{q_{+}/ \bar{s}-1}},&\text{if }q_{+}>2q\text{ and }q\geq n/2\\ \frac{\eta_{0}}{\rho_{+}},&\text{if }q_{+}\leq 2q,\\ \frac{\eta_{0}}{\rho_{+}}\wedge\frac{1}{2\sqrt{q_{+}/\bar{s}-1}},&\text{ otherwise},\end{cases}\] (A.46)
where \(\bar{s}=q\wedge nL^{2}/\log(ep)\geq s\) with \(L\) the Lipschitz parameter of \(\nabla l_{0}\) and \(\eta_{0}\) is a user defined parameter. (Like \(q\), \(\eta_{0}\) is a regularization parameter customizable by the user.) We also define a \(\beta\) function
\[\beta(q_{+},\rho_{+},\beta^{-})=\Theta^{\#}\Big{\{}\beta^{-}-\rho_{+}^{-1}X^ {T}\nabla l_{0}(X\beta^{-};y);q_{+},\bar{\eta}(q_{+},\rho_{+})\Big{\}},\] (A.47)
based on (32). (Often, an intercept should be included (say \(\beta_{1}\)) that is subject to no regularization. We can add a column of ones in the design matrix and redefine the \(\Theta^{\#}\) in (A.47) to keep the first entry and perform quantile-thresholding on the remaining subvector.)
Recall the line search criterion for a trial \(\rho\):
\[(\rho\mathbf{D}_{2}-\mathbf{\Delta}_{l})(\beta(q_{t+1},\rho,\beta^{(t)}),\beta ^{(t)})\geq 0\] (A.48)
or
\[\frac{\rho}{2}\|\beta(q_{t+1},\rho,\beta^{(t)})-\beta^{(t)}\|_{2}^ {2}\geq l_{0}(X\beta(q_{t+1},\rho,\beta^{(t)}))-l_{0}(X\beta^{(t)})\] \[-\langle\nabla l_{0}(X\beta^{(t)}),X\beta(q_{t+1},\rho,\beta^{(t) })-X\beta^{(t)}\rangle.\]
Then the algorithm can be summarized as follows.
Input: \(X,y\), a quantile parameter sequence \(q_{t}\to q\in[p]\), a target \(\ell_{2}\)-shrinkage \(\eta_{0}\geq 0\).
Initialization: \(\beta^{(0)},\rho_{0}\) (say \(0\) and \(L\|X\|_{2}^{2}\), respectively).
For each \(q_{t+1}\) (\(t\geq 0\)), perform the following
* a) Find \(\rho_{t+1}\) by line search with the criterion (A.48).
* b) Perform \(\beta^{(t+1)}\leftarrow\beta(q_{t+1},\rho_{t+1},\beta^{(t)})\) according to (A.47).
We can also add a squeezing operation as step c): \(X\gets X_{\mathcal{J}(\beta^{(t+1)})}\) from time to time (say when \(q_{t+1}\) reaches \(p/2^{k}\) for \(k\) greater than some \(k_{0}\)). In addition, after \(q_{t+1}\) reaches \(q\) and when the sparsity pattern of \(\beta^{(t+1)}\) stabilizes, one
can use a classical optimization method to solve a smooth problem to get the nonzero entries of the final estimate. As for step a), many standard line search methods can be used, e.g., backtracking [45]. We use an adaptive search with warm starts. Concretely, given \(\alpha\in(0,1)\), we begin with \(\rho\leftarrow\rho_{t}\), and set \(\rho\leftarrow\alpha\rho\) if (A.48) is satisfied for \(\beta(q_{t+1},\rho,\beta^{(t)})\) and \(\rho\leftarrow\rho/\alpha\) otherwise, until a small enough \(\rho_{t+1}\) makes (A.48) hold while \(\alpha\rho_{t+1}\) does not. In practice, it is wise to limit the number (\(M\)) of searches. We use \(\alpha=0.5,M=5\) for implementation.
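As a concrete supplement to the summary above, the following sketch (an illustrative NumPy implementation for the squared-error loss \(l_{0}(X\beta;y)=\|y-X\beta\|_{2}^{2}/2\), written for this exposition rather than taken from any released code) puts steps a) and b) together with the adaptive line search. For simplicity it uses \(\bar{\eta}=\eta_{0}/\rho\) throughout instead of the full case analysis in (A.46), and the linear schedule for \(q_{t}\) is one arbitrary choice of a sequence decreasing to \(q\).

```python
import numpy as np

def loss(X, y, beta):                      # l_0(X beta; y) = ||y - X beta||_2^2 / 2
    r = X @ beta - y
    return 0.5 * r @ r

def grad(X, y, beta):                      # X^T nabla l_0(X beta; y)
    return X.T @ (X @ beta - y)

def theta_sharp(z, q, eta_bar):
    """Quantile thresholding: keep the q largest |z_j|, shrink them by 1/(1+eta_bar)."""
    out = np.zeros_like(z)
    keep = np.argsort(-np.abs(z))[:q]
    out[keep] = z[keep] / (1.0 + eta_bar)
    return out

def beta_update(X, y, beta_old, q_next, rho, eta0):          # the beta function (A.47)
    z = beta_old - grad(X, y, beta_old) / rho
    return theta_sharp(z, q_next, eta0 / rho)

def criterion_ok(X, y, beta_new, beta_old, rho):             # line search criterion (A.48)
    d = beta_new - beta_old
    bregman = loss(X, y, beta_new) - loss(X, y, beta_old) - grad(X, y, beta_old) @ d
    return 0.5 * rho * (d @ d) >= bregman

def line_search(X, y, beta, q_next, rho, eta0, alpha=0.5, max_search=5):
    """Adaptive search with warm start: shrink rho while (A.48) holds, grow it when it fails."""
    for _ in range(max_search):
        cand = beta_update(X, y, beta, q_next, rho, eta0)
        rho = alpha * rho if criterion_ok(X, y, cand, beta, rho) else rho / alpha
    while not criterion_ok(X, y, beta_update(X, y, beta, q_next, rho, eta0), beta, rho):
        rho = rho / alpha                                    # ensure the returned rho satisfies (A.48)
    return rho

def slow_kill(X, y, q, eta0=0.0, n_steps=20):
    n, p = X.shape
    beta = np.zeros(p)
    rho = np.linalg.norm(X, 2) ** 2                          # rho_0 = L ||X||_2^2 with L = 1 here
    q_seq = np.unique(np.linspace(q, p, n_steps).astype(int))[::-1]   # q_t decreasing to q
    for q_next in q_seq:
        rho = line_search(X, y, beta, q_next, rho, eta0)     # step a)
        beta = beta_update(X, y, beta, q_next, rho, eta0)    # step b)
    return beta

rng = np.random.default_rng(3)
n, p, s = 100, 400, 5
X = rng.normal(size=(n, p))
beta_star = np.zeros(p)
beta_star[:s] = 3.0
y = X @ beta_star + 0.5 * rng.normal(size=n)
beta_hat = slow_kill(X, y, q=2 * s, eta0=0.1)
print("estimated support:", np.flatnonzero(beta_hat))
```

The squeezing operation and the final polishing of the nonzero entries mentioned above are omitted to keep the sketch short.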
|
2304.09053 | Bayes Hilbert Spaces for Posterior Approximation | Performing inference in Bayesian models requires sampling algorithms to draw
samples from the posterior. This becomes prohibitively expensive as the size of
data sets increase. Constructing approximations to the posterior which are
cheap to evaluate is a popular approach to circumvent this issue. This begs the
question of what is an appropriate space to perform approximation of Bayesian
posterior measures. This manuscript studies the application of Bayes Hilbert
spaces to the posterior approximation problem. Bayes Hilbert spaces are studied
in functional data analysis in the context where observed functions are
probability density functions and their application to computational Bayesian
problems is in its infancy. This manuscript shall outline Bayes Hilbert spaces
and their connection to Bayesian computation, in particular novel connections
between Bayes Hilbert spaces, Bayesian coreset algorithms and kernel-based
distances. | George Wynne | 2023-04-18T15:17:16Z | http://arxiv.org/abs/2304.09053v1 | # Bayes Hilbert Spaces for Posterior Approximation
###### Abstract
Performing inference in Bayesian models requires sampling algorithms to draw samples from the posterior. This becomes prohibitively expensive as the size of data sets increase. Constructing approximations to the posterior which are cheap to evaluate is a popular approach to circumvent this issue. This begs the question of what is an appropriate space to perform approximation of Bayesian posterior measures. This manuscript studies the application of Bayes Hilbert spaces to the posterior approximation problem. Bayes Hilbert spaces are studied in functional data analysis in the context where observed functions are probability density functions and their application to computational Bayesian problems is in its infancy. This manuscript shall outline Bayes Hilbert spaces and their connection to Bayesian computation, in particular novel connections between Bayes Hilbert spaces, Bayesian coreset algorithms and kernel-based distances.
## 1 Introduction
The aim of this manuscript is to advocate for, and demonstrate the utility of, Bayes Hilbert spaces as tools to investigate computational Bayesian problems. Bayesian statistics is a common statistical modelling paradigm which involves the fusion of prior knowledge about the unknown quantity to be inferred with observed data. The result is a posterior measure over the unknown quantity. The posterior measure is the object of interest since it can be used to perform prediction and uncertainty quantification. Practically though, using the posterior measure for these tasks involves deploying sampling algorithms on the posterior measure. The practical issue is that it is expensive to sample from the posterior measure when the observed data set is large. The typical cost when one has observed \(N\) data points is \(O(N)\) per iteration of the sampling algorithm, which becomes prohibitive since many thousands of iterations could be required to obtain the desired number of samples. For more on Bayesian approaches to modelling see Gelman et al. (2013) and for sampling in Bayesian methods see the handbook Kroese et al. (2011).
There is a range of solutions to this problem. The two main approaches are designing algorithms which avoid the \(O(N)\) cost at each iteration, and forming an approximation to the target posterior and sampling from that approximation instead. Examples of the former approach involve sampling algorithms which employ sub-sampling (Bierkens et al., 2019; Dang et al., 2019), variational inference (Blei et al., 2017; Hoffman et al., 2013) and divide and conquer methods (Scott et al., 2016; Srivastava et al., 2015; Vyner et al., 2023). The latter approximation approach will be the focus of this paper. This approach typically involves keeping the prior the same and approximating the likelihood function with something cheaper to evaluate than \(O(N)\), resulting in a cheaper per-iteration cost when sampling. The area of this methodology which shall be the focus in the sequel is Bayesian coresets (Huggins et al., 2016; Campbell and Broderick, 2019). This is where an approximation of the likelihood is formed based on \(M\) points and weights, with \(M\ll N\). This approximation results in an \(O(M)\) per-iteration cost once inserted into a sampling algorithm. Many variants of this method exist (Campbell and Broderick, 2018, 2019; Campbell and Beronov, 2019; Manousakas et al., 2020; Naik et al., 2022).
When performing this approximation, indeed when performing any posterior approximation, one must ask what an appropriate space is in which to form the approximation. This choice plays a critical role in the analysis and performance of posterior approximations. It will impact both the notion of distance between the approximate posterior and the target posterior and how one performs optimisation with respect to this distance to form the approximation. The aim of this paper is to show Bayes Hilbert spaces are appropriate spaces to perform such analysis and to facilitate novel perspectives on Bayesian coreset algorithms and posterior approximation in general.
Bayes Hilbert spaces (van den Boogart et al., 2011; van den Boogaart et al., 2014; Barfoot and D'Eleuterio, 2023) are spaces of measures which have notions of addition and scalar multiplication which are coherent with Bayes's theorem. This makes them particularly appealing spaces to form posterior approximations. The notion of a Bayes Hilbert space has origins in compositional data analysis (Aitchison, 1982; Pawlowsky-Glahn and Buccianti, 2011) and is now a common tool used in distributional data analysis (Mateu and Giraldo, 2021; Petersen et al., 2022). This is a sub-field of functional data analysis where a user observes a data set which is a collection of probability density functions and wishes to perform analysis. To deal with the specific structure of probability density functions, different versions of addition and scalar multiplication are used in the Bayes Hilbert space than typical function spaces. The use of Bayes Hilbert spaces in distributional data analysis is mature with many applications (Mateu and Giraldo, 2021). On the other hand, the use of Bayes Hilbert spaces in computational Bayesian problems is in its infancy. The primary reference
of this application is the innovative contribution of Barfoot and D'Eleuterio (2023) which made clear how variational inference procedures can be viewed in terms of projections in Bayes Hilbert spaces. This connection was made by showing variational inference methods can be viewed as optimising coefficients of an approximating function in a Bayes Hilbert space.
The main technical contributions of this manuscript are relating the Bayes Hilbert space norm to commonly used distances on measures, providing a novel connection between Bayes Hilbert spaces and kernel methods and framing multiple Bayesian coreset algorithms in terms of Bayes Hilbert spaces.
Section 2 will provide background content for this manuscript. In particular, Section 2.1 will introduce the mathematical details of Bayes Hilbert spaces, Section 2.2 will describe the sense in which Bayes Hilbert spaces are coherent with Bayes' theorem and Section 2.3 will outline existing literature related to Bayes Hilbert spaces. After the background content, Section 3 will provide a novel result which relates the distance in a Bayes Hilbert space to common discrepancies for measures, Section 4 will outline how Bayesian coresets can be expressed in a Bayes Hilbert space, Section 5 will provide a novel link between maximum mean discrepancy and Bayesian coresets, Section 6 will leverage this link to provide novel connections between maximum mean discrepancy algorithms and Hilbert coreset algorithms, Section 7 will describe a novel connection between Bayesian coresets constructed using the Kullback-Leibler divergence and Bayes Hilbert spaces. Concluding remarks and future avenues of research are outlined in Section 8.
## 2 Background
This section shall provide background information to motivate and frame the novel results that will occur in the later sections. Section 2.1 shall outline Bayes Hilbert spaces which are the central focus of this manuscript, Section 2.2 shall outline Bayes' theorem and Section 2.3 shall discuss lines of research related to the Bayes Hilbert spaces methodology, with a focus on methods which map measures into Hilbert spaces.
### Bayes Hilbert Spaces
The aim of this section is to motivate and introduce the mathematical details of Bayes Hilbert spaces. This will include defining the elements of a Bayes Hilbert space, the notions of addition and scalar multiplication and the Hilbert inner product.
Before beginning with the mathematical details of a Bayes Hilbert space it is helpful to understand how the ideas developed. The ideas behind a Bayes Hilbert space may be traced back to compositional data analysis (Aitchison, 1982). Mathematically, compositional data is non-negative multivariate data with a sum constraint that represents relative, rather than absolute, information. This data type is appropriate when one wishes to analyse the composition of certain objects, hence the name compositional. For example, one may be studying a collection of chemicals which are each made up from a fixed number of other chemicals. Then compositional data analysis can be used to perform analysis on the relative proportions of each of the chemicals. For a long list of other examples see Aitchison (1986).
This type of data has many quirks. For example, a coherent notion of addition and subtraction is not straightforward. This is because the typical notion of vector addition would lead to a notion of subtraction which could result in negative values, and negative values cannot represent proportions. Issues therefore also occur if one were to try and use the typical notion of scalar multiplication with this type of data.
The pioneering paper by Aitchison (1982) derived a mathematical framework to deal with the quirks of compositional data outlined above. These tools are known as the Aitchison geometry and form the bedrock for the field of compositional data analysis. The geometry revolves around deriving particular notions of equality, addition, scalar multiplication and subtraction for compositional data along with a logarithmic transform which maps the data from the simplex into a nicer space. The application areas of this mathematical framework are legion, covering areas such as geology, ecology and microbiome research (Pawlowsky-Glahn and Buccianti, 2011; Filzmoser et al., 2018; Aitchison, 1986; van den Boogaart and Tolosana-Delgado, 2013) as well as an insightful survey paper marking 40 years since Aitchison's original publication (Greenacre et al., 2022).
The idea of a Bayes Hilbert space is to generalise the Aitchison geometry used in compositional data analysis to the situation where the objects of interest are measures. The easiest way to conceptualise this move is to view compositional data as a histogram, with the \(n\)-th entry of a compositional vector representing the probability of the \(n\)-th event. Then, the trick is to view a probability density function as a continuous limit of a histogram and then to adapt the Aitchison geometry to this continuous limit, which in practice means moving from finite sums over the entries of a compositional vector to integrals with respect to probability density functions. Then the move from probability density functions to measures is made by using Radon-Nikodym derivatives.
The rest of this section will be spent making these ideas and intuition concrete. All technical content is taken from existing sources (Hron et al., 2022; van den Boogaart et al., 2014; van den Boogart et al., 2011; Maier et al., 2021).
Let \((\Theta,\mathcal{A})\) be a measurable space and \(\mu\) a probability measure on \(\Theta\). This measure \(\mu\) will act as the base measure. What follows can be generalised to finite measures that are not probability measures; the difference is simply additional normalisation terms. Elements of \(\Theta\) will be denoted by \(\theta\). Define \(M(\mu)\) as the set of measures on \((\Theta,\mathcal{A})\) that are \(\sigma\)-finite and mutually absolutely continuous with respect to \(\mu\), which means that they have the same null sets as \(\mu\). Discussion regarding the choice of \(\mu\) for Bayesian computation problems is given in Section 4.
Define an equivalence relation \(=_{B}\) between two measures \(\eta,\nu\in M(\mu)\) as \(\eta=_{B}\nu\) if and only if there exists a constant \(c>0\) such that \(\eta(A)=c\nu(A)\:\forall\:A\in\mathcal{A}\). It can be easier to understand this notion of equivalence in terms of the Radon-Nikodym derivatives with respect to \(\mu\). For a measure \(\eta\in M(\mu)\) the Radon-Nikodym derivative of \(\eta\) with respect to \(\mu\) is the non-negative function \(\frac{\mathrm{d}\eta}{\mathrm{d}\mu}\) such that \(\eta(A)=\int_{A}\frac{\mathrm{d}\eta}{\mathrm{d}\mu}\mathrm{d}\mu\:\forall A\in \mathcal{A}\). The existence of such a function is guaranteed by the Radon-Nikodym theorem, see for example Bogachev (2007, Theorem 3.2.2). The function is unique \(\mu\)-almost everywhere. The interpretation of this result is simply that measures in \(M(\mu)\) possess what can be viewed as densities with respect to \(\mu\) which makes interpretation of results to come more straightforward. To this end, for \(\eta\in M(\mu)\) set \(p_{\eta,\mu}\coloneqq\frac{\mathrm{d}\eta}{\mathrm{d}\mu}\) as the Radon-Nikodym derivative of \(\eta\) with respect to \(\mu\) for notational convenience. Note that \(p_{\mu,\mu}=1\) in the sense that \(p_{\mu,\mu}\) is the function constantly equal to one. Then \(\eta=_{B}\nu\) if and only if there exists a constant \(c>0\) such that \(p_{\eta,\mu}=cp_{\nu,\mu}\)\(\mu\)-almost everywhere. Indeed, anytime that these Radon-Nikodym derivatives are written as equal it is to be interpreted in the \(\mu\)-almost everywhere sense.
Define \(B(\mu)\) to be the set of equivalence classes with respect to \(=_{B}\) within \(M(\mu)\). With this notion of equality in place, which is helpful when studying Bayesian problems as described in Section 4, the next step is to define addition and scalar multiplication. For \(\eta,\nu\in B(\mu)\) addition is defined
\[(\eta\oplus\nu)(A)\coloneqq\int_{A}p_{\eta,\mu}(\theta)\cdot p_{\nu,\mu}( \theta)\mathrm{d}\mu(\theta)\]
for every \(A\in\mathcal{A}\). For \(\eta,\nu\in B(\mu)\) and \(\alpha\in\mathbb{R}\) scalar multiplication is defined as
\[(\alpha\odot\eta)(A)\coloneqq\int_{A}p_{\eta,\mu}(\theta)^{\alpha}\mathrm{d}\mu(\theta)\]
for every \(A\in\mathcal{A}\). The combination of addition and scalar multiplication facilitates the identification of an additive inverse, namely for \(\nu\in B(\mu)\) define \(\ominus\nu\coloneqq(-1\odot\nu)\) and for \(\eta,\nu\in B(\mu)\) define \(\eta\ominus\nu\coloneqq\eta\oplus(\ominus\nu)\).
While these definitions may look abnormal at first, all of the operations \(\oplus,\odot,\ominus\) can be interpreted straightforwardly through the Radon-Nikodym derivatives. Namely, for any \(\eta,\nu\in B(\mu)\) the operation \(\oplus\) can be written as \(p_{\eta,\mu}\oplus p_{\nu,\mu}\coloneqq p_{\eta,\mu}\cdot p_{\nu,\mu}\) where \(\cdot\) means standard multiplication so that \(p_{\eta\oplus\nu,\mu}=p_{\eta,\mu}\oplus p_{\nu,\mu}\). Scalar multiplication \(\odot\) can be written as \(\alpha\odot p_{\eta,\mu}\coloneqq p_{\eta,\mu}^{\alpha}\) so that \(p_{\alpha\odot\eta,\mu}=\alpha\odot p_{\eta,\mu}\). Finally, \(\ominus\) can be written as \(p_{\eta,\mu}\ominus p_{\nu,\mu}\coloneqq p_{\eta,\mu}/p_{\nu,\mu}\) so that \(p_{\eta\ominus\nu,\mu}=p_{\eta,\mu}\ominus p_{\nu,\mu}\). These operations defined directly on the Radon-Nikodym derivatives make the natural identification of \(\eta\) with \(p_{\eta,\mu}\) for all \(\eta\in B(\mu)\) coherent in the sense that if one writes \(\eta=_{B}p_{\eta,\mu}\,\forall\eta\in B(\mu)\) then \(\eta\oplus\nu=_{B}p_{\eta\oplus\nu,\mu}\) for all \(\eta,\nu\in B(\mu)\), similarly for \(\odot,\ominus\). Therefore, for ease of notation \(\eta\) may be replaced with \(p_{\eta,\mu}\) at certain places in the sequel as at times the focus shall be on the Radon-Nikodym derivatives rather than the measures themselves.
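To make these operations concrete, the following is a small numerical sketch (illustrative only, not part of the cited sources) in which measures on \([0,1]\) are represented through their Radon-Nikodym derivatives evaluated on a grid, the base measure \(\mu\) is taken to be uniform and approximated by quadrature weights, and \(\oplus\), \(\odot\) and \(\ominus\) act pointwise on these densities.

```python
import numpy as np

# Minimal numerical illustration of the Bayes Hilbert space operations on a
# grid, with the base measure mu taken (as an assumption for this example) to
# be uniform on [0, 1].  Measures are represented by their (unnormalised)
# Radon-Nikodym derivatives on the grid; "=_B" means equality up to a constant.

theta = np.linspace(0.0, 1.0, 1001)          # grid on Theta = [0, 1]
w = np.full_like(theta, 1.0 / len(theta))    # quadrature weights approximating mu

def oplus(p_eta, p_nu):
    """Perturbation: the density of eta (+) nu is the pointwise product."""
    return p_eta * p_nu

def odot(alpha, p_eta):
    """Powering: the density of alpha (.) eta is the pointwise power."""
    return p_eta ** alpha

def ominus(p_eta, p_nu):
    """Subtraction eta (-) nu, i.e. eta (+) ((-1) (.) nu)."""
    return oplus(p_eta, odot(-1.0, p_nu))

# Two example densities with respect to mu (they need not be normalised).
p_eta = np.exp(-0.5 * (theta - 0.3) ** 2 / 0.05 ** 2)
p_nu = np.exp(-0.5 * (theta - 0.7) ** 2 / 0.10 ** 2)

# mu itself (density constantly one) is the additive zero element.
assert np.allclose(oplus(p_eta, np.ones_like(theta)), p_eta)
print("mass of eta (+) nu:", np.sum(w * oplus(p_eta, p_nu)))
```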
The next result links all these operations together and assures us that these operations result in a valid real vector space.
**Theorem 1**.: _(van den Boogart et al., 2011, Theorem 5) \(B(\mu)\) equipped with \(\oplus,\odot\) is a real vector space with \(\mu\) the additive zero element._
This vector space is known as a _Bayes linear space_ (van den Boogart et al., 2011) and provides the basic structure needed to proceed to Bayes Hilbert spaces. The next step is defining the following set
\[B^{2}(\mu)=\left\{\eta\in B(\mu)\colon\mathbb{E}_{\mu}\left[ \left|\log\frac{\mathrm{d}\eta}{\mathrm{d}\mu}\right|^{2}\right]<\infty\right\}. \tag{1}\]
Note that \(\eta\in B^{2}(\mu)\) if and only if \(\log p_{\eta,\mu}\in L^{2}(\mu)\), where \(L^{2}(\mu)\) is the typical space of (equivalence classes of) functions on \(\Theta\) that are square integrable with respect to \(\mu\).
As previously mentioned, the central tool for compositional data analysis was a logarithm transform to map the compositional data into a nice space. The analogy to this for the present case is the _centred log-ratio_ (CLR) transform which is defined (Egozcue et al., 2006; van den Boogaart et al., 2014)
\[\Psi_{\mu}(\eta)=\log p_{\eta,\mu}-\mathbb{E}_{\mu}[\log p_{\eta,\mu}],\]
where the expectation is being taken with respect to the input argument of \(\log p_{\eta,\mu}\). A few remarks are in order. First, the expectation is finite given the assumption that \(\mu\) is a finite measure and that \(\eta\in B^{2}(\mu)\). Second, since \(p_{\eta,\mu}\) accepts as argument an element of \(\Theta\) so too does the CLR \(\Psi_{\mu}(\eta)\). Third, \(\Psi_{\mu}(\mu)=0\), in the sense that it is the function constantly zero, since \(p_{\mu,\mu}=1\), meaning that the base measure, which is the additive identity in \(B^{2}(\mu)\), is mapped to the zero function. The CLR maps into \(L^{2}_{0}(\mu)=\{f\in L^{2}(\mu)\colon\mathbb{E}_{\mu}[f]=0\}\), the elements of \(L^{2}(\mu)\) that have zero mean, which is the analogy to the nice space that is used in compositional data analysis. This space is also used in information geometry as a tangent space to manifolds of measures, see Section 2.3 for more discussion. Finally, the integrability condition in \(B^{2}(\mu)\) is very weak and means that \(B^{2}(\mu)\) can contain infinite measures. Discussion on this is given in Section 8.
The operations \(\oplus,\odot\) relate addition to multiplication and scalar multiplication to exponentiation; these relations are preserved by logarithms, which is why the CLR transform is helpful. Also, the subtraction of the expectation of the logarithm ensures the scale invariance property in the definition of \(=_{B}\) holds. Overall, this means that for any \(\eta,\nu\in B^{2}(\mu)\) and \(\alpha\in\mathbb{R}\) the linearity \(\Psi_{\mu}(\alpha\odot(\eta\oplus\nu))=\alpha\cdot(\Psi_{\mu}(\eta)+\Psi_{\mu}(\nu))\) holds, and that for any \(\eta,\nu\in B^{2}(\mu)\) such that \(\eta=_{B}\nu\) the CLR maps \(\eta,\nu\) to the same output, meaning \(\Psi_{\mu}(\eta)=\Psi_{\mu}(\nu)\). The inverse function of the CLR map is simply \(\exp\); to see this take any \(\eta\in B^{2}(\mu)\), then
\[\exp(\Psi_{\mu}(\eta))=_{B}p_{\eta,\mu}\cdot\exp(-\mathbb{E}_{\mu}[\log p_{ \eta,\mu}])=_{B}p_{\eta,\mu}\]
due to the scale invariance of \(=_{B}\) (van den Boogart et al., 2011), where the natural identification between \(\eta\) and \(p_{\eta,\mu}\), as described immediately before Theorem 1, has been used.
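Continuing the grid-based sketch above, the CLR transform and its inverse can be approximated numerically; the snippet below is illustrative only and again assumes a uniform base measure on \([0,1]\) approximated by quadrature weights.

```python
import numpy as np

# Sketch of the CLR transform Psi_mu(eta) = log p_eta - E_mu[log p_eta] and
# its inverse exp, on a grid with a uniform base measure mu (an assumption
# made purely for this illustration).

theta = np.linspace(0.0, 1.0, 1001)
w = np.full_like(theta, 1.0 / len(theta))    # quadrature weights approximating mu

def clr(p_eta):
    log_p = np.log(p_eta)
    return log_p - np.sum(w * log_p)          # centre by E_mu[log p_eta]

def clr_inverse(f):
    return np.exp(f)                          # recovers p_eta up to a constant

p_eta = np.exp(-0.5 * (theta - 0.3) ** 2 / 0.05 ** 2)
f = clr(p_eta)

print("E_mu[Psi_mu(eta)] ~ 0:", np.sum(w * f))            # lies in L^2_0(mu)
ratio = clr_inverse(f) / p_eta
print("exp recovers p_eta up to a constant:", np.allclose(ratio, ratio[0]))
```

The two printed checks illustrate that the CLR output is numerically centred, i.e. lies in \(L^{2}_{0}(\mu)\), and that \(\exp\) recovers the density up to the multiplicative constant that \(=_{B}\) ignores.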
With the CLR established an inner product structure can be defined. For \(\eta,\nu\in B^{2}(\mu)\) define
\[\langle\eta,\nu\rangle_{B^{2}(\mu)}\coloneqq\langle\Psi_{\mu}(\eta),\Psi_{\mu }(\nu)\rangle_{L^{2}(\mu)}=\int_{\Theta}\Psi_{\mu}(\eta)(\theta)\Psi_{\mu}( \nu)(\theta)\mathrm{d}\mu(\theta). \tag{2}\]
with corresponding norm
\[\|\eta-\nu\|_{B^{2}(\mu)}=\|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^{2}(\mu)}. \tag{3}\]
The next result shows the inner product on \(B^{2}(\mu)\) provides the desired Hilbertian structure.
**Theorem 2**.: _(van den Boogaart et al., 2014, Theorem 1) The map \(\Psi_{\mu}\) is an isometry between \(B^{2}(\mu)\) and \(L^{2}_{0}(\mu)=\{f\in L^{2}(\mu)\colon\mathbb{E}_{\mu}[f]=0\}\). The inverse of \(\Psi_{\mu}\) is \(\exp\) and \(B^{2}(\mu)\) is a Hilbert space._
The space \(B^{2}(\mu)\) equipped with the inner product (2) is called a _Bayes Hilbert space_. Theorem 2 reveals critical structure of \(B^{2}(\mu)\). This Hilbertian structure will facilitate approximations based on dictionaries of functions and optimisation methods based on orthogonal projections. Such structure is typically not present in common representations of measures and therefore Bayes Hilbert spaces have great potential for applications in measure approximation. For example, depending on the choice of \(\mu\) it can be straightforward to derive an orthonormal basis for \(B^{2}(\mu)\) by using orthogonal polynomials. The case of \(\mu\) being a Gaussian and using Hermite polynomials is studied in Barfoot and D'Eleuterio (2023) in the context of Gaussian variational inference. More discussion regarding the potential use of these basis approximations for computational Bayesian problems is given in Section 8.
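As a hedged illustration of such a basis approximation, the following sketch assumes \(\mu=\mathrm{N}(0,1)\) and expands a CLR in probabilists' Hermite polynomials by Monte Carlo; the target \(\eta=\mathrm{N}(m,1)\), whose CLR is \(m\theta\), is chosen only because the answer is known in closed form.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

# Sketch of a basis expansion in B^2(mu), assuming mu = N(0, 1) and using
# probabilists' Hermite polynomials He_k, which satisfy E_mu[He_j He_k] = k! 1{j=k}.
# For eta = N(m, 1), Psi_mu(eta)(theta) = m * theta, so the expansion should
# place all weight on He_1.

rng = np.random.default_rng(0)
theta = rng.standard_normal(200_000)            # Monte Carlo draws from mu

m = 0.7
psi = m * theta                                  # CLR of eta = N(m, 1) w.r.t. mu

def hermite(k, x):
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0
    return hermeval(x, coeffs)

# Coefficient of He_k in the expansion: E_mu[Psi * He_k] / k!
for k in range(1, 4):
    c_k = np.mean(psi * hermite(k, theta)) / factorial(k)
    print(f"coefficient of He_{k}: {c_k:.3f}")   # ~0.7, ~0.0, ~0.0
```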
### Bayes' Theorem and Bayes Hilbert Spaces
The aim of this section is to explain how Bayes Hilbert spaces have structure which is coherent with Bayes' theorem, justifying their name. In particular, it will be shown that the notions of addition and scalar multiplication used in \(B^{2}(\mu)\) are coherent with using Bayes' theorem to update a prior belief using likelihoods to obtain a posterior. This section contains no novel technical content; the exposition is taken from van den Boogart et al. (2011) and van den Boogaart et al. (2014).
One may go back to compositional data analysis, the genesis of Bayes Hilbert spaces as outlined in Section 2.1, to see how the connection to Bayes's theorem occurs. Aitchison noted that the simplex, the canonical sample space for compositional data analysis, is "familiar in other areas of statistics \(\ldots\) as the operation of Bayes's formula to change a prior probability assessment into a posterior probability assessment through the perturbing influence of the likelihood function" (Aitchison, 1986, van den Boogart et al., 2011).
These allusions will now be made concrete. The following recites content from van den Boogart et al. (2011) and for more on Bayes' theorem consult Stuart (2010). The sample space will be \(\Theta\) and the prior measure \(\pi_{0}\). Suppose there are \(N\) observations \(\{x_{n}\}_{n=1}^{N}\subset\mathcal{X}\), the data space, that are independently and identically distributed. Assume the likelihood for each observation is \(l\) which
takes as input an observation from \(\mathcal{X}\) and a parameter from \(\Theta\). Since the observations are independent and identically distributed, the likelihood for all the data is \(L(\theta)\coloneqq\prod_{n=1}^{N}l(\theta,x_{n})\), or using shorter notation \(L=\prod_{n=1}^{N}l_{x_{n}}\) where \(l_{x_{n}}(\theta)\coloneqq l(\theta,x_{n})\). Bayes' theorem states the posterior measure \(\pi\) over \(\Theta\), given the prior, likelihood and observations, satisfies
\[\frac{\mathrm{d}\pi}{\mathrm{d}\pi_{0}}=Z^{-1}L, \tag{4}\]
where \(Z=\mathbb{E}_{\pi_{0}}[L]\) is a constant and is known as the evidence. It is implicitly assumed that the prior, likelihood and data are such that \(Z<\infty\) which is a very mild and common assumption.
Mathematically, (4) shows that the Radon-Nikodym derivative of the posterior with respect to the prior is proportional to the likelihood. Therefore, using the definitions of \(=_{B}\) and \(p_{\pi,\pi_{0}}\) given in Section 2.1
\[p_{\pi,\pi_{0}}=_{B}L=_{B}\bigoplus_{n=1}^{N}l_{x_{n}}. \tag{5}\]
This shows that Bayes' theorem is simply a sum over the likelihood terms when written in Bayes Hilbert space notation. This is the sense in which the Bayes Hilbert space operations are coherent with Bayes' theorem. Another helpful property of a Bayes Hilbert space is the definition of \(=_{B}\) as equality up to a scalar constant. This is useful since in practice posterior sampling algorithms are agnostic of scalar constants and therefore a mathematical framework for posterior approximation should also be invariant to scalar constants.
**Example 1**.: _An example of Bayesian inference that will recur is logistic regression. Suppose one observes input-output pairs \(x_{n}=\{u_{n},y_{n}\}\) with \(u_{n}\in\mathbb{R}^{d}\) and \(y_{n}\in\{0,1\}\). For example, \(u_{n}\) might represent medical information about the \(n\)-th person in a population and \(y_{n}\) indicates whether they do or do not have a certain illness. A model for the regression problem of predicting \(y_{n}\) given \(u_{n}\) is logistic regression (Gelman et al., 2013, Chapter 3.7). The parameter space is \(\Theta=\mathbb{R}^{d}\) and for parameter choice \(\theta\) the response \(y_{n}\) is modelled as \(y_{n}\sim\mathrm{Bernoulli}\left(1/(1+e^{-(\theta,u_{n})_{\mathbb{R}^{d}}})\right)\) meaning the likelihood is_
\[l_{x_{n}}(\theta)=\frac{y_{n}}{1+e^{-(\theta,u_{n})_{\mathbb{R}^{d}}}}+\frac {(1-y_{n})e^{-(\theta,u_{n})_{\mathbb{R}^{d}}}}{1+e^{-(\theta,u_{n})_{ \mathbb{R}^{d}}}}.\]
_A commonly used prior for \(\theta\) is \(\pi_{0}=\mathrm{N}(0,I_{d})\), the standard multivariate Gaussian._
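As a hedged illustration of Example 1, the snippet below simulates a small logistic regression data set and evaluates the individual log-likelihoods \(\log l_{x_{n}}(\theta)\); the data, dimensions and function names are purely illustrative.

```python
import numpy as np

# Sketch of Example 1: simulate logistic regression data and evaluate the
# log-likelihood log l_{x_n}(theta) for a single parameter value.  All
# simulated quantities below are assumptions made only for this example.

rng = np.random.default_rng(1)
N, d = 200, 3
U = rng.standard_normal((N, d))                     # covariates u_n
theta_true = np.array([1.0, -2.0, 0.5])
p = 1.0 / (1.0 + np.exp(-U @ theta_true))
y = rng.binomial(1, p)                              # responses y_n in {0, 1}

def log_lik_terms(theta, U, y):
    """Return the N-vector of log l_{x_n}(theta) for a single parameter theta."""
    logits = U @ theta
    # log Bernoulli(y | sigmoid(logits)), written in a numerically stable form
    return y * logits - np.logaddexp(0.0, logits)

theta0 = rng.standard_normal(d)                     # a draw from the prior N(0, I_d)
print("total log-likelihood L(theta0):", log_lik_terms(theta0, U, y).sum())
```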
Under the assumption that \(\pi\in B^{2}(\mu)\) the CLR for \(\pi\) is
\[\Psi_{\mu}(\pi)=\log p_{\pi,\mu}-\mathbb{E}_{\mu}[\log p_{\pi,\mu}].\]
Under the additional assumption \(\pi_{0}\in B^{2}(\mu)\), using the chain rule for Radon-Nikodym derivatives \(p_{\pi,\mu}=p_{\pi,\pi_{0}}p_{\pi_{0},\mu}\) the CLR transform can be written
\[\Psi_{\mu}(\pi) =\log p_{\pi,\pi_{0}}-\mathbb{E}_{\mu}[\log p_{\pi,\pi_{0}}]+( \log p_{\pi_{0},\mu}-\mathbb{E}_{\mu}[\log p_{\pi_{0},\mu}])\] \[=\mathcal{L}-\mathbb{E}_{\mu}\left[\mathcal{L}\right]+(\log p_{ \pi_{0},\mu}-\mathbb{E}_{\mu}[\log p_{\pi_{0},\mu}]), \tag{6}\]
where \(\mathcal{L}\coloneqq\log L=\sum_{n=1}^{N}\log l_{x_{n}}\) is the log-likelihood. Note that if \(\mu=\pi_{0}\) then the final term in brackets is zero since \(p_{\pi_{0},\pi_{0}}=1\) and the CLR transform becomes equal to the centred log-likelihood function.
**Example 2**.: _Taking \(\mu=\pi_{0}\) then continuing Example 1 the CLR for the logistic regression model is_
\[\Psi_{\pi_{0}}(\pi) =\sum_{n=1}^{N}\log\left(\frac{y_{n}}{1+e^{-\left(\theta,u_{n} \right)_{\mathbb{R}^{d}}}}+\frac{(1-y_{n})e^{-\left(\theta,u_{n}\right)_{ \mathbb{R}^{d}}}}{1+e^{-\left(\theta,u_{n}\right)_{\mathbb{R}^{d}}}}\right)\] \[-\sum_{n=1}^{N}\mathbb{E}_{\pi_{0}}\left[\log\left(\frac{y_{n}}{1+ e^{-\left(\theta,u_{n}\right)_{\mathbb{R}^{d}}}}+\frac{(1-y_{n})e^{-\left(\theta,u_{n }\right)_{\mathbb{R}^{d}}}}{1+e^{-\left(\theta,u_{n}\right)_{\mathbb{R}^{d}}} }\right)\right].\]
These representations are important from a computational Bayesian point of view. The main focus in computational Bayesian statistics is the posterior measure. A computational Bayesian typically wishes to draw samples from the posterior and will almost always have to use a sampling algorithm. The typical cost is \(O(N)\) per iteration of the sampling algorithm, with many thousands of iterations typically required, so the overall cost becomes prohibitive for large data sets. In this large data scenario one usually either employs a sampling algorithm which makes cheap approximations, such as subsampling data during the execution of the sampling algorithm, or tries to form a cheap approximation of the posterior and sample from that instead. The difficulty in the latter approach is that measures are not typically viewed as lying in a space that is amenable to standard approximation and optimisation theory.
The Bayes Hilbert space provides appealing structure in which to view the posterior. First, it possesses an appropriate notion of equality as it is agnostic to scaling constants, much like common sampling algorithms. Second, the Hilbertian structure facilitates the use of approximation theory, concentration inequalities and projections which will all play a role in the rest of this manuscript to construct approximations to the posterior. Later, Theorem 3 in Section 3 shows that the norm in a Bayes Hilbert space can bound commonly used distances between measures and therefore approximation in a Bayes Hilbert space results in minimising commonly used distances.
### Related Work
The central idea of Bayes Hilbert spaces is to construct a Hilbert space into which measures shall be mapped. The Hilbert space provides nice structure to form distances, construct approximations and perform optimisation. The application of Bayes Hilbert spaces to computational Bayesian problems beyond expressing Bayes rule in terms of Bayes Hilbert space operations (van den Boogart et al., 2011) is currently modest. This is because the application of Bayes Hilbert spaces has so far focused on distributional data analysis (Petersen et al., 2022; Mateu and Giraldo, 2021). The sole contribution to the problem of posterior approximation using a Bayes Hilbert space is Barfoot and D'Eleuterio (2023) which frames variational inference in terms of a Bayes Hilbert space.
Other frameworks exist which embed measures into other function spaces. In information geometry, the most similar topic is exponential manifolds (Pistone and Sempi, 1995; Cena and Pistone, 2006; Pistone, 2013). Informally, this is where a manifold is constructed over the space of probability measures such that at a given measure, call it the reference measure, a Banach space is constructed and a subset found which is homeomorphic to the space of measures that are absolutely continuous with respect to the reference measure. The connection between this method and Bayes Hilbert spaces is that \(B^{2}(\mu)\) plays the role of the reference measure space and \(L^{2}_{0}(\mu)\) the Banach space, in this context known as the tangent space. In fact, the CLR transform also features in this method, see Pistone and Sempi (1995, Equation 23). In Pistone and Sempi (1995) the Banach space was an Orlicz space which enforces stronger integrability conditions than \(L^{2}_{0}(\mu)\) to ensure that the corresponding measures are all finite, unlike in \(B^{2}(\mu)\). Connections between information geometry and compositional data analysis have been made by Erb and Ay (2021) but extending these to Bayes Hilbert spaces was left for future work.
Orlicz spaces are rather technical and other work has focused on constructing similar manifolds which use Hilbert spaces instead. This includes Newton (2012) which also uses \(L^{2}_{0}(\mu)\) as its Hilbert space but uses a map slightly different from the CLR transform, see Newton (2012, Equation 6). A reproducing kernel Hilbert space approach, see Section 5, was performed by Fukumizu (2009) in which the Banach space is a reproducing kernel Hilbert space. The CLR transform appears again in Fukumizu (2009, Theorem 18.1). This approach is appealing since reproducing kernel Hilbert spaces are very nice spaces to study and manipulate. However, one main difference is that the set of measures absolutely continuous with respect to a given base measure is only homeomorphic to a subset of a Hilbert space. This means it does not inherit the vector space structure from the Hilbert space. This is in contrast to how Bayes Hilbert spaces have vector space structure.
The purpose of all these methods in information geometry, as the name of the field suggests, is to better understand the geometry of spaces of measures rather than to directly approximate them. The closest to tackling an approximation problem is Sriperumbudur et al. (2017) which uses the reproducing kernel Hilbert space approach of Fukumizu (2009) to construct density estimators from samples of a distribution, which is still a distinct task from posterior approximation. Another distinguishing factor between the Bayes Hilbert space approach and the exponential manifold approach is that in the latter the space of measures is bijective with a subset of a function space, rather than a subspace as in the former, since these function spaces are tangent spaces to the manifold. More discussion is given in Section 8 regarding the potential application of exponential manifolds to the problems discussed in this manuscript.
Another area which uses maps from measures into Hilbert spaces is kernel mean embeddings (Muandet et al., 2017; Gretton et al., 2012; Berlinet and Thomas-Agnan, 2004; Guilbart, 1978). This method is discussed at length in Section 5. The idea is to represent a measure using a kernel and then reason with the representation of the measure rather than the measure itself, for example forming distances and performing hypothesis testing. This method has also been applied to Bayesian computation (Fukumizu et al., 2013). The crucial difference between this approach and the Bayes Hilbert space and exponential manifold approaches is that the map from measures to the Hilbert space is not easy to invert, both theoretically and practically. This makes the kernel mean embedding approach difficult to apply to posterior approximation. The kernel mean embedding distance will appear in Theorem 4 as a way to relate distances in the Bayes Hilbert space to distances between the empirical measures of observed data in posterior approximation.
## 3 Measure Discrepancies and Bayes Hilbert Spaces
The aim of this section is to prove a novel result which shows the Bayes Hilbert space norm between two measures upper bounds typical notions of discrepancy between measures. Specifically, Theorem 3 provides bounds for the Hellinger, Kullback-Leibler and Wasserstein-1 distances in terms of a Bayes Hilbert space norm. Such bounds are important if the Bayes Hilbert space is to be used for posterior approximation as they show that the approximation is meaningful. The bounds are obtained by a simple application of a recent result from the Bayesian inverse problems literature regarding the stability of posterior measures (Sprungk, 2020). This result is possible since the notion of error of posterior approximation employed within Bayesian inverse problem literature coincides with the distance in Bayes Hilbert spaces.
Before the result is stated some notation needs to be outlined. First, \(\Theta\) is assumed to be a separable, complete metric space with metric \(d_{\Theta}\) and let \(\mathcal{B}(\Theta)\) denote the set of finite Borel measures on \(\Theta\) and \(\mathcal{P}(\Theta)\subset\mathcal{B}(\Theta)\) the subset of probability measures. In all three following definitions, \(\eta,\nu\in\mathcal{B}(\Theta)\).
Assuming \(\eta,\nu\) are both absolutely continuous with respect to some measure \(\lambda\), such a \(\lambda\) always exists for example \(\lambda=\eta+\nu\), then the square of the Hellinger distance is defined
\[\mathrm{H}(\eta,\nu)^{2}=\frac{1}{2}\int_{\Theta}\left(\sqrt{\frac{\mathrm{d} \eta}{\mathrm{d}\lambda}}-\sqrt{\frac{\mathrm{d}\nu}{\mathrm{d}\lambda}}\right)^ {2}\mathrm{d}\lambda, \tag{7}\]
where this definition is invariant to the choice of \(\lambda\).
Assuming \(\eta\) is absolutely continuous with respect to \(\nu\) the Kullback-Leibler (KL) divergence is defined
\[\mathrm{KL}(\eta\parallel\nu)=\int_{\Theta}\log\frac{\mathrm{d}\eta}{\mathrm{d }\nu}\mathrm{d}\eta,\]
for more discussion on the KL divergence see Section 7.
Let \(\mathrm{Lip}(1)\) denote the set of functions \(f\colon\Theta\to\mathbb{R}\) with Lipschitz constant less than or equal to one, then the Wasserstein-1 distance is defined
\[\mathrm{W}_{1}(\eta,\nu)=\sup_{f\in\mathrm{Lip}(1)}\left|\int_{\Theta}f \mathrm{d}\eta-\int_{\Theta}f\mathrm{d}\nu\right|.\]
For \(\eta\in\mathcal{P}(\Theta)\) define
\[\|\eta\|_{\mathcal{P}^{2}} =\inf_{\theta_{0}\in\Theta}\left(\int_{\Theta}d_{\Theta}(\theta,\theta_{0})^{2}\mathrm{d}\eta(\theta)\right)^{1/2}\] \[\mathcal{P}^{2}(\Theta) =\{\eta\in\mathcal{P}(\Theta)\colon\|\eta\|_{\mathcal{P}^{2}}<\infty\},\]
This quantity will occur in the Wasserstein-1 bound and provides a notion of the size of the sample space with respect to the measure and the metric on the space. Finally, for \(f\in L^{2}(\mu)\), \(\mathrm{ess}\sup_{\mu}f\) is the essential supremum of \(f\) and if there is an \(\tilde{f}\) in the \(L^{2}(\mu)\)-equivalence class of \(f\) such that \(\tilde{f}(\theta)\leq B\;\forall\theta\in\Theta\) then \(\mathrm{ess}\sup_{\mu}f\leq B\). For a more detailed description of essential supremum consult Bogachev (2007, Section 2.11).
**Theorem 3**.: _Let \(\Theta\) be a complete, separable metric space and \(\mu\in\mathcal{P}^{2}(\Theta),\eta,\nu\in\mathcal{B}(\Theta)\). Let \(\eta,\nu\in B^{2}(\mu)\) with_
\[\frac{\mathrm{d}\eta}{\mathrm{d}\mu} =Z_{\eta}^{-1}\exp(\Psi_{\mu}(\eta)) Z_{\eta} =\mathbb{E}_{\mu}[\exp(\Psi_{\mu}(\eta))] \tag{8}\] \[\frac{\mathrm{d}\nu}{\mathrm{d}\mu} =Z_{\nu}^{-1}\exp(\Psi_{\mu}(\nu)) Z_{\nu} =\mathbb{E}_{\mu}[\exp(\Psi_{\mu}(\nu))]. \tag{9}\]
_Assume that \(\mathrm{ess}\sup_{\mu}\Psi_{\mu}(\eta),\mathrm{ess}\sup_{\mu}\Psi_{\mu}(\nu)\leq B\) then_
\[\mathrm{H}(\eta,\nu) \leq\frac{1}{2}\left(e^{B}+e^{2B}\right)^{1/2}\|\eta-\nu\|_{B^{2} (\mu)}\] \[\mathrm{KL}(\eta\parallel\nu) \leq 2e^{B}\|\eta-\nu\|_{B^{2}(\mu)}\] \[\mathrm{W}_{1}(\eta,\nu) \leq(e^{B}+e^{2B})\|\mu\|_{\mathcal{P}^{2}}\|\eta-\nu\|_{B^{2}( \mu)}.\]
The proof is contained in the Appendix and is a straightforward application of the results in Sprungk (2020). A bound using an \(L^{1}(\mu)\) norm also holds for the KL and Wasserstein-1 case and \(\mu\) being a probability measure can be relaxed to \(\mu\) being a finite measure at the cost of an extra
constant, as discussed in the proof. So long as the upper bound \(B\) is controlled, Theorem 3 shows that the Bayes Hilbert space norm converging to zero implies that the other three notions of distance converge to zero too. The most similar result is Huggins et al. (2018, Proposition 6.3) which provides a bound on the Wasserstein-1 and Wasserstein-2 distances; the difference is that an \(L^{2}\) norm is used which involves derivatives of the log-likelihoods rather than the Bayes Hilbert space norm. Corollary 1 in Section 4 deals with the specific case of approximating a posterior measure.
Implicit in all of this is the choice of the base measure \(\mu\). The choice does not impact the left hand side of the bounds and will impact the right hand side by changing \(B\) and the value of the norm. Theorem 3 provides a guiding principle of choosing \(\mu\) to make \(B\) as small as possible.
**Example 3**.: _Continuing the logistic regression example, since \(l_{x_{n}}\leq 1\) the log-likelihood is bounded as \(\log l_{x_{n}}\leq 0\) therefore the CLR is bounded as_
\[\Psi_{\pi_{0}}(\pi)\leq-\sum_{n=1}^{N}\mathbb{E}_{\pi_{0}}\left[\log\left( \frac{y_{n}}{1+e^{-(\theta,u_{n})_{\mathbb{R}^{d}}}}+\frac{(1-y_{n})e^{-( \theta,u_{n})_{\mathbb{R}^{d}}}}{1+e^{-(\theta,u_{n})_{\mathbb{R}^{d}}}} \right)\right],\]
_which can be used as a bound in Theorem 3._
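A Monte Carlo sketch of this bound, under illustrative simulated data for the logistic model, could look as follows; the choice \(\mu=\pi_{0}\) and all numerical values are assumptions made only for the example.

```python
import numpy as np

# Sketch of estimating the bound of Example 3 by Monte Carlo: with mu = pi_0,
# B can be taken as -sum_n E_{pi_0}[log l_{x_n}], estimated from prior draws.
# The data below are simulated purely for illustration.

rng = np.random.default_rng(2)
N, d, S = 200, 3, 5_000
U = rng.standard_normal((N, d))                      # covariates u_n
y = rng.binomial(1, 0.5, size=N)                     # responses y_n

prior_draws = rng.standard_normal((S, d))            # theta^(s) ~ pi_0 = N(0, I_d)
logits = prior_draws @ U.T                           # shape (S, N)
log_lik = y * logits - np.logaddexp(0.0, logits)     # log l_{x_n}(theta^(s))
B = -log_lik.mean(axis=0).sum()                      # -sum_n E_{pi_0}[log l_{x_n}]
print("Monte Carlo estimate of the bound B:", B)
```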
The notation for the \(Z_{\eta},Z_{\nu}\) terms in the Radon-Nikodym derivatives in (8),(9) is atypical since they do not factor out all the scalar factors. Specifically, since the CLR map \(\Psi_{\mu}\) involves centering by a constant, see (6), there is still a multiplicative factor which could be absorbed into the normalising terms. This was done when introducing the posterior in (4) with the \(Z\) term. The reason this is not done in Theorem 3 is to retain the full expressions for Bayes Hilbert space norm in the right hand side.
**Example 4**.: _Continuing the logistic regression example, the \(Z\) term in (4) would be_
\[Z=\mathbb{E}_{\pi_{0}}\left[L\right]=\mathbb{E}_{\pi_{0}}\left[\exp\left( \mathcal{L}\right)\right]\]
_where \(L=\prod_{n=1}^{N}l_{x_{n}},\mathcal{L}=\log L\). In contrast to this the \(Z_{\pi}\) term in Theorem 3 is_
\[Z_{\pi}=\mathbb{E}_{\pi_{0}}\left[\exp(\Psi_{\pi_{0}}(\pi))\right]=\mathbb{E} _{\pi_{0}}\left[\exp(\mathcal{L}-\mathbb{E}_{\pi_{0}}[\mathcal{L}])\right].\]
Typically, bounds of the type used in Theorem 3, see Sprungk (2020), assume that \(\mathcal{L}\leq 0\) and involve \(Z^{-1}\) as a constant, where \(Z=\mathbb{E}_{\pi_{0}}[L]\). Note \(\mathcal{L}\leq 0\) is a very mild assumption and can be obtained when assuming first \(\mathcal{L}\leq C\) for constant \(C\) and then using \(\mathcal{L}-C\), see Sprungk (2020, Section 2). The task of upper bounding \(Z^{-1}\) is closely related to the upper bound assumption on \(\Psi_{\pi_{0}}\) in Theorem 3. To see this note that \(Z^{-1}=\mathbb{E}_{\pi_{0}}[\exp(\mathcal{L})]^{-1}\leq\exp(-\mathbb{E}_{\pi_ {0}}[\mathcal{L}])\) by Jensen's inequality. When \(\mathcal{L}\leq 0\) then the bound \(B\) only needs to satisfy \(-\mathbb{E}_{\pi_{0}}[\mathcal{L}]\leq B\). Hence \(Z^{-1}\leq e^{B}\). This shows how despite using different normalising constants in the Radon-Nikodym derivative representation, to ensure the bounds still involve the CLR transform, Theorem 3 recovers constants in the upper bounds typical of results in the Bayesian inverse problems literature, for example Sprungk (2020, Theorem 5), with the difference being the slack in Jensen's inequality.
## 4 Bayesian Coresets and Bayes Hilbert Spaces
The aim of this section will be to introduce Bayesian coreset posterior approximations (Campbell and Broderick, 2019, Huggins et al., 2016) and provide a novel relationship between them and Bayes
Hilbert spaces. This connection is crucial for the rest of this manuscript and will be analysed further in later sections.
As mentioned in the previous section, for a likelihood that is a product of \(N\) terms, the typical cost per iteration of a sampling algorithm is \(O(N)\). This is because typical sampling algorithms have to "touch" each data point in the likelihood during each of their iterations. The idea of a Bayesian coreset is to approximate the likelihood with a weighted product of \(M\) terms. The resulting approximation will have \(O(M)\) cost which can be substantially cheaper if \(M\ll N\). The first paper on Bayesian coresets was focused on logistic regression (Huggins et al., 2016). Since then the area has developed widely and has produced multiple innovations, both in terms of how the distance to the full posterior is evaluated and in the optimisation methods used to form the coreset (Campbell and Broderick, 2018, 2019; Manousakas et al., 2020; Zhang et al., 2021; Naik et al., 2022).
Retaining the same setting as Section 2.2, the prior is \(\pi_{0}\), the observed data \(\{x_{n}\}_{n=1}^{N}\subset\mathcal{X}\), the individual likelihood functions \(l_{x_{n}}(\theta)\coloneqq l(\theta,x_{n})\) and the full likelihood \(L=\prod_{n=1}^{N}l_{x_{n}}\). For the rest of this manuscript, a Bayesian coreset is a collection of points \(\{z_{m}\}_{m=1}^{M}\subset\mathcal{X}\) and non-negative weights \(\{w_{m}\}_{m=1}^{M}\subset\mathbb{R}_{\geq 0}\) for some \(M\in\mathbb{N}\) and the Bayesian coreset posterior approximation corresponding to the Bayesian coreset is the measure \(\pi_{w,z}\) satisfying
\[\frac{\mathrm{d}\pi_{w,z}}{\mathrm{d}\pi_{0}}=Z_{w,z}^{-1}L_{w,z}, \tag{10}\]
where \(L_{w,z}=\prod_{m=1}^{M}l_{z_{m}}^{w_{m}}\) and \(Z_{w,z}=\mathbb{E}_{\pi_{0}}[L_{w,z}]\). Non-negative weights are used so that the resulting posterior is valid in the sense that it has finite measure. This approximation keeps the prior the same and approximates the posterior by using the likelihood \(L_{w,z}\) instead of \(L\). Note that if \(M=N\) and \(w_{m}=1,z_{m}=x_{m}\,\forall m\,\) then \(L=L_{w,z}\) and if \(w_{m}=0\) then \(l_{z_{m}}\) plays no role in the approximation.
The idea behind this approximation is that some of the observed data points may be redundant and do not need to be included, or they may be accounted for by re-weighting a likelihood term based on a different data point. The fact that \(\{z_{m}\}_{m=1}^{M}\) does not have to be a subset of the observed data \(\{x_{n}\}_{n=1}^{N}\) facilitates a flexible and efficient approximation. The canonical example in this case is a Gaussian mean inference task. In this scenario the posterior can be written exactly with \(M=1\), see Manousakas et al. (2020, Section 3). The scenario where \(\{z_{m}\}_{m=1}^{M}\not\subset\{x_{n}\}_{n=1}^{N}\) was named a _pseudo-coreset_ in Manousakas et al. (2020). This terminology is not maintained in the present discussion since the general case, where no assumptions are placed on \(\{z_{m}\}_{m=1}^{M}\) so it could or could not be a subset of \(\{x_{n}\}_{n=1}^{N}\), is investigated.
The CLR transform of \(\pi_{w,z}\) is easily obtained by using (6). Let \(\mu\) be the base measure and assume that \(\pi_{w,z},\pi_{0}\in B^{2}(\mu)\), then
\[\Psi_{\mu}(\pi_{w,z})=\mathcal{L}_{w,z}-\mathbb{E}_{\mu}\left[\mathcal{L}_{w, z}\right]+(\log p_{\pi_{0},\mu}-\mathbb{E}_{\mu}[\log p_{\pi_{0},\mu}]), \tag{11}\]
where \(\mathcal{L}_{w,z}\coloneqq\log L_{w,z}=\sum_{m=1}^{M}w_{m}\log l_{z_{m}}\) is the weighted log-likelihood based on \(\{z_{m}\}_{m=1}^{M}\).
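The following sketch, again using the illustrative logistic regression likelihood, evaluates the weighted coreset log-likelihood \(\mathcal{L}_{w,z}\) and its centred version at prior draws when \(\mu=\pi_{0}\); the coreset points, weights and sizes are placeholders chosen only for the example.

```python
import numpy as np

# Sketch of a Bayesian coreset log-likelihood: given coreset points z_m with
# non-negative weights w_m, evaluate L_{w,z}(theta) = sum_m w_m log l_{z_m}(theta).
# With mu = pi_0 the CLR of pi_{w,z} is this quantity minus its prior expectation.
# The logistic likelihood is used and all data below are illustrative.

rng = np.random.default_rng(3)
M, d, S = 20, 3, 5_000
Z = rng.standard_normal((M, d))                      # coreset covariates z_m
y_z = rng.binomial(1, 0.5, size=M)                   # coreset responses
w = rng.uniform(0.0, 10.0, size=M)                   # non-negative weights

def weighted_log_lik(theta):
    logits = Z @ theta
    return np.sum(w * (y_z * logits - np.logaddexp(0.0, logits)))

prior_draws = rng.standard_normal((S, d))            # theta^(s) ~ pi_0 = N(0, I_d)
L_wz = np.array([weighted_log_lik(t) for t in prior_draws])
clr_at_draws = L_wz - L_wz.mean()                    # Psi_{pi_0}(pi_{w,z}) at the draws
print("CLR evaluated at a few prior draws:", clr_at_draws[:3])
```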
The Bayesian coreset approximation is leveraging the structure inherent in the Bayesian methodology that the Radon-Nikodym derivative of the posterior with respect to the prior is equal, up to a scalar constant, to a product of likelihoods. This means the full posterior can be recovered exactly with the choice \(M=N\) and \(w_{m}=1,z_{m}=x_{m}\,\forall m\,\). Therefore, exact recovery only involves a finite number of parameter choices. This is different to typical function approximation methods where the target function can only be recovered exactly with an infinite number of parameter choices, for example an infinite basis expansion.
Some simple choices of \(\mu\) in the Bayesian coreset scenario are \(\mu=\pi_{0}\), setting \(\mu\) to be the posterior based upon a random subset of the data points and a Laplace approximation to the posterior (Campbell and Broderick, 2019). It is currently an open question as to how to choose \(\mu\) and what it means for \(\mu\) to be a best choice. Intuitively one hopes that \(\mu\) has most mass where the posterior has most mass but this causes a chicken and egg problem as the entire task is to approximate the posterior. As mentioned after Theorem 3, a guide can be to choose a \(\mu\) which minimises the bound \(B\) used in Theorem 3.
The expressions for \(\pi,\pi_{w,z}\) can be inserted into Theorem 3 to understand how the Bayes Hilbert space norm implies a bound on the common notion of distance for probability measures in this particular case. A typical scenario for Bayesian coresets is having \(M=N\) and \(z_{n}=x_{n}\) and only changing the weights, with some weights being zero to provide sparsity. The next lemma applies Theorem 3 to this case.
**Lemma 1**.: _Under the assumptions of Theorem 3, if \(\mathcal{L}\leq C\) for some constant \(C\), \(M=N\), \(z_{n}=x_{n}\,\forall n\), and the weights \(w_{n}\) are non-negative with \(\|w\|_{\infty}\leq W\) then \(B\) can be set as \(B=-\max(1,W)\mathbb{E}_{\mu}[\mathcal{L}]+C\)._
Proof.: Under the assumptions, \(\Psi_{\mu}(\pi)=\mathcal{L}-C-\mathbb{E}_{\mu}[\mathcal{L}]+C\leq-\mathbb{E}_ {\mu}[\mathcal{L}]+C\) with the analogous argument for \(\Psi_{\mu}(\pi_{w,z})\). Using the assumptions on \(w_{m},z_{m}\)
\[-\mathbb{E}_{\mu}[\mathcal{L}_{w,z}]+C=\sum_{n=1}^{N}-w_{n}\mathbb{E}_{\mu}[\log l_{x_{n}}]+C \leq\max(1,W)\sum_{n=1}^{N}-\mathbb{E}_{\mu}[\log l_{x_{n}}]+C\] \[=-\max(1,W)\mathbb{E}_{\mu}[\mathcal{L}]+C,\]
which completes the proof.
Lemma 1 shows that if one forms a Bayesian coreset using points that are a subset of the observed data points, which is very common (Campbell and Broderick, 2018, 2019; Naik et al., 2022), then as long as the weights are bounded the term \(B\) will be agnostic of the choice of weights. The requirement that the weights are bounded is not typically explicitly enforced in existing Bayesian coreset algorithms although it is reasonable. This is because the norm of the full likelihood is \(\|\Psi_{\mu}(\pi)\|_{L^{2}(\mu)}\) so one would expect each weight to not be much larger than this overall norm value. An exception where the weights are bounded is in the importance sampling coreset methods (Huggins et al., 2016).
Armed with the approximation scheme (10) and the ability to bound common notions of discrepancy in terms of the Bayes Hilbert space norm using Corollary 1 and Theorem 3, the question is now how to choose the weights and the points in the Bayesian coreset approximation. Many methods have emerged in the literature over the past five years or so and the subsequent sections will be dedicated to viewing these through the lens of a Bayes Hilbert space. The first step in this process begins in the next section which makes a novel connection between the Bayes Hilbert space distance and maximum mean discrepancy distance, a kernel-based distance for measures.
## 5 Maximum Mean Discrepancy and Bayes Hilbert Spaces
The aim of this section is to outline a novel relationship between maximum mean discrepancy (MMD) and Bayesian coresets by relating the MMD distance to a Bayes Hilbert space distance. The relationship is made via the kernel pre-image problem (Scholkopf and Smola, 2002, Chapter 18). This relationship will later be used in Section 6 to investigate a genre of Bayesian coreset methods called Hilbert Bayesian coresets. The rest of this section will introduce MMD, then the kernel pre-image problem, and finally make the relation to Bayesian coresets.
MMD is a kernel-based discrepancy between measures and has been studied for nearly two decades in statistical machine learning (Gretton et al., 2006, 2012; Muandet et al., 2017) and computational statistics (Teymur et al., 2021; Dwivedi and Mackey, 2021; Mak and Joseph, 2018), with its origins being in the 70s in more abstract mathematical statistics under a different name (Guilbart, 1978). MMD has both theoretical and practical strengths. The theoretical strengths include the elegant representation in terms of a reproducing kernel Hilbert space (RKHS), explained below, which facilitates MMD being rewritten in multiple helpful ways. The main practical strength of MMD is that it can be easily estimated by simply evaluating the kernel on samples from the two measures in question.
To begin the description of MMD one must first start with a kernel. A kernel \(k\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) is a symmetric, positive definite function. The point of a kernel is that it measures similarity between its inputs. A simple way to construct a kernel is to take any function \(\phi\colon\mathcal{X}\to H\), which will be called a feature map, where \(H\) is some real Hilbert space, called the feature space, then set \(k(x,y)=\langle\phi(x),\phi(y)\rangle_{H}\). In this case the kernel is measuring the similarity between \(x\) and \(y\) by measuring the similarity of the features \(\phi(x)\) and \(\phi(y)\) using the inner product in the feature space.
For any kernel \(k\) there is a space of functions called the reproducing kernel Hilbert space that is uniquely determined by the kernel which has specific properties relating to the kernel. In particular, the RKHS, denoted \(\mathcal{H}_{k}\), is the unique Hilbert space of functions from \(\mathcal{X}\) to \(\mathbb{R}\) such that \(k(x,\cdot)\in\mathcal{H}_{k}\;\forall\;x\in\mathcal{X}\) and \(\langle f,k(x,\cdot)\rangle_{k}=f(x)\;\forall f\in\mathcal{H}_{k},x\in \mathcal{X}\) where \(\langle\cdot,\cdot\rangle_{k}\) is the inner product of \(\mathcal{H}_{k}\), see Christmann and Steinwart (2008, Section 4.2). The latter property is called the reproducing property and it is extremely useful since, for functions in \(\mathcal{H}_{k}\), it facilitates many quantities of interest to be written purely in terms of the kernel, which is easy to evaluate. For more details regarding kernels and their RKHS consult (Christmann and Steinwart, 2008; Berlinet and Thomas-Agnan, 2004; Paulsen and Raghupathi, 2016).
Before outlining MMD, a point on notation is needed. Latin letters will be used to denote measures and random variables on \(\mathcal{X}\), the space where observed data for the Bayesian inference problem will lie, as is common in the MMD literature (Sriperumbudur et al., 2010). This is to distinguish them from measures on \(\Theta\), the parameter space, which use Greek letters as was done in Section 2.1.
MMD is now defined, for a more in depth discussion than what is presented here consult Sriperumbudur et al. (2010). Let \(\mathcal{B}_{k}(\mathcal{X})=\{P\in\mathcal{B}(\mathcal{X})\colon\mathbb{E}_ {P}[\sqrt{k(X,X)}]<\infty\}\) where in the expectation \(X\) denotes the random variable in \(\mathcal{X}\) with law \(P\) and \(\mathcal{B}(\mathcal{X})\) is the set of finite Borel measures on \(\mathcal{X}\). Note that all finite sums of empirical measures are in \(\mathcal{B}_{k}\). For \(P,Q\in\mathcal{B}_{k}(\mathcal{X})\) the MMD using kernel \(k\) is
\[\mathrm{MMD}_{k}(P,Q)\coloneqq\sup_{\|f\|_{k}\leq 1}\left|\mathbb{E}_{P}[f]- \mathbb{E}_{Q}[f]\right|, \tag{12}\]
where the supremum is being taken over the unit ball of the RKHS. The key property of MMD is that it may be re-written in different forms to make it more interpretable and easily estimated.
The first of these involves kernel mean embeddings. This is a mapping of a measure into the RKHS via the kernel. For a kernel \(k\) the kernel mean embedding \(\Phi_{k}\colon\mathcal{B}_{k}(\mathcal{X})\to\mathcal{H}_{k}\) is defined
\[\Phi_{k}(P)\coloneqq\int_{\mathcal{X}}k(x,\cdot)\mathrm{d}P(x).\]
The idea of a kernel mean embedding is to provide a feature expansion in the RKHS of a measure \(P\), much like how standard kernel methods on \(\mathbb{R}^{d}\) involve feature expansions of the data.
Analogous to how the reproducing property represents pointwise evaluation of functions in the RKHS, kernel mean embeddings represent integration of functions in the RKHS. For \(f\in\mathcal{H}_{k}\) and
\(P\in\mathcal{B}_{k}(\mathcal{X})\)
\[\mathbb{E}_{P}[f]=\int_{\mathcal{X}}f(x)\mathrm{d}P(x)=\int_{\mathcal{X}}\langle f,k(x,\cdot)\rangle_{k}\mathrm{d}P(x)=\langle f,\int_{\mathcal{X}}k(x,\cdot) \mathrm{d}P(x)\rangle_{k}=\langle f,\Phi_{k}(P)\rangle_{k}, \tag{13}\]
where the swapping of the integral and inner product is a result of the integrability assumption on \(k\) and \(P\) made when assuming \(P\in\mathcal{B}_{k}(\mathcal{X})\)(Sriperumbudur et al., 2010, Theorem 1).
Kernel mean embeddings relate to MMD in the following elegant way
\[\mathrm{MMD}_{k}(P,Q) =\sup_{\|f\|_{k}\leq 1}|\mathbb{E}_{P}[f]-\mathbb{E}_{Q}[f]|\] \[=\sup_{\|f\|_{k}\leq 1}|\langle f,\Phi_{k}(P)-\Phi_{k}(Q) \rangle_{k}| \tag{14}\] \[=\|\Phi_{k}(P)-\Phi_{k}(Q)\|_{k}, \tag{15}\]
where (14) is by (13) and (15) is by the Cauchy-Schwarz inequality. This shows that MMD is equal to the RKHS distance between the feature expansions of the two measures.
A third representation of MMD can be obtained by starting with (15) and again using (13)
\[\mathrm{MMD}_{k}(P,Q)^{2} =\|\Phi_{k}(P)-\Phi_{k}(Q)\|_{k}^{2}\] \[=\langle\Phi_{k}(P),\Phi_{k}(P)\rangle_{k}-2\langle\Phi_{k}(P), \Phi_{k}(Q)\rangle_{k}+\langle\Phi_{k}(Q),\Phi_{k}(Q)\rangle_{k} \tag{16}\] \[=\mathbb{E}_{P\times P}[k(X,X^{\prime})]-2\mathbb{E}_{P\times Q}[ k(X,Y)]+\mathbb{E}_{Q\times Q}[k(Y,Y^{\prime})], \tag{17}\]
where (16) is simply expanding the RKHS norm and (17) is by using (13). In the derivation above \(X\) has law \(P\), \(Y\) has law \(Q\) and \(X^{\prime},Y^{\prime}\) are i.i.d. copies of \(X,Y\), respectively. This double expectation formula is what makes MMD practical since it can be easily estimated empirically given samples from \(P,Q\)(Gretton et al., 2012).
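For concreteness, a standard plug-in estimator based on (17) is sketched below; the Gaussian kernel and the two sample sets are illustrative choices only.

```python
import numpy as np

# Sketch of the plug-in estimator of MMD^2 based on (17): given samples from
# P and Q and a kernel, average the kernel over all pairs of samples.
# A Gaussian kernel on R^d is used purely as an illustration.

def gaussian_kernel(X, Y, lengthscale=1.0):
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

def mmd_squared(X, Y, kernel=gaussian_kernel):
    """Biased (V-statistic) estimate of MMD_k(P, Q)^2 from samples X ~ P, Y ~ Q."""
    return kernel(X, X).mean() - 2.0 * kernel(X, Y).mean() + kernel(Y, Y).mean()

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 2))            # samples from P = N(0, I)
Y = rng.standard_normal((500, 2)) + 1.0      # samples from Q = N(1, I)
print("MMD^2 estimate:", mmd_squared(X, Y))
```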
MMD has found many uses in machine learning and computational statistics, for example two-sample testing (Gretton et al., 2012), parameter inference (Cherief-Abdellatif and Alquier, 2020), training generative models (Li et al., 2017) and distribution compression (Dwivedi and Mackey, 2021). A long list of other applications and references may be found in Muandet et al. (2017, Section 3.5).
The use for MMD that is the focus of this section is the following pre-image problem (Scholkopf and Smola, 2002, Chapter 18).
**Problem 1** (The pre-image problem).: _Given a kernel \(k\), \(\{x_{n}\}_{n=1}^{N}\subset\mathcal{X}\) and \(M\in\mathbb{N}\) find non-negative weights \(\{w_{m}\}_{m=1}^{M}\subset\mathbb{R}_{\geq 0}\) and \(\{z_{m}\}_{m=1}^{M}\subset\mathcal{X}\) which minimise_
\[\mathrm{MMD}_{k}(P_{w,z},P_{x})^{2} =\bigg{\|}\sum_{m=1}^{M}w_{m}k(z_{m},\cdot)-\sum_{n=1}^{N}k(x_{n}, \cdot)\bigg{\|}_{k}^{2}\] \[=\sum_{m,m^{\prime}=1}^{M}w_{m}w_{m^{\prime}}k(z_{m},z_{m^{ \prime}})-2\sum_{m=1}^{M}\sum_{n=1}^{N}w_{m}k(z_{m},x_{n})+\sum_{n,n^{\prime}= 1}^{N}k(x_{n},x_{n^{\prime}}),\]
_where \(P_{w,z}=\sum_{m=1}^{M}w_{m}\delta_{z_{m}},P_{x}=\sum_{n=1}^{N}\delta_{x_{n}}\) are the empirical measures corresponding to the two sets of data._
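Written with kernel matrices, the objective of Problem 1 is a quadratic form in the weights; the following sketch evaluates it for an arbitrary choice of candidate points and weights, with a Gaussian kernel used purely as a placeholder.

```python
import numpy as np

# Sketch of the pre-image objective in Problem 1: MMD_k(P_{w,z}, P_x)^2 as a
# quadratic form in the weights w, computed from kernel matrices.  The kernel
# choice and all data below are assumptions made only for this illustration.

def gaussian_kernel(X, Y, lengthscale=1.0):
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

def preimage_objective(w, Z, X, kernel=gaussian_kernel):
    K_zz, K_zx, K_xx = kernel(Z, Z), kernel(Z, X), kernel(X, X)
    return w @ K_zz @ w - 2.0 * w @ K_zx.sum(axis=1) + K_xx.sum()

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 2))                    # observed data x_n
Z = X[rng.choice(100, size=10, replace=False)]       # candidate coreset points
w = np.full(10, 10.0)                                # crude guess: w_m = N / M
print("MMD^2 objective:", preimage_objective(w, Z, X))
```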
The pre-image problem was studied heavily in the 90s and early 00s in the context of support vector machines (Burges, 1996; Scholkopf et al., 1999) and kernel principal component analysis (Kwok and Tsang, 2004; Mika et al., 1998). It is called a pre-image problem since it is seeking to find weights and points which map close to \(\sum_{n=1}^{N}k(x_{n},\cdot)\) in the RKHS.
This problem relates to many topics within kernel methods and computational statistics. Reduced set methods solve a similar pre-image problem to find sparse representations of kernel-based algorithms (Burges, 1996). Kernel principal component analysis also solves a similar pre-image problem (Scholkopf et al., 1997). Kernel herding (Chen et al., 2010; Bach et al., 2012) is a method of solving the pre-image problem by greedily picking points and then choosing weights. The main two methods either use uniform weights or line-search. More will be discussed about kernel herding in Section 6. The scenario where \(\{z_{m}\}_{m=1}^{M}\subset\{x_{n}\}_{n=1}^{N}\) is known as distribution compression (Dwivedi and Mackey, 2021), or quantisation (Teymur et al., 2021; Graf and Luschgy, 2000), where the weights are typically left as uniform over the data points. The scenario where a user is not trying to approximate an empirical distribution but rather a continuous distribution using kernel-based approaches is covered within the kernel Stein discrepancy literature (Riabiz et al., 2022; Anastasiou et al., 2023) with quasi-Monte Carlo being a related field which aims to find discretisations of given measures for integral approximation (Caflisch, 1998; Dick et al., 2013). In statistical depth, the \(h\)-depth (Wynne and Nagy, 2021) is an instance of the pre-image problem when \(M=1\) so only one point is used to represent the data.
The pre-image problem immediately appears to be related to Bayesian coresets since both involve starting with a data set and finding weights and points. The subtlety is that the effectiveness of a Bayesian coreset is measured by the quality of its corresponding posterior approximation. This means the notion of quality of weights and points lying in \(\mathcal{X}\) is expressed in terms of a distance between measures on \(\Theta\). This is in contrast to most investigations of the pre-image problem and the related problems outlined above. The difference is due to the common use of kernels which involve expressions that measure distance between their inputs purely in terms of the geometry of \(\mathcal{X}\). The trick to link the pre-image problem (Problem 1) to Bayesian coresets is to use a kernel which maps the data into a Bayes Hilbert space. This will facilitate comparison of points in \(\mathcal{X}\) in terms of corresponding posteriors.
This is done by using a kernel of the form \(k(x,y)=\langle\phi_{\mu}(x),\phi_{\mu}(y)\rangle_{H}\), such kernels were discussed at the start of this section. The same setting and notation for Bayes' theorem used in Section 2.2 is maintained. The likelihood function based at a point \(x\in\mathcal{X}\) is \(l_{x}\). For some base measure \(\mu\) the feature map is \(\phi_{\mu}(x)=\log l_{x}-\mathbb{E}_{\mu}[\log l_{x}]\) with feature space \(H=L^{2}(\mu)\) and the kernel is defined
\[k(x,y)=\langle\phi_{\mu}(x),\phi_{\mu}(y)\rangle_{L^{2}(\mu)}. \tag{18}\]
This kernel measures the similarity between two points in \(\mathcal{X}\) by comparing the similarity of the centred versions of the log-likelihoods based at those points. This provides the crucial link between the data space and the sample space.
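Since (18) is an \(L^{2}(\mu)\) inner product of centred log-likelihoods, it can be estimated by Monte Carlo over draws from \(\mu\); the sketch below does this for the illustrative logistic model with \(\mu=\pi_{0}=\mathrm{N}(0,I_{d})\), where the specific data points are assumptions made only for the example.

```python
import numpy as np

# Sketch of evaluating the likelihood-based kernel (18) by Monte Carlo: draw
# theta^(s) from the base measure mu, centre the log-likelihoods and take the
# empirical L^2(mu) inner product.  The logistic likelihood is assumed and
# mu = pi_0 = N(0, I_d); all data below are illustrative.

rng = np.random.default_rng(5)
d, S = 3, 10_000
prior_draws = rng.standard_normal((S, d))            # theta^(s) ~ mu

def centred_log_lik(u, y):
    """phi_mu(x) = log l_x - E_mu[log l_x], evaluated at the Monte Carlo draws."""
    logits = prior_draws @ u
    ll = y * logits - np.logaddexp(0.0, logits)
    return ll - ll.mean()

def k(x1, x2):
    """k(x, y) = <phi_mu(x), phi_mu(y)>_{L^2(mu)}, estimated by Monte Carlo."""
    return np.mean(centred_log_lik(*x1) * centred_log_lik(*x2))

x_a = (np.array([1.0, 0.0, 0.0]), 1)                 # (u, y) pairs
x_b = (np.array([0.0, 1.0, 0.0]), 0)
print("k(x_a, x_a) =", k(x_a, x_a), " k(x_a, x_b) =", k(x_a, x_b))
```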
**Theorem 4**.: _Let \(\mathcal{X}\) be a metric space equipped with its Borel \(\sigma\)-algebra. Let \(\mu\) be a finite measure on the measurable space \(\Theta\) and \(l\) a likelihood function such that \(\phi_{\mu}\colon\mathcal{X}\to L^{2}(\mu)\), \(\phi_{\mu}(x)=\log l_{x}-\mathbb{E}_{\mu}[\log l_{x}]\) is well-defined and measurable. Let \(\pi_{0}\in B^{2}(\mu)\) and \(\pi\in B^{2}(\mu)\) be the posterior with prior \(\pi_{0}\), likelihood \(l\) and observations \(\{x_{n}\}_{n=1}^{N}\subset\mathcal{X}\). Let \(M\in\mathbb{N}\), \(\{w_{m}\}_{m=1}^{M}\subset\mathbb{R}\) and \(\{z_{m}\}_{m=1}^{M}\subset\mathcal{X}\). If \(k\) is the kernel (18) then_
\[\mathrm{MMD}_{k}(P_{w,z},P_{x})=\left\|\pi_{w,z}-\pi\right\|_{B^{ 2}(\mu)}, \tag{19}\]
_where \(\pi_{w,z}\) is the posterior based on the weighted likelihood (10)._
Proof.: The assumptions ensure that \(k\) is well-defined and measurable and that empirical measures
on \(\mathcal{X}\) are measurable.
\[\mathrm{MMD}_{k}(P_{w,z},P_{x})^{2}=\left\|\sum_{m=1}^{M}w_{m}k(z_{m},\cdot)-\sum_{n=1}^{N}k(x_{n},\cdot)\right\|_{k}^{2} \tag{20}\] \[=\sum_{m,m^{\prime}=1}^{M}w_{m}w_{m^{\prime}}k(z_{m},z_{m^{\prime}})-2\sum_{m=1}^{M}\sum_{n=1}^{N}w_{m}k(z_{m},x_{n})+\sum_{n,n^{\prime}=1}^{N}k(x_{n},x_{n^{\prime}})\] (21) \[=\left\|\sum_{m=1}^{M}w_{m}\phi_{\mu}(z_{m})-\sum_{n=1}^{N}\phi_{\mu}(x_{n})\right\|_{L^{2}(\mu)}^{2}\] (22) \[=\left\|\Psi_{\mu}(\pi_{w,z})-\Psi_{\mu}(\pi)\right\|_{L^{2}(\mu)}^{2}\] (23) \[=\left\|\pi_{w,z}-\pi\right\|_{B^{2}(\mu)}^{2}, \tag{24}\]
where (20) is by (15), (21) is by the reproducing property, (22) is by the definition of \(k\), (23) is by the expression for the CLR transform of the approximation posterior (11) and the full posterior (6) where the term in the brackets in the expressions cancel out and finally (24) is by the definition of the Bayes Hilbert space norm (3).
Theorem 4 shows that the pre-image problem, Problem 1, is equivalent to minimising a Bayes Hilbert space distance between the Bayesian coreset posterior and the target posterior when using the kernel (18). An immediate consequence of this result is that bounds on MMD can be translated into bounds on the Bayes Hilbert space distance, and therefore, by Theorem 3, into bounds on commonly used distances between measures. A simple example is the following corollary which deals with the case of approximating a posterior by using a likelihood based on a subset of the observed data uniformly sampled without replacement. This random uniform subsampling without replacement is used as a common basic benchmark for evaluating approximate posterior methods (Campbell and Broderick, 2019).
**Corollary 1**.: _Under the same assumptions as Theorem 4, let \(M\leq N\) and set \(\pi_{M}\) to be the approximate posterior satisfying \(p_{\pi_{M},\pi_{0}}=_{B}(N/M)\odot\bigoplus_{m=1}^{M}l_{z_{m}}\) where \(\{z_{m}\}_{m=1}^{M}\) is randomly uniformly subsampled without replacement from \(\{x_{n}\}_{n=1}^{N}\). Assume that \(\|\phi_{\mu}(x_{n})\|_{L^{2}(\mu)}\leq\gamma\;\forall\;1\leq n\leq N\) then with probability at least \(1-\delta\)_
\[\|\pi-\pi_{M}\|_{B^{2}(\mu)}\leq\sqrt{\frac{8\gamma^{2}N(N-M+1)}{M}}\sqrt{2 \log\left(\frac{2}{\delta}\right)}.\]
The proof is a direct application of the kernel mean embeddings concentration inequality of Schneider (2016). The bound is similar to other concentration inequalities for Bayesian coresets, such as Campbell and Broderick (2019, Theorem 4.1). The difference is that Corollary 1 focuses on the simple random sub-sample case. More sophisticated sub-sampling methods, such as importance sampling as was done in Campbell and Broderick (2019, Theorem 4.1) could also be easily applied in the context of a Bayes Hilbert space due to the Hilbertian structure.
The connection between MMD and Bayes Hilbert spaces will be used again in the next section to outline a novel connection between Bayesian coreset algorithms and kernel-based methods for solving the pre-image problem.
## 6 Hilbert Bayesian Coresets
The aim of this section is to provide a novel connection between Hilbert Bayesian coreset algorithms and Bayes Hilbert spaces. The former are Bayesian coreset algorithms which revolve around using
a Hilbert norm and inner product to define a notion of distance between the target posterior and the approximation. The primary focus will be on showing that the Frank-Wolfe Bayesian coreset algorithm (Campbell and Broderick, 2019) is equivalent to Frank-Wolfe kernel-herding (Chen et al., 2010; Bach et al., 2012) when using the kernel (18). This novel relationship will be shown to be a consequence of Theorem 4. A secondary focus will be on the iterative hard thresholding coreset algorithm (Zhang et al., 2021) which will be related to MMD descent methods.
### Frank-Wolfe Bayesian coresets
The goal of both the Frank-Wolfe Bayesian coreset algorithm and Frank-Wolfe kernel-herding is essentially to approximate an element of a Hilbert space using a candidate set. In both cases, as the name suggests, the Frank-Wolfe optimisation method is used to find the best approximation over the set. To begin with, the approximation problem of concern and the Frank-Wolfe algorithm solution are outlined only in the generality needed to make the desired connections. For further discussion of the method consult Clarkson (2010).
For some Hilbert space \(H\), fix some approximation target \(f\in H\) and let \(\{g_{n}\}_{n=1}^{N}\subset H\) be a set of elements which will be used to approximate the target. Let \(\{\sigma_{n}\}_{n=1}^{N}\subset\mathbb{R}_{>0}\) be a set of positive real numbers, set \(\sigma=\sum_{n=1}^{N}\sigma_{n}\) and define the polytope \(\mathcal{W}=\{w\in\mathbb{R}^{N}\colon w\geq 0,\sum_{n=1}^{N}w_{n}\sigma_{n} \leq\sigma\}\), noting it has vertices \(v_{n}=(\sigma/\sigma_{n})\mathbf{1}_{n}\) where \(\mathbf{1}_{n}\) is the one-hot vector of all zeros except a \(1\) in the \(n\)-th entry. The goal is to find
\[\operatorname*{arg\,min}_{w\in\mathcal{W}}\frac{1}{2}\|f-g_{w}\|_{H}^{2}, \tag{25}\]
where \(g_{w}=\sum_{n=1}^{N}w_{n}g_{n}\). The Frank-Wolfe method to solve this problem involves iterative, conditional gradient updates. It looks at the residual of the current approximation, finds the direction most aligned with it, then performs line search in that direction. Specifically, at the \(t\)-th iteration one performs the update of the current guess \(u_{t}\in\mathcal{W}\) as follows
\[\overline{u}_{t} =\operatorname*{arg\,max}_{v_{n}}\langle f-g_{u_{t}},g_{v_{n}} \rangle_{H} \tag{26}\] \[u_{t+1} =(1-\rho_{t})u_{t}+\rho_{t}\overline{u}_{t},\]
where \(\rho_{t}\) is a scalar value calculated in closed form using \(\overline{u}_{t},u_{t},f\)(Bach et al., 2012) and \(v_{n}=(\sigma/\sigma_{n})\mathbf{1}_{n}\) are the vertices of \(\mathcal{W}\), meaning \(g_{v_{n}}=(\sigma/\sigma_{n})g_{n}\). Only the vertices are optimised over in (26) as this is equivalent to optimising over the whole polytope \(\mathcal{W}\) since the function to optimise in (26) is linear. This means that only one member of the approximating dictionary is added at each iteration, meaning after \(T\) iterations the current solution is \(T\)-sparse.
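When the target and dictionary are only accessed through inner products, the updates (25)-(26) can be implemented with the Gram matrix \(G_{nm}=\langle g_{n},g_{m}\rangle_{H}\) and the vector \(b_{n}=\langle f,g_{n}\rangle_{H}\); the following is a generic sketch of this, with the toy problem at the end purely illustrative.

```python
import numpy as np

# Generic sketch of the Frank-Wolfe updates (25)-(26) written in terms of
# inner products only: b_n = <f, g_n>, G_{nm} = <g_n, g_m>.  The vertex search
# and the closed-form line search then need nothing beyond G and b.

def frank_wolfe(G, b, n_iter=50):
    N = len(b)
    sigma_n = np.sqrt(np.diag(G))            # sigma_n = ||g_n||
    sigma = sigma_n.sum()
    u = np.zeros(N)
    for _ in range(n_iter):
        # Vertex of the polytope most aligned with the residual f - g_u.
        scores = (sigma / sigma_n) * (b - G @ u)
        n_star = np.argmax(scores)
        s = np.zeros(N)
        s[n_star] = sigma / sigma_n[n_star]
        # Exact line search for rho in [0, 1] along u + rho * (s - u).
        diff = s - u
        denom = diff @ G @ diff
        if denom <= 0.0:
            break
        rho = np.clip((b @ diff - u @ G @ diff) / denom, 0.0, 1.0)
        u = u + rho * diff
    return u

# Tiny illustrative example: approximate f = sum_n g_n (so b = G @ 1).
rng = np.random.default_rng(6)
A = rng.standard_normal((5, 30))             # columns are the g_n in R^5
G = A.T @ A
b = G @ np.ones(30)
u = frank_wolfe(G, b)
print("number of non-zero weights:", np.count_nonzero(u))
```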
The Frank-Wolfe Hilbert coreset algorithm derived in Campbell and Broderick (2019) fits this template. A version which includes log-likelihood centering is now outlined, which was not used in Campbell and Broderick (2018, 2019) though was later advocated by Campbell and Beronov (2019) and Zhang et al. (2021) since it enforces the desired scale invariance property for the likelihood approximation. Set \(H=L^{2}(\mu)\), \(f=\Psi_{\mu}(\pi),g_{n}=\phi_{\mu}(x_{n})\) and \(\sigma_{n}=\|\phi_{\mu}(x_{n})\|_{L^{2}(\mu)}\). With these substitutions the goal becomes finding
\[\operatorname*{arg\,min}_{w\in\mathcal{W}}\frac{1}{2}\left\|\Psi_{\mu}(\pi)- \sum_{n=1}^{N}w_{n}\phi_{\mu}(x_{n})\right\|_{L^{2}(\mu)}^{2}=\operatorname* {arg\,min}_{w\in\mathcal{W}}\frac{1}{2}\left\|\Psi_{\mu}(\pi)-\Psi_{\mu}(\pi_ {w})\right\|_{L^{2}(\mu)}^{2}, \tag{27}\]
where \(\pi_{w}\) denotes the Bayesian coreset approximation (10) with weights \(w\) and \(z_{n}=x_{n}\,\forall\;n\). The Frank-Wolfe iterations are
\[\overline{u}_{t}=\frac{\sigma}{\sigma_{x_{n}}}\mathbf{1}_{n}\text{ where }n =\operatorname*{arg\,max}_{n\in[N]}\left\langle\Psi_{\mu}(\pi)-g_{u_{t}}, \frac{\sigma}{\sigma_{x_{n}}}\phi_{\mu}(x_{n})\right\rangle_{L^{2}(\mu)} \tag{28}\] \[u_{t+1} =(1-\rho_{t})u_{t}+\rho_{t}\overline{u}_{t},\]
where the full expression for the update of \(\overline{u}_{t}\), see (26), has been written for clarity and \([N]=\{1,\ldots,N\}\). Note Campbell and Broderick (2019, Equation 4.2) matches (25) and Campbell and Broderick (2019, Equation 4.4) matches (26) once the centred log-likelihoods are substituted for the log-likelihoods. As (27) and (28) are written in terms of CLR transforms, the \(L^{2}(\mu)\) inner products can be written in terms of the \(B^{2}(\mu)\) inner product, see (2). This shows that the Frank-Wolfe Hilbert coreset algorithm can be written in the language of Bayes Hilbert spaces.
The Frank-Wolfe coreset algorithm will now be related to Frank-Wolfe kernel-herding. Kernel-herding is where the Hilbert space \(H\) is set to be an RKHS and the algorithm is used in the context of solving the kernel pre-image problem, see Problem 1. Let \(k\) be the kernel (18), \(H=\mathcal{H}_{k}\) with approximation target \(f=\sum_{n=1}^{N}k(x_{n},\cdot)\) and retain the same choice of \(g_{n},\sigma_{n}\) as above. Kernel-herding aims to find
\[\operatorname*{arg\,min}_{w\in\mathcal{W}}\frac{1}{2}\left\|\sum_{n=1}^{N}k(x_ {n},\cdot)-\sum_{n=1}^{N}w_{n}k(x_{n},\cdot)\right\|_{k}^{2}, \tag{29}\]
which, using Theorem 4, is equal to (27). The kernel-herding Frank-Wolfe updates are
\[\overline{u}_{t}=\frac{\sigma}{\sigma_{x_{n}}}\mathbf{1}_{n}\text{ where }n =\operatorname*{arg\,max}_{n\in[N]}\left\langle\sum_{m=1}^{N}k(x_{m},\cdot)-g_{u_{t}},\frac{\sigma}{\sigma_{x_{n}}}k(x_{n},\cdot)\right\rangle_{k}, \tag{30}\] \[=\operatorname*{arg\,max}_{n\in[N]}\left\langle\Psi_{\mu}(\pi)-g_{u_{t}},\frac{\sigma}{\sigma_{x_{n}}}\phi_{\mu}(x_{n})\right\rangle_{L^{2}(\mu)},\] (31) \[u_{t+1} =(1-\rho_{t})u_{t}+\rho_{t}\overline{u}_{t},\]
where (31) is again by Theorem 4.
Due to the equivalences between the optimisation goals (29) and (27) and the updates (30) and (28) it can be concluded that the Frank-Wolfe Hilbert coreset algorithm is equivalent to Frank-Wolfe kernel-herding when one uses the kernel (18), whose feature map uses the CLR to map data to likelihoods in a Bayes Hilbert space. This identification immediately raises some questions. For example, when kernel-herding is typically employed it does not use the subset-of-data requirement enforced in the Frank-Wolfe Bayesian coreset method; instead it chooses points anywhere on the domain, called "super-samples" in Chen et al. (2010). This leads to the question of how kernel-herding performs when used to construct Bayesian coresets without the limitation of the coreset points being a subset of the observed data points. Such a methodology was called _pseudo-coresets_ in Manousakas et al. (2020) when investigating coresets constructed using KL divergence rather than a Hilbert norm.
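As a complement to the Frank-Wolfe sketch above, the next snippet shows how the Gram matrix of the kernel (18) could be estimated by Monte Carlo, under the assumption, consistent with the description above, that its feature map sends a datum to its \(\mu\)-centred log-likelihood; the helper name `clr_gram_matrix`, the `loglik` callable and the use of plain draws from \(\mu\) are illustrative assumptions rather than details from the referenced papers.

```python
import numpy as np

def clr_gram_matrix(loglik, theta_samples):
    """Monte Carlo estimate of K[n, m] = <phi_mu(x_n), phi_mu(x_m)>_{L^2(mu)},
    assuming phi_mu(x_n) is the mu-centred log-likelihood of datum x_n.

    loglik(theta) : returns the (N,) vector of per-datum log-likelihoods.
    theta_samples : (S, d) array of draws from the base measure mu.
    """
    L = np.stack([loglik(theta) for theta in theta_samples])  # (S, N)
    L_centred = L - L.mean(axis=0, keepdims=True)             # centre under mu
    return L_centred.T @ L_centred / L.shape[0]               # (N, N) Gram
```

The diagonal of this matrix gives the \(\sigma_{n}^{2}=\|\phi_{\mu}(x_{n})\|^{2}\), and since the Frank-Wolfe iterations only touch inner products, feeding this Gram matrix to a kernel-herding routine reproduces the Bayesian coreset construction described above.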
### Iterative hard thresholding Bayesian coresets
The iterative hard thresholding (IHT) coreset algorithm (Zhang et al., 2021) proposes to find the weights of a Bayesian coreset using gradient descent and then thresholds the weights to ensure they are
non-negative and sparse. More specifically, the problem under consideration is
\[\operatorname*{arg\,min}_{w\in\mathcal{W}_{M}}\frac{1}{2}\left\|\Psi_{\mu}(\pi)- \sum_{n=1}^{N}w_{n}\phi_{\mu}(x_{n})\right\|_{L^{2}(\mu)}^{2}, \tag{32}\]
where \(\mathcal{W}_{M}=\{w\in\mathbb{R}^{N}\colon w\geq 0,\|w\|_{0}\leq M\}\) is the set of vectors in \(\mathbb{R}^{N}\) with non-negative weights and at most \(M\) non-zero values. The parameter \(M\) dictates the level of sparseness of the approximation. The IHT technique (Zhang et al., 2021) solves this problem by performing gradient descent with respect to \(w\) at each iteration and then projecting the weights to \(\mathcal{W}_{M}\), to maintain a valid, sparse choice of weights.
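A minimal sketch of this scheme is given below, using plain rather than accelerated gradient steps and assuming only the Gram matrix \(K[n,m]=\langle\phi_{\mu}(x_{n}),\phi_{\mu}(x_{m})\rangle_{L^{2}(\mu)}\) and the vector of target inner products are available; the function name and step-size handling are illustrative and are not those of Zhang et al. (2021).

```python
import numpy as np

def iht_coreset(K, target_dot, M, steps=200, lr=1e-3):
    """Iterative hard thresholding for (32) with plain gradient descent.

    K          : (N, N) Gram matrix of the centred log-likelihood features.
    target_dot : (N,) vector with entries <Psi_mu(pi), phi_mu(x_n)>.
    M          : maximum number of non-zero coreset weights.
    """
    N = K.shape[0]
    w = np.zeros(N)
    for _ in range(steps):
        grad = K @ w - target_dot          # gradient of 0.5 * ||f - g_w||^2
        w = w - lr * grad
        w = np.maximum(w, 0.0)             # project to non-negative weights
        keep = np.argsort(w)[-M:]          # indices of the M largest weights
        mask = np.zeros(N, dtype=bool)
        mask[keep] = True
        w[~mask] = 0.0                     # hard threshold to at most M entries
    return w
```

With the identification used in (29), where the target is the sum of the features, `target_dot` is simply `K.sum(axis=1)`.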
Using Theorem 4 it is made apparent that (32) is equivalent to finding a sparse, non-negative-weight solution to the kernel pre-image problem, Problem 1. Gradient descent methods were some of the first methods considered for the kernel pre-image problem (Burges, 1996), although they were not studied extensively due to the computational considerations at the time. The IHT method utilises advances in accelerated optimisation methods to efficiently find choices of weights that result in a good approximation to the posterior.
The link to the pre-image problem and the IHT method can be taken further by realising that the IHT method is optimising an MMD criterion. Therefore, IHT can be seen as performing MMD descent on the empirical measure that corresponds to the choice of points and weights. MMD descent is a method of empirical measure approximation that uses gradients to minimise an MMD objective (Arbel et al., 2019). There are related methods which use other kernel-based discrepancies (Korba et al., 2021; Xu et al., 2022). This begs the question of the performance of these other kernel-based descent methods on the task of constructing Bayesian coresets when using the kernel (18).
## 7 Kullback-Leibler Bayesian Coresets
The aim of this section is to provide a novel connection between Kullback-Leibler (KL) Bayesian coreset algorithms and Bayes Hilbert spaces. The former are Bayesian coreset algorithms which use the Kullback-Leibler divergence to measure the discrepancy between the target posterior and the Bayesian coreset posterior. The primary focus will be on showing that the quasi-Newton KL Bayesian coreset algorithm (Naik et al., 2022) is equivalent, up to hyper-parameter choices, to variational inference in a Bayes Hilbert space. This novel relationship will be shown to be a consequence of the description of variational inference techniques in terms of Bayes Hilbert spaces provided by Barfoot and D'Eleuterio (2023).
The KL divergence, see Section 2.1, is a divergence that is used often in variational inference (Blei et al., 2017). This is a measure approximation method where the KL divergence is minimised over a user-chosen set of measures, called the variational family. The variational family is often parameterised using the parameters of a given distribution. For example, the variational family could be a mixture of Gaussians and the variational parameters would be the mixture weights and the means and covariances of each Gaussian. The optimisation to find the best approximation within the variational family typically involves gradient descent updates. This does not require samples from the target measure, an essential property in the case where the target measure is expensive to sample from, for example when it is the posterior after observing a large number of data points.
Variational inference was framed in terms of Bayes Hilbert spaces by Barfoot and D'Eleuterio (2023) and this perspective is now outlined. Let \(B^{2}(\mu)\) be a Bayes Hilbert space, \(\nu\in B^{2}(\mu)\) be
the target measure one wishes to approximate and \(\{b_{n}\}_{n=1}^{N}\subset B^{2}(\mu)\) some set of \(N\) elements which form the dictionary of the approximation. The variational family is \(\{\oplus_{n=1}^{N}w_{n}\odot b_{n}\colon w\in\mathbb{R}^{N}\}\) with corresponding optimisation problem
\[\operatorname*{arg\,min}_{w\in\mathbb{R}^{N}}\operatorname{KL}\left(\bigoplus_ {n=1}^{N}w_{n}\odot b_{n}\parallel\nu\right), \tag{33}\]
where the notation \(\oplus,\odot\) is defined in Section 2.1.
The key insight of Barfoot and D'Eleuterio (2023) is that solving this by a quasi-Newton gradient descent method with respect to \(w\) is equivalent to doing iterative projections in a Bayes Hilbert space. Specifically, Barfoot and D'Eleuterio (2023, Equation 41) provide the quasi-Newton update
\[w_{t+1}=w_{t}+G_{\nu_{t}}(\mathbf{b})^{-1}\langle\mathbf{b},\nu\ominus\nu_{t} \rangle_{B^{2}(\nu_{t})}, \tag{34}\]
where \(\nu_{t}\) is the approximation at iteration \(t\) using weights \(w_{t}\), \(G_{\nu_{t}}(\mathbf{b})\) is the \(N\times N\) Gram matrix of the approximating dictionary \(\{b_{n}\}_{n=1}^{N}\) with \((n,m)\)-th entry \(\langle b_{n},b_{m}\rangle_{B^{2}(\nu_{t})}\) and \(\langle\mathbf{b},\nu\ominus\nu_{t}\rangle_{B^{2}(\nu_{t})}\) is the \(\mathbb{R}^{N}\) vector with \(n\)-th entry \(\langle b_{n},\nu\ominus\nu_{t}\rangle_{B^{2}(\nu_{t})}\). See Section 2.1 for the definition of \(\ominus\) and the Bayes Hilbert space inner product. It was noted that this quasi-Newton gradient update can be written as
\[w_{t+1}=G_{\nu_{t}}(\mathbf{b})^{-1}\langle\mathbf{b},\nu\rangle_{B^{2}(\nu_{ t})},\]
see Barfoot and D'Eleuterio (2023, Equation 43), meaning that at each iteration the target \(\nu\) is being projected to the span of \(\{b_{n}\}_{n=1}^{N}\) with respect to the \(B^{2}(\nu_{t})\) inner product. This shows how the geometry of Bayes Hilbert spaces links with variational inference methods.
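For concreteness, the projection behind (34) can be approximated with samples from the current iterate \(\nu_{t}\); the sketch below assumes such samples are available (e.g. from MCMC) and that log-density representatives of the dictionary elements and of the target can be evaluated pointwise up to additive constants, which the centring step removes. The function name is illustrative.

```python
import numpy as np

def projection_update(logb_at_samples, lognu_at_samples):
    """One iteration of the quasi-Newton / projection update (34).

    logb_at_samples : (S, N) log-density representatives of the dictionary
                      elements b_n, evaluated at S draws from nu_t.
    lognu_at_samples: (S,) log-density representative of the target nu at the
                      same draws.  Additive constants are irrelevant because
                      both arrays are centred below (the CLR step).
    """
    B = logb_at_samples - logb_at_samples.mean(axis=0, keepdims=True)
    y = lognu_at_samples - lognu_at_samples.mean()
    S = B.shape[0]
    gram = B.T @ B / S                   # estimates <b_n, b_m>_{B^2(nu_t)}
    rhs = B.T @ y / S                    # estimates <b_n, nu>_{B^2(nu_t)}
    return np.linalg.solve(gram, rhs)    # new weights w_{t+1}
```

Iterating this update and, as in Naik et al. (2022), thresholding negative weights to zero gives a KL coreset scheme of the form described next.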
The connection between the quasi-Newton Bayesian coreset algorithm derived by Naik et al. (2022) and this Bayes Hilbert space perspective on variational inference is now made. The method of Naik et al. (2022) starts by setting the target measure to be \(\pi\), the posterior, and choosing some \(M\in\mathbb{N}\). Then a size \(M\) subset from \(\{x_{n}\}_{n=1}^{N}\) is uniformly sampled, which has its indices re-ordered and is denoted \(\{x_{m}\}_{m=1}^{M}\). The weights for the log-likelihood terms \(\{l_{x_{m}}\}_{m=1}^{M}\) are then optimised using quasi-Newton iterations, with the weights thresholded at each iteration to be non-negative.
The connection to Barfoot and D'Eleuterio (2023) is made by setting \(\nu=\pi\), \(N=M\) and \(b_{m}=l_{x_{m}}\) in (33). This means the optimisation is
\[\operatorname*{arg\,min}_{w\in\mathbb{R}_{\geq 0}^{M}}\operatorname{KL}\left( \bigoplus_{m=1}^{M}w_{m}\odot b_{m}\parallel\pi\right)=\operatorname*{arg\,min }_{w\in\mathbb{R}_{\geq 0}^{M}}\operatorname{KL}(\pi_{w,x}\parallel\pi),\]
which is equal to (33) except with an additional non-negative weight constraint.
Reading off the quasi-Newton iteration (Naik et al., 2022, Equation 13) one sees it is equal, up to hyper-parameter choices, to the update (34) since the covariance terms in Naik et al. (2022, Equation 13) coincide with the \(B^{2}(\nu_{t})\) inner product. The method of Naik et al. (2022) then thresholds the weights to ensure non-negativity.
Overall, this shows that the quasi-Newton KL coreset method can be viewed in terms of iterative projections in a Bayes Hilbert space. This cross-pollination of perspectives is productive since it provides a Bayes Hilbert space geometric interpretation of the quasi-Newton Bayesian coreset algorithm. In turn, the numerical and theoretical investigations of Naik et al. (2022) bolster the geometric perspective. Given that Bayes Hilbert spaces are tangent spaces in the information geometry sense (Fukumizu, 2009; Pistone, 2013), the quasi-Newton coreset method (Naik et al., 2022) can be seen as iterative updates of projecting in a direction in a tangent space and then
updating the base location of the tangent space. This is also related to natural gradient descent as noted by Barfoot and D'Eleuterio (2023). This perspective was introduced for the greedy KL coreset method in Campbell and Beronov (2019) and the results of this section show it applies in a broader sense via the Bayes Hilbert space perspective.
## 8 Conclusion
This manuscript has described Bayes Hilbert spaces and shown how they are appropriate spaces to perform posterior measure approximation. Along the way a novel bound relating the Bayes Hilbert space norm to common discrepancies was provided as well as a novel connection between kernel-based discrepancies and Bayes Hilbert spaces in the context of posterior approximation. Multiple Bayesian coreset methods were expressed in terms of Bayes Hilbert spaces which provided insight and new interpretations of how they work.
Future avenues of research relating to Bayes Hilbert spaces, Bayesian coresets and posterior approximation are legion. Three main avenues are now outlined. First, an issue noted when defining Bayes Hilbert spaces was that they are in a sense too big, because they can contain infinite measures. The related work in information geometry (Fukumizu, 2009) defines a related set of measures that is homeomorphic to a subset of a Hilbert space that has stronger integrability conditions, but the set of measures is not a vector space whereas the Bayes Hilbert space is. Therefore, a question is what is an appropriate space with stricter integrability conditions to maintain finite measures while also having a vector space structure. Second, as noted in van den Boogaart et al. (2014), it is straightforward to construct a basis for the Bayes Hilbert space by using orthogonal polynomials. The specific case of a Gaussian base measure and Hermite polynomials was studied in Barfoot and D'Eleuterio (2023). There have been many recent advances in the study of sparse, high-dimensional approximation using sparse polynomials (Adcock et al., 2022). This begs the question of how these methods can be applied to the Bayes Hilbert space. Using polynomials for likelihood approximation in the context of Bayesian computation was studied in Huggins et al. (2017) and combining this approach with the aforementioned advances could provide further advantages. Finally, there have been many recent innovations in distribution compression and quantisation of empirical measures (Riabiz et al., 2022; Dwivedi and Mackey, 2021). Given that Theorem 4 links this problem to the Bayesian coreset problem, the question immediately arises as to how to leverage these innovations in the context of posterior approximation, in particular in the case where the points to form the likelihood \(\{z_{m}\}_{m=1}^{M}\) are not a subset of the observed data. This pseudo-coreset method has only been explored for Bayesian coresets in the KL coreset setting (Manousakas et al., 2020) but it is a natural thing to do in the distribution compression setting (Chen et al., 2010).
## 9 Appendix
### Proof of Theorem 3
The proof of this result is simply an adaptation of Theorem 5, Theorem 11 and Theorem 14 from Sprungk (2020). All the proofs leverage a local Lipschitz continuity which follows from the assumption involving the bound \(B\) on the CLR transforms.
Recall that the measures in question \(\eta,\nu\) are written as
\[\frac{\mathrm{d}\eta}{\mathrm{d}\mu} =Z_{\eta}^{-1}\exp(\Psi_{\mu}(\eta)) Z_{\eta}=\mathbb{E}_{\mu}[\exp(\Psi_{\mu}(\eta))] \tag{35}\] \[\frac{\mathrm{d}\nu}{\mathrm{d}\mu} =Z_{\nu}^{-1}\exp(\Psi_{\mu}(\nu)) Z_{\nu}=\mathbb{E}_{\mu}[\exp(\Psi_{\mu}(\nu))]. \tag{36}\]
By Jensen's inequality, this means that \(Z_{\eta}\geq\exp(\mathbb{E}_{\mu}[\Psi_{\mu}(\eta)])=1\) since \(\Psi_{\mu}(\eta)\in L_{0}^{2}(\mu)\); the same lower bound applies to \(Z_{\nu}\) too. Therefore, \(Z_{\eta}^{-1},Z_{\nu}^{-1}\leq 1\).
#### Hellinger Distance
First the Hellinger distance bound is derived. Starting from the definition (7) and using \(\mu\) as the base measure
\[2\mathrm{H}(\eta,\nu)^{2} =\int_{\Theta}\left(e^{\frac{1}{2}\Psi_{\mu}(\eta)}Z_{\eta}^{- \frac{1}{2}}-e^{\frac{1}{2}\Psi_{\mu}(\nu)}Z_{\nu}^{-\frac{1}{2}}\right)^{2} \mathrm{d}\mu\] \[\leq 2\int_{\Theta}\left(e^{\frac{1}{2}\Psi_{\mu}(\eta)}Z_{\eta} ^{-\frac{1}{2}}-e^{\frac{1}{2}\Psi_{\mu}(\nu)}Z_{\eta}^{-\frac{1}{2}}\right) ^{2}\mathrm{d}\mu\] \[+2\int_{\Theta}\left(e^{\frac{1}{2}\Psi_{\mu}(\nu)}Z_{\eta}^{- \frac{1}{2}}-e^{\frac{1}{2}\Psi_{\mu}(\nu)}Z_{\nu}^{-\frac{1}{2}}\right)^{2} \mathrm{d}\mu\] \[\eqqcolon I_{1}+I_{2},\]
where the inequality \((a-b)^{2}\leq 2(a-c)^{2}+2(c-b)^{2}\) has been used which is a result of the triangle inequality. The terms \(I_{1},I_{2}\) will now be upper bounded.
\[I_{1} =2Z_{\eta}^{-1}\int_{\Theta}\left(e^{\frac{1}{2}\Psi_{\mu}(\eta)} -e^{\frac{1}{2}\Psi_{\mu}(\nu)}\right)^{2}\mathrm{d}\mu\] \[\leq 2e^{B}\int_{\Theta}\left(\frac{\Psi_{\mu}(\eta)}{2}-\frac{ \Psi_{\mu}(\nu)}{2}\right)^{2}\mathrm{d}\mu \tag{37}\] \[=\frac{1}{2}e^{B}\|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^{2}(\mu) }^{2}=\frac{1}{2}e^{B}\|\eta-\nu\|_{B^{2}(\mu)}^{2},\]
where (37) uses \(|e^{x}-e^{y}|\leq e^{\max(x,y)}|x-y|\,\forall\,x,y\in\mathbb{R}\) and the assumed upper bound \(B\) on \(\Psi_{\mu}(\eta),\Psi_{\mu}(\nu)\). To bound \(I_{2}\) first note that by the definition of the normalisation constants,
\[I_{2} =2\int_{\Theta}\left(e^{\frac{1}{2}\Psi_{\mu}(\nu)}Z_{\eta}^{- \frac{1}{2}}-e^{\frac{1}{2}\Psi_{\mu}(\nu)}Z_{\nu}^{-\frac{1}{2}}\right)^{2} \mathrm{d}\mu\] \[=2Z_{\eta}^{-1}\left(Z_{\eta}^{\frac{1}{2}}-Z_{\nu}^{\frac{1}{2} }\right)^{2}\leq 2\left(Z_{\eta}^{\frac{1}{2}}-Z_{\nu}^{\frac{1}{2}}\right)^{2}.\]
Now, \(|x^{1/2}-y^{1/2}|\leq\frac{1}{2}\min(x,y)^{-1/2}|x-y|\,\forall\,x,y>0\) gives
\[I_{2}\leq\frac{1}{2\min(Z_{\eta},Z_{\nu})}|Z_{\eta}-Z_{\nu}|^{2}\leq\frac{1}{2}|Z_{\eta}-Z_{\nu}|^{2}.\]
Finally,
\[|Z_{\eta}-Z_{\nu}|^{2}\leq\int_{\Theta}\left|e^{\Psi_{\mu}(\eta)}-e^{\Psi_{ \mu}(\nu)}\right|^{2}\mathrm{d}\mu\leq e^{2B}\|\Psi_{\mu}(\eta)-\Psi_{\mu}( \nu)\|_{L^{2}(\mu)}^{2}=e^{2B}\|\eta-\nu\|_{B^{2}(\mu)}^{2},\]
where the same bounding technique as (37) was used. Putting this together gives
\[\mathrm{H}(\eta,\nu)^{2}\leq\frac{1}{4}\left(e^{B}+e^{2B}\right)\|\eta-\nu\|_ {B^{2}(\mu)}^{2},\]
as desired.
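A quick numerical sanity check of this bound is given below on a toy example, assuming a uniform discrete base measure so that all integrals become finite averages; the random centred vectors play the role of the CLR transforms and the constant is the one just derived.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 50                                                     # atoms of a uniform base measure mu
psi_eta = rng.normal(size=K); psi_eta -= psi_eta.mean()    # CLR transform of eta
psi_nu = rng.normal(size=K); psi_nu -= psi_nu.mean()       # CLR transform of nu
B = max(psi_eta.max(), psi_nu.max())                       # upper bound on the CLR transforms

Z_eta, Z_nu = np.exp(psi_eta).mean(), np.exp(psi_nu).mean()
dens_eta, dens_nu = np.exp(psi_eta) / Z_eta, np.exp(psi_nu) / Z_nu   # densities wrt mu

hellinger_sq = 0.5 * np.mean((np.sqrt(dens_eta) - np.sqrt(dens_nu)) ** 2)
bayes_norm_sq = np.mean((psi_eta - psi_nu) ** 2)           # ||eta - nu||^2 in B^2(mu)
bound = 0.25 * (np.exp(B) + np.exp(2 * B)) * bayes_norm_sq
assert hellinger_sq <= bound                               # the derived inequality
```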
#### KL Divergence
The case for KL divergence is now covered. First note
\[\frac{\mathrm{d}\eta}{\mathrm{d}\nu}=\frac{\mathrm{d}\eta}{\mathrm{d}\mu}\frac{ \mathrm{d}\mu}{\mathrm{d}\nu}=Z_{\eta}^{-1}Z_{\nu}e^{\Psi_{\mu}(\eta)-\Psi_{\mu }(\nu)},\]
therefore
\[\mathrm{KL}(\eta\parallel\nu)=\int_{\Theta}\log\frac{\mathrm{d} \eta}{\mathrm{d}\nu}\mathrm{d}\eta \leq\left|\log Z_{\eta}-\log Z_{\nu}\right|+\int_{\Theta}\left| \Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\right|Z_{\eta}^{-1}e^{\Psi_{\mu}(\eta)} \mathrm{d}\mu\] \[\eqqcolon I_{1}+I_{2}.\]
Using \(|\log x-\log y|\leq\min(x,y)^{-1}|x-y|\,\forall x,y>0\) gives
\[I_{1} \leq\min(Z_{\eta},Z_{\nu})^{-1}|Z_{\eta}-Z_{\nu}|\] \[\leq e^{B}\|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^{1}(\mu)}\] \[\leq e^{B}\|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^{2}(\mu)}=e^{B} \|\eta-\nu\|_{B^{2}(\mu)},\]
where the local Lipschitz bound \(|e^{x}-e^{y}|\leq e^{\max(x,y)}|x-y|\,\forall\,x,y\in\mathbb{R}\) was used to get the \(L^{1}(\mu)\) norm upper bound, similar to its use in (37). The \(L^{2}(\mu)\) norm upper bounds the \(L^{1}(\mu)\) norm since \(\mu\) is a probability measure. If \(\mu\) was assumed only to be a finite measure then there would be an extra factor of \(\mu(\Theta)\) in the bounds. Bounding \(I_{2}\) is done in a straightforward way
\[I_{2}=\int_{\Theta}\left|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\right| Z_{\eta}^{-1}e^{\Psi_{\mu}(\eta)}\mathrm{d}\mu \leq e^{B}\|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^{1}(\mu)}\] \[\leq e^{B}\|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^{2}(\mu)}=e^{B} \|\eta-\nu\|_{B^{2}(\mu)}.\]
Putting \(I_{1},I_{2}\) together gives the desired bound for the KL divergence.
#### Wasserstein-\(1\) Distance
Finally, the Wasserstein-\(1\) case is dealt with. Take any \(\theta_{0}\in\Theta\) then by simply shifting the values it suffices to consider the supremum over functions in \(\mathrm{Lip}(1)\) such that \(f(\theta_{0})=0\). For such functions \(|f(\theta)|\leq d_{\Theta}(\theta,\theta_{0})\). Following Sprungk (2020, Theorem 14),
\[\left|\int_{\Theta}f\mathrm{d}\eta-\int_{\Theta}f\mathrm{d}\nu\right| =\left|\int_{\Theta}f\cdot\left(Z_{\eta}^{-1}e^{\Psi_{\mu}(\eta)} -Z_{\nu}^{-1}e^{\Psi_{\mu}(\nu)}\right)\mathrm{d}\mu\right|\] \[\leq\left|Z_{\eta}^{-1}-Z_{\nu}^{-1}\right|\left|\int_{\Theta}fe^ {\Psi_{\mu}(\eta)}\mathrm{d}\mu\right|+\left|Z_{\nu}^{-1}\int_{\Theta}f\cdot \left(e^{\Psi_{\mu}(\eta)}-e^{\Psi_{\mu}(\nu)}\right)\mathrm{d}\mu\right|\] \[\eqqcolon I_{1}+I_{2},\]
where the inequality is by the triangle inequality. To bound \(I_{1}\) note
\[\left|Z_{\eta}^{-1}-Z_{\nu}^{-1}\right|=\frac{|Z_{\eta}-Z_{\nu}|}{Z_{\eta}Z_{ \nu}}\leq|Z_{\eta}-Z_{\nu}|\leq e^{B}\|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^ {1}(\mu)},\]
where the first inequality is by the lower bound of \(1\) on the normalising constants and the second by using again the local Lipschitz result, as was done in (37). Using this gives
\[I_{1}\leq\int_{\Theta}\left|fe^{\Psi_{\mu}(\eta)}\right|\mathrm{d}\mu\cdot e^{ B}\|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^{1}(\mu)}\leq e^{2B}\|\mu\|_{\mathcal{P}^{1}} \|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^{1}(\mu)},\]
where the second inequality is by the bounds on \(|f(\theta)|\), \(\Psi_{\mu}(\eta)\) and the way the choice of \(\theta_{0}\) was arbitrary, which allows the infimum bound \(\|\mu\|_{\mathcal{P}^{1}}\).
Bounding \(I_{2}\) is largely similar,
\[I_{2}\leq\int_{\Theta}\!\!\left|d_{\Theta}(\theta,\theta_{0})\right|\left|e^{\Psi_{\mu}(\eta)}-e^{\Psi_{\mu}(\nu)}\right|\mathrm{d}\mu\leq e^{B}\|\mu\|_{\mathcal{P}^{2}}\|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^{2}(\mu)},\]
where the second inequality is by the Cauchy-Schwarz inequality and the local Lipschitz bound again. Overall, using the Cauchy-Schwarz inequality along with the fact that \(\mu\) is a probability measure to bound the \(L^{1}\) norm with the \(L^{2}\) norm gives
\[\mathrm{W}_{1}(\eta,\nu)\leq\left(e^{B}+e^{2B}\right)\|\mu\|_{\mathcal{P}^{2}} \|\Psi_{\mu}(\eta)-\Psi_{\mu}(\nu)\|_{L^{2}(\mu)}=\left(e^{B}+e^{2B}\right)\| \mu\|_{\mathcal{P}^{2}}\|\eta-\nu\|_{B^{2}(\mu)}.\]
### Proof of Corollary 1
The proof is a simple application of Schneider (2016, Theorem 2). First, note that by Theorem 4
\[\|\pi_{M}-\pi\|_{B^{2}(\mu)}=\mathrm{MMD}_{k}(P_{M},P_{N})=\left\|\Phi_{k}P_{M }-\Phi_{k}P_{N}\right\|_{k},\]
where \(k\) is the kernel (18), \(P_{M}=\frac{N}{M}\sum_{m=1}^{M}\delta_{z_{m}}\) and \(P_{N}=\sum_{n=1}^{N}\delta_{x_{n}}\). Therefore, the quantity to be bounded is the same as in Schneider (2016, Theorem 2) up to an additional scaling of \(N\) since the referenced result involves the normalised empirical measures \(\widetilde{P}_{M}=\frac{1}{N}P_{M},\widetilde{P}_{N}=\frac{1}{N}P_{N}\). This means the bound in Schneider (2016, Theorem 2) implies for any \(\varepsilon_{0}\)
\[\mathbb{P}\left(\left\|\Phi_{k}\widetilde{P}_{M}-\Phi_{k}\widetilde{P}_{N} \right\|_{k}\geq\varepsilon_{0}\right)=\mathbb{P}(\left\|\Phi_{k}P_{M}-\Phi_{ k}P_{N}\right\|_{k}\geq N\varepsilon_{0})\leq 2\exp\left(-\frac{M\varepsilon_{0}^ {2}}{8\gamma^{2}(1-(M-1)/N)}\right),\]
and then the final step is simply substituting \(\varepsilon=N\varepsilon_{0}\) and rearranging to obtain the bound purely in terms of \(\delta\).
|
2310.02099 | Re-evaluation of the $^{22}$Ne($p$,$γ$)$^{23}$Na reaction rate:
$R-$matrix analysis of the non-resonant capture and effect of the 8945 keV
(${7/2}^{-}$) resonance strength | The $^{22}$Ne($p,\gamma$)$^{23}$Na capture reaction is a key member of the
Ne-Na cycle of hydrogen burning. The rate of this reaction is critical in
classical novae nucleosynthesis and hot bottom burning processes (HBB) in
asymptotic giant branch (AGB) stars. Despite its astrophysical importance,
significant uncertainty remains in the reaction rate due to several narrow low
energy resonances lying near the Gamow window. The present work revisits this
reaction by examining the contribution of the 8664 keV subthreshold state and
the 151 keV doublet resonance state of 7/2$^-$ configuration in $^{23}$Na.
Finite range distorted-wave Born approximation (FRDWBA) analyses of existing
$^{22}$Ne($^3$He,$d$)$^{23}$Na transfer reaction data were carried out to
extract the peripheral asymptotic normalization coefficients (ANC) of the 8664
keV state. The ANC value obtained in the present work is $\sim 25\%$ higher
compared to the previous work by Santra et al.~\cite{SA20}. Systematic
$R$-matrix calculations were performed to obtain the non-resonant astrophysical
$S$-factor utilizing the enhanced ANC value. The resonance strengths of the
8945 keV doublets were deduced from shell model calculations. The total
reaction rate is found to be $\sim 15\%$ higher at temperatures relevant for
the HBB processes, compared to the recent rate measured by Williams et
al.~\cite{WI20}, and matches the rate by Williams et al.~\cite{WI20} at
temperatures of interest for classical novae nucleosynthesis. | Sk Mustak Ali, Rajkumar Santra, Sathi Sharma, Ashok kumar Mondal | 2023-10-03T14:45:09Z | http://arxiv.org/abs/2310.02099v1 | Re-evaluation of the \({}^{22}\)Ne(\(p\),\(\gamma\))\({}^{23}\)Na reaction rate: \(R-\)matrix analysis of the non-resonant capture and effect of the 8945 keV (\(7/2^{-}\)) resonance strength
###### Abstract
The \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na capture reaction is a key member of the Ne-Na cycle of hydrogen burning. The rate of this reaction is critical in classical novae nucleosynthesis and hot bottom burning processes (HBB) in asymptotic giant branch (AGB) stars. Despite its astrophysical importance, significant uncertainty remains in the reaction rate due to several narrow low energy resonances lying near the Gamow window. The present work revisits this reaction by examining the contribution of the 8664 keV subthreshold state and the 151 keV doublet resonance state of \(7/2^{-}\) configuration in \({}^{23}\)Na. Finite range distorted-wave Born approximation (FRDWBA) analyses of existing \({}^{22}\)Ne(\({}^{3}\)He,\(d\))\({}^{23}\)Na transfer reaction data were carried out to extract the peripheral asymptotic normalization coefficients (ANC) of the 8664 keV state. The ANC value obtained in the present work is \(\sim 25\%\) higher compared to the previous work by Santra et al. [21]. Systematic \(R\)-matrix calculations were performed to obtain the non-resonant astrophysical \(S\)-factor utilizing the enhanced ANC value. The resonance strengths of 8945 keV doublets were deduced from shell model calculations. The total reaction rate is found to be \(\sim 15\%\) higher at temperatures relevant for the HBB processes, compared to the recent rate measured by Williams et al. [18], and matches the rate by Williams et al. [18] at temperatures of interest for classical novae nucleosynthesis.
## I Introduction
The neon-sodium (Ne-Na) cycle is of enormous importance in stellar nucleosynthesis as it is responsible for the hydrogen burning in massive stars, and is involved in the synthesis of elements between Ne and Mg [1; 2]. Within the Ne-Na cycle, the proton capture reaction \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na (\(Q=8794.11\pm 0.02\) keV) is of significant interest. It not only consumes \({}^{22}\)Ne, which is the third most abundant nuclide produced in stellar helium burning, but also produces \({}^{23}\)Na, the only stable isotope of sodium [3; 4]. This reaction influences the weak \(s-\)process nucleosynthesis by competing with the \({}^{22}\)Ne(\(\alpha\),\(n\))\({}^{25}\)Mg reaction, which is a major neutron source in asymptotic giant branch (AGB) stars. The rate of this reaction impacts the stellar models that seek to explain the puzzling anticorrelation in oxygen and sodium abundances observed in globular clusters [5; 6]. It affects the abundance ratios of Ne isotopes in presolar grains extracted from meteorites [7]. Further, sensitivity studies have shown that the nuclear uncertainties of the \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na reaction can have drastic impact on the \({}^{22}\)Ne and \({}^{23}\)Na abundances in classical novae nucleosynthesis [8].
The reaction rate of \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na at the astrophysical energies depends on the contribution of several low energy resonances in \({}^{23}\)Na and a slowly varying non-resonant capture component. The uncertainty in the rate spanned a factor of 1000 between the rates from NACRE [9] and others [10; 11; 12]. The dominant source of this uncertainty is due to various unobserved or poorly constrained narrow resonances at the relevant astrophysical energies. In recent years, several measurements were carried out to address the large discrepancy in the \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na rate by precisely measuring the pertinent resonance strengths at proton energies \(E_{p}\sim 70-500\) keV [13; 14; 15; 16; 17; 18]. As a result, the uncertainty in the \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na rate was reduced by 3 orders of magnitude at \(T=0.1\) GK [17].
Despite this major improvement, contentions remain on the resonance strength measurements and existence of some of the low energy resonances lying inside and near the Gamow window. The 8945 keV state in \({}^{23}\)Na affects the \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na reaction rate as it lies within the Gamow window at \(T=0.1\) GK. Previous measurements of the 8945 keV resonance considered it as a single state with \(J^{\pi}=7/2^{-}\)[10; 19]. However, the measurement by Jenkins _et al._[20] with Gammasphere reported that this resonance actually comprises a doublet, one with \(J^{\pi}=7/2^{-}\) and the other with a tentative \(J^{\pi}=3/2^{+}\). Several direct measurements have obtained the resonance strength for the \(3/2^{+}\) state, and the value ranges from \(1.48\times 10^{-7}\) eV to \(2.2\times 10^{-7}\) eV [14; 15; 16; 17; 18]. But no such direct measurements exist for the \(7/2^{-}\) state and only an upper limit of \(9.7\times 10^{-8}\) eV has been recommended for its strength from (\({}^{3}\)He,\(d\)) transfer reaction [10]. This
strength was calculated from the spectroscopic factor for the 8945 keV state, assuming \(l=3\) transfer, consistent with \(J^{\pi}=7/2^{-}\). However, this assumption is questionable due to the limited number of data points in the angular distribution of the 8945 keV state.
Recently, Santra et al. [21] reanalyzed the data of Hale et al. [10] considering the contribution of both the \(3/2^{+}\) (\(l=2\) transfer) and \(7/2^{-}\) (\(l=3\) transfer) states, extracting the spectroscopic factors for both the states. They observed a better reproduction of the limited angular distribution data. In their indirect study of the \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na reaction, they carried out a systematic \(R-\)matrix analysis of the direct capture (DC) component, including the contribution of the 8664 keV subthreshold state in \({}^{23}\)Na. The low energy behaviour of the \(S-\)factor of the \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na reaction is controlled by this subthreshold resonance [17]. In Ref. [21], the \(R-\)matrix calculations were constrained by the asymptotic normalization coefficients (ANC) extracted from the \({}^{22}\)Ne(\({}^{3}\)He,\(d\))\({}^{23}\)Na transfer data at 15 MeV [22] for the first six bound states and 20 MeV [10] for the 8664 keV state. However, the ANC value of 144 fm\({}^{-1/2}\) obtained for the 8664 keV state in Ref. [21] did not satisfy the necessary peripherality conditions. The resulting \(S-\)factor using this ANC value could not reproduce the DC \(\to\) 8664 keV capture data, particularly for \(E_{p}<500\) keV. A better fit to the data was obtained by a simultaneous \(R\)-matrix fit to the direct capture data of Rolfs _et al._[2], Gorres _et al._[19] and Ferraro _et al._[17], keeping the ANC and the \(\Gamma_{\gamma}\) values of the background poles as free parameters. As a result, they could reproduce the rising effect in the low energy astrophysical \(S-\)factor of the ground state capture data as observed by Ferraro _et al._[17]. They reported a value of \(48.8\pm 9.5\) keV b for the total direct capture \(S-\)factor at zero relative energy, and the resultant reaction rate was distinctly higher compared to the previously obtained rates [10; 15; 17] for \(T\leq 0.1\) GK.
In the present work, we attempt to re-examine the \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na reaction by focusing on the extraction of the peripheral ANC of the 8664 keV state. The angular distribution data of the \({}^{22}\)Ne(\({}^{3}\)He,\(d\))\({}^{23}\)Na one-proton stripping reaction at energies of 12 and 15 MeV have been used to obtain the corresponding ANC, and its peripheral nature is checked. Also, the contributions of the excited states at 7080, 7449 and 7890 keV, which were not considered in the previous analysis [21], have been included in the present work. As discussed earlier, the spectroscopic factor for the \(7/2^{-}\) configuration of the 8945 keV state still remains elusive to direct measurements. Hence, detailed microscopic shell model calculations have been performed to yield the required proton width (\(\Gamma_{p}\)) and the corresponding resonance strength for this state. The resultant reaction rate as a function of temperature is compared with the recent measurement by Williams et al. [18].
## II Analysis
### Finite-range DWBA analysis and ANC extraction
The finite-range distorted wave Born approximation (FRDWBA) calculations were performed for the 8664 keV (\(1/2^{+}\)) subthreshold state in \({}^{23}\)Na using the existing angular distribution data of the \({}^{22}\)Ne(\({}^{3}\)He,\(d\))\({}^{23}\)Na reaction at a bombarding energy of 15 MeV [22]. The FRDWBA calculations required the optical model potential (OMP) parameters for the entrance channel \({}^{22}\)Ne+\({}^{3}\)He, the exit channel \(d\)+\({}^{23}\)Na, and the core-core \({}^{22}\)Ne+\(d\) interactions.
The real binding potentials for the (\(d\)+\(p\)) and \({}^{22}\)Ne+\(p\) systems were also included, with their depths adjusted to reproduce the effective proton separation energy. The potentials were of the standard Woods-Saxon shape. The potential parameters are listed in Table 1. The code FRESCO [24] was used to carry out the calculations. The resultant DWBA calculations along with the data are shown in Fig. 1. The proton spectroscopic factors \(S\) were extracted by normalizing the calculated DWBA cross sections to the experimental data,
\[\left(\frac{d\sigma}{d\Omega}\right)_{\rm Exp}=S\!\left(\frac{d\sigma}{d \Omega}\right)_{\rm DWBA} \tag{1}\]
The spectroscopic factor for \({}^{3}\)He in (\(d+p\)) configuration is taken as 1.16 [26]. The proton spectroscopic factors for the 8664 keV state from the present calculations are relatively higher than those obtained in Ref. [22; 23] (Table 2). Note that in these previous works, the zero-range DWBA calculations used a normalizing factor of 4.42 to explain the experimental data [22; 23]. Inclusion of the complex remnant term in the present FRDWBA
Figure 1: Angular distributions of the 8664 keV state from the \({}^{22}\)Ne(\({}^{3}\)He,\(d\))\({}^{23}\)Na reaction at 15 MeV [22]. The FRDWBA calculations are shown by the solid and dotted lines.
calculations results in a better overall fit to the data, as also seen by Ref. [21]. The dotted lines in Fig. 1 represent the FRDWBA calculation sans the remnant term.
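The normalisation in Eq. (1) amounts to a one-parameter least-squares scale factor between the measured and calculated angular distributions. The short sketch below illustrates this; the arrays stand in for digitised data and DWBA output and contain illustrative values only, not the actual data set.

```python
import numpy as np

def spectroscopic_factor(dsigma_exp, dsigma_dwba, err_exp):
    """Least-squares scale factor S of Eq. (1) between the experimental and
    DWBA angular distributions, weighted by the experimental uncertainties."""
    w = 1.0 / err_exp**2
    S = np.sum(w * dsigma_exp * dsigma_dwba) / np.sum(w * dsigma_dwba**2)
    dS = np.sqrt(1.0 / np.sum(w * dsigma_dwba**2))
    return S, dS

# illustrative numbers only (mb/sr); the real analysis uses the data of Fig. 1
exp = np.array([1.20, 0.95, 0.60, 0.42])
dwba = np.array([2.35, 1.90, 1.25, 0.80])
err = 0.1 * exp
S, dS = spectroscopic_factor(exp, dwba, err)
```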
As discussed in Ref. [21], the spectroscopic factors are dependent on the choice of potential parameters and are sensitive to the geometric parameters of the bound state potentials. Thus, instead of spectroscopic factor, the ANC is a more appropriate quantity. The ANC method is free from the geometrical parameters of the binding potentials and relies primarily on the peripheral nature of the reaction. The square of the ANC (\(C^{2}\)) of a particular state is related to the spectroscopic factor (\(S\)) via the single particle ANC (\(b\)) as
\[C^{2}=Sb^{2} \tag{2}\]
The single particle ANC (\(b\)) is the normalization of the bound state wave function of the composite nucleus \({}^{23}\)Na at large radii with respect to the Whittaker function.
To test the peripheral condition, the variation of the spectroscopic factor (\(S\)) against the single particle ANC (\(b\)) was studied for the 15 MeV data as shown in Fig. 2 (a). The single particle ANC (\(b\)) is varied by changing the geometrical parameters of the \({}^{22}\)Ne + \(p\) binding potential in small steps. According to Eq. 2, the variation of \(S\) should be proportional to the inverse square of \(b\). From Fig. 2(a), the \(S\) obtained from the 15 MeV data follows the inverse square relation. Hence, the ANC extracted from the 15 MeV data is peripheral and this ANC is considered for all further calculations. In Fig. 2 (b), the extracted ANC is plotted as a function of \(b\). The mean ANC obtained is 179.5 fm\({}^{-1/2}\) and is shown with the dotted line. In Fig. 3, the dependence of the ANC as a function of binding energy for the 8664 keV state is shown. The binding energy for the 8664 keV is \(130\pm 3\) keV [21]. It is varied by keeping the geometry parameters of the bound state potential fixed (\(r_{0}=1.26\) fm, \(a_{0}=0.60\) fm) corresponding to the mean ANC value. The plots show that the ANC value of the 8664 keV subthreshold state decreases with increasing binding energy. The ANC value for the 8664 keV state from the present work is \(\sim 25\%\) higher compared to that in Ref. [21].
The uncertainty of the mean ANC value has contributions from two sources. The first arises from the propagation of the error of the spectroscopic factors (\(S\)) through the relation given in Eq. 2. The errors in the values of \(S\) include the uncertainty of the experimental angular distribution data. Secondly, the contribution of the uncertainty in binding energy is added in quadrature to obtain the total uncertainty in the mean ANC value. The dotted lines in Fig. 3 show the variation in ANC due to the \(\pm 3\) keV uncertainty in the binding energy of the 8664 keV state.
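As a small illustration of Eq. (2) and the peripherality criterion, the snippet below combines the representative values quoted for the 8664 keV state in Table 2 (\(S=0.50\), \(b=249\) fm\({}^{-1/2}\)); the grid of \(b\) values is purely illustrative.

```python
import numpy as np

S0, b0 = 0.50, 249.0                   # Table 2 values for the 8664 keV state
C = np.sqrt(S0) * b0                   # Eq. (2): C^2 = S * b^2
print(f"C = {C:.1f} fm^-1/2")          # ~176, close to the quoted mean 179.5

# If the transfer is peripheral, refitting S for different bound-state
# geometries (different single-particle ANCs b) should follow S ~ C^2 / b^2,
# leaving the extracted C essentially unchanged.
b_grid = np.linspace(200.0, 300.0, 5)  # illustrative single-particle ANCs
S_expected = C**2 / b_grid**2
```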
Similar analyses were carried out to obtain the ANC values for the 7080, 7449 and 7890 keV states using the 15 MeV data which were not considered in the earlier work by Santra _et al._[21]. The extracted spectroscopic
Figure 2: (a) Variation of spectroscopic factor (\(S\)) with single particle ANC (\(b\)) for the 8664 keV state. (b) Variation of ANC (\(C\)) with single particle ANC (\(b\)) for the 8664 keV state.
Figure 3: Variation of ANC (\(C\)) with binding energy for the 8664 keV state.
factors and ANCs from the present work along with the values available in the literature are listed in Table 2.
### _R_-matrix calculations
In the present work, a phenomenological \(R-\)matrix analysis was performed using the code AZURE2 [27]. The basic \(R\)-matrix theory used in the code AZURE2 is described in Refs. [27; 28]. In the \(R\)-matrix calculations, the channel radius (\(r_{c}\)) divides the radial space into internal and external parts. For the present calculations, \(r_{c}=5.3\) fm is obtained for the \({}^{22}\)Ne + \(p\) system by a \(\chi^{2}\) minimization procedure employing a grid search technique, keeping the ANCs fixed. The search was performed on the total non-resonant \(S-\)factor data to choose \(r_{c}\). The \(R\)-matrix fitting has been performed on the available capture data of the individual states and the total non-resonant capture data simultaneously. The ANCs for the ground and the first five bound excited states were taken from Santra et al. [21]. For the three bound excited states at 7080, 7449, 7890 keV and the 8664 keV subthreshold state, the ANC values are taken from this work. Two background poles with spin parity \(1/2^{-}\) and \(3/2^{-}\) are included, and only the \(E1\) decay was considered to simulate the internal capture part of the present \(R\)-matrix calculations. The excitation energies of the poles are chosen at 15 MeV and the proton partial widths are fixed at 5 MeV, calculated from Wigner limit approximations. However, the gamma decay partial widths of the background poles are left as free parameters, with the initial values taken from the Weisskopf limit for the corresponding gamma transitions. The fitted background pole parameters are shown in Table 3.
The present \(R\)-matrix modelling has two parts: first, the DC\(\rightarrow\)8664 keV calculations are carried out and the resultant \(S-\)factor is compared with the existing data. The next part consists of the calculations for the DC\(\rightarrow\)GS
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline \(J^{\pi}\) & E\({}_{x}\) (MeV) & \(\Gamma_{p}\) (MeV) & \(R\rightarrow\) g.s & \(R\to 0.44\) & \(R\to 2.39\) & \(R\to 2.98\) & \(R\to 6.30\) & \(R\to 6.91\) & \(R\to 8.66\) \\ \hline \(1/2^{-}\) & 15 & 5.0 & 822.91 & & \(3.17\times 10^{3}\) & 697.67 & \(1.58\times 10^{3}\) & & 139.528 \\ \(3/2^{-}\) & 15 & 5.0 & 20.92 & 974.42 & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Background pole parameters obtained from \(R-\)matrix fits.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Channel & \(V\) & \(r_{V}\) & \(a_{V}\) & \(W_{V}\) & \(r_{W}\) & \(a_{W}\) & \(W_{S}\) & \(r_{S}\) & \(a_{S}\) & \(V_{SO}\) & \(r_{SO}\) & \(a_{SO}\) & \(r_{C}\) & Ref. \\ \hline \({}^{22}\)Ne + \({}^{3}\)He & 177.0 & 1.14 & 0.72 & 13.0 & 1.60 & 0.77 & \(-\) & \(-\) & \(-\) & 8.0 & 1.14 & 0.72 & 1.40 & 0.23 \\ \(d\) + \({}^{23}\)Na & 105.0 & 1.02 & 0.86 & \(-\) & \(-\) & \(-\) & 80.0 & 1.42 & 0.65 & 6.0 & 1.02 & 0.86 & 1.30 & 0.23 \\ \(d\) + \({}^{22}\)Ne & 88.0 & 1.17 & 0.73 & 0.24 & 1.33 & 0.73 & 35.8 & 1.33 & 0.73 & 13.85 & 1.07 & 0.66 & 1.33 & 0.24 \\ \(d\) + \(p\) & \({}^{a}\) & 1.25 & 0.65 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & 6.2 & 1.25 & 0.65 & 1.30 & 0.24 \\ \(p\) + \({}^{22}\)Ne & \({}^{a}\) & 1.26 & 0.60 & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & 6.2 & 1.26 & 0.60 & 1.33 & 0.24 \\ \hline \hline \multicolumn{10}{l}{\({}^{a}\)varied to match separation energy} \\ \end{tabular}
\end{table}
Table 1: Potential parameters used in the present work. \(V\) and \(W\) are the real and imaginary depths in MeV, \(r\) and \(a\) are the radius and diffuseness in fm. \(R_{x}=r_{x}A^{1/3}\) fm (\(x=V,W,S,SO,C\)).
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline E\({}_{x}\) & \(J^{\pi}\) & \(nl_{j}\) & \(S\) & \(S\) & \(b\) (fm\({}^{-1/2}\)) & \(C\) (fm\({}^{-1/2}\)) & \(C\) (fm\({}^{-1/2}\)) \\ (keV) & & & (Present) & (Literature) & (Present) & (Present) & (Present) & (Literature) \\ \hline
8664 & \(1/2^{+}\) & \(2s_{1/2}\) & \(0.50\pm 0.05\) & \(0.32\pm 0.05\)[21] & 249 & \(179.5\pm 19.7\) & \(143.7\pm 15.2\)[21] \\ & & & & \(0.58\pm 0.08\)[25] & & & & \\ & & & & \(0.42\pm 0.08\)[17] & & & & \\ & & & & \(0.30\)[19] & & & & \\ & & & & & \(0.29\)[10] & & & \\ & & & & & \(0.31\)[23] & & & \\ & & & & \(0.27\)[22] & & & & \\ \hline
7080 & \(1/2^{-}\) & \(2p_{1/2}\) & \(0.08\pm 0.01\) & \(0.3\)[23] & \(8.95\) & \(2.53\pm 0.16\) & \\ & & & & \(0.085\)[22] & & & & \\
7449 & \(3/2^{+}\) & \(1d_{3/2}\) & \(0.06\pm 0.01\) & \(0.28\)[23] & \(3.93\) & \(0.96\pm 0.06\) & \\ & & & & \(0.14\)[22] & & & & \\
7890 & \(3/2^{+}\) & \(1d_{3/2}\) & \(0.05\pm 0.01\) & \(0.15\)[23] & \(3.75\) & \(0.84\pm 0.08\) & \\ & & & & \(0.11\)[22] & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Spectroscopic factors (\(S\)) and ANC (\(C\)) of \(E_{x}=7080,7449,7890\) and 8664 keV state of \({}^{23}\)Na from the present work.
transition and the total direct capture contribution. In order to see the effect of the enhanced ANC of the 8664 keV state, first the \(R-\)matrix calculations for DC\(\rightarrow\)8664 keV transition is carried out. The resultant \(S-\)factor and the differential \(S-\)factor at \(\theta=90^{\circ}\) obtained using the mean peripheral ANC of 179.5 fm\({}^{-1/2}\) are shown with the red dotted lines in Fig. 4(a) and (b), respectively. The band corresponds to the error in \(S-\)factor due to the \(\sim 11\%\) uncertainty in the ANC value. The \(S-\)factor obtained by including the peripheral ANC from this work is able to explain the DC\(\rightarrow\)8664 keV capture data by Gorres _et al._[19] (open black triangles) within the uncertainty band. However, the data points by Rolfs _et al._[2] (filled black squares) and Kelly _et al._[15] (filled blue circle) lie above the maximum limit of the \(S-\)factor band.
The 8664 keV state has a lifetime of \(0.14\pm 0.03\) fs and it decays to the ground state with a branching of \((84\pm 3)\%\) (\(\Gamma_{\gamma}=4.7\pm 1\) eV) [19]. The DC\(\rightarrow\)GS transition is influenced by the high energy tail of this \(s-\)wave subthreshold resonance (\(E_{p}=-130\) keV) as evident from the rise in the low energy \(S-\)factor data of Gorres _et al._[19] and Ferraro _et al._[17] (red squares) in Fig. 5(a). The corresponding \(S-\)factor for the DC\(\rightarrow\)GS transition from the \(R-\)matrix calculations are shown with the red dotted lines in Fig. 5(a). The calculations are in very good agreement with the low energy data (E\({}_{\rm cm}<400\) keV) of Gorres _et al._[19]. The data of Ferraro _et al._[17] and Kelly _et al._[15] lie inside the uncertainty band. At higher energies, the calculated \(S-\)factor passes through the data points of Gorres _et al._[19] but underestimates the data of Rolfs _et al._[2]. The rising nature of the \(S-\)factor at low energies is very well reproduced by the calculations.
The total \(S-\)factor for the non-resonant capture in the \({}^{22}\)Ne(\(p\),\(\gamma\))\({}^{23}\)Na reaction, obtained by adding the \(S\)-factors of all the individual transitions to the ground and bound excited states, is shown with the red dotted line in Fig. 5(b). The calculations were carried out by
Figure 4: (a) Astrophysical \(S-\)factor and (b) differential \(S-\)factor at \(\theta=90^{\circ}\), obtained from the \(R-\)matrix calculations for the DC \(\rightarrow\) 8664 keV transition. The red dotted line corresponds to the \(S-\)factor using the mean ANC of 179.5 fm\({}^{-1/2}\) for the 8664 keV state. The bands correspond to error in \(S-\)factor due to uncertainty in the ANC value.
Figure 5: Astrophysical \(S-\)factor of the non-resonant capture in \({}^{22}\)Ne(\(p\),\(\gamma\))\({}^{23}\)Na from previous direct measurements [2; 15; 17; 18; 19] and present \(R-\)matrix calculations. (a) Capture to the ground state in \({}^{23}\)Na, (b) total \(S-\)factor. (see text for details)
including the ANCs for the three bound excited states 7080, 7449 and 7890 keV and the 8664 keV subthreshold state obtained from this work (Table 2). The ANCs for the ground and the first five bound excited states were taken from Ref. [21]. The bands correspond to error in \(S-\)factor due to uncertainty in the ANC values and uncertainty in the decay width of the 8664 keV state. The total \(S-\)factor from the calculations is in excellent agreement with the recent direct measurement data of Williams _et al._[18] as well as with the lower energy data of Ferraro _et al._[17].
### Shell model calculations and partial widths for the 8945 keV resonance
The extraction of spectroscopic factors for the 8945 keV doublet by Hale _et al._[10] and Santra _et al._[21] is not completely reliable due to the scarce angular distribution data. In this work, the proton spectroscopic factor for the \(7/2^{-}_{2}\) state of \({}^{23}\)Na at 8945 keV has been calculated using the NUSHELLX code [30]. Large basis shell model (LBSM) calculations were performed. The positive parity states were easily reproduced using the _sd_ model space. For the negative parity states, however, the upper _pf_ shell was included along with the _sd_ shell; thus, the _sdpf_ model space was used to obtain the negative parity states. The full model space calculation is constrained by the present computational capacity. Hence, a suitable truncation scheme was adopted. Subshell restrictions were chosen with zero occupancy in the \(1f_{5/2}\) and \(2p_{1/2}\) subshells. The _sdpfmu_ interaction [31] with the mentioned truncation scheme reproduces the experimentally observed \(7/2^{-}_{2}\) state of \({}^{23}\)Na at 9173 keV. The calculated energy level is 228 keV above the experimentally adopted energy level. Similarly, the \(3/2^{+}_{8}\) doublet state has been reproduced theoretically at 9023 keV energy.
The single proton spectroscopic factor is calculated for the astrophysically important \(7/2^{-}_{2}\) and \(3/2^{+}_{8}\) doublet states of \({}^{23}\)Na at 8945 and 8944 keV. The proton spectroscopic factor for the \(3/2^{+}_{8}\) state from the shell model calculations is \(1\times 10^{-4}\) which is consistent with the order of the experimental values. For the \(7/2^{-}_{2}\) state, the value obtained from the calculation is 0.0104 which is substantially higher compared to the earlier studies [10; 21]. To validate the theoretical calculations, the spectroscopic factors for the low-lying states of \({}^{23}\)Na were also calculated and compared with the corresponding experimental values as shown in Table 4. The theoretical calculations are in good agreement with the experimentally determined values. This consistency provides strong confidence to our theoretically obtained spectroscopic factor for the \(7/2^{-}_{2}\) state.
The proton partial widths (\(\Gamma_{p}\)) for the 8945 keV doublet are obtained using the relation
\[\Gamma_{p}=S\Gamma_{sp} \tag{3}\]
where \(\Gamma_{sp}\) is the single particle width of a resonance for a pure single particle configuration, calculated with the code DWUCK4 [32], and \(S\) is the spectroscopic factor from the shell model calculations. For comparison, the proton partial widths using the Wigner limit are also calculated. The values of \(\Gamma_{sp}\) for the \(3/2^{+}\) and \(7/2^{-}\) states are \(1.8\times 10^{-4}\) eV and \(2.49\times 10^{-6}\) eV respectively. The proton partial width (\(\Gamma_{p}\)) can be expressed as the product of an energy-dependent penetration factor, \(P_{l}(E)\), and an energy independent reduced width, \(\gamma^{2}_{p}\), as
\[\Gamma_{p}=2P_{l}(E)\gamma^{2}_{p} \tag{4}\]
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline E\({}_{x}\) & E\({}_{\rm cm}\) & J\({}^{\rm g}\) & \(\omega\gamma^{\rm UL}\) & \(\omega\gamma^{\rm SM}\) & \(\omega\gamma^{\rm exp.}\) \\ (keV) & (keV) & & (eV) & (eV) & (eV) \\ \hline
8945 & 151 & \(3/2^{+}\) & 1.17\(\times 10^{-7}\) & 3.6\(\times 10^{-8}\) & (\(1.9\pm 0.1\))\(\times 10^{-7}\)[18] \\ & & & & & (\(1.48\pm 0.1\))\(\times 10^{-7}\)[14] \\ & & & & & (\(1.8\pm 0.2\))\(\times 10^{-7}\)[16] \\ & & & & & (\(2.2\pm 0.2\))\(\times 10^{-7}\)[17] \\ & & & & & 2.03(40)\(\times 10^{-7}\)[15] \\ & & & & & (\(2.0\pm 0.5\))\(\times 10^{-7}\)[21] \\ & & & & & 1.7\({}^{+0.50}_{-0.40}\)\(\times 10^{-7}\)[7] \\
8944 & 150 & \(7/2^{-}\) & 1.89\(\times 10^{-10}\) & 9.97\(\times 10^{-8}\) & \(\leq\)9.2\(\times 10^{-9}\)[10] \\ & & & & & \(\leq\)9.7\(\times 10^{-8}\)[15] \\ & & & & & (\(3.93\pm 0.9\))\(\times 10^{-9}\)[21] \\ \hline \end{tabular}
\end{table}
Table 5: Resonance strengths of E\({}_{x}\) = 8945 keV doublets (\(3/2^{+}\), \(7/2^{-}\)) from the Wigner limit and present shell model calculations, compared with previous experimental measurements.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline E\({}_{x}\) & J\({}^{\rm g}\) & \(nl_{j}\) & \(S^{\rm SM}\) & \(S\) \\ (keV) & & & (This work) & (Literature) \\ \hline g\(\cdot\)s & \(3/2^{+}\) & \(1d_{3/2}\) & 0.055 & 0.08 [22] \\ & & & & \(0.082\pm 0.012\)[21] \\
440 & \(5/2^{+}\) & \(1d_{5/2}\) & 0.41 & 0.35 [22] \\ & & & & & \(0.38\pm 0.08\)[21] \\
2392 & \(1/2^{+}\) & \(2s_{1/2}\) & 0.20 & 0.25 [22] \\ & & & & \(0.26\pm 0.05\)[21] \\
2982 & \(3/2^{+}\) & \(1d_{3/2}\) & 0.23 & 0.32 [22] \\ & & & & \(0.35\pm 0.04\)[21] \\
6308 & \(1/2^{+}\) & \(2s_{1/2}\) & 0.10 & 0.13 [22] \\ & & & & \(0.14\pm 0.02\)[21] \\
8945 & \(3/2^{+}\) & \(1d_{3/2}\) & 1\(\times 10^{-4}\) & \((5.54\pm 1.41)\times 10^{-4}\)[21] \\ & & & & \(8.32\times 10^{-4}\)[10] \\
8944 & \(7/2^{-}\) & \(1f_{7/2}\) & 0.0104 & \((3.94\pm 0.9)\times 10^{-4}\)[21] \\ & & & & \(\leq\)1.08\(\times 10^{-3}\)[10] \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of spectroscopic factors (\(S\)) for the low-lying bound states and the 8945 keV doublet states of \({}^{23}\)Na from the present shell model calculations (\(S^{\rm SM}\)) and literature.
where,
\[\gamma_{p}^{2}=\frac{\hbar^{2}}{\mu a^{2}}\theta_{p}^{2} \tag{5}\]
The constant \(\hbar^{2}/(\mu a^{2})\) is the Wigner limit, where \(a\) is the channel radius, \(\mu\) is the reduced mass and \(\theta_{p}^{2}\) is proportional to the proton spectroscopic factor. The partial gamma decay widths (\(\Gamma_{\gamma}\)) for \(M1\) and \(E1\) transitions obtained from the Weisskopf estimates [29] are 0.37 eV and 133.62 eV, respectively. The \(7/2^{-}\) state of the 8945 keV doublet undergoes decay to \(9/2^{-}\) and \(9/2^{+}\) states emitting \(\gamma\)-rays with energies 2592 keV(\(M1\)) and 6240 keV(\(E1\)), respectively [20].
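To make Eqs. (3)-(5) concrete, the following snippet evaluates the Wigner-limit constant \(\hbar^{2}/(\mu a^{2})\) for the \({}^{22}\)Ne + \(p\) channel and the shell-model proton widths \(\Gamma_{p}=S\,\Gamma_{sp}\) of the two doublet members, using the numbers quoted above; the channel radius of 5.3 fm is the one used in the \(R\)-matrix fits, and the constants \(\hbar c\) and the atomic mass unit are standard values. The mass-number estimate of the reduced mass is an approximation made here for illustration.

```python
import numpy as np

hbar_c = 197.327          # MeV fm
amu = 931.494             # MeV / c^2
mu = 22.0 / 23.0          # reduced mass of 22Ne + p in amu (mass-number estimate)
a = 5.3                   # channel radius in fm, as in the R-matrix analysis

wigner_limit = hbar_c**2 / (amu * mu * a**2)   # hbar^2 / (mu a^2), Eq. (5)
print(f"Wigner limit = {wigner_limit:.2f} MeV")

# Eq. (3): Gamma_p = S * Gamma_sp, with S from the shell model and Gamma_sp
# from DWUCK4 as quoted in the text (values in eV)
gamma_p_3plus = 1e-4 * 1.8e-4      # 3/2+ member  -> ~1.8e-8 eV
gamma_p_7minus = 0.0104 * 2.49e-6  # 7/2- member  -> ~2.6e-8 eV
```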
## III Thermonuclear reaction rate of \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na
The thermonuclear reaction rate of \({}^{22}\)Ne(\(p\),\(\gamma\))\({}^{23}\)Na reaction is governed by various low energy narrow resonances and the non-resonant component. As there are no interfering resonances to consider for the \({}^{22}\)Ne(\(p\),\(\gamma\))\({}^{23}\)Na reaction, the total resonant rate \(N_{A}<\sigma v>_{R}\) is given by the sum of all narrow resonances [14],
\[N_{A}<\sigma v>_{R}=\frac{1.5399\times 10^{5}}{(\mu T_{9})^{3/2}}\sum_{i}(\omega\gamma)_{i}e^{\frac{-11.608\,\mathrm{E_{cm,i}}}{T_{9}}} \tag{6}\]
where \(T_{9}\) is the temperature in GK, \(\mu\) is the reduced mass in amu, \((\omega\gamma)_{i}\) the strength of resonance \(i\) in eV, and \(\mathrm{E_{cm,i}}\) is the center-of-mass energy of resonance \(i\) in MeV.
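A direct transcription of Eq. (6) is straightforward; the sketch below uses the mass-number estimate of the reduced mass and, purely as an illustration, sums only the two 150-151 keV doublet members with the strengths quoted in Table 5 (the full evaluation of course includes every resonance listed in the text).

```python
import numpy as np

def narrow_resonance_rate(T9, resonances, mu=22.0 / 23.0):
    """N_A <sigma v>_R of Eq. (6) in cm^3 mol^-1 s^-1.

    resonances : iterable of (E_cm in MeV, omega_gamma in eV) pairs.
    mu         : reduced mass in amu (mass-number estimate for 22Ne + p).
    """
    total = sum(wg * np.exp(-11.608 * E / T9) for E, wg in resonances)
    return 1.5399e5 / (mu * T9) ** 1.5 * total

# doublet members at 151 keV (3/2+) and 150 keV (7/2-), strengths from Table 5
doublet = [(0.151, 1.9e-7), (0.150, 9.97e-8)]
rate_at_0p1GK = narrow_resonance_rate(0.1, doublet)
```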
The resonance strengths for the 8945 keV doublet are derived using the relation,
\[\omega\gamma=\frac{2J+1}{(2j_{1}+1)(2j_{2}+1)}(1+\delta_{12})\frac{\Gamma_{p} \Gamma_{\gamma}}{\Gamma} \tag{7}\]
where \(j_{1}\) and \(j_{2}\) are the spins of the interacting particles i.e., \({}^{22}\)Ne and \(p\), and \(J\) is the spin of the excited state populated in the compound nucleus i.e., \({}^{23}\)Na. \(\Gamma\) is the total width i.e., \(\Gamma_{p}+\Gamma_{\gamma}\). The values of the resonance strengths for the \(3/2^{+}\) and \(7/2^{-}\) state obtained from the present shell model calculations (\(\omega\gamma^{\mathrm{SM}}\)) and Wigner limit (\(\omega\gamma^{\mathrm{UL}}\)) calculations are compared with the corresponding experimental values (\(\omega\gamma^{\mathrm{Exp.}}\)) in Table 5. Unlike the \(7/2^{-}\) state, the strength of the \(3/2^{+}\) state has been very well constrained by various measurements (Table 5). In this work, we use the strength value adopted by Williams et al. [18] for the \(3/2^{+}\) state and the strength of the \(7/2^{-}\) state is determined from the present shell model calculations. Recently, a new experimental study of the \({}^{23}\)Na+\(p\) inelastic-scattering reaction at the Q3D magnetic spectrometer at Munich has ruled out the earlier reported resonances at E\({}_{x}\) = 8862, 8894 and 9000 keV [33; 34]. Hence, these resonances are omitted from the present work. For resonances at E\({}_{\mathrm{cm}}\) = 35, 178, 417, 458, 610, 632 and 1222 keV, the strength values are also taken from Ref. [18]. The strengths of resonances located between 632 and 1222 keV, and beyond 1222 keV, are adopted from Ref. [12]. The strength values were further divided by the corresponding electron screening enhancement factor, taken from Ref. [18].
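For completeness, evaluating Eq. (7) for the \(7/2^{-}\) member with the shell-model proton width \(\Gamma_{p}=S\,\Gamma_{sp}\) from above reproduces the order of the strength listed in Table 5; since the spins of \({}^{22}\)Ne and the proton are 0 and 1/2, the statistical factor is 4, and \(\Gamma_{\gamma}\gg\Gamma_{p}\) so \(\omega\gamma\approx 4\,\Gamma_{p}\). The small Weisskopf \(M1\) width used here is the value quoted in the text and serves only to illustrate that the result is insensitive to it.

```python
def resonance_strength(J, j1, j2, gamma_p, gamma_gamma, identical=False):
    """omega*gamma of Eq. (7), in the same units as the partial widths."""
    spin_factor = (2 * J + 1) / ((2 * j1 + 1) * (2 * j2 + 1))
    symmetry = 2.0 if identical else 1.0
    return spin_factor * symmetry * gamma_p * gamma_gamma / (gamma_p + gamma_gamma)

# 7/2- member: Gamma_p = S * Gamma_sp = 0.0104 * 2.49e-6 eV; with the
# Weisskopf M1 estimate of 0.37 eV for Gamma_gamma, omega*gamma ~ 1.0e-7 eV,
# consistent with the shell-model value in Table 5.
wg_7minus = resonance_strength(J=3.5, j1=0.0, j2=0.5,
                               gamma_p=0.0104 * 2.49e-6, gamma_gamma=0.37)
```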
The contributions of various individual resonances and the non-resonant component (DC+subthres.) normalized to the median STARLIB-2013 rates [12] are shown in the top panel of Fig. 6. At very low temperatures, \(T_{9}\leq 0.05\), the reaction rate is dominated by the 35 keV resonance. The temperature range \(T_{9}=0.08-0.1\) is significant for the process of hot bottom burning (HBB) in asymptotic giant branch (AGB) stars [17]. In the previous studies by Ferraro et al. [17] and Santra et al. [21], the 68 and 100 keV resonances had large contributions at these temperatures. However, in this work, these resonances are removed, and the rate is affected by the doublets at 150 keV, the resonance at 178 keV and the non
Figure 6: Reaction rate for the \({}^{22}\)Ne(\(p,\gamma\))\({}^{23}\)Na reaction as a function of temperature in GK. (a) Comparison of the individual resonant and direct capture contributions relative to the STARLIB2013 rate. (b) The total rate from the present work (red) and from previous measurement (blue) by Williams et al. [18] normalized to the STARLIB2013 rate. The bands correspond to the uncertainties in the rates and the dashed lines represent the mean rates.
resonant capture component. The non-resonant component is obtained from the present \(R\)-matrix calculations of the \(S\)-factor due to the DC\(\rightarrow\)8664 keV transition. The non-resonant rate from the present work is \(\sim 30\%\) higher at \(T_{9}=0.05\)-0.1 compared to the earlier rates of Depalo et al. [14] and Kelly et al. [15], and slightly higher relative to the rates of Ferraro et al. [17] and Santra et al. [21].
The bands in Fig. 6 correspond to the uncertainties in the rates, and the lines represent the mean rates. The uncertainties in the individual resonant rates are primarily due to the uncertainties in the resonance strength values, whereas for the non-resonant component, the major contribution is the uncertainty in the \(S\)-factor. The uncertainty in the \(7/2^{-}\) doublet at 150 keV is shown with the cyan band. The upper and lower limits of the rate for the \(7/2^{-}\) state are due to the strength values from the present shell model calculations and Wigner limit, respectively (Table 5). The mean value (dotted lines in blue) is obtained by taking the average of the upper and lower limits.
The total reaction rate is obtained by adding the resonant and non-resonant capture contributions. In the bottom panel of Fig. 6, the normalized total rate from the present work (red) is compared to the rate by Williams et al. [18] (blue). The rate from Williams et al. [18] lies within the uncertainty limits of the rate from the present work. At low temperatures \(T_{9}\leq 0.05\), and at slightly higher temperatures, \(T_{9}=0.08-0.1\), relevant for the AGB stars, our mean rate (red dashed lines) is \(\sim 15\%\) higher than the mean rate of Williams et al. [18] (blue dashed lines). For high temperatures \(T_{9}=0.2-0.25\), responsible for classical novae nucleosynthesis, our mean rate coincides with the mean rate of Williams et al. [18].
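As a minimal sketch of how the resonant part of such a rate is assembled, the textbook narrow-resonance sum can be coded directly. The constants below are the usual ones for \(\omega\gamma\) and \(E_{r}\) in MeV, \(\mu\) in amu and the rate in cm\({}^{3}\) mol\({}^{-1}\) s\({}^{-1}\); the resonance list is purely illustrative and does not reproduce the values adopted in this work.

```python
import math

def narrow_resonance_rate(T9, resonances, mu=22.0 / 23.0):
    """Sum of narrow-resonance contributions to N_A<sigma v> (cm^3 mol^-1 s^-1).

    resonances: list of (E_r, omega_gamma) pairs, both in MeV.
    mu: reduced mass in amu (22Ne + p by default).
    """
    prefactor = 1.5399e11 / (mu * T9) ** 1.5
    return prefactor * sum(wg * math.exp(-11.605 * er / T9) for er, wg in resonances)

# Illustrative resonance list (E_r, omega*gamma) in MeV -- placeholder numbers only.
example = [(0.035, 1.0e-20), (0.178, 2.0e-12), (0.458, 5.0e-10)]
print(narrow_resonance_rate(T9=0.1, resonances=example))
```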
## IV Conclusion
The \({}^{22}\)Ne(\(p\),\(\gamma\))\({}^{23}\)Na reaction is reanalysed by extracting the ANC of the 8664 keV subthreshold state from a finite-range DWBA analysis of the existing transfer reaction data of \({}^{22}\)Ne(\({}^{3}\)He,\(d\))\({}^{23}\)Na at 12 and 15 MeV [22; 23]. The contributions of the previously neglected bound excited states at 7080, 7449 and 7890 keV are also included in the present work. The ANC value of \(217\pm 38.6\) fm\({}^{-1/2}\) for the 8664 keV state, obtained from the 12 MeV data, satisfies the necessary peripherality checks and is further utilized to carry out \(R\)-matrix calculations. The astrophysical \(S\)-factor for the DC\(\rightarrow\)8664 keV transition using the enhanced ANC value explains the existing data of Rolfs et al. [2] and Kelly et al. [15] but overestimates the data of Gorres et al. [19]. The observed rise in the \(S\)-factor of the capture to the ground state at low energies is reproduced nicely without requiring any fitting of background poles. The total non-resonant \(S\)-factor from the present work is in good agreement with the measurements by Ferraro et al. [17] and Williams et al. [18].
The proton partial widths for the 8945 keV doublets (\(3/2^{+}\) and \(7/2^{-}\)) are deduced from shell model calculations with the code NUSHELLX and are compared with the widths from the Wigner limit. The resonance strength of the \(3/2^{+}\) state is very well constrained by several experimental measurements; the value adopted by Williams et al. [18] is used in the present work. For the poorly studied \(7/2^{-}\) state, the strengths obtained from the shell model and Wigner limit calculations have been used.
The thermonuclear reaction rate evaluated in this work omits the resonances at E\({}_{x}=8862\), 8894 and 9000 keV. The total reaction rate normalized to the STARLIB-2013 rate [12] is compared to the rate of Williams et al. [18]. In the temperature range of interest for HBB processes (\(T_{9}=0.08-0.1\)), the mean rate from the present work is \(\sim 15\%\) higher than that of Williams et al. [18], and for higher temperatures relevant for classical novae nucleosynthesis (\(T_{9}=0.2-0.25\)), the present mean rate coincides with the rate of Williams et al. [18].
|
2306.07607 | Practice with Graph-based ANN Algorithms on Sparse Data: Chi-square
Two-tower model, HNSW, Sign Cauchy Projections | Sparse data are common. The traditional ``handcrafted'' features are often
sparse. Embedding vectors from trained models can also be very sparse, for
example, embeddings trained via the ``ReLu'' activation function. In this
paper, we report our exploration of efficient search in sparse data with
graph-based ANN algorithms (e.g., HNSW, or SONG which is the GPU version of
HNSW), which are popular in industrial practice, e.g., search and ads
(advertising).
We experiment with the proprietary ads targeting application, as well as
benchmark public datasets. For ads targeting, we train embeddings with the
standard ``cosine two-tower'' model and we also develop the ``chi-square
two-tower'' model. Both models produce (highly) sparse embeddings when they are
integrated with the ``ReLu'' activation function. In EBR (embedding-based
retrieval) applications, after the embeddings are trained, the next crucial
task is the approximate near neighbor (ANN) search for serving. While there are
many ANN algorithms we can choose from, in this study, we focus on the
graph-based ANN algorithm (e.g., HNSW-type).
Sparse embeddings should help improve the efficiency of EBR. One benefit is
the reduced memory cost for the embeddings. The other obvious benefit is the
reduced computational time for evaluating similarities, because, for
graph-based ANN algorithms such as HNSW, computing similarities is often the
dominating cost. In addition to the effort on leveraging data sparsity for
storage and computation, we also integrate ``sign cauchy random projections''
(SignCRP) to hash vectors to bits, to further reduce the memory cost and speed
up the ANN search. In NIPS'13, SignCRP was proposed to hash the chi-square
similarity, which is a well-adopted nonlinear kernel in NLP and computer
vision. Therefore, the chi-square two-tower model, SignCRP, and HNSW are now
tightly integrated. | Ping Li, Weijie Zhao, Chao Wang, Qi Xia, Alice Wu, Lijun Peng | 2023-06-13T08:05:30Z | http://arxiv.org/abs/2306.07607v1 | Practice with Graph-based ANN Algorithms on Sparse Data: Chi-square Two-tower model, HNSW, Sign Cauchy Projections
###### Abstract.
Sparse data are common. The traditional "handcrafted" features are often sparse. Embedding vectors from trained models can also be very sparse, for example, embeddings trained via the "ReLu" activation function. In this paper, we report our exploration of efficient search in sparse data with graph-based ANN algorithms (e.g., HNSW, or SONG which is the GPU version of HNSW), which are popular in industrial practice, e.g., search and ads (advertising).
We experiment with the proprietary ads targeting application, as well as benchmark public datasets. For ads targeting, we train embeddings with the standard "cosine two-tower" model and we also develop the "chi-square two-tower" model. Both models produce (highly) sparse embeddings when they are integrated with the "ReLu" activation function. In EBR (embedding-based retrieval) applications, after the embeddings are trained, the next crucial task is the approximate near neighbor (ANN) search for serving. While there are many ANN algorithms we can choose from, in this study, we focus on the graph-based ANN algorithm (e.g., HNSW-type).
Sparse embeddings should help improve the efficiency of EBR. One benefit is the reduced memory cost for the embeddings. The other obvious benefit is the reduced computational time for evaluating similarities, because, for graph-based ANN algorithms such as HNSW, computing similarities is often the dominating cost. In addition to the effort on leveraging data sparsity for storage and computation, we also integrate "sign cauchy random projections" (SignCRP) to hash vectors to bits, to further reduce the memory cost and speed up the ANN search. In NIPS'13, SignCRP was proposed to hash the chi-square similarity, which is a well-adopted nonlinear kernel in NLP and computer vision. Therefore, the chi-square two-tower model, SignCRP, and HNSW are now tightly integrated.
chi-square similarity, two-tower model, sparse data, graph-based approximate near neighbor search, sign cauchy random projections
## 1. Introduction
Embedding has become the standard component in deep learning. Embedding models including BERT (DevDev et al., 2015), GLOVE (Yang et al., 2016), GPT-3 (Chen et al., 2017) etc. have been widely adopted in practice in NLP, knowledge graphs, computer vision, information retrieval, etc. (Chen et al., 2017; Li et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2018; Li et al., 2019; Li et al., 2018; Li et al., 2019; Li et al., 2019). The two-tower model (Li et al., 2018) has become the standard neural architecture for generating embeddings which can be subsequently used for retrievals and other downstream applications. The two-tower model is the foundation for the "embedding-based retrieval" (EBR), which has been broadly adopted in the (search and advertising) industry, for example, for efficiently generating quality candidates as input to (e.g.,) the subsequent advertisements (ads) ranking algorithm (Chen et al., 2017) or video recommendation algorithm (Chen et al., 2017).
Figure 1 (left panel) provides a simplified illustration of the two-tower model. The top-layer of two-tower model computes the inner product of the two embeddings (which are typically normalized to have the unit \(l_{2}\) norm) of the two towers. After the model has been trained, the embeddings can be directly used for matching: given a query embedding, searching for the item embeddings with the highest cosine similarity to the query embedding. In recent years, graph-based ANN methods such as HNSW (Li et al., 2019) or SONG (Wang et al., 2019), which is the GPU version of HNSW, are often adopted to speed up the search process.
The cosine similarity between two \(d\)-dimensional embeddings (vectors) \(u,v\in\mathbb{R}^{d}\) is defined as
\[\rho=\sum_{i=1}^{d}u_{i}v_{i},\qquad\sum_{i=1}^{d}u_{i}^{2}=\sum_{i=1}^{d}v_{i }^{2}=1, \tag{1}\]
if we assume the vectors are pre-normalized to have the unit \(l_{2}\) norm. In this study, we propose the "chi-square two-tower" model which
Figure 1. Left: the cosine two-tower model. Right: the chi-square two-tower model (which requires the use of ReLU).
only slightly modifies the common "cosine two-tower" model, by replacing the inner product with the following chi-square similarity:
\[\rho_{\chi^{2}}=\sum_{i=1}^{d}\frac{2u_{i}v_{i}}{u_{i}+v_{i}},\quad\quad u_{i} \geq 0,\quad v_{i}\geq 0,\quad\sum_{i=1}^{d}u_{i}=\sum_{i=1}^{d}v_{i}=1 \tag{2}\]
As shown in the right panel of Figure 1, in combination with the "ReLU" activation function [10; 13; 26], the embedding vectors are naturally non-negative. The sum-to-one constraint is easy to be enforced in the neural network training process. In fact, the sum-to-one constraint is the same as the \(l_{1}\) (lasso) constraint in non-negative data [33]. When we use the ReLU activation, we observe that both the cosine two-tower model and the chi-square two-tower model produce sparse embeddings, as expected [10; 13; 26]. We are aware of other existing works which aim to produce sparse embeddings [22; 34].
There are notable advantages with sparse embeddings. The storage cost can be substantially reduced if the embeddings are highly sparse. For example, if the sparsity (i.e., the number of nonzero entries divided by the vector length) is \(<10\sim 20\%\), we can expect a considerable saving in storage using a sparse format. Also, we expect the cost for similarity computations can be substantially reduced if the embeddings are highly sparse. We will illustrate these advantages via extensive experiments on HNSW for fast approximate near neighbor search. After the graph is built, the major cost of HNSW is spent on computing similarities on the fly while walking on the graph to search for nodes with highest similarities.
The chi-square similarity (2), as a nonlinear kernel, is popular in NLP and computer vision [1; 5; 18; 35; 36; 37]. Typically those applications used the "chi-square distance" \(d_{\chi^{2}}\) instead of \(\rho_{\chi^{2}}\):
\[d_{\chi^{2}} =\sum_{i=1}^{d}\frac{(u_{i}-v_{i})^{2}}{u_{i}+v_{i}}=\sum_{i=1}^{ d}\frac{(u_{i}+v_{i})^{2}-4u_{i}v_{i}}{u_{i}+v_{i}}\] \[=\sum_{i=1}^{d}u_{i}+\sum_{i=1}^{d}v_{i}-2\sum_{i=1}^{d}\frac{2u_ {i}v_{i}}{u_{i}+v+i}=2-2\rho_{\chi^{2}}\]
since \(\sum_{i=1}^{d}u_{i}=\sum_{i=1}^{d}v_{i}=1\). In this study, it is not our major focus to demonstrate the advantage of the chi-square over the cosine. In the experiments, we will show that the chi-square two-tower model produces even more sparse embeddings than the cosine two-tower model, and the chi-square similarity achieves similar (in some cases even better) accuracy (recall, classification error, etc). We apply the chi-square two-tower model for a proprietary ads targeting task, and we also conduct HNSW experiments on public datasets.
To conclude this introduction section, we shall point out the well-known limitation of the standard two-tower model. This model does not sufficiently consider interactions between queries and items. The rising field of "neural ranking" is a promising direction [11; 31; 32; 40; 43; 45; 46; 47], which however does not have the convenience of the two-tower model, because neural ranking models cannot directly use the popular ANN algorithms. In the industry practice, the two-tower model is still very popular especially as a retrieval model, and practitioners should be aware of the limitations.
Next, we review HNSW and the graph-based ANN search.
## 2. Graph-based ANN Search
**Graph index.** Graph-based ANN search constructs a proximity graph as index, where a vertex in the graph corresponds to an item embedding and an edge connects two "neighboring vertices". The neighborhood relationship is defined on various constraints to make the graph index applicable for the ANN problem. For instance, graph constraints like Delaunay Graphs [2] ensure that there exists a path with monotonic increasing similarity to the query embedding from any starting vertex. NSW [23], NSG [9], HNSW [24] and SONG [41] (which is the GPU version of HNSW) approximate the Delaunay Graph to reduce the proximity graph construction complexity to subquadratic time. The graph construction is realized by inserting vertices iteratively, i.e., finding the nearest neighbors using the graph searching methods on the current graph and connecting the new vertex with them. Recently, there have been a wide range of research activities based on HNSW, for example, efficiently maintaining/updating the graphs [38], integrating constraints (such as geo-filtering or other filtering mechanisms) with ANN search [42], using HNSW for the maximum inner product search (MIPS) [25; 30; 44], etc.
```
1:Initialize a priority queue \(q\); res \(\leftarrow\emptyset\)
2:\(q\).insert(start_vertex, similarity(start_vertex, query))
3:whiletruedo
4:curr_idx, curr_similarity \(\leftarrow\)\(q\).pop_highest_similarity()
5:ifcurr_similarity cannot improve the \(L^{th}\) similarity in resthen
6:break
7:endif
8:res \(\leftarrow\)res \(\cup\)\(\{(curr\_idx,curr\_similarity)\}\)
9:for each neighbor \(x\) of currdo
10:if x is not visited then
11:\(q\).insert(\(x\), similarity(\(x\), query))
12:endif
13:endfor
14:endwhile
15:returnres
```
**Algorithm 1** Graph Searching Algorithm
**Graph searching.** Once a graph is (partially) constructed, ANN search can be performed by traversing the graph. At each step, the algorithm chooses the neighbor that is closest to the query point and continues the search in that direction. This process is repeated until a stopping criterion is met, i.e., no neighbors can improve the current found \(L\) nearest neighbors, where \(L\) is a parameter that controls the trade-off between searching time. The larger \(L\) generates more accurate nearest neighbors but consumes more time. The graph searching is similar to an A* heuristic search [14], where the priority is the similarity to the query embedding. The details of the searching is illustrated in Algorithm 1. A priority queue \(q\) is initialized from the similarity of the starting vertex (e.g., vertex 1) and the query. Then, in each iteration, we extract the vertex _curr_idx_ from the priority queue with the highest similarity, check the stopping criterion, and explore its unvisited neighbors.
Figure 2 depicts an example. Consider a query vector at the space of the dotted circle. We start the search from vertex 1. With \(L=2\), the searching stops after exploring vertex 3 because vertex 4 cannot improve the currently found top-2 results (2 and 3). However, the true top-2 result of the query should be 2 and 5. Using a greater \(L\), e.g., \(L=3\), will let the search continue and successfully locate 5. Therefore, the \(L\) should be chosen properly and should be tuned with the data.
## 3. Ebr for ads Targeting
We illustrate the effectiveness of the chi-square two-tower model using an example from ads targeting, for which we learn member and campaign embeddings from a two-tower model trained on ads engagement tasks, and then use it to retrieve matching campaigns for members based on relevance (similarity) scores computed from those embeddings. For the training, we use various features - member demographics features (e.g., geo location, degree, title), user behavioral features, contextual features and campaign features. All input features first go through a shared embedding layer, where numerical, categorical, textual and id features are mapped to a dense embedding vector. We then explore the effectiveness of different similarity measures in the two-tower model training in terms of top-K ranking metrics and retrieval efficiency. The model was trained with 4.5 billion records from data collected on a social network.
Figure 3 illustrates the neural architecture for training the embeddings. It is a simple model and we believe the simplicity allows us to clearly demonstrate the effectiveness of using the chi-square similarity to replace the standard cosine similarity inside the model. Basically, the normalization layer in Figure 3 enforces the sum-to-one constraint. The (top) matching layer applies the usual cross-entropy loss with the standard bias and temperature terms. For the comparison, we use essentially the same architecture to train the cosine two-tower model by replacing the chi-square similarity with cosine and the sum-to-one normalization with the \(l_{2}\) normalization.
Figure 4 visualizes the (average) sparsity values for embedding sizes varying from \(k=128\) to \(k=2048\). As the cosine two-tower model also uses the ReLU activation function, we can see that the model is also quite sparse especially for large embedding size \(d\). Nevertheless, the embeddings from the chi-square two-tower model are substantially more sparse. For example, at \(d=1024\), the sparsity for the cosine two-tower model is about 5.4%, and for the chi-square two-tower model is about 2.8%. In fact, when \(d=2048\), the sparsity values are 3.6% and 0.6% respectively for the cosine and the chi-square model. It would be interesting to understand the limit of sparsity by keeping increasing \(d\), but we have not conducted such an investigation for \(d>2048\). The AUC evaluations are pretty
Figure 4. Average sparsity values of the embeddings. In this paper, “sparsity” is defined as the fraction of non-zero entries, i.e., the number of non-zero entries divided by the embedding dimension.
Figure 3. The neural architecture for using the chi-square two-tower model for ads targeting.
Figure 2. Illustration of the graph searching algorithm. When \(L=2\), the searching follows the white arrows and stops by returning vertices 2 and 3 and results; when \(L=3\), the searching continues (following with the hatched arrows) and eventually returns 2, 5, and 3. If our goal is to find top-2 neighbors of the query vector, it might not be sufficient to use \(L=2\) (it misses the true neighbor 5). A greater \(L\) helps to return more accurate results.
close for both models. For smaller \(d\), the AUC scores increase with increasing \(d\), and we do not observe improvement after \(d\geq 512\).
After the two-tower models have been trained, we compute the campaign and member embeddings. Then we use member embeddings to search for the most similar campaign embeddings, using the popular graph-based (HNSW) ANN algorithm. Figure 5 reports the top-1 and top-10 recalls for \(d=256\) and \(d=1024\). Recall that in HNSW the critical parameter is \(L\), which controls the trade-off between the search quality and the search time. We observe from the experiments that for both models, the number of retrieved data points at the same \(L\) is very close (also see Figure 7). We thus use \(L\) as a convenient measure for comparisons (at the same \(L\)).
Figure 5 confirms the high-quality of HNSW in that it does not need to search many points (i.e., small \(L\) values) in order to achieve a reasonable recall say 0.9. Another interesting observation is that using larger embeddings can be actually faster for this application. For example, with \(d=1024\), we just need \(L=5\) to achieve a top-1 recall of 0.9, while we need \(L=30\) to achieve 0.9 if \(d=256\).
We should mention that, with our chi-square model, the number of nonzeros at \(d=256\) is actually more than the number of nonzeros at \(d=1024\) (i.e., 41 versus 29). This set of experiments suggests that it might be better to use larger and sparser embeddings instead of shorter embeddings, as far as the search quality/efficiency is concerned. We will also report similar results on public data.
Figure 6 reports the saving in time by using sparse data format in the implementation of HNSW. Typically, in EBR applications, embedding vectors are stored with dense representations. With our chi-square two-tower model, as the sparsity is merely 2.8% at \(d=1024\), we obtain a huge saving in memory using sparse format. Figure 6 shows that we achieve a 3-fold reduction in time for computing similarities by using the sparse format when \(d=1024\).
## 4. Experiments on Public Data
To further understand the difference between chi-square and cosine similarities in the context of approximate near neighbor search, we conduct HNSW experiments on four public datasets. The dataset specifications are shown in Table 1.
For all four datasets, we conduct ANN experiments separately for two different ground-truths: the cosine similarity and the chi-square similarity. Figure 7 reports the number of retrieved vectors with
\begin{table}
\begin{tabular}{c r r r r r} \hline \hline dataset & \# queries & \# vectors & \# dim & \# nonzero & Sparsity \\ \hline Webspam & 5,000 & 345,000 & 16.6M & 3,720 & 0.02\% \\ News20 & 3,993 & 15,935 & 62,061 & 79.9 & 0.13\% \\ RCV1 & 15,564 & 518,571 & 47,236 & 64.6 & 0.14\% \\ MNIST & 10,000 & 60,000 & 784 & 149.9 & 19.12\% \\ \hline \hline \end{tabular}
\end{table}
Table 1. Summary statistics of four public datasets.
Figure 5. Recalls at different \(L\) values. Recall in Section 2, the searching stops when the current searching vertices cannot improve current found \(L\) nearest neighbors.
Figure 6. Ratios of distance computing times for the chi-square two-tower model: time for using dense format over the time for using sparse format.
Figure 7. The number of retrieved data vectors with respect to the HNSW parameter \(L\).
respect to the parameter \(L\) in HNSW. We can see that the numbers are essentially proportional to \(L\) and do not differ much between chi-square and cosine. Figure 7 confirms that it is appropriate to use \(L\) for measuring/comparing the performance of HNSW experiments.
Figure 8 reports the proportions of the times for computing similarities within the total search times. For Webspam, as the average number of nonzeros is high, \(>90\%\) of the search time is spent on computing similarities. For News20, the average number of nonzeros is small and only \(40\%\) of the search time is spent on computing similarities. If we use dense format for News20, then \(>95\%\) of the search time would be spent on computing similarities.
Figure 9 presents the top-10 recalls for all four datasets with respect to \(L\), for both the cosine and the chi-square similarities. Figure 10 reports the classification accuracy based using 10-NN (10 nearest neighbors) from the retrieved vectors. These results confirm that the chi-square similarity can be a good similarity measure to use for machine learning as well as retrieval.
## 5. Sign Cauchy Random Projections
Before concluding the paper, we also report a solution for potentially further reducing the memory consumption of EBR in particular for the chi-square similarity. For example, for Webspam, even though the vectors are highly sparse, the absolute number of nonzeros is still high (3720). We propose to use "sign cauchy random projections" [(21)] to reduce each vector to \(k\) bits. Specifically, let \(x_{j}=\sum_{i=1}^{d}u_{i}r_{ij}\) and \(y_{j}=\sum_{i=1}^{d}v_{i}r_{ij}\), \(j=1\) to \(k\), where \(\{r_{ij}\}\) is a random matrix with entries \(r_{ij}\) sampled i.i.d. from the standard cauchy distribution. It was shown in [(21)] that the collision probability \(P(sign(x_{j})=sign(y_{j}))\) is proportional to the chi-square similarity. This is the foundation of "sign cauchy random projections".
Figure 11 reports the experiments on Webspam, which has 3720 nonzeros per vector. With \(k=1024\) sign cauchy projections for estimating the original chi-square in HNSW, we only need 1024/32 = 32 (4-byte) integers. To achieve the same 0.9 recall (respectively 0.9 classification accuracy), HNSW with the original data would need 12x (respectively 18x) more time for computing similarities. In addition, note that having greater \(k\) yields better time for the same recall level. The reason is that a greater \(k\) better approximates the original chi-square. Although it takes a slightly longer time to
Figure 8. For Webspam, \(>90\%\) of the search time in HNSW is spent on computing similarities. For MNIST and News20, computing similarities counts for \(40\%\sim 50\%\) of the search time. For News20, if we use the dense format, then computing similarities would take more than \(95\%\) of the total time.
Figure 10. Classification accuracy using the retrieved top-10 similar vectors (i.e., 10-NN), with respect to the HNSW parameter \(L\). The chi-square performs better on these datasets.
Figure 9. Recalls for retrieving top-10 similar vectors, for all four datasets, and for both cosine and chi-square similarities, with respect to the HNSW parameter \(L\).
compute the hamming distance for greater \(k\), a smaller \(L\) is required to obtain the same recall, which still saves the entire searching cost.
## 6. Conclusion
The core contribution of this study is the integration of several techniques including the chi-square two-tower model, sign cauchy random projections (SignCRP), graph-based approximate near neighbor (ANN) search, and HNSW. These techniques are needed in part because the search data are often very sparse. We demonstrate these aspects through an industrial application in ads targeting.
For many search applications, the data can be from the traditional "handcrafted" features, and data can also be generated from the trained deep learning models via (e.g.,) the two-tower models. In addition to the standard "cosine two-tower" model, in this study we also develop the "chi-square two-tower" model. Both models produce sparse embeddings when the "ReLU" activation function is used. The chi-square similarity or chi-square loss function were very popular in the "pre-deep-learning era", for histogram-based features common in NLP and computer vision applications. It is a good exploration to bring the chi-square similarity to deep learning.
Approximate near neighbor (ANN) search is a critical component in modern recommender systems. Among many ANN algorithms, the graph-based methods such as HNSW or SONG (which is a GPU version of HNSW) become increasingly popular owing to the excellent performance. We focus on HNSW in this study. After the graph has been built, the major computational cost of HNSW is the evaluation of distance/similarity on the fly. Typically the HNSW implementations assume dense data. Our study provides two solutions. The first (and obvious) solution is to use sparse representations for the storage as well as similarity computations.
The second (and less obvious) solution is to apply "sign cauchy random projections" (SignCRP) to produce highly compact bits representations of the data and then leverage extremely efficient bit-wise operations to estimate the chi-square similarity (which is proportional to the number of matched bits). This strategy has been implemented and well-integrated into HNSW (and SONG). Experiments have confirmed the effectiveness of the proposed methods.
|
2305.11535 | Semilattices of Stratified Semigroups | In 1995 Grillet introduced the concept of a stratified semigroup as a kind of
generalisation of finite nilsemigroups. We extend these ideas here by allowing
a more general Base and describe them in terms of extensions of semigroups by
stratified semigroups. We consider semillatices of certain types of group-bound
semigroups and also semillatices of Clifford semigroups and show how to
describe them as semilattices of these stratified extensions and provide a
number of interesting examples. | James Renshaw, William Warhurst | 2023-05-19T09:00:54Z | http://arxiv.org/abs/2305.11535v2 | # Semilattices of Stratified Semigroups
###### Abstract
In 1995 Grillet introduced the concept of a stratified semigroup as a kind of generalisation of finite nilsemigroups. We extend these ideas here by allowing a more general Base and describe them in terms of extensions of semigroups by stratified semigroups. We consider semillattices of certain types of group-bound semigroups and also semillattices of Clifford semigroups and show how to describe them as semilattices of these stratified extensions and provide a number of interesting examples.
**Keywords** Semigroup, stratified, extension, semilattice, group-bound, Clifford semigroup.
**Mathematics Subject Classification** 2020: 20M10.
## 1 Introduction and Preliminaries
Grillet [2] defines a semigroup \(S\) with zero to be _stratified_ whenever \(\bigcap_{m>0}S^{m}=\{0\}\). A semigroup without zero is called _stratified_ if \(S^{0}\) is stratified. He shows that this class of semigroups includes the class of all free semigroups, free commutative semigroups, homogeneous semigroups and nilpotent semigroups with finite degree. Our aim is to generalise this concept and consider some semigroups that can be decomposed as semilattices of some of these more general kinds of stratified semigroups.
After some basic definitions and preliminary results, in section 2, we introduce the concept of a _stratified extension_ as a generalisation of Grillet's stratified semigroups, and we provide a number of interesting results on the overall structure of such
semigroups. In section 3 our focus is on semigroups in which every regular \(\mathcal{H}-\)class contains an idempotent. We show that group-bound semigroups with this property are semilattices of stratified extensions of completely simple semigroups and describe the semilattice structure. Finally in section 4 we look at strict extensions of Clifford semigroups and show amongst other things that strict stratified extensions of Clifford semigroups are semilattices of stratified extensions of groups. For all terminology in semigroups not otherwise defined see Howie ([4]).
Let \(S\) and \(T\) be semigroups, with \(T\) containing a zero. A semigroup \(\Sigma\) is called an _ideal extension_ of \(S\) by \(T\) if it contains \(S\) as an ideal and the Rees quotient \(\Sigma/S\) is isomorphic to \(T\). Grillet and Petrich [3] define an extension as _strict_ if every element of \(\Sigma\setminus S\) has the same action on \(S\) as some element of \(S\) and _pure_ if no element of \(\Sigma\setminus S\) does. They also showed that any extension of an arbitrary semigroup \(S\) is a pure extension of a strict extension of \(S\).
**Proposition 1.1** ([3, Proposition 2.4]): _Every extension of \(S\) is strict if and only if \(S\) has an identity._
Let \(S\) and \(T\) be disjoint semigroups. A _partial homomorphism_[1] from \(T\) to \(S\) is a map \(f:T\setminus\{0\}\to S\) such that for all \(x,y\in S,f(xy)=f(x)f(y)\) whenever \(xy\neq 0\).
We adopt the convention used by Clifford and Preston ([1]) that elements of \(T\setminus\{0\}\) are denoted by capital letters and elements of \(S\) by lowercase letters. A partial homomorphism from \(T\setminus\{0\}\) to \(S\) given by \(A\mapsto\overline{A}\) defines an extension \(\Sigma=S\bigcup T\setminus\{0\}\) with multiplication given by
1. \(A*B=\begin{cases}AB&AB\neq 0\\ \overline{A}\,\overline{B}&AB=0\end{cases}\)
2. \(A*s=\overline{A}s\)
3. \(s*A=s\overline{A}\)
4. \(s*t=st\)
where \(A,B\in T\setminus\{0\}\) and \(s,t\in S\). From parts (2) and (3) above, all extensions defined in this way are strict.
Let \(S\) be a semigroup and let \(a,b\in S\). We say that \(a\) and \(b\) are _interchangeable_ if
\[\forall x\in S,ax=bx\text{ and }xa=xb.\]
A semigroup is called _weakly reductive_ if it contains no interchangeable elements. Notice that every monoid is weakly reductive.
**Theorem 1.2** ([3, Theorem 2.5]): _Let \(S\) be weakly reductive. Then every strict extension of \(S\) is determined by a partial homomorphism, and conversely._
Recall that a semigroup is said to be \(E-\)_dense_ (or \(E-\)_inversive_) if for all \(s\in S\) there exists \(t\in S\) such that \(st\in E(S)\). The following is well-known
**Lemma 1.3**: _The following are equivalent_
1. \(S\) _is_ \(E-\)_dense,_
2. _for all_ \(s\in S\) _there exists_ \(t\in S\) _such that_ \(ts\in E(S)\)_,_
3. _for all_ \(s\in S\) _there exists_ \(t\in S\) _such that_ \(st,ts\in E(S)\)_,_
4. _for all_ \(s\in S\) _there exists_ \(s^{\prime}\in S\) _such that_ \(s^{\prime}ss^{\prime}=s^{\prime}\)_._
Such an element, \(s^{\prime}\), in (4) is called a _weak inverse_ of \(s\) and the set of all weak inverses of \(s\) is denoted by \(W(s)\). The set of all weak inverses of elements of \(S\) is denoted by \(W(S)\). Note that \(W(S)=\mathrm{Reg}(S)\). It is easily shown that for all \(s\in S,s^{\prime}\in W(s)\), \(ss^{\prime},s^{\prime}s\in E(S)\) and \(ss^{\prime}\mathcal{L}s^{\prime}\mathcal{R}s^{\prime}s\).
**Lemma 1.4**: _Let \(S\) be a semigroup._
1. _If_ \(s,t\in S\) _then_ \(W(st)\subseteq W(t)W(s)\)_._
2. _If_ \(S\) _is an_ \(E-\)_dense semigroup and_ \(s^{\prime}\in W(s)\) _then_ \(J_{s^{\prime}}\leq J_{s}\)_._
**Proof.** These are fairly straightforward.
1. Let \((st)^{\prime}\in W(st)\). Then \((st)^{\prime}=(st)^{\prime}st(st)^{\prime}\) and so \(t(st)^{\prime}=t(st)^{\prime}st(st)^{\prime}\) and hence \(t(st)^{\prime}\in W(s)\). Similarly \((st)^{\prime}s\in W(t)\). Then \((st)^{\prime}=((st)^{\prime}s)(t(st)^{\prime})\in W(t)W(s)\).
2. Since \(s^{\prime}\in W(s)\), \(s^{\prime}=s^{\prime}ss^{\prime}\) and so \(J_{s^{\prime}}=J_{s^{\prime}ss^{\prime}}\). By [4, Equation 2.1.4] we have \(J_{s^{\prime}}=J_{s^{\prime}ss^{\prime}}\leq J_{s}\).
Note that it is well known that if \(E(S)\) forms a band then \(W(st)=W(t)W(s)\).
An element \(s\) in a semigroup \(S\) is called _eventually regular_ if there exists \(n\geq 1\) such that \(s^{n}\) is regular. A semigroup is _eventually regular_ if all of its elements are eventually regular. It is clear that eventually regular semigroups are \(E-\)dense. A semigroup \(S\) is called _group-bound_ if for every \(s\in S\), there exists \(n\geq 1\) such that \(s^{n}\) lies in a subgroup of \(S\). Clearly group-bound semigroups are eventually regular. If \(S\) is eventually regular and each regular \(\mathcal{H}-\)class is a group then \(S\) is group-bound. A semigroup \(S\) is called _Archimedean_ if for any \(a,b\in S\) there exists \(n\in\mathbb{N}\) such that \(a^{n}\in SbS\).
**Theorem 1.5** ([5, Theorem 3]): _Let \(S\) be a group-bound semigroup. Then \(S\) is a semillatice of Archimedean semigroups if and only if every regular \(\mathcal{H}-\)class of \(S\) is a group._
Let \(S\) be a semigroup with \(0\). We say that an element \(x\in S\), is _nilpotent_ if there is \(n\in\mathbb{N}\) such that \(x^{n}=0\). The semigroup \(S\) is called _nilpotent_ if every element of \(S\) is nilpotent. The semigroup \(S\) is called _nilpotent with degree \(n\in\mathbb{N}\)_ if \(S^{n}=\{0\}\). Note that Grillet [2] and Shevrin [5] call nilpotent semigroups _nilsemigroups_, whereas they refer to nilpotent semigroups with a finite degree as simply nilpotent.
Stratified semigroups
Let \(S\) be a semigroup (not necessarily stratified) and define the _base_ of \(S\) to be the subset \(\mbox{Base}(S)=\bigcap_{m>0}S^{m}\). We shall say that a semigroup \(S\) is a _stratified extension_ of \(\mbox{Base}(S)\) if \(\mbox{Base}(S)\neq\emptyset\). The reason for this name will become apparent later. Clearly \(\mbox{Base}(S)\) is a subsemigroup of \(S\). When \(\mbox{Base}(S)\) is a trivial subgroup then \(S\) is a stratified semigroup. A stratified semigroup \(S\) is not in general a stratified extension as we may have \(\mbox{Base}(S)=\emptyset\), however if \(S\) is a stratified semigroup then \(S^{0}\) is also stratified and is a stratified extension with trivial base. Further, \(S\) is called a _finitely stratified extension_ if there exists \(m\in\mathbb{N}\) such that \(S^{m}=S^{m+1}=\mbox{Base}(S)\). The smallest such \(m\) is called the _height_ of \(S\) and where necessary we shall refer to \(S\) as a _finitely stratified extension with height \(m\)_. If for every \(s\) in \(S\) there is an \(m\in\mathbb{N}\) such that \(s^{m}\in\mbox{Base}(S)\) then \(S\) is a _nil-stratified extension_. All finitely stratified extensions are nil-stratified extensions, but it is easy to demonstrate that not all nil-stratified extensions are finitely stratified extensions.
A finitely stratified extension is a stratified extension over the same base, since \(S^{m}=S^{m+1}\) implies \(S^{n}=S^{m}\) for all \(n\geq m\) and so \(\bigcap_{k>0}S^{k}=S^{m}\), where \(m\) is the height of \(S\). The converse is not true since, for example, if \(S\) is a free semigroup with a zero adjoined, then \(S\) is a stratified extension with trivial base but not a finitely stratified extension. It is clear that a (finitely) stratified extension has a unique base.
Clearly, for all \(m\geq 1\), \(S^{m+1}\subseteq S^{m}\) and so we define the _layers_ of \(S\) as the sets \(S_{m}=S^{m}\setminus S^{m+1}\), \(m\geq 1\). Every element of \(S\setminus\mbox{Base}(S)\) lies in exactly one layer, and if \(s\in S_{m}\) then \(m\) is the _depth_ of \(s\). The layer \(S_{1}\) generates every element of \(S\setminus\mbox{Base}(S)\) and is contained in any generating set of \(S\). However \(\mbox{Base}(S)\not\subseteq\langle S_{1}\rangle\) in general. For example, let \(S\) be a semigroup with \(0\), with no zero divisors. Then \(0\in\mbox{Base}(S)\) but \(0\not\in\langle S_{1}\rangle\).
Since \(\mbox{Base}(S)\subseteq S^{m}\) for any \(m\in\mathbb{N}\), we have an alternative characterisation for the elements of \(\mbox{Base}(S)\). Any \(s\in S\) lies in \(\mbox{Base}(S)\) if and only if \(s\) can be factored into a product of \(m\) elements for any \(m\in\mathbb{N}\), i.e. \(s=a_{1}a_{2}\ldots a_{m}\) for some \(a_{i}\in S\). This characterisation gives us some immediate properties of \(\mbox{Base}(S)\) as a subsemigroup of \(S\).
**Lemma 2.1**: _Let \(S\) be a semigroup and let \(s\in S\). If \(s\in Ss\cup sS\cup SsS\) then \(s\in\mbox{Base}(S)\)._
**Proof.** It follows that for \(m\geq 1\), \(s=x^{m}s\) or \(s=sy^{m}\) or \(s=x^{m}sy^{m}\) and so the result follows from the previous observation.
**Corollary 2.2**: _Suppose that \(S\) is a semigroup._
1. _Every submonoid of_ \(S\) _is a submonoid of_ \(\mbox{Base}(S)\)_._
2. \(\mbox{\rm Reg}(S)\subseteq\mbox{Base}(S)\)_. Hence if_ \(S\) _is regular,_ \(\mbox{Base}(S)=S\)_._
3. \(E(S)=E(\mbox{Base}(S))\)
_._
4. _If_ \(s\in S\setminus{\rm Base}(S)\) _then_ \(|J_{s}|=1\)_, where_ \(J_{s}\) _is the_ \({\cal J}-\)_class of_ \(s\)_._
To see (4) notice that if \(a{\cal J}b\) and \(a\neq b\) then we have \(a=ubv\) for some \(u,v\in S^{1}\) and since \(a\neq b\) we have \(u\) and \(v\) not both equal to \(1\). Similarly \(b=sat\) with \(s,t\in S^{1}\) not both equal to \(1\) and hence \(a\in Sa\cup aS\cup SaS\). The converse is not true, since for example in a semigroup with zero we have \(J_{0}=\{0\}\) but \(0\in{\rm Base}(S)\).
If follows immediately that the class of stratified extensions contains the class of semigroups with regular elements and hence in particular the classes of monoids, finite semigroups and regular semigroups. However, not every semigroup is a stratified extension. Consider for example a semigroup with a length function \(l:S\rightarrow\mathbb{N}\) such that for all \(x,y\in S,l(xy)=l(x)+l(y))\). If \(T\) is the subsemigroup of elements with non-zero length, then the elements of \(T^{m}\) each have length at least \(m\). Hence the elements of length exactly \(m\) lie in \(T_{m}\not\subseteq{\rm Base}(T)\) and so the base is empty. In particular, a free semigroup is not a stratified extension, nor is the semigroup of polynomials of degree \(\geq 1\) over any ring, under multiplication.
This property allows us to prove the following results, justifying the names of stratified, nil-stratified, and finitely stratified extensions.
**Lemma 2.3**: _Let \(S\) be a stratified extension. Then \({\rm Base}(S)\) is an ideal of \(S\)._
**Proof.** : For any \(u,v\in S^{1}\), \(t\in{\rm Base}(S)\) and \(m>3\), we have \(t\in S^{m-2}\) so \(utv\in S^{m}\) and hence \(utv\in{\rm Base}(S)\).
Hence we can regard \(S\) as being an ideal extension of \({\rm Base}(S)\) by \(S/\,{\rm Base}(S)\) and note that \(S/\,{\rm Base}(S)\) is a stratified semigroup with \(0\). If \(S\) is a nil-stratified extension then it follows that for every \(s\in S\) there exists \(m\in\mathbb{N}\) such that \(s^{m}\in{\rm Base}(S)\). Hence in the Rees quotient \(S/\,{\rm Base}(S)\), \(s^{m}=0\) and so \(S/\,{\rm Base}(S)\) is nilpotent and \(S\) is an ideal extension by a nilpotent stratified semigroup. Recall that the nilpotency degree of a semigroup is the smallest value \(m\) such that every product of \(m\) elements is zero. It is easy to see that if the nilpotency degree of \(S/\,{\rm Base}(S)\) is \(m\) then the height of \(S\) is \(m\) and so \(S\) is a finitely stratified extension. Conversely, any nilpotent semigroup \(S\) of finite nilpotency degree \(m\) is a stratified semigroup with \(S^{m}=\{0\}\). We have hence proved the following.
**Proposition 2.4**: _Let \(S\) be a stratified extension. Then_
1. \(S\) _is an ideal extension of_ \({\rm Base}(S)\) _by a stratified semigroup with_ \(0\)_._
2. _If_ \(S\) _is a nil-stratified extension then it is an ideal extension of_ \({\rm Base}(S)\) _by a nilpotent semigroup._
3. _If_ \(S\) _is a finitely stratified extension then it is an ideal extension of_ \({\rm Base}(S)\) _by a nilpotent semigroup of finite degree._
The converses of these results do not hold. To see this, let \(S\) be a free semigroup and \(T\) be the two element nilpotent semigroup. Then \(T\) is a stratified semigroup with \(0\) but an extension of \(S\) by \(T\) is not a stratified extension. Further, \(T\) is a nilpotent semigroup of finite degree and an extension of \(S^{0}\) by \(T\) is a stratified extension, but is not a finitely stratified nor nil-stratified extension.
**Proposition 2.5**: _Let \(S\) be a stratified extension._
1. _If_ \(S\) _is a nil-stratified extension then_ \(\mathrm{Base}(S)\) _is periodic if and only if_ \(S\) _is periodic;_
2. _If_ \(S\) _is a nil-stratified extension then_ \(\mathrm{Base}(S)\) _is eventually regular if and only if_ \(S\) _is eventually regular;_
3. \(\mathrm{Base}(S)\) _is_ \(E-\)_dense if and only if_ \(S\) _is_ \(E-\)_dense._
_So a stratified extension with a periodic base is \(E-\)dense._
**Proof.** The first two statements are easy to deduce. For the third, let \(\mathrm{Base}(S)\) be \(E-\)dense and let \(s\in S\). Then for any \(t\in\mathrm{Base}(S),ts\in\mathrm{Base}(S)\) and so there exists \(u\in\mathrm{Base}(S)\) such that \(uts\in E(\mathrm{Base}(S))=E(S)\) and so \(S\) is \(E-\)dense. Conversely suppose that \(S\) is \(E-\)dense. Since \(W(S)=\mathrm{Reg}(S)\subseteq\mathrm{Base}(S)\), then \(\mathrm{Base}(S)\) is \(E-\)dense.
Notice that periodic \(\Rightarrow\) eventually regular \(\Rightarrow\)\(E-\)dense \(\Rightarrow\) stratified extension.
There is in general little control over the base as a stratified extension can be constructed with any given semigroup as its base. However, finitely stratified extensions allow us to place restrictions on the semigroup forming the base.
**Proposition 2.6**: _Let \(T\) be any semigroup. There exists a stratified extension \(S\) with base \(T\)._
**Proof.** Let \(R\) be any semigroup and let \(S=R\dot{\cup}T\). Define a binary operation \(*\) on \(S\) by \(r_{1}*r_{2}=r_{1}r_{2}\) for \(r_{1},r_{2}\in R\), \(t_{1}*t_{2}=t_{1}t_{2}\) for \(t_{1},t_{2}\in T\), and \(r*t=t*r=t\) for \(r\in R\) and \(t\in T\). It is easy to verify that this operation is associative and so \((S,*)\) is a semigroup. Then \(T\subseteq\bigcap_{m>0}(R\dot{\cup}T)^{m}\) and so \(S\) is a stratified extension. Moreover, if we choose \(R\) such that \(\bigcap_{m>0}R^{m}=\emptyset\), for example \(R=A^{+}\), a free semigroup, we see that \(\bigcap_{m>0}(R\dot{\cup}T)^{m}=T\) and so we can obtain a stratified extension with base \(T\).
In contrast, the possible bases for a finitely stratified extension are much more restricted. Let \(S\) be a finitely stratified extension with \(T=\mathrm{Base}(S)\) and consider \(T^{2}\). There exists \(m\in\mathbb{N}\) such that \(T=S^{m}\), so \(T^{2}=S^{2m}\). But by definition \(S^{m}=S^{m+1}=S^{m+2}=\cdots=S^{2m}\) and so \(T^{2}=T\). A semigroup \(T\) satisfying \(T^{2}=T\) is said to be _globally idempotent_ and so the base of a finitely stratified extension is globally idempotent. Note also that if \(S\) is globally idempotent, then \(S\) is a finitely stratified extension in a trivial sense, with base \(S\) and height \(1\).
**Proposition 2.7**: _A semigroup \(S\) is a finitely stratified extension if and only if it is an ideal extension of a globally idempotent semigroup by a nilpotent semigroup of finite degree._
**Proof.** We need only justify the converse. Let \(\Sigma\) be an ideal extension of a globally idempotent semigroup \(S\) by a nilpotent semigroup \(T\) of finite degree \(m\). Then \(\Sigma^{m}=S\) and \(S=S^{2}\) so \(\Sigma^{m}=\Sigma^{2m}\) and as each \(\Sigma^{i}\subseteq\Sigma^{i+1}\) it follows that \(\Sigma^{m}=\Sigma^{m+1}\). Hence \(\Sigma\) is a finitely stratified extension with base \(S\) and height \(m\).
This is still a very broad class of semigroup, including among its members every monoid and every regular semigroup. It should also be noted that a globally idempotent semigroup need not contain idempotents, the Baer-Levi semigroup being one such example.
**Proposition 2.8**: _There exists a finitely stratified extension of height \(h\), for any \(h\in\mathbb{N}\)._
**Proof.** Let \(G\) be any finite cyclic group of order \(r\) and let \(S\) be the monogenic semigroup of index \(h\) and period \(r\). Then it is reasonably clear that \(S\) is a finitely stratified extension with base \(G\) and height \(h\).
Let \(S\) be a semigroup and let \(\rho\) be a congruence on \(S\). It is easy to see that for any \(m\in\mathbb{N}\) we have \((S/\rho)^{m}=S^{m}/\rho\). Hence if \(S\) is a stratified extension then \(S/\rho\) is also a stratified extension, with \(\mbox{Base}(S/\rho)=\mbox{Base}(S)/\rho\). Further, if \(S\) is a finitely stratified or nil-stratified extension then so is \(S/\rho\).
Let \(S_{i}\) be a family of semigroups. Then \((\prod_{i\in I}S_{i})^{m}=\prod_{i\in I}{S_{i}}^{m}\). Hence if each \(S_{i}\) is a stratified extension, the product \(\prod_{i\in I}S_{i}\) is also a stratified extension with \(\mbox{Base}(\prod_{i\in I}S_{i})=\prod_{i\in I}\mbox{Base}(S_{i})\). If \(I\) is a finite set and each \(S_{i}\) is a nil-stratified extension then so is \(\prod_{i\in I}S_{i}\). Similarly if each \(S_{i}\) is a finitely stratified extension then so is \(\prod_{i\in I}S_{i}\). To see that we cannot remove the condition \(|I|<\infty\), let \(I=\mathbb{N}\) and for each \(i\in I\) let \(S_{i}\) be a finitely stratified (and hence nil-stratified) extension of height \(i\). Then \(\prod_{i\in I}S_{i}\) is a stratified extension but is neither a finitely stratified extension nor a nil-stratified extension.
Subsemigroups of (finitely, nil-) stratified extensions are not necessarily (finitely, nil-) stratified extensions. For example the bicyclic semigroup is a finitely stratified extension (in fact globally idempotent) but contains \((\mathbb{N},+)\) as a subsemigroup which is free and hence not even a stratified extension.
The class of (finitely, nil-) stratified semigroups therefore does not form a variety. However, we have proved the following theorem.
**Theorem 2.9**: _Let \(S\) be a (finitely, nil-) stratified extension and let \(S_{i}\) for \(i\in I\) be a family of stratified semigroups._
1. _If_ \(\rho\) _is a congruence on_ \(S\) _then_ \(S/\rho\) _is a (finitely, nil-) stratified extension with base_ \(\mbox{Base}(S)/\rho\)_;_
2. _the direct product_ \(\prod_{i\in I}S_{i}\) _is a stratified extension with base_ \(\prod_{i\in I}\mbox{Base}(S_{i})\)_;_
3. _if_ \(|I|<\infty\) _and each_ \(S_{i}\) _is a (finitely, nil-) stratified extension then the direct product_ \(\prod_{i\in I}S_{i}\) _is a (finitely, nil-) stratified extension with base_ \(\prod_{i\in I}\mbox{Base}(S_{i})\)
Let \(S=\bigcup_{\alpha\in Y}S_{\alpha}\) be a semilattice of stratified extensions. Then \(\bigcup_{\alpha\in Y}\mbox{Base}(S_{\alpha})\subseteq\bigcap_{m>0}S^{m}\) and so \(S\) is a stratified extension.
In the case of finitely stratified extensions, we can construct a semilattice of finitely stratified extensions which is not a finitely stratified extension. Let \(Y=\mathbb{N}\cup\{0\}\) be a semilattice under the multiplication \(ij=0\) for all \(i,j\in Y\) with \(i\neq j\). For each \(i\in\mathbb{N}\) let \(S_{i}\) be a finitely stratified extension with height \(i\) and let \(S_{0}\) be globally idempotent. Let \(S\) be the union of each \(S_{i}\) as a semilattice of semigroups over \(Y\). Then \(S^{m}=S_{0}\cup\bigcup_{i\in\mathbb{N}}{S_{i}}^{m}\). If \(i>m\) then there are elements in \({S_{i}}^{m}\) which are not in \({S_{i}}^{m+1}\) and so \(S^{m}\neq S^{m+1}\) for any \(m\in\mathbb{N}\).
## 3 Semilattices of group bound semigroups
Let \(S\) be a semigroup such that every regular \(\mathcal{H}\)-class contains an idempotent. Equivalently, every regular element of \(S\) lies in some subgroup of \(S\). We define a relation \(\rho\) on \(S\) by \(s\rho t\) if and only if for every \(\mathcal{D}\)-class \(D\) of \(S\) we have
\[W(s)\cap D\neq\emptyset\iff W(t)\cap D\neq\emptyset.\]
Clearly \(\rho\) is an equivalence relation. We will show that \(\rho\) is in fact a congruence, and moreover that \(S/\rho\) is a semilattice.
We begin by establishing some properties of such semigroups.
**Lemma 3.1**: _Let \(S\) be a semigroup such that every regular \(\mathcal{H}\)-class contains an idempotent._
1. _Every regular_ \(\mathcal{D}\)_-class of_ \(S\) _is a completely simple subsemigroup of_ \(S\)_._
2. _Let_ \(s^{\prime}\in W(s)\)_. Every_ \(\mathcal{H}\)_-class of_ \(D_{s^{\prime}}\) _contains a weak inverse of_ \(s\)_._
**Proof.** These are fairly straightforward.
1. Let \(D\) be a regular \(\mathcal{D}-\)class and let \(a,b\in D\) and let \(e\) be the idempotent lying in \(L_{a}\cap R_{b}\). Then \(ab\mathcal{L}eb\mathcal{R}ee=e\). Hence \(ab\in D\) and \(D\) is a completely simple subsemigroup of \(S\).
2. Let \(D\) be the (regular) \(\mathcal{D}-\)class containing \(s^{\prime}\) and let \(r\) be an idempotent such that \(r\mathcal{R}ss^{\prime}\). Then \(ss^{\prime}r=r\) and \(rss^{\prime}=ss^{\prime}\), and it follows that \(s^{\prime}r\in W(s)\) and \(s^{\prime}\mathcal{R}s^{\prime}r\mathcal{L}r\).
Let \(I=S/{\cal R}\) and \(\Lambda=S/{\cal L}\) and as is normal denote the \({\cal R}-\)classes as \(R_{i}\) (\(i\in I\)), the \({\cal L}-\)classes as \(L_{\lambda}\) (\(\lambda\in\Lambda\)) and the \({\cal H}-\)class \(R_{i}\cap L_{\lambda}\) as \(H_{i\lambda}\). Suppose \(s^{\prime}\in R_{j}\) and \(ss^{\prime}\in R_{i}\). For each \(\lambda\in\Lambda\), let \(r_{i\lambda}\) be the idempotent in \(H_{i\lambda}\) so that we produce a weak inverse of \(s\), \(s^{\prime}_{j\lambda}\), in \(H_{j\lambda}\). Let \(s^{\prime}_{j\lambda}s\in L_{\mu}\), and for each \(k\in I\) let \(l_{k\mu}\) be the idempotent in \(H_{k\mu}\) and note that, using a similar argument to above, \(l_{k\mu}s^{\prime}_{j\lambda}\in W(s)\) and \(l_{k\mu}s^{\prime}_{j\lambda}\in H_{k\lambda}\).
\begin{tabular}{|c|c|c|} \hline \(r_{i\lambda}\) & & & \\ \hline \(l_{k\mu}s^{\prime}_{j\lambda}\) & & & \(l_{k\mu}\) \\ \hline \(s^{\prime}_{j\lambda}\) & & & \(s^{\prime}_{j\lambda}s\) \\ \hline \end{tabular} The second point allows us to give an equivalent definition of \(\rho\): \(s\rho t\) if and only if for every \({\cal H}\)-class \(H\) of \(S\) we have
\[W(s)\cap H\neq\emptyset\iff W(t)\cap H\neq\emptyset.\]
The next result is key in what follows.
**Lemma 3.2**: _Let \(S\) be a semigroup such that every regular \({\cal H}\)-class contains an idempotent and let \(s,t\in S\). For any \({\cal D}\)-class \(D\) of \(S\) we have_
\[W(st)\cap D\neq\emptyset\mbox{ if and only if }W(s)\cap D\neq\emptyset\mbox{ and }W(t)\cap D\neq\emptyset.\]
**Proof.** Let \(s^{\prime}\in W(s)\cap D\) and suppose \(W(t)\cap D\neq\emptyset\). Then \(s^{\prime}s\) is an idempotent lying in \(D\). By Lemma 3.1(2) \(t\) has a weak inverse in every \({\cal H}\)-class of \(D\), so let \(t^{\prime}\) be the weak inverse of \(t\) lying in the \({\cal H}\)-class of \(s^{\prime}s\). Then \(t^{\prime}s^{\prime}stt^{\prime}s^{\prime}=t^{\prime}tt^{\prime}s^{\prime}=t^{ \prime}s^{\prime}\). By Lemma 3.1(1) \(t^{\prime}s^{\prime}\in D\) and so \(W(st)\cap D\neq\emptyset\).
Conversely let \((st)^{\prime}\in W(st)\cap D\). Then \(t(st)^{\prime}st(st)^{\prime}=t(st)^{\prime}\) and so \(t(st)^{\prime}\) is a weak inverse of \(s\). As \(t(st)^{\prime}{\cal L}(st)^{\prime}\) we have \(W(s)\cap D\neq\emptyset\). Similarly we have \((st)^{\prime}s\in W(t)\cap D\neq\emptyset\).
**Corollary 3.3**: _Let \(S\) be a semigroup such that every regular \({\cal H}\)-class contains an idempotent and let \(s,t\in S\). Then \(s\rho s^{2}\) and \(st\rho ts\)._
**Corollary 3.4**: _Let \(S\) be a semigroup such that every regular \({\cal H}\)-class contains an idempotent. Either \(S\) is \(E-\)dense or the set \(\{s\in S|W(s)=\emptyset\}\) is an ideal of \(S\)._
We can now prove the following theorem.
**Theorem 3.5**: _Let \(S\) be a semigroup such that every regular \({\cal H}\)-class contains an idempotent. Then the relation \(\rho\) is a congruence and \(S/\rho\) is a semilattice._
**Proof.** Let \(a,b,c,d\in S\) such that \(a\rho b\) and \(c\rho d\) and let \(D\) be a \({\cal D}\)-class of \(S\). By Lemma 3.2\(W(ac)\cap D\neq\emptyset\) if and only if \(W(a)\cap D\neq\emptyset\) and \(W(c)\cap D\neq\emptyset\). As \(a\rho b\) and \(c\rho d\) this latter condition is equivalent to \(W(b)\cap D\neq\emptyset\) and \(W(d)\cap D\neq\emptyset\) which is in turn equivalent to \(W(bd)\cap D\neq\emptyset\) by Lemma 3.2. It follows that \(ac\rho bd\) and so \(\rho\) is a congruence. That \(S/\rho\) is a semilattice follows from Corollary 3.3.
We can now prove some results about the structure of \(S\).
**Lemma 3.6**: _Let \(S\) be a semigroup such that every regular \({\cal H}\)-class contains an idempotent and let \(s,t\in{\rm Reg}(S)\). Then \(s\rho t\) if and only if \(s{\cal D}t\)._
**Proof.** From Lemma 3.2 it follows that all of Green's relations are contained in \(\rho\). To see this suppose that \((s,t)\in{\cal J}\). Then there exists \(u,v\in S^{1}\) such that \(s=utv\). So for every \({\cal D}-\)class \(D\), if \(W(s)\cap D\neq\emptyset\) then \(W(utv)\cap D\neq\emptyset\). Hence by Lemma 3.2\(W(t)\cap D\neq\emptyset\). By a dual argument we then deduce that \(W(s)\cap D\neq\emptyset\) if and only if \(W(t)\cap D\neq\emptyset\) and so \((s,t)\in\rho\).
As \(s\) is regular it has an inverse which lies in the same \({\cal D}\)-class and so \(W(s)\cap D_{s}\neq\emptyset\). Hence \(W(t)\cap D_{s}\neq\emptyset\) and by Lemma 3.1 there exists \(t^{\prime}\in W(t)\) such that \(t^{\prime}{\cal L}s\). By a similar argument there exists \(s^{\prime}\in W(s)\) such that \(s^{\prime}{\cal R}t\). Then
\[s{\cal L}t^{\prime}{\cal R}t^{\prime}t{\cal L}st{\cal R}ss^{\prime}{\cal L}s^{ \prime}{\cal R}t\]
and so \(s{\cal D}t\) as required.
It follows that for each \(\rho\)-class \(S_{\alpha}\) either \(S_{\alpha}\) has no regular elements or the regular elements in \(S_{\alpha}\) are contained within a single \({\cal D}\)-class and hence by Lemma 3.1 form a completely simple subsemigroup of \(S_{\alpha}\). In the latter case \(S_{\alpha}\) is an \(E-\)dense semigroup as by definition of \(\rho\) each element has a weak inverse lying in the regular \({\cal D}\)-class. Since each \({\cal J}\)-class is contained within a \(\rho\)-class, it also follows that the regular \({\cal J}\)-classes of \(S\) are exactly the regular \({\cal D}\)-classes.
**Lemma 3.7**: _Let \(S\) be a semigroup such that every regular \({\cal H}\)-class contains an idempotent and let \(x\in S\). Then \(x\rho\) is an \(E-\)dense subsemigroup of \(S\) if and only if \(x\rho\) contains a regular element._
**Proof.** One way round is obvious. That \(x\rho\) is a subsemigroup of \(S\) follows from the fact that \(\rho\) is a congruence and \(S/\rho\) is a semilattice. Let \(y\in x\rho\) be regular. Then there exists \(y^{\prime}\in W(y)\cap D_{y}\), and so for any \(z\in x\rho\) there exists \(z^{\prime}\in W(z)\cap D_{y}\). Since \(D_{y}\subseteq x\rho\), it follows that \(x\rho\) is \(E-\)dense.
**Lemma 3.8**: _Let \(S\) be an \(E-\)dense semigroup such that \({\rm Reg}(S)\) is a completely simple semigroup. Then \({\rm Reg}(S)\) is an ideal of \(S\)._
**Proof.** Let \(s\in{\rm Reg}(S)\) and \(t\in S\). Let \(t^{\prime}\in W(t)\) and let \({\cal H}\) be Green's \({\cal H}-\)relation on \({\rm Reg}(S)\). As \({\rm Reg}(S)\) is completely simple every regular \({\cal H}\)-class contains an inverse of \(s\) so we may choose \(s^{\prime}\in V(s)\) such that \(s^{\prime}{\cal R}tt^{\prime}\). Then \(t^{\prime}s^{\prime}stt^{\prime}s^{\prime}=t^{\prime}s^{\prime}ss^{\prime}=t^ {\prime}s^{\prime}\) and \(stt^{\prime}s^{\prime}st=ss^{\prime}st=st\). Hence \(st\) is regular and so \({\rm Reg}(S)\) is a right ideal. A dual argument shows \({\rm Reg}(S)\) is a left ideal and hence an ideal.
**Lemma 3.9**: _Let \(S\) be a semigroup such that every regular \({\cal H}\)-class contains an idempotent, let \(\alpha\in S/\rho\), let \(S_{\alpha}\) denote the \(\rho\)-class corresponding to \(\alpha\), and let \(s\in S_{\alpha}\). Then for all \(\beta\in S/\rho\), \(S_{\beta}\) contains a weak inverse of \(s\) if and only if \(S_{\beta}\) contains a regular element and \(\beta\leq\alpha\)._
**Proof.** Suppose \(\beta\leq\alpha\) and \(S_{\beta}\) contains regular elements. Let \(t\in S_{\beta}\). Then \(st\in S_{\alpha\beta}=S_{\beta}\). As \(S_{\beta}\) contains a regular element it is \(E-\)dense by Lemma 3.7, and so there exists \((st)^{\prime}\in W(st)\cap S_{\beta}\). Then \(t(st)^{\prime}st(st)^{\prime}=t(st)^{\prime}\) so \(t(st)^{\prime}\in W(s)\cap S_{\beta}\) as required. Conversely, let \(s^{\prime}\in W(s)\cap S_{\beta}\). Clearly \(s^{\prime}\) is regular, and \(s^{\prime}=s^{\prime}ss^{\prime}\in S_{\beta\alpha\beta}=S_{\alpha\beta}\) so \(\alpha\beta=\beta\) and hence \(\beta\leq\alpha\) as required.
From the perspective of stratified extensions, we cannot say anything about these semigroups in general. For example, a free semigroup \(S\) and a group \(G\) both satisfy the property that every regular \({\cal H}\)-class contains an idempotent, but \({\rm Base}(S)=\emptyset\) and \({\rm Base}(G)=G\). One condition that allows us to make more precise statements is to require that \(S\) is a group-bound semigroup. Note that group-bound implies eventually regular, and when every regular \({\cal H}\)-class contains an idempotent the two concepts are equivalent.
We will show that applying our results to a semigroup which is also group-bound gives the same decomposition as that in Theorem 1.5.
If \(S\) is a group-bound semigroup and \(e\in E(S)\) then let \(H_{e}\) denote the largest subgroup of \(S\) containing \(e\). The set of elements \(s\) such that \(s^{n}\in H_{e}\) for some \(n\in{\mathbb{N}}\) is denoted by \(K_{e}\). This is well defined in the sense that if \(s^{n}\in H_{e}\) we have \(s^{m}\in H_{e}\) for all \(m>n\) ([5, Lemma 1]). It also follows that the sets \(K_{e}\) partition \(S\). In general \(K_{e}\) is not a subsemigroup of \(S\) ([5, Proposition 7]) and in addition in a group bound semigroup \({\cal D}={\cal J}\) ([5, Lemma 4]). As is usual, \(J_{s}\) will denote the \({\cal J}-\)class of \(s\).
The following result is important in what follows.
**Lemma 3.10**: _Let \(S\) be an eventually regular semigroup such that every regular \({\cal H}\)-class contains an idempotent. If \(s\in K_{e}\) then \(J_{e}\) is the greatest \({\cal J}\)-class containing a weak inverse of \(s\). Moreover, if \(e{\cal J}f\) and \(s\in K_{e}\) and \(t\in K_{f}\) then \((s,t)\in\rho\)._
**Proof.** Let \(S\) be a semigroup satisfying the conditions stated. Since \(S\) is eventually regular and every regular element lies in a group, \(S\) is group-bound. Let \(s\in K_{e}\) for some idempotent \(e\), so that there exists \(n\in{\mathbb{N}}\) such that \(s^{n}\in H_{e}\). Then
\[(s^{n}(s^{n+1})^{-1})s(s^{n}(s^{n+1})^{-1})=s^{n}(s^{n+1})^{-1}e=s^{n}(s^{n+1} )^{-1}\]
where \((s^{n+1})^{-1}\) is the inverse of \(s^{n+1}\) in \(H_{e}\). Therefore \(s\) has a weak inverse in \(H_{e}\) and hence in \(J_{e}\).
Now let \(s^{\prime}\in W(s)\) and notice that \(s^{\prime}\) is regular and so lies in a group \(H_{f}\), say. By Lemma 3.1 every \(\mathcal{H}\)-class of \(J_{f}\) contains a weak inverse of \(s\). Let \(s^{\prime\prime}\) be a weak inverse of \(s\) such that \(s^{\prime\prime}\mathcal{L}s^{\prime}s\) and note that \(s^{\prime\prime}\in D_{f}=J_{f}\). Then as \(s^{\prime\prime}s^{\prime}s=s^{\prime\prime}\) we have
\[s^{\prime\prime}s^{\prime}s^{2}s^{\prime\prime}s^{\prime}=s^{\prime\prime}ss^{ \prime\prime}s^{\prime}=s^{\prime\prime}s^{\prime},\]
so \(s^{\prime\prime}s^{\prime}\in W(s^{2})\) and by Lemma 3.1, \(s^{\prime\prime}s^{\prime}\in J_{f}\). We can proceed inductively as follows. Let \(s^{\prime\prime\prime}\in W(s)\cap L_{s^{\prime\prime}s^{\prime}s^{2}}\) so that \(s^{\prime\prime\prime}s^{\prime\prime}s^{\prime}s^{2}=s^{\prime\prime\prime}\) and
\[s^{\prime\prime\prime}s^{\prime\prime}s^{\prime}s^{3}s^{\prime\prime\prime}s^{\prime\prime}s^{\prime}=s^{\prime\prime\prime}ss^{\prime\prime\prime}s^{\prime\prime}s^{\prime}=s^{\prime\prime\prime}s^{\prime\prime}s^{\prime}.\]
Hence \(s^{\prime\prime\prime}s^{\prime\prime}s^{\prime}\in W(s^{3})\cap J_{f}\).
We see then that there is a weak inverse of \(s^{n}\) in \(J_{f}\) for any \(n\in\mathbb{N}\). In particular, since \(s\in K_{e}\), we can choose \(n\) large enough such that \(s^{n}\in H_{e}\subseteq J_{e}\). Let \(s^{*}\) be the associated weak inverse of \(s^{n}\) in \(J_{f}\). Then by Lemma 1.4 we have \(J_{f}=J_{s^{*}}\leq J_{s^{n}}=J_{e}\). Consequently if \(s\in K_{e}\) then \(J_{e}\) is the greatest \(\mathcal{J}\)-class containing a weak inverse of \(s\).
Now let \(s\in K_{e}\) and \(t\in K_{f}\) as in the statement of the lemma. We can assume that \(s\) and \(t\) are regular. To see this, let \(n\in\mathbb{N}\) be the minimum value such that \(s^{n}\in H_{e}\) and note that if \((s^{n})^{\prime}\) is a weak inverse of \(s^{n}\) then \(s^{n-1}(s^{n})^{\prime}\) is a weak inverse of \(s\) with \(s^{n-1}(s^{n})^{\prime}\mathcal{L}(s^{n})^{\prime}\). This, along with the previous argument, shows that \(s\) has a weak inverse in a \(\mathcal{J}-\)class \(J\) if and only if the regular element \(s^{n}\) has a weak inverse in \(J\).
Let \(J\) be a \(\mathcal{J}-\)class containing a weak inverse \(s^{\prime}\) of \(s\). If \(t\mathcal{L}s\) then \(ts^{\prime}\mathcal{L}ss^{\prime}\) and so \(ts^{\prime}\in J\). Then, since \(J\) is regular, there exists \(r\in J\) such that \(ts^{\prime}r\in J\) is an idempotent, and so \(s^{\prime}rts^{\prime}r\in J\) is a weak inverse of \(t\). By a similar argument if \(t\mathcal{R}s\) there is a weak inverse of \(t\) in \(J\) and so if \(s\mathcal{J}t\) there is a weak inverse of \(t\) in \(J\). A dual argument then gives the opposite direction, and the result follows from the definition of \(\rho\).
Note that each \(\mathcal{H}\)-class of \(S\) contains at most one weak inverse of \(s\): if \(s^{\prime},s^{*}\in W(s)\) with \(s^{\prime}\mathcal{H}s^{*}\) then \(s^{\prime}s\mathcal{R}s^{\prime}\mathcal{R}s^{*}\mathcal{R}s^{*}s\). As \(\mathcal{L}\) is a right congruence we also have \(s^{\prime}s\mathcal{L}s^{*}s\). Since \(s^{\prime}s\) and \(s^{*}s\) are idempotents it follows that \(s^{\prime}s=s^{*}s\) and by a similar argument \(ss^{\prime}=ss^{*}\). Then \(s^{\prime}=s^{\prime}ss^{\prime}=s^{*}ss^{\prime}=s^{*}ss^{*}=s^{*}\).
**Theorem 3.11**: _Let \(S\) be a semigroup in which every regular \(\mathcal{H}\)-class contains an idempotent. If \(S\) is group-bound then \(S\) is a semilattice of Archimedean semigroups of the form \(K_{J_{e}}=\bigcup_{f\in E(J_{e})}K_{f}\) for \(e\in E(S)\)._
**Proof.** Let \(e\in E(S)\) and define \(K_{J_{e}}=\bigcup_{f\in E(J_{e})}K_{f}\). Let \(s,t\in K_{J_{e}}\) and notice that \(s\in K_{f},t\in K_{g}\) for some \(f,g\in E(J_{e})\), so that by Lemma 3.10, \((s,t)\in\rho\). Conversely, if \((s,t)\in\rho\) then there exist \(e,f\in E(S)\) such that \(s\in K_{e}\subseteq K_{J_{e}},t\in K_{f}\subseteq K_{J_{f}}\). By Lemma 3.10, \(J_{e}\) is the greatest \(\mathcal{J}-\)class containing a weak inverse of \(s\) and \(J_{f}\) is the greatest \(\mathcal{J}-\)class containing a weak inverse of \(t\). Since \((s,t)\in\rho\) it easily follows that \(J_{e}=J_{f}\) and so \(s,t\in K_{J_{e}}=K_{J_{f}}\). Hence the sets \(K_{J_{e}}\) are the \(\rho-\)classes and so partition \(S\), and since \(S/\rho\) is a semilattice the result follows.
For each \(e,f\in E(S)\) it follows that there exists \(g\in E(S)\) such that \(K_{J_{e}}K_{J_{f}}\subseteq K_{J_{g}}\). Since \(e\in K_{J_{e}}\) and \(f\in K_{J_{f}}\) then \(ef\in K_{J_{g}}\). In addition there exists a uniquely determined \(h\in E(S)\) such that \(ef\in K_{h}\subseteq K_{J_{h}}\) and so \(K_{J_{g}}=K_{J_{h}}\).
To see that \(K_{J_{e}}\) is an Archimedean semigroup, let \(a,b\in K_{J_{e}}\). Then there exist \(m,n\in\mathbb{N}\) such that \(a^{m},b^{n}\in J_{e}\) and so \(a^{m}\in K_{J_{e}}b^{n}K_{J_{e}}\subseteq K_{J_{e}}bK_{J_{e}}\) as required.
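Continuing the illustrative sketch above (and reusing `mul`, `S` and `principal_ideal` from it), the sets \(K_{e}\) and \(K_{J_{e}}\) appearing in Theorem 3.11 can be computed directly for a finite semigroup; the function names are our own.

```python
def left_ideal(s):
    return frozenset({s} | {mul(a, s) for a in S})

def right_ideal(s):
    return frozenset({s} | {mul(s, b) for b in S})

def h_class(e):
    # the H-class of e (the maximal subgroup H_e when e is idempotent)
    return {t for t in S
            if left_ideal(t) == left_ideal(e) and right_ideal(t) == right_ideal(e)}

def K(e):
    # K_e: elements some power of which lies in H_e
    He = h_class(e)
    out = set()
    for s in S:
        p, seen = s, set()
        while p not in seen:
            if p in He:
                out.add(s)
                break
            seen.add(p)
            p = mul(p, s)
    return out

def K_J(e):
    # K_{J_e}: the union of K_f over the idempotents f in the J-class of e
    Je = principal_ideal(e)
    idempotents = {f for f in S if mul(f, f) == f and principal_ideal(f) == Je}
    return set().union(*(K(f) for f in idempotents))

assert K_J(1) == {0, 1}   # the toy semigroup is a single rho-class K_{J_{x^2}}
```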
Note that a decomposition into a semilattice of Archimedean semigroups is necessarily unique: Let \(S=[Y;S_{\alpha}]=[Y^{\prime};S_{a}]\) be two Archimedean semilattice decompositions of the semigroup \(S\). If \(s,t\in S\) lie in the same subsemigroup \(S_{\alpha}\) where \(\alpha\in Y\) and \(s\in S_{a},t\in S_{b}\) where \(a,b\in Y^{\prime}\), then there exist \(n\in\mathbb{N}\) and \(u,v\in S\) such that \(s^{n}=utv\) and \(a\leq b\). Similarly \(b\leq a\) and so \(s,t\in S_{a}\) and the two semilattices, \(Y\) and \(Y^{\prime}\), are isomorphic. We have hence recovered the same decomposition as Shevrin (Theorem 1.5) in this case.
It is clear from the above structure that these semigroups are group-bound and since it is straightforward to check that \(\mathrm{Reg}(K_{J_{e}})=J_{e}\), then the regular elements form a completely simple subsemigroup.
The converse of Theorem 3.11 does not hold in general as an Archimedean semigroup need not contain regular elements and hence a semilattice of Archimedean semigroups may not be group-bound. It is enough, however, to require that each Archimedean semigroup contains a regular element.
**Corollary 3.12**: _Let \(S\) be a semigroup in which every regular \(\mathcal{H}\)-class contains an idempotent. Then \(S\) is group-bound if and only if \(S=[Y;S_{\alpha}]\) is a semilattice of Archimedean semigroups \(S_{\alpha}\) with \(\mathrm{Reg}(S_{\alpha})\neq\emptyset\)._
**Proof.** Clearly if \(S\) is group-bound then every subsemigroup contains a regular element. Conversely, let \(s\in S\). Then \(s\in S_{\alpha}\) for some \(\alpha\) and let \(t\in\mathrm{Reg}(S_{\alpha})\). Since \(S_{\alpha}\) is an Archimedean semigroup, there exists \(n\in\mathbb{N}\) such that \(s^{n}\in S_{\alpha}tS_{\alpha}\) and hence \(s^{n}\in\mathrm{Reg}(S_{\alpha})\subseteq\mathrm{Reg}(S)\) by Lemmas 3.7 and 3.8.
**Proposition 3.13**: _Let \(S\) be a semigroup. Any two of the following implies the third._
1. \(S\) _is group-bound_
2. _Every regular_ \(\mathcal{H}\)_-class of_ \(S\) _contains an idempotent_
3. \(S\) _is a semilattice of Archimedean semigroups_ \(S_{\alpha}\) _with_ \(\mathrm{Reg}(S_{\alpha})\neq\emptyset\)_._
**Proof.** By Corollary 3.12 we have (1) and (2) imply (3) and (2) and (3) imply (1). The remaining implication follows from Theorem 1.5.
We now turn our attention to describing the subsemigroups \(K_{J_{e}}\) at each vertex of the semilattice. Since each semigroup contains regular elements, they are all stratified extensions with a base consisting of at least the regular elements. From [5, Proposition 3] each \(K_{J_{e}}\) is an ideal extension of the completely simple semigroup \(J_{e}\) by a nilpotent semigroup. If this nilpotent semigroup is stratified then \(K_{J_{e}}\) is a nil-stratified extension with base \(J_{e}\). It is unclear to us whether every nilpotent semigroup is stratified.
**Lemma 3.14**: _Let \(S\) be an eventually regular semigroup such that \(\mathrm{Reg}(S)\) is completely simple and suppose \(S\) is a finitely stratified extension. Then \(\mathrm{Base}(S)\setminus\mathrm{Reg}(S)\) is either empty or infinite._
**Proof.** Suppose \(s_{0}\in\mathrm{Base}(S)\setminus\mathrm{Reg}(S)\neq\emptyset\). Since \(S\) is a finitely stratified extension, \(\mathrm{Base}(S)\) is a globally idempotent subsemigroup so \(s_{0}=s_{1}t_{1}\) for some \(s_{1},t_{1}\in\mathrm{Base}(S)\). If \(s_{1}\) is regular then as \(\mathrm{Reg}(S)\) is an ideal, \(s_{0}\) is regular giving a contradiction. Further, if \(s_{1}=s_{0}\) then \(s_{0}=s_{0}t_{1}=s_{0}{t_{1}}^{n}\) for any \(n\in\mathbb{N}\). We can choose \(n\) such that \({t_{1}}^{n}\) is regular, so \(s_{0}\) is again regular giving a contradiction. Hence \(s_{1}\) is an element of \(\mathrm{Base}(S)\setminus\mathrm{Reg}(S)\) not equal to \(s_{0}\). By a similar argument, \(s_{1}=s_{2}t_{2}\) where \(s_{2}\in\mathrm{Base}(S)\setminus\mathrm{Reg}(S)\) and \(s_{2}\) is not equal to \(s_{0}\) nor \(s_{1}\). Proceeding inductively we deduce that the set \(\{s_{0},s_{1},s_{2},\dots\}\) is an infinite subset of \(\mathrm{Base}(S)\setminus\mathrm{Reg}(S)\).
It follows that any finite semigroup in which every regular \(\mathcal{H}\)-class contains an idempotent is a semilattice of finitely stratified extensions with completely simple bases.
**Theorem 3.15**: _A semigroup \(S\) is a finite semigroup in which every regular \(\mathcal{H}\)-class contains an idempotent if and only if \(S=[Y;S_{\alpha}]\) is a finite semilattice of finite semigroups \(S_{\alpha}\) where each \(S_{\alpha}\) is a finitely stratified extension of a completely simple semigroup._
**Proof.** The forward implication is the observation immediately preceding the statement of the theorem. To see that the converse is true, let \(s\in S\) be a regular element, so that there exists \(\alpha\) such that \(s\in S_{\alpha}\). Let \(s^{\prime}\) be an inverse of \(s\) (within \(S\)) with \(s^{\prime}\in S_{\beta}\) for some \(\beta\). Then \(s=ss^{\prime}s\in S_{\alpha}S_{\beta}S_{\alpha}\subseteq S_{\alpha\beta}\cap S_{\alpha}\), and so \(S_{\alpha}=S_{\alpha\beta}\). Similarly \(s^{\prime}=s^{\prime}ss^{\prime}\in S_{\alpha\beta}\cap S_{\beta}\) and so \(S_{\alpha}=S_{\beta}\). It follows that \(s\) is regular within \(S_{\alpha}\) and so \(s\in\mathrm{Base}(S_{\alpha})\) and is therefore \(\mathcal{H}-\)related to an idempotent as required.
## 4 Strict extensions of Clifford Semigroups
This section makes use of the notation of Clifford and Preston [1, Section 4.4], and in particular that relating to ideal extensions determined by partial homomorphisms. A Clifford semigroup is a completely regular inverse semigroup. It is well known that a Clifford semigroup \(S\) decomposes as a semilattice of groups \(S=\mathcal{S}[Y;G_{\alpha}]\). We begin by showing that a strict extension \(\Sigma\) of a Clifford semigroup \(S\) has a semilattice structure isomorphic to that of the Clifford semigroup itself.
**Lemma 4.1**: _Let \(S=\mathcal{S}[Y;G_{\alpha}]\) be a Clifford semigroup. An ideal extension of \(S\) is strict if and only if it is determined by a partial homomorphism._
**Proof.** Let \(a,b\in S\) be such that \(ax=bx\) and \(xa=xb\) for all \(x\in S\). As \(S\) is a Clifford semigroup \(a\in G_{\alpha}\) and \(b\in G_{\beta}\) for some \(\alpha,\beta\in Y\). Let \(e,f\) be the identities of \(G_{\alpha},G_{\beta}\) respectively. Then \(a=ea=eb\) and so \(\alpha\leq\beta\). Similarly, \(b=fb=fa\) so \(\beta\leq\alpha\) and so \(\alpha=\beta\) and \(e=f\). Then \(a=ea=eb=b\) and hence \(S\) is weakly reductive. The result then follows from Theorem 1.2.
**Lemma 4.2**: _Let \(\Sigma\) be a strict extension of a Clifford semigroup \(S={\cal S}[Y;G_{\alpha}]\) by a semigroup \(T\) defined by a partial homomorphism \(A\mapsto\overline{A}\) and let \(\Sigma_{\alpha}=G_{\alpha}\cup\{A\in T\setminus\{0\}|\overline{A}\in G_{\alpha}\}\) for each \(\alpha\in Y\). Define a relation \(\sim\) on \(\Sigma\) by \(s\sim t\) if and only if \(s,t\in\Sigma_{\alpha}\) for some \(\alpha\in Y\). Then \(\sim\) is a congruence and \(\Sigma/\!\sim\) is a semilattice isomorphic to \(Y\)._
**Proof.** Clearly \(\sim\) is an equivalence relation. To prove \(\sim\) is a congruence and that \(\Sigma/\!\sim\cong Y\) we show that \(\sim\) is the kernel of the homomorphism \(\theta:\Sigma\to Y\) where if \(s\in\Sigma_{\alpha}\) then \(\theta(s)=\alpha\). Note that if \(A\in T\setminus\{0\}\) then \(\theta(A)=\theta(\overline{A})\). We have four cases to consider, two of which are dual:
1. If \(s,t\in S\) then \(\theta(s)\theta(t)=\theta(st)\) follows from the semilattice structure of \(S\).
2. If \(s\in S\) and \(A\in T\setminus\{0\}\) then \(\theta(s)\theta(A)=\theta(s)\theta(\overline{A})=\theta(s\overline{A})=\theta (sA)\), where the last two equalities follow from the first case and multiplication in a strict extension respectively. The case for \(\theta(A)\theta(s)\) follows similarly.
3. If \(A,B\in T\setminus\{0\}\) then \(\theta(A)\theta(B)=\theta(\overline{A})\theta(\overline{B})=\theta(\overline{A }\ \overline{B})\) by the first case. Then if \(AB=0\) in \(T\) we have \(\theta(AB)=\theta(\overline{A}\ \overline{B})\) and if \(AB\neq 0\) in \(T\) we have \(\theta(AB)=\theta(\overline{AB})=\theta(\overline{A}\ \overline{B})\). In either case \(\theta(A)\theta(B)=\theta(AB)\).
Hence \(\theta\) is a homomorphism as required and \(\sim\) is clearly its kernel.
**Theorem 4.3**: _Every strict extension \(\Sigma\) of a Clifford semigroup \(S\) by a semigroup \(T\) is a semilattice of extensions of groups. Conversely, if \(\Sigma\) is a semilattice of extensions \(\Sigma_{\alpha}\) of groups \(G_{\alpha}\) and \(S=\bigcup_{\alpha\in Y}G_{\alpha}\) is an ideal of \(\Sigma\) then \(\Sigma\) is a strict extension of the Clifford semigroup \(S\)._
**Proof.** By Lemma 4.2, \(\Sigma\) is a semilattice of semigroups \(\Sigma_{\alpha}\) defined via a partial homomorphism \(A\mapsto\overline{A}\) from \(T\setminus\{0\}\to S\). The restriction of this map to \(\Sigma_{\alpha}\setminus G_{\alpha}\) gives a partial homomorphism defining the ideal extension \(\Sigma_{\alpha}\) of the group \(G_{\alpha}\).
Conversely, let \(\Sigma\) be a semilattice of semigroups \(\Sigma_{\alpha}\) where each \(\Sigma_{\alpha}\) is an ideal extension of a group \(G_{\alpha}\) by a stratified semigroup \(T_{\alpha}\) and \(S=\bigcup_{\alpha\in Y}G_{\alpha}\) is an ideal of \(\Sigma\). It follows that \(S\) is a Clifford semigroup and \(\Sigma\) is an ideal extension of \(S\) by \(T=\Sigma/S\), where \(T\) can equivalently be viewed as \(\{0\}\cup\bigcup_{\alpha\in Y}T_{\alpha}\setminus\{0\}\). As \(G_{\alpha}\) has identity \(e_{\alpha}\) the extension \(\Sigma_{\alpha}\) is determined by the partial homomorphism \(A\mapsto Ae_{\alpha}\ (=e_{\alpha}A)\) (Proposition 1.1 and Theorem 1.2). The union of these maps is then a map \(\varphi:T\setminus\{0\}\to S\) such that \(\varphi(A)=Ae_{\alpha}\) for each \(A\in T_{\alpha}\setminus\{0\}\). We will show that \(\varphi\) is a partial homomorphism and that it defines the ideal extension \(\Sigma\). For clarity, the multiplication determined by \(\varphi\) will be denoted by \(\circ\), multiplication within \(T\) by \(*\), and the original multiplication of the semilattice \(\Sigma\) by juxtaposition. Let \(A,B\in T\setminus\{0\}\) such that \(A*B\neq 0\) and assume \(A\in T_{\alpha}\), \(B\in T_{\beta}\) so that \(A*B\in T_{\alpha\beta}\). Then \(\varphi(A)\varphi(B)=Ae_{\alpha}(Be_{\beta})=A(Be_{\beta})e_{\alpha}=ABe_{ \alpha\beta}=\varphi(AB)\) as required.
This partial homomorphism determines an ideal extension of \(S\) consisting of the same set \(\Sigma\) under the multiplication \(\circ\) defined by
1. \(s\circ t=st\)
2. \(A\circ B=\begin{cases}AB&\text{if }A*B\neq 0\\ \varphi(A)\ \varphi(B)&\text{otherwise}\end{cases}\)
3. \(A\circ s=\varphi(A)s\)
4. \(s\circ A=s\varphi(A)\)
where \(A,B\in T\setminus\{0\}\) and \(s,t\in S\). We show that in all cases, this multiplication is equivalent to the original multiplication on \(\Sigma\). The first condition and the first part of the second condition do not require proof. For the second part of the second condition, let \(A\in T_{\alpha}\setminus\{0\}\) and \(B\in T_{\beta}\setminus\{0\}\) with \(A*B=0\) so \(AB\in G_{\alpha\beta}\). Then
\[A\circ B=\varphi(A)\varphi(B)=Ae_{\alpha}(Be_{\beta})=A(Be_{\beta})e_{\alpha} =ABe_{\alpha\beta}=AB\]
as required. For the third condition, let \(A\in T_{\alpha}\setminus\{0\}\) and \(s\in G_{\beta}\) with \(As\in G_{\alpha\beta}\). Then
\[A\circ s=\varphi(A)s=Ae_{\alpha}(se_{\beta})=A(se_{\beta})e_{\alpha}=Ase_{ \alpha\beta}=As\]
as required. The fourth condition follows a dual argument. Hence \(\varphi\) determines the extension \(\Sigma\) and so it is a strict extension of \(S\).
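The multiplication \(\circ\) given by conditions (1)-(4) above can be written out schematically as follows; this is only an illustrative sketch (assuming the caller supplies \(S\), the two multiplications, the zero of \(T\) and the partial homomorphism \(\varphi\)), and all names are our own.

```python
def extension_product(S, mul_S, mul_T, phi, zero):
    """A sketch of the multiplication 'circ' on Sigma = S together with
    T minus its zero, determined by a partial homomorphism phi from
    T \\ {0} to S, following conditions (1)-(4) above."""
    def circ(a, b):
        a_in_S, b_in_S = a in S, b in S
        if a_in_S and b_in_S:                 # (1)  s o t = st
            return mul_S(a, b)
        if not a_in_S and not b_in_S:         # (2)  A o B
            ab = mul_T(a, b)
            return ab if ab != zero else mul_S(phi(a), phi(b))
        if not a_in_S:                        # (3)  A o s = phi(A) s
            return mul_S(phi(a), b)
        return mul_S(a, phi(b))               # (4)  s o A = s phi(A)
    return circ
```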
**Corollary 4.4**: _Let \(\Sigma\) be a strict stratified extension of a Clifford semigroup \(S\). Then \(\Sigma\) is a semilattice of stratified extensions of groups._
**Proof.** Let \(\Sigma\) be a strict extension of a Clifford semigroup \(S\) by a stratified semigroup \(T\). By Theorem 4.3, \(\Sigma\) is a semilattice of semigroups \(\Sigma_{\alpha}\), each of which is an ideal extension of a group \(G_{\alpha}\) by a subsemigroup of \(T\) containing zero. It can be easily verified that such a subsemigroup is also stratified, and hence \(\Sigma\) is a semilattice of stratified extensions of groups.
The converse of Corollary 4.4 does not hold in general as each \(T_{\alpha}\) being a stratified semigroup does not guarantee that \(T\) is itself a stratified semigroup. For example, let \(Y=\{a,b\}\) with \(a\leq b\). For each \(\alpha\in Y\) let \(G_{\alpha}\) be a group, \(T_{\alpha}\) a free semigroup with adjoined zero, and \(\Sigma_{\alpha}\) an ideal extension of \(G_{\alpha}\) by \(T_{\alpha}\). For \(s\in T_{a}\) and \(t\in T_{b}\) let \(st=ts=s\). Along with the fact that \(S=G_{a}\cup G_{b}\) is an ideal of \(\Sigma\), this defines a multiplication on the semilattice \(\Sigma=\Sigma_{a}\cup\Sigma_{b}\). Each \(T_{\alpha}\) is a stratified semigroup so each \(\Sigma_{\alpha}\) is a stratified extension of a group, however \(T=\Sigma/S\) is not stratified, as \(\bigcap_{i\geq 1}T^{i}\cong T_{a}\). A sufficient, but clearly not necessary, condition under which \(T\) will always be stratified is if \(T\) is finite.
As an example of the above construction, consider the following. Let \(n\in\mathbb{N}\) and let \(N=\{1,\ldots,n\}\). Let \(S=G_{1}^{0}\times\ldots\times G_{n}^{0}\) be a direct product of \(0-\)groups \(G_{i}^{0}\), \(i\in N\). For \(s=(a_{1},\ldots,a_{n})\in S\) define \(\text{dom}(s)=\{i\in N|a_{i}\neq 0\}\).
Let \(m\in\mathbb{N}\) and define a relation \(\rho_{m}\) on \((\mathbb{N},+)\) by
\[\rho_{m}=1_{\mathbb{N}}\cup\{(x,y)\in\mathbb{N}\times\mathbb{N}|x,y\geq m\}.\]
Then it is easy to check that \(S\) is a Clifford semigroup (and hence a strong semilattice of groups), \(\rho_{m}\) is a congruence on \(\mathbb{N}\) and \(\mathbb{N}/\rho_{m}\) is a finite monogenic semigroup with
trivial kernel. For simplicity, we shall identify \(\mathbb{N}/\rho_{m}\) with \(\{1,\ldots,m\}\), in the obvious way. Let \(T^{\prime}\) be the semigroup of all partial maps from \(N\) to \(\mathbb{N}/\rho_{m}\) with binary operation \(*\) given by \((f*g)(x)=f(x)+g(x)\) when both are defined and undefined otherwise. Let \(I\subseteq T^{\prime}\) be the set of maps whose image is \(\{m\}\). It can be readily seen that \(I\) is an ideal of \(T^{\prime}\) and \(T=T^{\prime}/I\) is a nilpotent semigroup.
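As a small computational sanity check of this construction (our own illustration, with arbitrarily chosen sizes \(n\) and \(m\)), partial maps can be encoded as dictionaries and the \(m\)-fold \(*\)-product of a total map is seen to land in the ideal \(I\).

```python
from functools import reduce

n, m = 3, 4                       # illustrative sizes only
N = range(1, n + 1)

def add_m(x, y):
    # addition in N/rho_m, identified with {1, ..., m}
    return min(x + y, m)

def star(f, g):
    # (f * g)(x) = f(x) + g(x) wherever both are defined; partial maps as dicts
    return {x: add_m(f[x], g[x]) for x in f.keys() & g.keys()}

def in_ideal(f):
    # I consists of the maps whose image is {m}; these become the zero of T = T'/I
    return set(f.values()) == {m}

# sanity check: the m-fold product of the total map sending every point to 1
# has image {m}, so it lies in I (i.e. it is zero in the nilpotent quotient T)
f = {x: 1 for x in N}
assert in_ideal(reduce(star, [f] * m))
```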
For each \(i\in N\) pick an element \(g_{i}\in G_{i}\) and let \(\alpha_{i}:T\setminus\{0\}\to G_{i}^{0}\) be the partial homomorphism given by
\[\alpha_{i}(f)=\begin{cases}g_{i}^{f(i)}&f(i)\text{ is defined}\\ 0&\text{otherwise.}\end{cases}\]
Then \(\alpha:T\setminus\{0\}\to S\) given by \(\alpha(f)=(\alpha_{1}(f),\ldots,\alpha_{n}(f))\) is a partial homomorphism defining an ideal extension \(\Sigma\) of \(S\) by \(T\).
Notice that \(s\mathcal{J}t\) if and only if \(\operatorname{dom}(s)=\operatorname{dom}(t)\). It follows that the semilattice structure of \(S\) is defined in terms of the power set of \(N\) (i.e. \(\operatorname{dom}(st)=\operatorname{dom}(s)\cap\operatorname{dom}(t)\)). Let \(S_{M}\) be the \(\mathcal{J}\)-class of \(S\) with \(\operatorname{dom}(s)=M\) for \(s\in S_{M}\). Then \(T_{M}=\alpha^{-1}(S_{M})\) is the set of maps in \(T\setminus\{0\}\) whose domain is exactly \(M\). The set \(T_{M}^{0}=T_{M}\cup\{0\}\) is a subsemigroup of \(T\) and is nilpotent. The restriction of \(\alpha\) to \(T_{M}\) then gives a partial homomorphism from \(T_{M}^{0}\) to \(S_{M}\) which defines an ideal extension \(\Sigma_{M}\) of the group \(S_{M}\) by \(T_{M}^{0}\). It can then be shown that \(\Sigma\) is a semilattice of these semigroups \(\Sigma_{M}\).
|
2304.06360 | Spatiotemporal dynamics of transonic shock-wave/turbulent-boundary-layer
interactions in an overexpanded planar nozzle | We perform a combined numerical and experimental study to investigate the
transonic shock-wave/turbulent-boundary-layer interactions (STBLI) in a
shock-induced separated subscale planar nozzle with fully-expanded Mach
number, $M_j = 1.05$ and jet Reynolds number $Re \sim 10^5$. The nozzle
configuration is tested via time-resolved schlieren visualisation. While
numerous studies have been conducted on the high Reynolds number separated
flowfields, little is known about the weak shock wave unsteadiness present in low
nozzle pressure ratio (NPR) transonic nozzles. Therefore, numerical simulations
are carried out with high resolution three-dimensional delayed detached eddy
simulation (DDES), to study the spatiotemporal dynamics of wall pressure
signals and unsteady shock interactions. The transient statistics considered
include spectral Fourier and wavelet-based analysis and dynamic mode
decomposition (DMD). The spectral analyses reveal energetic low frequency modes
corresponding to the staging behaviour of shock unsteadiness, and high
frequencies linked to the characteristics of the Kelvin-Helmholtz instabilities
in the downstream turbulent mixing layer. The mechanisms for the low frequency
unsteadiness are educed through modal decomposition and spectral analysis,
wherein it is found that the downstream perturbations within the separation
bubble play a major role in not only closing the aeroacoustic feedback loop,
but allowing the continual evolution and sustainment of low frequency
unsteadiness. An analysis via the vortex sheet method is also carried out to
characterise the screech production, by assuming an upstream propagating guided
jet mode | Justin Kin Jun Hew, Emanuele Martelli, Mahdi Davoodianidalik, Rod W. Boswell, Christoph Federrath, Matthew Shadwell | 2023-04-13T09:26:07Z | http://arxiv.org/abs/2304.06360v1 | Spatiotemporal dynamics of transonic shock-wave/turbulent-boundary-layer interactions in an overexpanded planar nozzle
###### Abstract
We perform a combined numerical and experimental study to investigate the transonic shock-wave/turbulent-boundary-layer interactions (STBLI) in a shock-induced separated subscale planar nozzle with fully-expanded Mach number, \(M_{j}=1.05\) and jet Reynolds number \(Re\sim 10^{5}\). The nozzle configuration is tested via time-resolved schlieren visualisation. While numerous studies have been conducted on the high Reynolds number separated flowfields, little is known on the weak shock wave unsteadiness present in low nozzle pressure ratio (NPR) transonic nozzles. Therefore, numerical simulations are carried out with high resolution three-dimensional delayed detached eddy simulation (DDES), to study the spatiotemporal dynamics of wall pressure signals and unsteady shock interactions. The transient statistics considered include spectral Fourier and wavelet-based analysis and dynamic mode decomposition (DMD). The spectral analyses reveal energetic low frequency modes corresponding to the staging behaviour of shock unsteadiness, and high frequencies linked to the characteristics of the Kelvin-Helmholtz instabilities in the downstream turbulent mixing layer. The mechanisms for the low frequency unsteadiness is educed through modal decomposition and spectral analysis, wherein it is found that the downstream perturbations within the separation bubble play a major role in not only closing the aeroacoustic feedback loop, but allowing the continual evolution and sustainment of low frequency unsteadiness. An analysis via the vortex sheet method is also carried out to characterise the screech production, by assuming an upstream propagating guided jet mode.
## I Introduction
The study of shock-wave turbulent boundary layer interactions (STBLI) occurring in supersonic internal and external flows has received a significant amount of interest, due to its great importance in many engineering applications of interest. These include the design of supersonic inlets [1; 2; 3], impinging jets [4], rocket nozzles [5; 6], scramjet engines, biconic bodies and others (see e.g., review articles by Dolling [7], Hadjadj and Onofri [8] and Gaitonde [9]). Transonic and supersonic propulsive nozzles operating at off-design conditions often result in shock-induced separation, which induces dynamical instabilities, wall-pressure oscillations, generation of off-axis forces, aeroacoustic resonance such as screech, transonic tones and Mach wave radiation [5; 10; 11; 12; 13]. These effects can even result in engine unstart and structural damage to the engine in question [14].
It has been identified that there are three main frequency components in unsteady shock boundary layer interaction, as observed in canonical configurations of oblique shock SBII (OSWBLI) (swept fins, double cones, impinging SBII etc.) (see e.g. Clemens and Narayanaswamy [15] for a review), which all scale with the freestream velocity \(U_{\infty}\), and the 99% boundary layer thickness, \(\delta_{99}\). High frequency unsteadiness is of order \(f\sim\mathcal{O}(U_{\infty}/\delta_{99})\), which is triggered by the incoming turbulent boundary layer (TBL) as a result of the evolution of coherent structures and anisotropic nature of the upstream flow. The intermediate range, which is of order \(f\sim\mathcal{O}(0.1U_{\infty}/\delta_{99})\), characterises the vortex shedding resonant frequency from the separated shear layer, which is dominated by oblique modes, displaying fundamental characteristics that is similar to the incompressible mixing layer. The low frequency unsteadiness, which is of order \(f\sim\mathcal{O}(0.01U_{\infty}/\delta_{99})\), is related to the unsteadiness located at the shock-induced point of separation. There are two dominant theories about the origin of this inherent low frequency dynamics, one of which suggests that the incoming boundary layer carries inherent low order unsteadiness through turbulent fluctuations, which promotes and modulates the spectral signature at the separation point [16; 17; 18; 19]. Such a hypothesis was first observed in experiments by Andreopoulos and Muck [20] for the compression-ramp flows (CRSBLI) at freestream Mach number \(M_{\infty}=2.84\), and has been observed numerically and analytically by Touber and Sandham [21] using large eddy simulations (LES) and stochastic (Fokker-Planck) analytical modelling approaches, where low frequency global |
2306.06519 | Phylogenetic network classes through the lens of expanding covers | It was recently shown that a large class of phylogenetic networks, the
`labellable' networks, is in bijection with the set of `expanding' covers of
finite sets. In this paper, we show how several prominent classes of
phylogenetic networks can be characterised purely in terms of properties of
their associated covers. These classes include the tree-based, tree-child,
orchard, tree-sibling, and normal networks. | Andrew Francis, Daniele Marchei, Mike Steel | 2023-06-10T20:46:45Z | http://arxiv.org/abs/2306.06519v2 | # Phylogenetic network classes through the lens of expanding covers
###### Abstract.
It was recently shown that a large class of phylogenetic networks, the 'labellable' networks, is in bijection with the set of 'expanding' covers of finite sets. In this paper, we show how several prominent classes of phylogenetic networks can be characterised purely in terms of properties of their associated covers. These classes include the tree-based, tree-child, orchard, tree-sibling, and normal networks.
Key words and phrases:phylogenetic network, expanding cover, partition, algorithms, spanning tree, characterising network classes, encoding
## 1. Introduction
Phylogenetic networks can provide more complete representations of evolutionary relationships among species than is possible with a simple phylogenetic tree [1, 13]. Although a single tree can accurately show ancestral speciation events (splitting of lineages), it cannot display reticulate evolution (where the flow of genomic information follows the merging of ancestral lineages). Well-known reticulate processes in biology include hybridization, horizontal gene transfer, recombination, and endosymbiosis, in both the recent and distant past. By contrast, rooted phylogenetic networks can explicitly and simultaneously display both speciation and reticulate evolution. As a result, the mathematical and algorithmic investigation of phylogenetic networks has become a highly active field over the last \(\sim\)15 years, and numerous classes of networks have been defined and studied [16].
In this paper, we show how a recently introduced correspondence for a large class of phylogenetic networks (the _labellable_ networks [11]) can be used to characterise a number of widely used other classes of network. Classes of network have been introduced for a variety of reasons, but usually in order to capture some feature that seems biologically important, or because they are mathematically convenient. Their definitions typically involve constraints on their structures as graphs. For instance, tree-child networks are those for which no vertex has only reticulations as its children, whereas tree-based networks are those that can be constructed from a base tree by adding additional edges between the tree edges.
The class of labellable networks contains many commonly studied classes. They have been shown to correspond to a set of covers of finite sets that satisfy a property called "expanding". We explore features of covers arising from networks, and characterise many of the familiar classes in terms of properties of their associated covers. It is to be hoped that encoding network properties in the properties of sets of sets will enable some new directions to be pursued in studying phylogenetic networks.
This paper aims to demonstrate how this encoding of labellable networks into covers may be of broad use in the classification of network classes. Different classes of networks are defined in different ways, and it can be difficult to present a clear hierarchy (there have been several visual attempts, for instance [16, Fig.12] and [11, Fig.6]). Being able to characterise different network classes by the properties of their covers gives a unified framework for defining networks, in the sense that one may add or remove axioms depending on the class of networks one wants to describe. In that sense, moving from one class to another may be just a matter of changing the axioms, providing a potentially useful lens for visualizing the relationships among classes.
We begin by defining what we mean by a phylogenetic network, recalling the key results linking labellable networks with expanding covers (from [11]), in Section 2. We give some general properties
of covers arising from networks, before characterising the classes of tree-based labellable networks (Section 3), then tree-child networks (Section 4), normal networks (Section 5), tree-sibling networks (Section 6), and orchard networks (Section 7). These are some of the more widely seen classes, and they are amenable to being described in terms of covers. We also demonstrate how the language of covers can allow one to define new classes of network by changing the constraints on the covers: one small change to the constraints defines a new class we call'spinal' networks, that have an interesting structure (Section 8). We finish by discussing some open questions and opportunities for further development.
## 2. Preliminaries
A _phylogenetic network_ on \(n\) leaves is a directed acyclic graph with a single vertex of in-degree zero, called the root, and \(n\) vertices of in-degree \(1\) and out-degree zero, labelled by \([n]:=\{1,\ldots,n\}\). Note that this includes the possibility of vertices that have in-degree and out-degree both equal to \(1\), or both strictly greater than \(1\); such vertices are called _degenerate_. If \(N\) has any degenerate vertices, it is said to be a _degenerate_ network; otherwise, it is _non-degenerate_.
If every vertex has in-degree and out-degree at most \(2\), then the network is said to be _binary_. If \(N\) is non-degenerate and binary, then all vertices other than the leaves and root have total degree \(3\).
Vertices in a network that have in-degree \(1\) are called _tree vertices_, and those with in-degree greater than \(1\) are called _reticulate vertices_, or _reticulations_. We will typically use \(k\) to denote the number of reticulations in a network, and \(m\) to denote the number of non-root vertices in total.
A _labellable_ phylogenetic network is one whose vertices can be deterministically labelled according to an algorithm that generalises one for trees (the algorithm for trees is due to Erdos and Szekely [6]) [11]. Such networks are characterized topologically by the property that the map from non-leaf vertices to their sets of children is one-to-one [11, Thm.3.3].
A _partition_ of a finite set \(A\) is a set of non-empty, pairwise disjoint subsets of \(A\) whose union is \(A\). A _cover_ of a finite set \(A\) is a set of non-empty subsets of \(A\) whose union is \(A\). The cardinality \(|\mathcal{C}|\) of a cover \(\mathcal{C}\) is the number of sets it contains. We use \(||\mathcal{C}||\) to denote the number of distinct elements in the sets in \(\mathcal{C}\), that is, \(||\mathcal{C}||:=|\bigcup_{C_{i}\in\mathcal{C}}C_{i}|\).
Recall the definition from [11]:
**Definition 2.1**.: A cover \(\mathcal{C}\) of \([m]\) is _expanding_ if, for \(n=m-|\mathcal{C}|+1\), it satisfies:
1. No element of \([n]\) appears more than once, and
2. For \(i=1,\ldots,|\mathcal{C}|\), the cover contains at least \(i\) subsets of \([n+i-1]\).
**Theorem 2.2**.: _[_11_, Thm. 4.4]_ _The class of labellable phylogenetic networks is in bijection with the collection of expanding covers of finite sets._
The map from a labellable phylogenetic network to its expanding cover takes each non-leaf vertex to the set of labels of its children. That is, sets in the cover are sets of labels of sibling vertices sharing a parent. The map from an expanding cover \(\mathcal{C}\) to a labellable network is a constructive map that first establishes the number of leaves in the network via the following formula [11, Lemma 4.1]:
\[n=||\mathcal{C}||-|\mathcal{C}|+1.\]
The construction of the network then begins with \(n\) isolated leaf vertices, and repeatedly adds a parent vertex above the lexicographically minimal set in \(\mathcal{C}\) whose elements are all present in the growing network. The expanding conditions ensure that there is always such a set, and that the map is well-defined. For examples of this construction the reader is referred to [11].
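As an illustrative sketch (ours, not from [11]), the expanding conditions and the leaf-count formula above can be checked directly; the cover used below is the one associated with the network of Figure 1, which reappears in the examples later in the paper, and it is assumed to be a cover of \([m]=\{1,\dots,m\}\).

```python
def num_leaves(cover):
    # n = ||C|| - |C| + 1 (Lemma 4.1 of [11], quoted above)
    return len(set().union(*cover)) - len(cover) + 1

def is_expanding(cover):
    # assumes the cover is over [m] = {1, ..., m}
    m = len(set().union(*cover))
    n = m - len(cover) + 1
    # (1) no element of [n] appears more than once
    small = [x for s in cover for x in s if x <= n]
    if len(small) != len(set(small)):
        return False
    # (2) for i = 1, ..., |C| the cover contains at least i subsets of [n + i - 1]
    return all(sum(1 for s in cover if max(s) <= n + i - 1) >= i
               for i in range(1, len(cover) + 1))

# the cover of the network of Figure 1
C = [{1}, {2}, {3}, {4, 5}, {6, 7}, {6, 8}, {7, 8},
     {11, 12}, {9, 13}, {10, 13}, {14, 15}]
assert is_expanding(C) and num_leaves(C) == 5
```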
While the condition for a cover to be expanding may seem artificial, and it certainly restricts from the collection of all covers of a set, it can be seen as a natural extension of the notion of partitions. In particular, it turns out that all partitions are expanding covers.
**Lemma 2.3**.: _Every set partition is an expanding cover._
Proof.: Let \(\pi\) be a partition of \([m]\) with \(\ell=|\pi|\) blocks, and set \(n=m-\ell+1\). Two conditions define an expanding cover. The first is that elements of \(\{1,\dots,n\}\) are not repeated in \(\pi\), which is satisfied by virtue of \(\pi\) being a partition. The second is that for each \(i=1,\dots,\ell\), \(\pi\) contains at least \(i\) subsets of \([n+i-1]\), and we prove this by induction on \(i\).
First, consider the base case \(i=1\). We need to show that there is at least one set in \(\pi\) that is a subset of \([n]\). There are \(\ell=m-n+1\) pairwise disjoint subsets of \([m]\) in \(\pi\), and there are \(m-n\) integers in \([m]\) that are not in \([n]\). Therefore, there must be at least one set in \(\pi\) that does not contain an element of \(\{n+1,\dots,m\}\) and is thus in \([n]\), as required.
Suppose that for \(i=k\), \(\pi\) contains at least \(k\) subsets of \([n+k-1]\). We would like to show that \(\pi\) contains at least \(k+1\) subsets of \([n+(k+1)-1]=[n+k]\). The proof proceeds in the same manner as the case of \(i=1\).
First remove \(k\) subsets of \([n+k-1]\) from \(\pi\), so that \(\pi\) has \(\ell-k\) sets remaining. We need to show at least one remaining set is entirely contained within \([n+k]\). There are \(m-(n+k)\) integers in \(\pi\) that are _not_ in \([n+k]\), and \(\ell-k=(m-n+1)-k=m-(n+k)+1\) sets are available. Therefore, at least one must not contain any element outside \([n+k]\), as required.
Since all set partitions are expanding covers, we can ask what sort of networks have partitions as their covers. A partition has a single occurrence of each integer, which means that each vertex of the network (each label) has a single set of siblings. In other words, the network has no reticulations, and thus is a tree. This correspondence of trees with partitions allows trees with degenerate vertices (i.e., vertices with in-degree and out-degree 1). In this way, the correspondence for partitions is closer to the result of Erdos and Szekely [6] than the non-degenerate framework that has partitions in bijection with phylogenetic forests in [8].
The lexicographic order on sets (given by \(A\prec B\) if \(A\subset B\) or \(\min(A\setminus B)<\min(B\setminus A)\)) that helps determine the labelling sequence is not always the ordering of sets used to label the internal vertices of the network; that sequence is given by the _labelling order_, which is defined as follows [11, Section 4]:
**Definition 2.4**.: The _labelling order_ for an expanding cover \(\mathcal{C}\) is determined by the following procedure.
1. For \(i=1,\dots,|\mathcal{C}|\):
   1. Set \(C_{i}\) to be the minimal set in \((\mathcal{C},\prec)\) contained in \([n+i-1]\); and
   2. Redefine \(\mathcal{C}=\mathcal{C}\setminus\{C_{i}\}\).
2. Output the sequence \(C_{1},\dots,C_{|\mathcal{C}|}\).
This order is necessary to establish conditions on a cover that give non-degenerate networks, for instance, and we will use it later in the present paper to describe _normal networks_ (in Section 5) and _orchard networks_ (Section 7).
Given a cover in labelling order, we can label every subset in position \(1\leq i<|\mathcal{C}|\) by \(i+n\), whereas the last subset is labelled \(\rho\) for the _root_. In this way, the label for each subset corresponds to the label of its parent in the corresponding labellable network.
For example, the labelling order for the network shown in Figure 1 is
\[1\mid 2\mid 3\mid 4,5\mid 6,7\mid 6,8\mid 7,8\mid 11,12\mid 9,13\mid 10,13 \mid 14,15.\]
The first set gives rise to the vertex label \(n+1=6\), the second gives rise to \(7\), and so on. We can represent this more explicitly as follows, adding \(\rho\) to denote the root:
\[\{1\}_{6},\{2\}_{7},\{3\}_{8},\{4,5\}_{9},\{6,7\}_{10},\{6,8\}_{11},\{7,8\}_{12 },\{11,12\}_{13},\{9,13\}_{14},\{10,13\}_{15},\{14,15\}_{\rho}.\]
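Definition 2.4 can be implemented directly, as in the following sketch (our own illustration); run on the cover above, listed in an arbitrary order, it recovers exactly the labelling order just displayed.

```python
def precedes(A, B):
    # the order used above: A precedes B if A is a proper subset of B,
    # or min(A \ B) < min(B \ A)
    if A < B:            # proper subset
        return True
    if B <= A:           # equal, or B a subset of A
        return False
    return min(A - B) < min(B - A)

def labelling_order(cover):
    n = len(set().union(*cover)) - len(cover) + 1
    remaining = [set(s) for s in cover]
    order = []
    for i in range(1, len(cover) + 1):
        # the sets currently contained in [n + i - 1]
        candidates = [s for s in remaining if max(s) <= n + i - 1]
        best = candidates[0]
        for s in candidates[1:]:
            if precedes(s, best):
                best = s
        order.append(best)
        remaining.remove(best)
    return order

C = [{1}, {2}, {3}, {4, 5}, {6, 8}, {6, 7}, {7, 8},
     {11, 12}, {9, 13}, {10, 13}, {14, 15}]
assert labelling_order(C) == [{1}, {2}, {3}, {4, 5}, {6, 7}, {6, 8}, {7, 8},
                              {11, 12}, {9, 13}, {10, 13}, {14, 15}]
```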
### Features of vertices in networks and their covers' properties
Many features of vertices in networks have direct translations into the language of covers, and we present some of them in Table 1. The first two lines of the table are clear: non-root vertices on a network are labelled by the labelling algorithm and those labels appear as integers in \([m]\), and the leaves are labelled by integers in \([n]\). The other lines of the table can be justified as follows.
A tree vertex in a network is a vertex with in-degree \(1\), which means it has only one parent and, therefore, is in only one set of sibling vertices. This set of sibling vertices could have any size greater than or equal to one, but it is only a single set. A reticulation vertex, on the other hand, has strictly more than one parent, and thus has two or more sets of siblings. No two vertices in a labellable network have the same set of children [11, Thm 3.3], so the label of a reticulation vertex will appear in at least two sets in the cover. The other translations in Table 1 follow immediately.
Throughout this paper, we will add additional translations to the table, with a summary table given in the Discussion.
## 3. Tree-based networks
A phylogenetic network is _tree-based_ if it has a spanning tree whose leaves are those of the network [10]. Such a spanning tree is called a _base tree_ for the network. Typically, a tree-based network can have many base trees. A similar notion that we will discuss is that of a _support tree_ for a network. A support tree is a base tree but with additional degree \(2\) vertices where additional arcs are joined to complete the network. That is, the set of vertices in the support tree and the network are identical.
| **Network** | **Cover** |
| --- | --- |
| Non-root vertex | An integer in \([m]\) |
| Leaf | An integer in \([n]\) |
| Tree vertex | An integer contained in just one subset |
| Reticulation vertex | An integer contained in more than one subset |
| In-degree of \(x\) | The number of subsets that contain \(x\) |
| Out-degree of \(x\) | Size of the subset with label \(x\) in the labelling order |
| Parents of \(x\) | All the subsets that contain \(x\) |
| Siblings of \(x\) | All the other integers contained in the subsets that contain \(x\) |
| Children of \(x\) | The subset with label \(x\) in the labelling order |

Table 1. A translation of features of vertices in a network with \(n\) leaves and \(m\) non-root vertices into features of the corresponding expanding cover.

Unlike the other classes that we consider in the coming sections, not all tree-based networks are labellable, but neither are all labellable networks tree-based [11]. There is thus a non-trivial intersection of the two classes, and this intersection contains many other classes, including orchard, tree-child, and normal networks [11]. In the binary case, the tree-based networks that are labellable can be characterised in terms of their structural properties, as those for which no two reticulate vertices have the same sets of parents [11, Thm. 6.3]. In this section, we provide a new characterisation of the tree-based labellable networks in terms of their covers, and the existence of an "embedded" partition, in Theorem 3.2.
We say that a partition \(\pi\)_embeds_ in \(\mathcal{C}\) if there is a one-to-one map from \(\pi\) to \(\mathcal{C}\) that maps each set \(A\) in \(\pi\) to a set \(A^{\prime}\) in \(\mathcal{C}\) so that \(A\subseteq A^{\prime}\). A partition \(\pi\)_fully embeds_ in a cover \(\mathcal{C}\) if \(\pi\) embeds in \(\mathcal{C}\) and \(|\pi|=|\mathcal{C}|\).
Recall from Section 2 that every partition of \([m]\) is an expanding cover. It is straightforward to see that every expanding cover has a partition that embeds into it, as follows.
**Lemma 3.1**.: _Every expanding cover of \([m]\) has an embedded partition of \([m]\)._
Proof.: If all repeats of integers are deleted, so that there is one occurrence of each integer, then the result is a partition of \([m]\).
Any partition obtained in this way will be expanding, according to Lemma 2.3. Note, however, that each such partition may not have the same number of sets as the cover, and therefore may be expanding for a different value of \(n\).
The notion of embedding a partition into a cover turns out to help characterise tree-based networks.
**Theorem 3.2**.: _An expanding cover \(\mathcal{C}\) of \([m]\) corresponds to a tree-based network if and only if it has a fully embedded partition \(\pi\) of \([m]\)._
Proof.: Suppose \(N\) is a tree-based network with expanding cover \(\mathcal{C}\) of \([m]\). We will show that \(\mathcal{C}\) has an embedded partition with length \(|\mathcal{C}|\).
Label the vertices of \(N\) according to the labelling algorithm. This labelling gives rise to the expanding cover whose sets are the children of non-leaf vertices in \(N\). Choose a support tree \(T\) for \(N\), keeping the labels of the vertices from \(N\). The labels of vertices in \(T\) are thus precisely \([m]\). Note that all vertices of \(N\) are present in \(T\), but that each non-root vertex in \(T\) has in-degree \(1\). The set of children of each vertex in \(T\) is a subset of the set of children for the corresponding vertex in \(N\).
Construct the cover for \(T\) using the inherited labelling of vertices, forming sets of labels of vertices that are the children of the same non-leaf vertex. Each set thus formed is a subset of one of the sets in the cover for \(N\), because the children of vertex \(i\) in \(T\) are a subset of the children of vertex \(i\) in \(N\). Each set is non-empty because the only leaves in the base tree are those of \(N\). The cover for \(T\) contains no repeated integers because \(T\) is a tree and there are no vertices with in-degree greater than \(1\). Thus, the cover for \(T\) with the labelling inherited from \(N\) is a partition of \([m]\) of length \(|\mathcal{C}|\), as desired.
Note that the labels on the vertices in \(T\) are those inherited from \(N\). They are not the same as the labels that would be put on vertices by the labelling algorithm applied to \(T\). Thus the partition obtained from \(T\) is not the same as the partition that would be obtained by labelling \(T\) directly.
For the reverse direction, suppose that the expanding cover \(\mathcal{C}\) has an embedded partition \(\pi\) with length \(|\mathcal{C}|\). We will show that the corresponding network is tree-based.
Let \(N\) be the network constructed by using \(\mathcal{C}\). The partition \(\pi\) embeds in \(\mathcal{C}\), so there is a one-to-one map from \(\pi\) to \(\mathcal{C}\) that maps each set \(A\) in \(\pi\) to a set \(A^{\prime}\) in \(\mathcal{C}\) such that \(A\subseteq A^{\prime}\). The sets in \(\mathcal{C}\) correspond to vertices in \(N\) and give the set of children of each vertex. For each non-leaf vertex in \(N\), \(A^{\prime}\in\mathcal{C}\) labels its children, and there is a corresponding set \(A\in\pi\) that is its pre-image in the embedding of \(\pi\) into \(\mathcal{C}\), with \(A\subseteq A^{\prime}\).
For the non-leaf vertex in \(N\) with children \(A^{\prime}\), delete the edges in \(N\) between it and the vertices labelled by \(A^{\prime}\setminus A\), and repeat this for each non-leaf vertex in \(N\). The resulting network now has vertices whose children are labelled by the sets in \(\pi\). We claim that this resulting network \(\hat{N}\) is a support tree for \(N.\) We need to show that \(\hat{N}\) is a spanning tree whose leaves are those of \(N\).
First, \(\hat{N}\) contains all vertices of \(N\), since only edges were removed. Second, it is a tree, since no label is repeated in \(\pi\) by virtue of it being a partition, and therefore no vertex has more than one parent. Third, each vertex \(v\) that is not a leaf of \(N\) has at least one child, since \(v\) has a non-empty set of children whose labels are a set in \(\pi\) (the length of \(\pi\) is \(|\mathcal{C}|\)), and thus the only leaves of \(\hat{N}\) are those of \(N\).
Thus, \(\hat{N}\) is a support tree for \(N\), and so \(N\) is tree-based, as required.
This result gives an alternative way to characterise support trees for a tree-based network, as follows.
**Corollary 3.3**.: _The set of support trees for a tree-based network \(N\) is in bijection with the set of full embeddings of partitions in the expanding cover for \(N\)._
Proof.: As seen in the proof of Theorem 3.2, each support tree for \(N\) gives rise to a full embedding of a partition in the cover for \(N\). Conversely, every full embedding of a partition into the cover for \(N\) constitutes a choice of parent for each reticulation vertex (any element that appears more than once in the cover), and thus gives a support tree for \(N\).
Note that it is possible for a particular partition to embed in more than one way into a cover, and that each such embedding gives a different support tree for the network.
**Example 3.4**.: Figure 1 shows a network with cover \(\mathcal{C}=1\mid 2\mid 3\mid 4,5\mid 6,8\mid 6,7\mid 7,8\mid 11,12\mid 9,13\mid 10,13 \mid 14,15\). The embeddings of partitions into \(\mathcal{C}\) can be enumerated as follows. First, consider the elements that appear exactly once in \(\mathcal{C}\): \(1,2,3,4,5,9,10,11,12,14,15\). These must appear in the partition where they are in the cover (one appearance means only one possibility), so any embedded partition into \(\mathcal{C}\) has form
\[1\mid 2\mid 3\mid 4,5\mid\_,\_\mid\_,\_\mid\_,\_\mid\_,\_\mid\_,\_\mid\_ {11},12\mid 9,\_\mid 10,\_\mid 14,15.\]
Consider then the integer \(6\), which, in the partition, must be either embedded into the set \(\{6,7\}\) or \(\{6,8\}\). If the former, then \(8\) must embed into the latter; otherwise, the partition would not be a full embedding (we cannot allow empty sets), which forces \(7\) to embed into the set \(\{7,8\}\). In short, the three sets \(6,8\mid 6,7\mid 7,8\) can only have embedded either \(6\mid 7\mid 8\) or \(8\mid 6\mid 7\). These amount to the same partition but two distinct embeddings that give different support trees because they correspond to different choices of child for each vertex. The other choice for embedding a partition involves the placement of \(13\), which can either be with \(9\) or \(10\).
Thus, there are four full embeddings of partitions \(\pi_{i}\) into \(\mathcal{C}\), as follows:
\[\begin{array}{rccccccccccc}
\mathcal{C}: & 1 & 2 & 3 & 4,5 & 6,8 & 6,7 & 7,8 & 11,12 & 9,13 & 10,13 & 14,15\\
\pi_{1}: & 1 & 2 & 3 & 4,5 & 6 & 7 & 8 & 11,12 & 9,13 & 10 & 14,15\\
\pi_{2}: & 1 & 2 & 3 & 4,5 & 6 & 7 & 8 & 11,12 & 9 & 10,13 & 14,15\\
\pi_{3}: & 1 & 2 & 3 & 4,5 & 8 & 6 & 7 & 11,12 & 9,13 & 10 & 14,15\\
\pi_{4}: & 1 & 2 & 3 & 4,5 & 8 & 6 & 7 & 11,12 & 9 & 10,13 & 14,15
\end{array}\]
The support trees corresponding to these embeddings of partitions are shown in Figure 2.
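The enumeration in Example 3.4 can be reproduced by brute force: as in the proof of Corollary 3.3, a full embedding amounts to keeping exactly one occurrence of each repeated integer (a choice of parent for each reticulation) without emptying any set of the cover. The following rough sketch, with names of our own choosing, recovers the four support trees.

```python
from itertools import product
from collections import defaultdict

def full_embeddings(cover):
    # a full embedding keeps exactly one occurrence of every repeated integer
    # (a choice of parent for each reticulation) without emptying any set
    positions = defaultdict(list)
    for i, s in enumerate(cover):
        for x in s:
            positions[x].append(i)
    repeated = {x: idxs for x, idxs in positions.items() if len(idxs) > 1}
    embeddings = []
    for choice in product(*repeated.values()):
        keep = dict(zip(repeated, choice))        # repeated integer -> kept set
        parts = [{x for x in s if x not in keep or keep[x] == i}
                 for i, s in enumerate(cover)]
        if all(parts):                            # no set of the cover is emptied
            embeddings.append(parts)
    return embeddings

C = [{1}, {2}, {3}, {4, 5}, {6, 8}, {6, 7}, {7, 8},
     {11, 12}, {9, 13}, {10, 13}, {14, 15}]
assert len(full_embeddings(C)) == 4               # the four support trees above
```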
### Support trees for a binary tree-based network
Support trees for binary tree-based networks have been counted in earlier work [17, 12], building on an upper bound from [15]. Covers provide an alternative and clear approach that replicates these results.
Figure 2. The tree-based network \(N\) and the four support trees given by the four embeddings \(\pi_{1},\dots,\pi_{4}\), as described in Example 3.4.

For instance (and without giving details of all the components of the statement):
**Theorem 3.5** ([17], Theorem 8).: _For a binary tree-based network \(N\), the number of support trees is:_
\[2^{c}\times\prod_{P\in\pi(\mathcal{J}_{N})}\frac{1}{2}(v(P)+1),\]
_where_
* \(\mathcal{J}_{N}\) _is a bipartite graph derived from_ \(N\) _with parts given by the set of vertices with a reticulate child, and reticulations without a reticulate parent,_
* \(c\) _is the number of cycle components in_ \(\mathcal{J}_{N}\)_,_
* \(\pi(\mathcal{J}_{N})\) _is the set of path components in_ \(\mathcal{J}_{N}\) _without an omnian terminal vertex, and_
* \(v(P)\) _is the number of vertices in the path component_ \(P\)_._
This is an explicit formula based on features of the network, using a representation of key features in the bipartite graph \(\mathcal{J}_{N}\) in particular.
It was subsequently demonstrated that this formula relied on two key structural elements of the network: the number of "crowns" and the lengths of each "\(M\)-fence" [12, Section 5.3]. These are types of "zig-zag trails", which are undirected paths of vertices in the network that alternate between tree and reticulation vertices [21]. A maximal length zig-zag trail is called a _crown_ if it forms a cycle, and is called an _\(M\)-fence_ if the ends of the path are tree vertices. Crowns and fences arise naturally when looking at the problem through the lens of covers. We are able to obtain, by using covers, a formula that is analogous to that of Theorem 3.5, as follows.
Suppose \(N\) is a binary tree-based network. We allow degenerate vertices with in-degree 2 as well as out-degree 2. The cover \(\mathcal{C}\) for \(N\) then consists of sets of size 1 or 2, and each integer appearing in \(\mathcal{C}\) appears either once, if it is a tree vertex (in-degree 1), or twice if it is a reticulation (in-degree 2).
We will now describe an algorithm for obtaining an embedded partition (support tree) from \(\mathcal{C}\), and this will allow us to count the number of such support trees.
The sets in \(\mathcal{C}\) fall into exactly five categories:
1. Singletons containing integers appearing once in \(\mathcal{C}\),
2. Singletons containing integers appearing twice in \(\mathcal{C}\),
3. Pairs containing integers each appearing once in \(\mathcal{C}\),
4. Pairs containing integers each appearing twice in \(\mathcal{C}\), and
5. Pairs containing one integer appearing once and the other appearing twice in \(\mathcal{C}\).
Sets that contain elements that appear only once in \(\mathcal{C}\) must be fully retained in any embedded partition. Thus sets from categories (1) and (3) must be in the embedded partition, and there is no choice.
Because the partition fully embeds into \(\mathcal{C}\), a singleton \(\{a\}\) in \(\mathcal{C}\) must itself appear in the embedded partition, since it cannot be left empty. Therefore, if \(\{a\}\) is in category (2), none of the other occurrences of \(a\) in other sets in \(\mathcal{C}\) can appear in the partition, and we delete them from the sets in the cover. This will create new sets of size 1, and possibly of category (2). We repeat this process until all sets in category (2) are gone, creating a new cover we denote \(\mathcal{C}_{1}\). Note that \(\mathcal{C}_{1}\) is uniquely determined from \(\mathcal{C}\) and embeds into it. Note also that \(\mathcal{C}_{1}\) does not contain any sets in category (2) above.
This leaves sets from categories (4) and (5) to deal with. These sets are chained together by their repeated elements. If a set is in category (5), then one of its elements appears elsewhere, and it can only appear in a set from category (5) or (4). We can thus form sequences of such sets in \(\mathcal{C}_{1}\) by connecting a set from category (5) with a sequence of sets from category (4) and ending with another set from category (5). These sequences are
\begin{table}
\begin{tabular}{l l} \hline
**Network** & **Cover** \\ \hline Spanning tree & A partition embedded in \(\mathcal{C}\) \\ Support tree & A full embedding of a partition in \(\mathcal{C}\) \\ \hline \end{tabular}
\end{table}
Table 2. Translation of concepts arising in tree-based networks.
uniquely determined by \(\mathcal{C}_{1}\), and every set from category (5) is in precisely one sequence of this form. For example, such sequences are of form
\[a_{0},a_{1}\mid a_{1},a_{2}\mid\cdots\mid a_{t-1},a_{t}\mid a_{t},a_{t+1}, \tag{1}\]
where \(a_{0}\) and \(a_{t+1}\) do not appear elsewhere in \(\mathcal{C}_{1}\) (note that \(t\) could be \(1\)). We call such sequences _fences_ (they correspond to the \(M\)-fences defined above). The notions of crowns and fences for covers are summarized in Table 3.
Let \(\mathcal{F}\) denote the set of fences in \(N\). For each fence \(f\), let \(r(f)\) denote the number of repeated integers in \(f\), which we call its length. The fence in Equation (1) has length \(r(f)=t\).
A set from category (4) may be in a sequence such as the one above, or in a sequence of at least three sets from the same category:
\[a_{0},a_{1}\mid a_{1},a_{2}\mid\cdots\mid a_{t-1},a_{t}\mid a_{t},a_{0}, \tag{2}\]
where \(t\geq 2\). These correspond precisely to the 'crowns' of [12].
For either fences or crowns, we can count the number of selections of unique elements as follows.
In the case of fences of length \(t\) (Equation (1)), the number of choices is simply \(t+1\), since there are \(t+2\) elements to go into \(t+1\) non-empty sets, so one has two elements and the rest have one element. There are \(t+1\) choices for the set with two elements. For example, with the fence \(a,b\mid b,c\mid c,d\mid d,e\), we have \(t=3\) and the choices are:
\[a,b\mid c\mid d\mid e\] \[a\mid b,c\mid d\mid e\] \[a\mid b\mid c,d\mid e\] \[a\mid b\mid c\mid d,e.\]
In the case of a crown, as in Equation (2), there is only one embedded partition. We have the same number of elements as we have non-empty sets, and so there is only one option for selecting unique elements. Each element forms a singleton. For example, in the crown \(a,b\mid b,c\mid c,d\mid d,a\), we have only \(a\mid b\mid c\mid d\). However, although there is only one embedded partition, that partition has exactly two distinct embeddings. We could have:
\[a\mapsto\{a,b\},\ b\mapsto\{b,c\},\ c\mapsto\{c,d\}\text{ and }d \mapsto\{d,a\},\text{ or}\] \[b\mapsto\{a,b\},\ c\mapsto\{b,c\},\ d\mapsto\{c,d\}\text{ and }a \mapsto\{d,a\}.\]
Therefore, we have shown the following result, which is equivalent to Theorem 3.5:
**Theorem 3.6**.: _Let \(N\) be a binary tree-based network with cover \(\mathcal{C}\). The number of embedded partitions in \(\mathcal{C}\), and therefore the number of support trees for \(N\), is_
\[2^{c}\times\prod_{f\in\mathcal{F}}\left(r(f)+1\right)\]
_if \(\mathcal{F}\) is non-empty, and is \(2^{c}\) if \(\mathcal{F}=\emptyset\), where \(c\) is the number of crowns in \(\mathcal{C}\)._
Note that the number of crowns, \(c\), is the same as the number of components referred to in Theorem 3.5.
Given a cover \(\mathcal{C}\), we can compute the number of crowns and the lengths of fences, and thus the number of embedded partitions, by using Algorithm 1, which uses the definition of 'acquaints'.
**Definition 3.7**.: Set \(x\sim y\) if \(x=y\) or \(x,y\) are siblings, and consider the transitive closure of \(\sim\), which is an equivalence relation on the set of vertices of the network. Two vertices in the same equivalence class are said to be _acquaints_ of each other.
Acquaints can be defined self-referentially by saying that an _acquaint_ of a vertex \(x\) is a sibling of \(x\) or is a sibling of an acquaint of \(x\). Fences and crowns can be described in terms of acquaints, as follows.
**Theorem 3.8**.: _Let \(N\) be a binary tree-based network with cover \(\mathcal{C}\). Then_
1. \(N\) _has a fence if and only if there exists a set of acquaints in which exactly two vertices that appear uniquely in_ \(\mathcal{C}\) _have one sibling._
2. \(N\) _has a crown if and only if there exists a set of acquaints in which no vertex has one sibling._
Proof.: (1) For the forward direction, suppose that we have a fence like that in Table 3. The integers in the set \(\{a_{0},a_{1},a_{2},\ldots,a_{t-1},a_{t},a_{t+1}\}\) are acquaints, \(a_{0}\) and \(a_{t+1}\) have only one sibling (\(a_{1}\) and \(a_{t}\) respectively), and they appear uniquely by assumption.
Conversely, assume there is a set of acquaints in which exactly two vertices (say \(a_{i}\) and \(a_{j}\)) that appear uniquely in \(\mathcal{C}\) have one sibling. Since we assume that the network is binary, \(a_{i}\) and \(a_{j}\) appear in only one subset, but they can not be in the same one; otherwise, they would not be acquainted with the other vertices.
It is also the case that every other vertex will appear in exactly two subsets; otherwise, it would imply an in-degree greater than \(2\), which is not allowed in a binary network. Therefore, we have a set of a type described in Table 3, and the network has a fence.
(2) For the forward direction, suppose we have a crown (as indicated in Table 3). The integers in the set \(\{a_{0},a_{1},a_{2},\ldots,a_{t-1},a_{t}\}\) are acquaints, and none of them has exactly one sibling.
Conversely, assume there is a set of acquaints in which no vertex has one sibling. Since we assume the network is binary, every vertex will appear in exactly two subsets; otherwise, it would imply an in-degree greater than \(2\), which is not allowed in a binary network. On the other hand, if a vertex appeared in exactly one subset, this would imply that it had only one sibling, which violates the assumption. Therefore, we have a set of a type described in Table 3, and the network has a crown.
According to the theorem above, we can use Algorithm 1 to count the number of embedded partitions by enumerating the acquaints of all integers that are inside a set of size \(2\), because, in the definitions of crown and fences (Table 3), they do not contain sets of any other sizes.
**Example 3.9**.: We saw in Example 3.4 that the cover for the binary tree-based network in Figure 1 has four embedded partitions, and hence the network has four support trees (shown in Figure 2). These can be counted using Theorem 3.6 as follows. The cover \(\mathcal{C}=1\mid 2\mid 3\mid 4,5\mid 6,8\mid 6,7\mid 7,8\mid 11,12\mid 9,13\mid 10,13\mid 14,15\) has one crown, namely \(6,8\mid 6,7\mid 7,8\), and one fence \(9,13\mid 10,13\), which has length \(1\) (a single reticulation). Hence, the number of support trees is \(2^{1}\times(1+1)=4\), as expected.
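The count in Example 3.9 can also be reproduced mechanically. The following Python sketch is illustrative only (it is not the paper's Algorithm 1; the function names and data layout are ours). It takes a binary cover as a list of sets of integers, groups the elements of the pair-sets into acquaint classes as in Definition 3.7, classifies each class as a crown or a fence via Theorem 3.8, and evaluates the formula of Theorem 3.6. As in the running example, it assumes that no repeated integer occurs in a singleton set, so the reduction to \(\mathcal{C}_{1}\) described above is vacuous.

```
from collections import defaultdict
from math import prod

def support_tree_count(cover):
    """Count support trees of a binary tree-based network from its cover
    (a list of sets of integers) via the crown/fence formula of Theorem 3.6.
    Illustrative sketch; assumes no repeated integer occurs in a singleton."""
    pairs = [frozenset(s) for s in cover if len(s) == 2]
    mult = defaultdict(int)
    for s in cover:
        for x in s:
            mult[x] += 1
    # union-find over the elements of the pair-sets: acquaint classes (Definition 3.7)
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for p in pairs:
        a, b = tuple(p)
        parent[find(a)] = find(b)
    classes = defaultdict(set)
    for p in pairs:
        for x in p:
            classes[find(x)].add(x)
    crowns, fence_lengths = 0, []
    for cls in classes.values():
        if all(mult[x] > 1 for x in cls):
            crowns += 1        # crown: no vertex has a unique appearance (Theorem 3.8)
        else:
            # fence: its length r(f) is the number of repeated integers in the class;
            # classes with no repeated integer contribute a harmless factor of 1
            fence_lengths.append(sum(1 for x in cls if mult[x] > 1))
    return 2 ** crowns * prod(r + 1 for r in fence_lengths)

# the cover of Example 3.4 / Figure 1: one crown {6,7,8} and one fence of length 1
C = [{1}, {2}, {3}, {4, 5}, {6, 8}, {6, 7}, {7, 8},
     {11, 12}, {9, 13}, {10, 13}, {14, 15}]
print(support_tree_count(C))   # 2 * (1 + 1) = 4
```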
## 4. Tree-child networks
Tree-child networks are phylogenetic networks for which every vertex has a child that is a tree vertex [3]. They satisfy a number of important properties. For instance, they have the property that every vertex is _visible_. This is a property that we describe in Section 4.1, but first, tree-child networks turn out to have a very natural description in terms of covers, as follows.
**Theorem 4.1**.: _Tree-child networks are in bijection with expanding covers for which each set contains an integer that appears exactly once in the cover._
Proof.: The proof relies on the fact that the integers that appear precisely once in a cover are exactly the tree vertices.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Network** & **Cover** \\ \hline Crown & Collection of sets \(a_{0},a_{1}\mid a_{1},a_{2}\mid\cdots\mid a_{t-1},a_{t}\mid a_{t},a_{0}\). \\ Fence & Collection of sets \(a_{0},a_{1}\mid a_{1},a_{2}\mid\cdots\mid a_{t-1},a_{t}\mid a_{t},a_{t+1}\) with \\ & \(a_{0}\neq a_{t+1}\) both appearing uniquely. \\ \hline \hline \end{tabular}
\end{table}
Table 3. Translation of concepts arising from counting support trees for binary tree-based networks.
Let \(N\) be a tree-child network with expanding cover \(\mathcal{C}\). Each non-leaf vertex \(v\) in \(N\) corresponds to a specific set \(C_{v}\) in \(\mathcal{C}\), whose elements label the children of \(v\) in \(N\). Because \(N\) is a tree-child network, each such vertex \(v\) has at least one child that is a tree vertex. The labels of the tree vertices appear precisely once in the cover, so the set \(C_{v}\) contains at least one element that appears precisely once in the cover. This holds for every non-leaf vertex, and so for every set in \(\mathcal{C}\), which establishes the forward direction.
The reverse direction is also straightforward. Suppose that every set in an expanding cover \(\mathcal{C}\) has an element that appears precisely once in \(\mathcal{C}\). Since each set in the cover is the set of labels of the children of a non-leaf vertex, this implies that every non-leaf vertex has at least one child whose label appears once in the cover. In other words, it is a tree vertex. Thus, the network corresponding to \(\mathcal{C}\) is a tree-child network.
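In computational terms, the characterisation of Theorem 4.1 is a one-line test on the cover. The following sketch is illustrative only (the function name and data layout are ours):

```
from collections import Counter

def is_tree_child(cover):
    """Theorem 4.1: an expanding cover corresponds to a tree-child network
    exactly when every set contains an integer appearing only once in the cover."""
    mult = Counter(x for s in cover for x in s)
    return all(any(mult[x] == 1 for x in s) for s in cover)

# the cover of Figure 1 fails the test: the set {7, 8} contains only repeated integers
C = [{1}, {2}, {3}, {4, 5}, {6, 8}, {6, 7}, {7, 8},
     {11, 12}, {9, 13}, {10, 13}, {14, 15}]
print(is_tree_child(C))   # False
```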
### Visible vertices
An important property of tree-child networks is that all of their vertices are _visible_[3, Lemma 2]. A vertex \(v\) in a network is visible if there is a leaf \(x\) for which every path from the root to \(x\) passes through \(v\). In this section, we show how visibility can be interpreted by using covers, beginning with the definition of the _backtrack_ of a label in a cover.
**Definition 4.2**.: Let \(\mathcal{C}\) be an expanding cover in labelling order and let \(x\) be an element of \([m]\). Then a _backtrack for \(x\)_ is a sequence of sets \(S_{1},\ldots,S_{t}\) in \(\mathcal{C}\) for which the label of a set containing \(x\) is in
\(S_{1}\), and the label of \(S_{i}\) is an element of \(S_{i+1}\) for each \(i=1,\ldots,t-1\). This corresponds to the output of Algorithm 2. Let \(B_{\mathcal{C}}(x)\) denote the set of all backtracks of \(x\) in \(\mathcal{C}\).
```
Require: Expanding cover \(\mathcal{C}\) in labelling order
procedure Backtrack(\(\mathcal{C}\), \(x\))
    \(\operatorname{seq}\leftarrow[\ ]\)
    \(s\leftarrow\) a subset of \(\mathcal{C}\) containing \(x\)
    while label of \(s\) is not \(\rho\) do
        \(s\leftarrow\) a subset of \(\mathcal{C}\) that contains the label of \(s\) as an element
        add \(s\) to seq
    end while
    return seq
end procedure
```
**Algorithm 2** Backtracking algorithm
We can characterise visibility in a network by using the backtracking algorithm. Given \(x\in[m]\) and a backtrack \(\beta\) for \(x\), we define \(L(\beta)=\{\text{label of }s\,|\,s\in\beta\}\). In this way, \(L(\beta)\) contains the vertices of a path from \(x\) to \(\rho\) (the root), \(\bigcup\limits_{\beta\in B_{C}(x)}L(\beta)\) is the set of all vertices that _can be_ visited with a path from \(x\) to \(\rho\), and \(\bigcap\limits_{\beta\in B_{C}(x)}L(\beta)\) is the set of all vertices that _must be_ visited on a path from \(x\) to \(\rho\).
**Theorem 4.3**.: _Given a cover \(\mathcal{C}\) in labelling order and \(x\in[m]\), \(x\) is a visible vertex in the corresponding network if and only if there exists \(y\in[n]\) such that \(x\in\bigcap\limits_{\beta\in B_{C}(y)}L(\beta)\)._
Proof.: For the forward direction, assume that a vertex \(x\) of a network is visible. By definition, there exists a leaf \(y\) (in other words, \(y\in[n]\)) such that all paths from the root to \(y\) pass through \(x\). Since \(\bigcap\limits_{\beta}L(\beta)\) is the set of all vertices we _have to_ visit from \(y\) to \(\rho\), \(x\) must be in this intersection.
For the backward direction, let \(y\in[n]\) and \(x\in\bigcap\limits_{\beta\in B_{C}(y)}L(\beta)\). Then it means that all paths from \(y\) to \(\rho\) contain \(x\). Therefore \(x\) is visible in the corresponding network.
Since the visible vertices are exactly those \(x\) that lie in \(\bigcap\limits_{\beta\in B_{\mathcal{C}}(y)}L(\beta)\) for some \(y\in[n]\), we obtain the following corollary.
**Corollary 4.4**.: _Given a cover \(\mathcal{C}\) in labelling order, then all \(x\in\bigcup\limits_{y\in[n]}\bigcap\limits_{\beta\in B_{\mathcal{C}}(y)}L(\beta)\) are visible vertices in the corresponding network and vice versa._
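Backtracks and the visibility criterion can be made concrete as follows. The sketch below is illustrative (names and data layout are ours); it assumes the cover is supplied in labelling order as a list of (set, label) pairs with the root label written as the string 'rho', and it enumerates all backtracks by depth-first search rather than returning a single one as Algorithm 2 does.

```
def all_backtracks(labelled_cover, x):
    """Enumerate every backtrack for x (Definition 4.2).  Illustrative sketch;
    the cover is a list of (set, label) pairs in labelling order."""
    results = []
    def extend(prefix, label):
        if label == 'rho':                 # reached the set whose parent is the root
            results.append(prefix)
            return
        for s, lab in labelled_cover:      # next set must contain the previous label
            if label in s:
                extend(prefix + [(frozenset(s), lab)], lab)
    for s, lab in labelled_cover:          # start from a set containing x itself
        if x in s:
            extend([], lab)
    return results

def visible_internal_vertices(labelled_cover, n):
    """Corollary 4.4: union over the leaves y of the intersection of the label
    sets L(beta) over all backtracks beta of y.  Leaves themselves are always
    visible and are not produced by this computation."""
    visible = set()
    for y in range(1, n + 1):
        label_sets = [{lab for _, lab in beta}
                      for beta in all_backtracks(labelled_cover, y)]
        if label_sets:
            visible |= set.intersection(*label_sets)
    return visible

# the running cover of Example 3.4, equipped with a labelling order
# (the same one that is displayed later in Example 5.1)
LC = [({1}, 6), ({2}, 7), ({3}, 8), ({4, 5}, 9), ({6, 7}, 10), ({6, 8}, 11),
      ({7, 8}, 12), ({11, 12}, 13), ({9, 13}, 14), ({10, 13}, 15),
      ({14, 15}, 'rho')]
print(len(all_backtracks(LC, 3)))                                        # 4
print(sorted(v for v in visible_internal_vertices(LC, 5) if v != 'rho'))  # [13, 14]
```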
### Support trees for tree-child networks
Theorem 4.1 allows us to provide an alternative proof of a result about support trees in tree-child networks, as follows.
**Corollary 4.5** ([9], Theorem 3.3).: _A binary tree-child network with \(k\) reticulations has \(2^{k}\) support trees._
\begin{table}
\begin{tabular}{l l} \hline
**Network** & **Cover** \\ \hline Path from node \(x\) to the root & A backtrack for \(x\) \\ Visible vertex \(x\) & There is a \(y\in[n]\) such that \(x\in\bigcap\limits_{\beta\in B_{\mathcal{C}}(y)}L(\beta)\). \\ \hline \end{tabular}
\end{table}
Table 4. Translation of the concepts arising in tree-child networks.
Proof.: Since each set in the cover for a tree-child network has a uniquely appearing element, there are no sets containing only reticulations (i.e. no singletons with elements that appear elsewhere, and no pairs in which both elements are repeated). Using the categories above, all sets in such a cover are from Categories (1), (3), or (5).
As a consequence, there are no crowns, which require sets with two reticulations, and each fence can only have length \(1\), being of the form \(a,b\mid b,c\), and containing only one reticulation (\(b\) in this case). Furthermore, each repeated integer in the cover (i.e., each reticulation) is in a fence, since it must be part of a pair with a uniquely appearing element (a tree vertex). Thus, the number of fences is the number of reticulations, and each fence has length \(1\). Therefore, by Theorem 3.6, there are \(2^{k}\) support trees.
Corollary 4.5 also follows immediately by combining both parts of the following result.
**Theorem 4.6**.:
1. _The number of spanning trees in a phylogenetic network is the product of all the in-degrees of the reticulation vertices._
2. _A network is a tree-child network if and only if every spanning tree is also a support tree._
Proof.: Part (i): A reticulation vertex \(x\) is an integer contained in \(k>1\) subsets of \(\mathcal{C}\) (Table 1) and a spanning tree is an embedded partition (Table 2). Thus, to obtain an embedded partition from a cover, we have to remove \(k-1\) instances of \(x\) from \(\mathcal{C}\). This can be done in \(\binom{k}{k-1}=k\) different ways, and each choice is independent of the others. Since \(k\) is also the in-degree for vertex \(x\), it follows that the number of embedded partitions (spanning trees) is \(\prod_{x}\text{in-degree}(x)\), where \(x\) is a reticulation vertex. If \(x\) is a tree vertex, then \(\text{in-degree}(x)=1\) and, therefore, it does not contribute to the product.
Part (ii): By Theorem 4.1, every subset of a tree-child cover has at least one element that is not present in any other subset. This implies that every embedded partition must contain at least one element from each subset; hence it has \(|\mathcal{C}|\) non-empty blocks and is a full embedding. In other words, in a tree-child network every spanning tree is a support tree, which proves the forward direction.
To show the converse, suppose that \(N\) is not a tree-child network. We will show that there must be a spanning tree for \(N\) that is not a support tree. If \(N\) is not tree-child, then it has at least one vertex that is not visible. Let \(v\) be a non-visible vertex that is maximally distant from the root, so that all vertices descended from \(v\) are visible. If we delete each arc out of \(v\), then there is still a path from the root to each vertex, so \(N\) has a spanning tree \(T\). However, the tree \(T\) has \(v\) as a leaf. The tree \(T\) is therefore a spanning tree of \(N\) and not all its leaves are in \(X\), so \(T\) is not a support tree.
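Part (i) of the theorem translates into a one-line computation on the cover: each repeated integer contributes a factor equal to the number of sets containing it. A small illustrative sketch (names are ours):

```
from collections import Counter
from math import prod

def spanning_tree_count(cover):
    """Theorem 4.6(i): the number of spanning trees equals the product of the
    in-degrees of the reticulations, i.e. of the multiplicities of the repeated
    integers in the cover.  Illustrative sketch."""
    mult = Counter(x for s in cover for x in s)
    return prod(m for m in mult.values() if m > 1)

C = [{1}, {2}, {3}, {4, 5}, {6, 8}, {6, 7}, {7, 8},
     {11, 12}, {9, 13}, {10, 13}, {14, 15}]
print(spanning_tree_count(C))   # 2*2*2*2 = 16 (reticulations 6, 7, 8 and 13)
```

Of these sixteen spanning trees of the network in Figure 1, only the four found in Example 3.9 are support trees.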
## 5. Normal networks
Normal networks are a subclass of the tree-child networks, with the added constraint that they contain no "shortcuts" [20]. A _shortcut_ is an edge \((u,v)\) for which there is an alternative directed path from \(u\) to \(v\) in the network.
To capture this information in terms of covers, we need a way to record paths in that context. This motivated the definition of backtrack (Definition 4.2), which requires the labelling order that was defined in Section 2. The backtrack algorithm identifies a path from the vertex labelled \(x\) back to the root, expressing the path in terms of a sequence of sets in the cover. The edges between the parent vertices that correspond with these sets defines the path.
**Example 5.1**.: Recall the cover \(\mathcal{C}=1\mid 2\mid 3\mid 4,5\mid 6,8\mid 6,7\mid 7,8\mid 11,12\mid 9,13\mid 10,13\mid 14,15\) from Example 3.4 for the network in Figure 1. This cover has the labelling order
\[\mathcal{C}=\{1\}_{6},\{2\}_{7},\{3\}_{8},\{4,5\}_{9},\{6,7\}_{10},\{6,8\}_{1 1},\{7,8\}_{12},\{11,12\}_{13},\{9,13\}_{14},\{10,13\}_{15},\{14,15\}_{\rho}.\]
A backtrack for \(x=3\) starts with a subset containing \(8\) (with the label of \(\{3\}\) in the labelling order). There are two choices; suppose we pick \(\{7,8\}\). The label of \(\{7,8\}\) is \(12\), so now we must find a set containing \(12\). There is only one, so we add \(\{11,12\}\) to the backtrack sequence. \(\{11,12\}\) has label \(13\), so we look for a set containing \(13\) and choose one of the two options, say \(\{10,13\}\). This has label
in the order, so we look for a set containing \(15.\) There is one, namely \(\{14,15\}\), and its label is \(\rho\), which means we terminate the algorithm and output the backtrack sequence
\[\{7,8\},\{11,12\},\{10,13\},\{14,15\}.\]
Note, each such backtrack defines a path from \(3\) to the root \(\rho\); in this case, \(3\to 8\to 12\to 13\to 15\to\rho\).
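A labelling order such as the one used in this example can be produced greedily from the unlabelled cover: at step \(i\), choose any remaining set all of whose elements already carry labels (that is, a subset of \([n+i-1]\)) and give it the label \(n+i\), labelling the final set by \(\rho\). The sketch below is illustrative only (names are ours); the expanding condition guarantees that a choice is always available, but the resulting order is in general not unique, and the paper's labelling order from Section 2 may fix a particular choice among the valid ones.

```
def labelling_order(cover):
    """Greedily produce one labelling order for an expanding cover.
    Illustrative sketch; assumes the integers occurring in the cover are
    exactly 1, ..., ||C||, and writes the final (root) label as 'rho'."""
    cover = [frozenset(s) for s in cover]
    m = max(x for s in cover for x in s)        # ||C||, the largest integer
    n = m - len(cover) + 1                      # number of leaves
    known = set(range(1, n + 1))                # integers available so far
    remaining = list(cover)
    labelled, next_label = [], n + 1
    while remaining:
        s = next(t for t in remaining if t <= known)   # some set inside [n+i-1]
        remaining.remove(s)
        labelled.append((set(s), 'rho' if not remaining else next_label))
        known.add(next_label)
        next_label += 1
    return labelled

# listed so that the greedy choice reproduces the order displayed above
C = [{1}, {2}, {3}, {4, 5}, {6, 7}, {6, 8}, {7, 8},
     {11, 12}, {9, 13}, {10, 13}, {14, 15}]
for s, label in labelling_order(C):
    print(sorted(s), label)
```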
**Theorem 5.2**.: _Let \(N\) be a phylogenetic network with expanding cover \(\mathcal{C}\), in labelling order. Then \(N\) has a shortcut if and only if there is a backtrack for an \(x\in[m]\) that includes a subset containing \(x\)._
Proof.: Suppose \(N\) has a shortcut. Then there is a vertex \(x\) with a non-trivial path from some vertex \(v\) to \(x\), and there is also an edge \((v,x)\). The existence of a non-trivial path from \(v\) to \(x\) means that the cover has a non-trivial backtrack from \(x\), which includes the children of \(v\) as a set. However, \(x\) is also a child of \(v\), so \(x\) is in a set in the backtrack.
Conversely, suppose that the cover contains a backtrack for \(x\) that includes a set \(S\) containing \(x\). Let \(v\) be the label of the parent of \(S\). Then \(x\) is a child of \(v\), meaning there is an edge \((v,x)\) in \(N\). However, the backtrack provides a non-trivial path in \(N\) from \(v\) to \(x\) through \(S\). That is, \(N\) contains a shortcut.
**Corollary 5.3**.: _Let \(\mathcal{C}\) be a cover in labelling order for a tree-child network. Then \(\mathcal{C}\) is a cover for a normal network if and only if, for all \(x\in[m]\), no backtrack for \(x\) has a subset that contains \(x\)._
Without loss of generality, in Theorem 5.2 and Corollary 5.3, we can assume that \(x\) is a reticulation vertex (i.e., a value in \([m]\) that is contained in more than one subset of \(\mathcal{C}\)), since, by definition, reticulations have in-degree greater than one and thus are the only vertices that can have shortcuts.
Using Theorem 5.2, we can construct an algorithm that removes all the shortcuts from a cover. This implies that, given a tree-child network, we can transform it into a normal network by removing all the shortcuts via Algorithm 3.
```
Require: Expanding cover \(\mathcal{C}\) in labelling order
procedure RemoveShortcuts(\(\mathcal{C}\))
    Compute all backtracks for all reticulation vertices
    for backtrack \(\beta\) for reticulation vertex \(x\) do
        for \(s\in\beta\) do
            if \(x\in s\) then
                Remove \(x\) from \(s\)
            end if
        end for
    end for
end procedure
```
**Algorithm 3** Remove all shortcuts from a cover
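In the same illustrative spirit, the shortcut criterion of Theorem 5.2 can be tested directly by enumerating the backtracks of the reticulations. The sketch below uses our own names and data layout; the cover is given in labelling order with the root label written as the string 'rho'.

```
from collections import Counter

def has_shortcut(labelled_cover):
    """Theorem 5.2 / Corollary 5.3: the network has a shortcut iff some
    reticulation x has a backtrack containing a set with x as an element.
    Illustrative sketch."""
    mult = Counter(x for s, _ in labelled_cover for x in s)
    reticulations = [x for x, m in mult.items() if m > 1]

    def backtracks(x):
        out = []
        def extend(prefix, label):
            if label == 'rho':
                out.append(prefix)
                return
            for s, lab in labelled_cover:
                if label in s:
                    extend(prefix + [frozenset(s)], lab)
        for s, lab in labelled_cover:
            if x in s:
                extend([], lab)
        return out

    return any(x in s
               for x in reticulations
               for beta in backtracks(x)
               for s in beta)

# the running cover (labelling order of Example 5.1) contains no shortcut
LC = [({1}, 6), ({2}, 7), ({3}, 8), ({4, 5}, 9), ({6, 7}, 10), ({6, 8}, 11),
      ({7, 8}, 12), ({11, 12}, 13), ({9, 13}, 14), ({10, 13}, 15),
      ({14, 15}, 'rho')]
print(has_shortcut(LC))   # False
```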
## 6. Tree-sibling networks
Tree-sibling networks are also amenable to a description in terms of covers.
**Definition 6.1** ([2]).: A tree-sibling network is a network in which every reticulation vertex is a sibling of a tree vertex.
\begin{table}
\begin{tabular}{l c} \hline
**Network** & **Cover** \\ \hline Shortcut to \(x\) & A backtrack of \(x\) that includes a set containing \(x\). \\ \hline \end{tabular}
\end{table}
Table 5. A translation of a shortcut into a feature of the corresponding expanding cover.
**Theorem 6.2**.: _Tree-sibling networks are in bijection with those expanding covers for which every repeated integer lies in at least one set with an integer that appears only once._
Proof.: The statement is a direct translation of the definition of tree-sibling into the language of covers, according to Table 1. Reticulation vertices are those that appear more than once in the cover, and vertices are siblings when they appear in the same set in the cover.
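As a sanity check, the tree-sibling condition is again a short test on the cover (an illustrative sketch; names are ours):

```
from collections import Counter

def is_tree_sibling(cover):
    """Theorem 6.2: every repeated integer must lie in at least one set together
    with an integer that appears only once in the cover.  Illustrative sketch."""
    mult = Counter(x for s in cover for x in s)
    return all(any(any(mult[y] == 1 for y in s) for s in cover if x in s)
               for x in mult if mult[x] > 1)

# the cover of Figure 1: reticulation 6 sits only in {6,7} and {6,8},
# and 7 and 8 are themselves repeated, so the network is not tree-sibling
C = [{1}, {2}, {3}, {4, 5}, {6, 8}, {6, 7}, {7, 8},
     {11, 12}, {9, 13}, {10, 13}, {14, 15}]
print(is_tree_sibling(C))   # False
```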
We have already seen a characterisation of tree-child networks using covers in Theorem 4.1. Covers for tree-child networks are those for which every set has a uniquely appearing element. However, there is a close connection between tree-child and tree-sibling networks, which can be captured in a cover description for tree-child networks, as follows.
**Theorem 6.3**.: _Tree-child networks are in bijection with expanding covers for which, for every repeated element \(k\) in \(\mathcal{C}\), every subset containing \(k\) also contains an integer that appears only once._
Proof.: We will prove that this statement is equivalent to Theorem 4.1.
For the forward direction, suppose that a cover satisfies the condition in Theorem 4.1. If every subset contains a uniquely occurring integer, then, in particular, every subset that contains a reticulation also does so.
For the backward direction, by assumption, every subset that contains a reticulation vertex has an integer that is not contained in any other subset. Every remaining subset contains no reticulation vertex at all, so all of its elements appear only once; in particular, it contains a tree vertex that is not contained in any other subset (Table 1).
In other words, tree-child networks are networks in which every parent of a reticulation vertex has a tree-vertex as a child. Therefore, we recover the well-known fact that all tree-child networks are tree-sibling networks.
## 7. Orchard networks
Orchard networks are non-degenerate phylogenetic networks defined by the property that they can be reduced to a trivial network (a single vertex) by a series of cherry or reticulated cherry reductions [5, 14, 19]. In the present paper, we will restrict our attention to _binary_ orchard networks.
A _cherry_ is a pair of leaves that are siblings; a _reticulated cherry_ is a pair of leaves, one of which has a reticulate parent and the other is the sibling of that reticulate parent. Cherry reduction involves replacing the cherry with a single vertex. Reticulated cherry reduction involves deleting the arc between the parents of the two leaves and then suppressing degree-2 vertices. By a theorem of [5, 14], for orchard networks, the order in which these are performed is not important.
To translate this definition into covers, we need to first characterise cherries and reticulated cherries as they are manifested in covers, and then describe the action of such reductions in terms of the cover. The first of these requirements is routine; the second, not, as it requires us to augment the cover with its set of leaves. We will describe a test for orchard that reduces an expanding cover to a trivial cover but, along the way, passes through covers that are not expanding.
In covers, a cherry is given by a set consisting of two elements of \([n]\) (the leaves), whereas a reticulated cherry is given by a singleton subset of \([n]\) appearing in position \(j\) in the labelling order, and a pair \(\{n+j,i\}\) where \(i\in[n]\) (summarized in Table 7). An example is shown in Figure 3.
### The cherry reduction process via covers
The cherry reduction test for orchard networks can be defined efficiently using covers by keeping track of the changing set of leaf labels \(\mathcal{L}\) within the algorithm, as follows. Identifying a cherry or reticulated cherry in a cover can be done using the translations given in Table 7. The process in Algorithm 4 chooses to reduce a cherry first, if there is one, as it involves fewer checks.
In general, the set \(\mathcal{C}\) that is redefined during Algorithm 4 may not be an expanding cover, but these processes do nevertheless model the network cherry and reticulated cherry reduction steps, applied to a labelled network.
```
Require: Expanding cover \(\mathcal{C}\) in labelling order
procedure IsOrchard(\(\mathcal{C}\))
    \(n\leftarrow||\mathcal{C}||-|\mathcal{C}|+1\)
    \(\mathcal{L}\leftarrow[n]\)
    \(reduced\gets true\)
    while \(reduced=true\) and \(|\mathcal{C}|>0\) do
        \(reduced\gets false\)
        if there is a set of form \(\{a,b\}_{j}\in\mathcal{C}\) with \(a,b\in\mathcal{L}\) then
            \(\mathcal{C}\leftarrow\mathcal{C}\setminus\{\{a,b\}\}\) \(\triangleright\) Cherry reduction
            \(\mathcal{L}\leftarrow(\mathcal{L}\setminus\{a,b\})\cup\{j\}\)
            \(reduced\gets true\)
        else if there is a set of form \(\{a\}_{j}\) in \(\mathcal{C}\) and a set of form \(\{j,b\}_{k}\in\mathcal{C}\), with \(a,b\in\mathcal{L}\) then
            \(\mathcal{C}\leftarrow\mathcal{C}\setminus\{\{a\},\{j,b\}\}\) \(\triangleright\) Reticulated cherry reduction
            \(\mathcal{L}\leftarrow(\mathcal{L}\setminus\{a,b\})\cup\{j,k\}\)
            \(reduced\gets true\)
        end if
    end while
    if \(\mathcal{C}=\emptyset\) then
        return "\(\mathcal{C}\) is orchard"
    else
        return "\(\mathcal{C}\) is not orchard"
    end if
end procedure
```
**Algorithm 4** Test whether the expanding cover \(\mathcal{C}\) corresponds to an orchard network
**Theorem 7.1**.: _Algorithm 4 determines whether the network from the expanding cover \(\mathcal{C}\) is orchard._
Proof.: A network is orchard, by definition, if and only if it can be reduced to a trivial network by cherry or reticulated cherry reductions. According to a result of [5, 14], the order of such reductions is
Figure 3. A network with a cherry and a reticulated cherry. The cover, in labelling order, is \((\{1,5\},\{3,6\},\{4\},\{2,8\},\{7,8\},\{9,10\})\). The cherry can be identified in the cover as a pair of integers that are a subset of the leafset [5]. In this cover, a cherry is \(\{1,5\}\). The reticulated cherry is identified in the cover as a pair of sets: one is a singleton subset of the leafset, in position \(j\); the other is the pair \(\{n+j,i\}\) with \(i\) in the leafset. In this cover, there is a reticulated cherry consisting of the singleton \(\{4\}\) (contained in the leafset), which appears in position \(3\) in the labelling order, and the pair \(\{2,8\}\), noting that \(8=n+3\) and \(2\) is in the leafset.
not important. The procedures in Algorithm 4 exactly reflect the effect on the cover of these operations on the network, as can be seen in Figure 4.
**Example 7.2**.: The cherry reduction process in Algorithm 4, applied to the cover for the network in Figure 3, proceeds as described in Table 6.
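The same reduction can be scripted. The following is a Python transcription of Algorithm 4 (an illustrative sketch with our own names and data layout; the cover is passed in labelling order as (set, label) pairs, the root label is the string 'rho', and \(||\mathcal{C}||\) is taken to be the largest integer appearing in the cover):

```
def is_orchard(labelled_cover):
    """A transcription of Algorithm 4 (illustrative sketch)."""
    cover = [(set(s), lab) for s, lab in labelled_cover]
    m = max(x for s, _ in cover for x in s)
    leaves = set(range(1, m - len(cover) + 2))   # [n] with n = ||C|| - |C| + 1
    reduced = True
    while reduced and cover:
        reduced = False
        for s, j in cover:                       # cherry {a,b}_j with a, b leaves
            if len(s) == 2 and s <= leaves:
                cover.remove((s, j))
                leaves = (leaves - s) | {j}
                reduced = True
                break
        if reduced:
            continue
        for s, j in cover:                       # reticulated cherry {a}_j, {j,b}_k
            if len(s) == 1 and s <= leaves:
                a, = s
                for t, k in cover:
                    if len(t) == 2 and j in t and (t - {j}) <= leaves:
                        b, = t - {j}
                        cover.remove((s, j))
                        cover.remove((t, k))
                        leaves = (leaves - {a, b}) | {j, k}
                        reduced = True
                        break
                if reduced:
                    break
    return not cover

# the cover of Figure 3 in the labelling order given in its caption
LC = [({1, 5}, 6), ({3, 6}, 7), ({4}, 8), ({2, 8}, 9), ({7, 8}, 10), ({9, 10}, 'rho')]
print(is_orchard(LC))   # True, reproducing the reduction of Table 6
```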
## 8. A new class of network detected through the lens of covers
We have used covers to describe several classes of phylogenetic network. However, the encoding into covers also creates the opportunity to define new classes of network that correspond to particular features of covers. Such classes might currently have little direct utility for application to phylogenetics, but they may have an indirect value in that algorithms and methods using covers may involve such classes in passing. We introduce one such class as an example of this opportunity.
Recall that the definition of an expanding cover has two criteria (Definition 2.1). The first is that elements of the leafset \([n]\) are not repeated, and the second ensures that the labelling algorithm is well-defined by requiring at least \(i\) subsets of \([n+i-1]\) to be in the cover.
If a cover contains _exactly_\(i\) subsets of \([n+i-1]\), it has a strong consequence for the network, as follows. We define a _spine_ in a network to be a path from a leaf to the root that traverses all non-leaf vertices, and we call a network _spinal_ if it has a spine.
**Theorem 8.1**.: _A network is spinal if and only if its cover \(\mathcal{C}\) has exactly \(i\) subsets of \([n+i-1]\), for each \(i=1,\ldots,|\mathcal{C}|\)._
Proof.: We prove the reverse direction first. Suppose that the cover \(\mathcal{C}\) has exactly \(i\) subsets of \([n+i-1]\), for each \(i=1,\ldots,|\mathcal{C}|\), and consider its labelling order. The first set in the labelling order is the unique set that is contained in \([n]\), and its label is \(n+1\). For each \(i=1,\ldots,|\mathcal{C}|-1\), the \(i\)th set in the labelling order is contained in \([n+i-1]\), according to the expanding property, but is not contained in \([n+i-2]\), according to our assumption about \(\mathcal{C}\). Therefore, it must contain the integer \(n+i-1\). The \(i\)th set in the labelling order has label \(n+i\), which means that \(n+i\) is a parent of \(n+i-1\). Since this holds for each \(i>1\), this determines a path from a leaf (labelled by an element of the first set in the labelling order) through every vertex with label \(n+1\) to \(n+|\mathcal{C}|-1\), and the last set, containing \(||\mathcal{C}||=n+|\mathcal{C}|-1\), has the root as parent. Thus the network is spinal.
We now prove the forward direction. Suppose that \(N\) is spinal with cover \(\mathcal{C}\). Being spinal means that \(N\) has a path of length \(|\mathcal{C}|\) from a leaf to the root. This means that there is a backtrack of a leaf that has length \(|\mathcal{C}|-1\). That is, a sequence of sets from the cover such that the label of one set (from the labelling order on \(\mathcal{C}\)) is an element of the next set in the backtrack sequence. Because the label of a set in the cover is strictly greater than all the elements of the set, the maximal elements of the sets in a backtrack are strictly increasing.
Figure 4. Cherry (left) and reticulated cherry (right) reductions and their effects on the covers. Here, \(*\) represents other sibling vertices, which could be an empty set.
Now consider the backtrack arising from the spine (the path from a leaf to the root traversing all non-leaf vertices). The leaf at the base of the spine must be in a set contained in \([n]\); otherwise, there would be no path from it to the vertex labelled \(n+1\). Therefore, the first set in the backtrack contains \(n+1\) as its maximal element because that is the parent label for the set containing the initial leaf. The spine has \(|\mathcal{C}|+1\) vertices in it, including the initial leaf and the root, because it includes all except \(n-1\) of the vertices in the network (the network has \(||\mathcal{C}||+1=|\mathcal{C}|+n\) vertices in total). The sets in the backtrack are the children-sets of the spine vertices other than the initial leaf and its parent, so the backtrack
\begin{table}
\begin{tabular}{|l|l|} \hline \multicolumn{2}{|l|}{Cherry reduction of the cover \(\mathcal{C}\) in Example 7.2, following Algorithm 4.} \\ \hline \(\mathcal{C}=\{\{1,5\}_{6},\{3,6\}_{7},\{4\}_{8},\{2,8\}_{9},\{7,8\}_{10},\{9,1 0\}_{\rho}\}\); \\ \(\mathcal{L}=\{1,2,3,4,5\}\). \\ \hline \(1\) & \(\mathcal{C}\) contains the cherry \(\{1,5\}_{6}\) (and the reticulated cherry, \(\{4\}_{8},\{2,8\}_{9}\)). We reduce the cherry. \\ \cline{2-3} & \(\mathcal{C}_{1}=\mathcal{C}\setminus\{\{1,5\}_{6}\}\) \\ & \(=\{\{3,6\}_{7},\{4\}_{8},\{2,8\}_{9},\{7,8\}_{10},\{9,10\}_{\rho}\}\) \\ & \(\mathcal{L}_{1}=(\mathcal{L}\setminus\{1,5\})\cup\{6\}\) \\ & \(=\{2,3,4,6\}\) \\ \hline \(2\) & \(\mathcal{C}_{1}\) contains cherry \(\{3,6\}_{7}\) (and the reticulated cherry \(\{4\}_{8},\{2,8\}_{9}\)). We reduce the cherry. \\ \cline{2-3} & \(\mathcal{C}_{2}=\mathcal{C}_{1}\setminus\{\{3,6\}_{7}\}\) \\ & \(=\{\{4\}_{8},\{2,8\}_{9},\{7,8\}_{10},\{9,10\}_{\rho}\}\). \\ & \(\mathcal{L}_{2}=(\mathcal{L}_{1}\setminus\{3,6\})\cup\{7\}\) \\ & \(=\{2,4,7\}\). \\ \hline \(3\) & \(\mathcal{C}_{2}\) contains no cherry, but contains the reticulated cherry \(\{4\}_{8},\{2,8\}_{9}\), which we reduce. \\ \cline{2-3} & \(\mathcal{C}_{3}=\mathcal{C}_{2}\setminus\{\{4\}_{8},\{2,8\}_{9}\}\) \\ & \(=\{\{7,8\}_{10},\{9,10\}_{\rho}\}\). \\ & \(\mathcal{L}_{3}=(\mathcal{L}_{2}\setminus\{2,4\})\cup\{8,9\}\) \\ & \(=\{7,8,9\}\). \\ \hline \(4\) & \(\mathcal{C}_{3}\) contains the cherry \(\{7,8\}_{10}\), which we reduce. \\ \cline{2-3} & \(\mathcal{C}_{4}=\mathcal{C}_{3}\setminus\{7,8\}_{10}\) \\ & \(=\{\{9,10\}_{\rho}\}\). \\ \cline{2-3} & \(\mathcal{L}_{4}=(\mathcal{L}_{3}\setminus\{7,8\})\cup\{10\}\) \\ & \(=\{9,10\}\). \\ \hline \(5\) & \(\mathcal{C}_{4}\) contains (only) the cherry \(\{9,10\}_{\rho}\), which we reduce. \\ \cline{2-3} & \(\mathcal{C}_{5}=\mathcal{C}_{4}\setminus\{9,10\}_{\rho}=\emptyset\), which means the algorithm ends. \\ \hline \end{tabular}
\end{table}
Table 6. Cherry reduction algorithm acting on the cover in Example 7.2 via Algorithm 4, with the effects of reduction on the network shown at right (for illustration only).
\begin{table}
\begin{tabular}{l l} \hline
**Network** & **Cover** \\ \hline Cherry & A set consisting of two elements of \([n]\) \\ Reticulated cherry & A singleton subset of \([n]\) appearing in position \(j\) in the labelling order, and a pair \(\{n+j,i\}\) where \(i\in[n]\) \\ \hline \end{tabular}
\end{table}
Table 7. A translation of features that are relevant to orchard networks into features of the corresponding expanding cover.
for the initial leaf has \(|\mathcal{C}|-1\) sets. The maximal elements of these \(|\mathcal{C}|-1\) sets are strictly increasing, and run from \(n+1\) to \(m=||\mathcal{C}||=|\mathcal{C}|+n-1=n+(|\mathcal{C}|-1)\). This forces each set in the backtrack to have a distinct maximal element. Put together with the set containing the initial leaf, which is a subset of \([n]\), this means that there are exactly \(i\) subsets of \([n+i-1]\), for each \(i=1,\ldots,|\mathcal{C}|\), as required.
In the light of Theorem 8.1, we say that a cover \(\mathcal{C}\) is _spinal_ if it contains exactly \(i\) subsets of \([n+i-1]\), for each \(i=1,\ldots,|\mathcal{C}|\). An example of a spinal network is shown in Figure 5. Spinal networks have some non-trivial intersections with other classes; for example, the spinal network \(1\mid 2\mid 2,3\mid 3,4\) is not a tree-child, tree-sibling, or orchard network. It can, however, be shown that the class of spinal networks lies within the intersection of the labellable and tree-based classes of networks.
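The defining condition of a spinal cover is straightforward to test. The sketch below is illustrative (names are ours), and the number of leaves is recovered on the assumption that the integers appearing in the cover are exactly \(1,\ldots,||\mathcal{C}||\).

```
def is_spinal(cover):
    """Theorem 8.1: a cover is spinal exactly when it has precisely i subsets
    of [n+i-1] for each i = 1, ..., |C|.  Illustrative sketch."""
    cover = [frozenset(s) for s in cover]
    n = max(x for s in cover for x in s) - len(cover) + 1   # number of leaves
    return all(sum(1 for s in cover if max(s) <= n + i - 1) == i
               for i in range(1, len(cover) + 1))

print(is_spinal([{1, 3}, {5}, {2, 6}, {5, 7}, {4, 6, 8}, {7, 9}]))   # True  (Figure 5)
print(is_spinal([{1}, {2}, {3}, {4, 5}, {6, 8}, {6, 7}, {7, 8},
                 {11, 12}, {9, 13}, {10, 13}, {14, 15}]))            # False (Figure 1)
```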
## 9. Discussion
Sometimes a relatively small shift in perspective can open up new possibilities in surprising ways. What seems like a fairly straightforward idea in a paper by Diaconis and Holmes (the idea that rooted binary phylogenetic trees correspond to perfect matchings [4]), itself building on an elegant but simple way to label internal vertices [6], was loosened slightly to yield a correspondence between phylogenetic forests and all partitions of finite sets, as well as a raft of interesting questions in semigroup theory [8]. This subtle twist of an idea, like something from a Philip Pullman novel [18], seems to have opened up further opportunities that, with a further gentle twist, have opened a new canvas on which to draw phylogenetic networks [7]. Capturing the features that define different network classes on this canvas provided the underlying motivation for this paper.
Many core features discussed in the context of networks, such as reticulations, paths, cherries, siblings, and so on, have been translated into the language of covers; a summary is given in Table 9. These translations of features have been necessary for characterising several important classes of phylogenetic
\begin{table}
\begin{tabular}{l l} \hline
**Network** & **Cover** \\ \hline Spine & Exactly \(i\) subsets of \([n+i-1]\), for each \(i\) \\ \hline \end{tabular}
\end{table}
Table 8. A translation of the feature of spinal networks into a feature of the corresponding expanding cover.
Figure 5. A spinal network with cover \(1,3\mid 5\mid 2,6\mid 5,7\mid 4,6,8\mid 7,9\). Note that \(n=4\) and the cover has one set in [4], two in [5], three in [6], four in [7], five in [8], and six in [9]. There is a path from the elements of the set that is in [4], namely \(1\) and \(3\), to the root, and this path traverses every non-leaf vertex. Observe that this path is in labelling sequence. The spine is particularly clear when the network is drawn as shown on the right.
network in the language of covers. This includes some of the most prominent classes, including normal, tree-child, tree-sibling, orchard, and tree-based networks (relationships among the classes, determined by properties of their covers, are represented in Figure 6). However there are many classes, each of which is important for its own reasons, and this list is not complete. Some classes that have been omitted in the present paper might be difficult to define with covers (for instance, level-\(k\) networks or HGT networks), whereas others might just be a matter of following through with the first steps we have taken here (for example, reticulation-visible networks, and non-binary orchard networks).
Defining a language is not the goal, however, despite it being a necessary step. The goal is to be able to efficiently work with phylogenetic networks -- computationally, algorithmically, and mathematically -- in order to establish robust methods of inference for networks that will eventually be of practical use for biological researchers. To that end, encoding various classes of phylogenetic networks in terms of expanding covers provides an opportunity to make computation more effective and allow their structure to be seen more clearly.
## 10. Data Availability
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
|
2310.03387 | Ideal structure of Nica-Toeplitz algebras | We study the gauge-invariant ideal structure of the Nica-Toeplitz algebra
$\mathcal{NT}(X)$ of a product system $(A, X)$ over $\mathbb{N}^n$. We obtain a
clear description of $X$-invariant ideals in $A$, that is, restrictions of
gauge-invariant ideals in $\mathcal{NT}(X)$ to $A$. The main result is a
classification of gauge-invariant ideals in $\mathcal{NT}(X)$ for a proper
product system in terms of families of ideals in $A$. We also apply our results
to higher-rank graphs. | Boris Bilich | 2023-10-05T08:49:20Z | http://arxiv.org/abs/2310.03387v1 | # Ideal structure of Nica-Toeplitz algebras
###### Abstract.
We study the gauge-invariant ideal structure of the Nica-Toeplitz algebra \(\mathcal{NT}(X)\) of a product system \((A,X)\) over \(\mathbb{N}^{n}\). We obtain a clear description of \(X\)-invariant ideals in \(A\), that is, restrictions of gauge-invariant ideals in \(\mathcal{NT}(X)\) to \(A\). The main result is a classification of gauge-invariant ideals in \(\mathcal{NT}(X)\) for a proper product system in terms of families of ideals in \(A\). We also apply our results to higher-rank graphs.
Key words and phrases:product systems, gauge-invariant ideals, Nica covariance, Nica-Pimsner algebras, higher-rank graphs 2020 Mathematics Subject Classification: 46L08, 46L55
## 1. Introduction
Product systems of \(C^{*}\)-correspondences over semigroups are a powerful framework for studying symmetries and dynamics in noncommutative geometry. The theory was first developed by Pimsner [22] for a single correspondence, and then it was extended to discrete semigroups by Fowler [9]. A product system \((A,X)\) over a \(C^{*}\)-algebra \(A\) is a collection \(\{X^{p}\}_{p\in P}\) of \(A\)-\(A\)-correspondences (\(A\)-bimodules with special properties) indexed by a semigroup \(P\) together with specified isomorphisms \(X^{p}\otimes_{A}X^{q}\cong X^{pq}\) (see Definition 2.7). There are product systems associated to semigroup dynamical systems, higher-rank (topological) graphs [12, 14, 24], self-similar groups [8, 18], and factorial languages [6], yet this merely represents a portion of the applications of product systems.
Like many other mathematical objects, product systems are studied through their representations (see Definition 2.8). There is a universal algebra \(\mathcal{T}_{X}\) for representations of \((A,X)\). When the semigroup is embedded into a group \(G\), the algebra \(\mathcal{T}_{X}\) comes with a natural coaction of \(C^{*}(G)\) such that \(X^{p}\) is homogeneous of degree \(p\in G\). When the group is abelian, the coaction is equivalent to an action of the dual group \(\widehat{G}\), which is called the _gauge action_. We only study product systems over \(\mathbb{N}^{n}\), in which case the gauge group is \(\mathbb{T}^{n}\).
However, the algebra \(\mathcal{T}_{X}\) is often too big to be useful. That is why we want to restrict the set of representations by imposing additional conditions. The corresponding universal algebra is then a certain gauge-invariant quotient of \(\mathcal{T}_{X}\). In case of a single correspondence \(X\), Katsura [13] classified all the gauge-invariant ideals in \(\mathcal{T}_{X}\) in terms of pairs of ideals in \(A\). Among these, there is a unique maximal gauge-invariant ideal \(\mathcal{C}_{\mathcal{I}}\) which intersects trivially with the base algebra \(A\). The quotient \(\mathcal{O}_{X}\coloneqq\mathcal{T}_{X}/\mathcal{C}_{\mathcal{I}}\) is called the _Cuntz-Pimsner algebra of \(X\)_. Katsura also defined _relative Cuntz-Pimsner algebras_ as quotients by other gauge-invariant ideals.
The higher rank case is much more complicated since ideals in \(\mathcal{T}_{X}\) come not only from ideals of \(A\) but also from linear relations on projections corresponding to the base semigroup \(P\). Further information about the emerging problems can be found in [16, 17].
Based on the work [19] by Nica, Fowler [9] proposed to restrict the set of representations by imposing the so-called Nica-covariance condition. It turns out that most of the interesting representations are Nica-covariant and in the case of compactly aligned \(X\) there is a universal algebra \(\mathcal{NT}(X)\) called the Nica-Toeplitz algebra. This algebra is much more tractable than \(\mathcal{T}_{X}\).
A substantial work on the generalization of Katsura's results to the higher rank case was done by several authors. Sims and Yeend [27] defined the Cuntz-Nica-Pimsner algebra \(\mathcal{NO}(X)\) of a product system over a quasi-lattice ordered group. Their construction unifies algebras of higher-rank graphs and Katsura's Cuntz-Pimsner algebra. Carlsen, Larsen, Sims, and Vittadello [3] proposed another definition, which coincides with that of Sims-Yeend under a certain amenability condition. In terms relevant to our paper, they demonstrated the existence of a unique maximal gauge-invariant ideal within \(\mathcal{NT}(X)\) not meeting the base algebra \(A\). The quotient by this ideal is then the _co-universal_ Cuntz-Nica-Pimsner algebra of the product system \((A,X)\). The co-universal property was extensively studied by several authors in [5, 7, 25].
However, the gauge-invariant ideal structure of the Nica-Toeplitz algebra \(\mathcal{NT}(X)\) remained unknown (see [6, Question 9.2]). We aim to fill this gap by describing the gauge-invariant ideal lattice of \(\mathcal{NT}(X)\) in the special case of proper product systems over the semigroups \(\mathbb{N}^{n}\). The main result is a description of the gauge-invariant ideal lattice of \(\mathcal{NT}(X)\) in Theorem 4.15.
In order to classify gauge-invariant ideals in \(\mathcal{NT}(X)\) we adapt the methods of [13] to the higher-rank case. The adaptation significantly relies on the results of Dor-On and Kakariadis [6]. They defined a wide class of _strongly compactly aligned_ product systems and described the Cuntz-Nica-Pimsner algebra \(\mathcal{NO}(X)\) of a strongly compactly aligned \(\mathbb{N}^{n}\)-product system \((A,X)\) as a universal algebra for CNP-representations. These are representations where certain covariance conditions are satisfied on explicitly defined ideals \(\{\mathcal{I}^{F}\}_{F\subset\{1,\ldots,n\}}\) of \(A\) (see Definition 2.12).
We first consider an arbitrary strongly compactly aligned \(\mathbb{N}^{n}\)-product system \((B,Y)\). We define _invariant ideals_ in \(B\) as ideals coming from the gauge-invariant ideals of \(\mathcal{NO}(Y)\) (see Definition 3.1). In order to describe these ideals we define two weaker notions: positively invariant (Definition 3.2) and negatively invariant (Definition 3.7) ideals. Finally, we show that an ideal is invariant if and only if it is negatively and positively invariant (Theorem 3.9). The whole Section 3 is a direct generalization of [13, Section 4], including the three notions of invariance.
Next, we focus on a proper product system \((A,X)\) and classify gauge-invariant ideals in \(\mathcal{NT}(X)\) and \(\mathcal{NO}(X)\). In the rank \(1\) case, these ideals are classified by T-pairs and O-pairs of ideals in \(A\) (see [13, Section 5]). It turns out that we need families of \(2^{n}\) ideals in the rank \(n\) case. We define three types of families of ideals in \(A\): T-families, O-families (see Definition 4.2) and invariant families (see Definition 4.1). We show that there is a lattice bijection between T-families and invariant families in Proposition 4.4.
Following this, given an invariant family \(K=\{K^{F}\}_{F\subset\{1,\ldots,n\}}\), we define an extended product system \(\boldsymbol{X}_{K}\) over a \(C^{*}\)-algebra \(\boldsymbol{A}_{K}=\bigoplus_{F}A/K^{F}\). We show that \(\boldsymbol{X}_{K}\)-invariant ideals in \(\boldsymbol{A}_{K}\) are in bijection with invariant families \(K^{\prime}\) containing \(K\). Moreover, we show that every invariant ideal of \(\boldsymbol{A}_{K}\) is separating, meaning that gauge-invariant ideals of \(\mathcal{NO}(\boldsymbol{X}_{K})\) are in bijection with invariant ideals of \(\boldsymbol{A}_{K}\). While the construction is inspired
by Katsura's [13, Definition 6.1], it differs even in the rank \(1\) case. Unlike Katsura's construction, there is no immediate generalization to the non-proper case.
Finally, for every T-family \(I\) we define an \(I\)_-relative Cuntz-Nica-Pimsner algebra_\(\mathcal{NO}(X,I)\) as a certain gauge-invariant quotient of \(\mathcal{NT}(X)\) (Definition 4.9). In particular, \(\mathcal{NO}(X,0)\) is isomorphic to \(\mathcal{NT}(X)\) as expected. We prove in Proposition 4.13 that \(\mathcal{NO}(X,I)\) is canonically isomorphic to \(\mathcal{NO}(\boldsymbol{X}_{K_{I}})\), where \(K_{I}\) is the invariant family corresponding to \(I\). Together with the classification of invariant ideals of \(\boldsymbol{A}_{K}\), this leads to a lattice bijection between gauge-invariant ideals in \(\mathcal{NT}(X)\) (resp. \(\mathcal{NO}(X)\)) and T-pairs (resp. O-pairs) in Theorem 4.15. Moreover, this also shows that the quotient by the ideal corresponding to a T-family \(I\) is the \(I\)-relative Cuntz-Nica-Pimsner algebra \(\mathcal{NO}(X,I)\).
The paper is structured as follows. In Section 2 we give basic definitions and constructions. Section 3 is devoted to the description of invariant ideals. The main result here is Theorem 3.9, which states that invariant ideals are exactly positively and negatively invariant ideals. We start Section 4 with necessary constructions and definitions. First, the definitions of invariant families, T-families, and O-families are given in Section 4.1. Secondly, we construct an extended product system \((\boldsymbol{A}_{K},\boldsymbol{X}_{K})\) in Section 4.2. Next, in Definition 4.9, we define \(I\)-relative algebras and prove the main result of the paper: the classification of gauge-invariant ideals in \(\mathcal{NT}(X)\) and \(\mathcal{NO}(X)\) (Theorem 4.15). Finally, we apply our results to the theory of higher-rank graphs in Section 5. We represent any quotient of the Toeplitz algebra of a row-finite higher-rank graph as the algebra of some extended higher rank graph in Theorem 5.5, generalizing an analogous result of Bates-Hong-Raeburn-Szymanski [1, Corollary 3.5] for rank \(1\) graphs.
During the preparation of this preprint, the author became aware of independent work by Dessi and Kakariadis [4], who have achieved similar results.
**Acknowledgments.** This work is based on the author's Master's thesis conducted at the University of Gottingen. I would like to express my sincere gratitude to my advisor Ralf Meyer for posing the initial problem and providing guidance throughout the preparation of this work. Special thanks go to Adam Dor-On for introducing me to the results of [6], sharing his notes on the topic, and reviewing a draft version of the paper. I would also like to extend my appreciation to Chenchang Zhu, who has kindly agreed to undertake the review and evaluation of the Master's thesis. I thank Joseph Dessi and Evgenios Kakariadis for sending me their manuscript [4].
## 2. Preliminaries
In this section, we establish notation and recall basic definitions and constructions as well as relevant recent results. We refer the reader to [15] for details on Hilbert \(C^{*}\)-modules and tensor products of \(C^{*}\)-correspondences. Fowler's paper [9] is a good reference for product systems and Nica-covariance for quasi-lattice ordered semigroups.
### Notation
We denote elements of the semigroup \(\mathbb{N}^{n}\) by bold Latin letters like \(\mathbf{m}=(\mathbf{m}_{1},\ldots,\mathbf{m}_{n})\). If \(\mathbf{m},\mathbf{k}\in\mathbb{N}^{n}\), we write \(\mathbf{m}\leq\mathbf{k}\) if \(\mathbf{m}_{i}\leq\mathbf{k}_{i}\) for all \(i=1,\ldots,n\). We use the notation \(\mathbf{m}\vee\mathbf{k}\) and \(\mathbf{m}\wedge\mathbf{k}\) for the coordinatewise maximum and minimum of \(\mathbf{m}\) and \(\mathbf{k}\), respectively. These are the join and the meet operations, respectively, in the lattice \(\mathbb{N}^{n}\) with the partial order \(\leq\).
We use \([n]\) to denote the set \(\{1,\ldots,n\}\) and \(\mathfrak{F}\coloneqq 2^{[n]}\) to denote the set of all subsets of \([n]\). Given \(F\in\mathfrak{F}\), we write \(\mathbf{1}_{F}\in\mathbb{N}^{n}\) for the characteristic vector of \(F\): \((\mathbf{1}_{F})_{i}\) is \(1\) if \(i\in F\) and \(0\) otherwise. We write \(\mathbf{1}_{i}\) instead of \(\mathbf{1}_{\{i\}}\) for a singleton \(\{i\}\in\mathfrak{F}\) and \(\mathbf{0}\) for the zero vector. Conversely, if \(\mathbf{m}\in\mathbb{N}^{n}\), we denote by \(\operatorname{supp}\mathbf{m}\in\mathfrak{F}\) the subset of non-zero coordinates of \(\mathbf{m}\). For \(\mathbf{m},\mathbf{k}\in\mathbb{N}^{n}\), we write \(\mathbf{m}\perp\mathbf{k}\) if \(\operatorname{supp}\mathbf{m}\cap\operatorname{supp}\mathbf{k}=\emptyset\).
### Hilbert C*-modules
Let \(A\) be a \(C^{*}\)-algebra. A _(right) Hilbert \(A\)-module_ is a right \(A\)-module \(X\) equipped with a map \(\langle\cdot,\cdot\rangle\colon X\times X\to A\) which is \(\mathbb{C}\)-linear in the second variable and such that
1. \(\langle x,y\rangle^{*}=\langle y,x\rangle\) for all \(x,y\in X\),
2. \(\langle x,x\rangle\geq 0\) for all \(x\in X\),
3. \(\langle x,y\rangle\cdot a=\langle x,y\cdot a\rangle\) for all \(x,y\in X\) and \(a\in A\),
4. \(X\) is complete with respect to the norm \(\|x\|=\|\langle x,x\rangle\|^{1/2}\).
If \(X,Y\) are Hilbert \(A\)-modules, a bounded operator \(T\colon X\to Y\) is called _adjointable_ if there exists a bounded operator \(T^{*}\colon Y\to X\) such that \(\langle Tx,y\rangle=\langle x,T^{*}y\rangle\) for all \(x\in X\) and \(y\in Y\). The space of all adjointable operators from \(X\) to \(Y\) is denoted by \(\mathcal{L}_{A}(X,Y)\). The space of all adjointable operators from \(X\) to itself is a \(C^{*}\)-algebra denoted by \(\mathcal{L}_{A}(X)\coloneqq\mathcal{L}_{A}(X,X)\). One can show that adjointable operators are automatically right \(A\)-module homomorphisms.
For \(x\in X\) and \(y\in Y\), we define the _rank-one operator_\(\theta_{y,x}\colon X\to Y\) by \(\theta_{y,x}(z)=y\langle x,z\rangle\) for all \(z\in X\). The closed linear span of all rank-one operators is denoted by \(\mathcal{K}_{A}(X,Y)\subset\mathcal{L}_{A}(X,Y)\) and called the space of _(generalized) compact operators_. The _(generalized) compact operators_ on \(X\) are defined as \(\mathcal{K}_{A}(X)\coloneqq\mathcal{K}_{A}(X,X)\), and they form a closed two-sided ideal in \(\mathcal{L}_{A}(X)\). We write \(\mathcal{K}\) and \(\mathcal{L}\) instead of \(\mathcal{K}_{A}\) and \(\mathcal{L}_{A}\) if \(A\) is clear from the context.
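For later computations it may help to record the standard calculus of rank-one operators; the following identities are routine verifications from the definitions (classical facts about Hilbert modules, stated here only for the reader's convenience and not specific to this paper). For \(x,y,u,v,z,w\in X\),

\[\theta_{y,x}^{*}=\theta_{x,y}\qquad\text{and}\qquad\theta_{y,x}\theta_{u,v}=\theta_{y\langle x,u\rangle,v},\]

since \(\langle\theta_{y,x}z,w\rangle=\langle y\langle x,z\rangle,w\rangle=\langle z,x\rangle\langle y,w\rangle=\langle z,x\langle y,w\rangle\rangle=\langle z,\theta_{x,y}w\rangle\) and \(\theta_{y,x}\theta_{u,v}(z)=y\langle x,u\langle v,z\rangle\rangle=y\langle x,u\rangle\langle v,z\rangle=\theta_{y\langle x,u\rangle,v}(z)\). In particular, \(\mathcal{K}(X)\) is closed under taking adjoints.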
Let \(\mathfrak{S}\) be a finite set and let \(\{A_{S}\}_{S\in\mathfrak{S}}\) be a family of \(C^{*}\)-algebras. Let \(\boldsymbol{A}=\bigoplus_{S\in\mathfrak{S}}A_{S}\) be their direct sum. Suppose that we are provided with a Hilbert \(A_{S}\)-module \(X_{S}\) for each \(S\in\mathfrak{S}\). Then the direct sum \(\boldsymbol{X}\coloneqq\bigoplus_{S\in\mathfrak{S}}X_{S}\) is a Hilbert \(\boldsymbol{A}\)-module with the inner product \(\langle\boldsymbol{x},\boldsymbol{y}\rangle=(\langle x_{S},y_{S}\rangle)_{S\in\mathfrak{S}}\) for \(\boldsymbol{x}=(x_{S})_{S\in\mathfrak{S}}\) and \(\boldsymbol{y}=(y_{S})_{S\in\mathfrak{S}}\).
**Lemma 2.1**.: _Let \(\boldsymbol{A}=\bigoplus_{S\in\mathfrak{S}}A_{S}\) and let \(\boldsymbol{X}\) be a Hilbert \(\boldsymbol{A}\)-module. Then it has the form \(\boldsymbol{X}=\bigoplus_{S\in\mathfrak{S}}X_{S}\) as above. Moreover, we have \(\mathcal{L}(\boldsymbol{X})=\bigoplus_{S\in\mathfrak{S}}\mathcal{L}(X_{S})\) and \(\mathcal{K}(\boldsymbol{X})=\bigoplus_{S\in\mathfrak{S}}\mathcal{K}(X_{S})\)._
Proof.: Consider the subspace \(X_{S}=\boldsymbol{X}A_{S}\) of \(\boldsymbol{X}\). It is easy to see that it is a closed submodule and \(\langle X_{S},X_{S}\rangle=A_{S}\). Hence, \(X_{S}\) is a Hilbert \(A_{S}\)-module. We have \(\boldsymbol{X}=\boldsymbol{X}\boldsymbol{A}=\boldsymbol{X}\bigoplus_{S\in \mathfrak{S}}A_{S}=\bigoplus_{S\in\mathfrak{S}}\boldsymbol{X}A_{S}=\bigoplus_{S \in\mathfrak{S}}X_{S}\), which proves the first claim.
For the second claim, consider \(T\in\mathcal{L}(\boldsymbol{X})\). Then, we have \(TX_{S}=T(\boldsymbol{X}A_{S})=(T\boldsymbol{X})A_{S}\subset X_{S}\), so \(T\) can be described as a diagonal operator matrix with diagonal entries \(T_{S}=T|_{X_{S}}\). The same argument works for \(\mathcal{K}(\boldsymbol{X})\).
### C*-correspondences
Let \(A\) and \(B\) be \(C^{*}\)-algebras. An \(A\)_-\(B\)-correspondence_ is a right Hilbert \(B\)-module \(X\) equipped with a \(*\)-homomorphism \(\varphi_{X}\colon A\to\mathcal{L}(X)\). A correspondence is _proper_ if \(\varphi_{X}(A)\subset\mathcal{K}(X)\). It is _injective_ (or faithful) if \(\varphi\) is injective and _non-degenerate_ if \(\varphi_{X}(A)X\) is dense in \(X\). When the map \(\varphi_{X}(A)\) is clear from the context, we simply write \(a\cdot x\) instead of \(\varphi_{X}(a)x\) for \(a\in A\) and \(x\in X\).
Let \(C\) be another \(C^{*}\)-algebra and let \(Y\) be a \(B\)-\(C\)-correspondence. The _tensor product_\(X\otimes_{B}Y\) is a Hilbert \(C\)-module defined as the Hausdorff completion of the algebraic tensor product \(X\otimes_{B}Y\) with respect to the inner product
\[\langle x_{1}\otimes y_{1},x_{2}\otimes y_{2}\rangle=\langle y_{1},\varphi_{Y} (\langle x_{1},x_{2}\rangle)y_{2}\rangle\in C\]
for \(x_{1},x_{2}\in X\) and \(y_{1},y_{2}\in Y\).
There is a map \(\iota_{X}^{X\otimes Y}\colon\mathcal{L}(X)\to\mathcal{L}(X\otimes_{B}Y)\), defined by \(\iota_{X}^{X\otimes Y}(T)(x\otimes y)=Tx\otimes y\) for \(T\in\mathcal{L}(X)\), \(x\in X\) and \(y\in Y\). It is a homomorphism of \(C^{*}\)-algebras.
**Lemma 2.2** ([15, Proposition 4.7]).: _If \(Y\) is proper, then \(\iota_{X}^{X\otimes Y}(\mathcal{K}(X))\subset\mathcal{K}(X\otimes_{B}Y)\). Moreover, \(\iota_{X}^{X\otimes Y}|_{\mathcal{K}(X)}\) is injective (surjective) if \(\varphi_{Y}\) is injective (surjective)._
We can now define a structure of \(A\)-\(C\)-correspondence on the Hilbert \(C\)-module \(X\otimes_{B}Y\) by \(\varphi_{X\otimes Y}=\iota_{X}^{X\otimes Y}\circ\varphi_{X}\). By Lemma 2.2, the correspondence \(X\otimes_{B}Y\) is proper if \(X\) and \(Y\) are proper.
Let \(I\subset B\) be an ideal. Then, we can consider \(B/I\) as a \(B\)-\(B/I\)-correspondence. We define the quotient Hilbert \(B/I\)-module \(Y_{I}\coloneqq Y\otimes_{B}B/I\cong Y/YI\). The last isomorphism follows from the fact that \(YI\) is a closed submodule of \(Y\). The quotient maps are denoted by \([-]_{I}\colon B\to B_{I}\coloneqq B/I\) and \([-]_{I}\colon Y\to Y_{I}\). For a bounded operator \(T\in\mathcal{L}(Y)\), we abuse notation and write \([T]_{I}\in\mathcal{L}(Y_{I})\) to denote the operator \(\iota_{Y}^{Y_{I}}(T)\).
**Lemma 2.3** ([13, Lemma 1.6]).: _We have_
\[[\mathcal{K}(Y)]_{I}=\mathcal{K}(Y_{I})\]
_and \([T]_{I}=0\) for compact \(T\) if and only if \(T\in\mathcal{K}(YI)\)._
Proof.: The \(B\)-\(B/I\)-correspondence \(B/I\) is proper and the map \(\varphi_{B/I}\colon B\to\mathcal{K}_{B/I}(B/I)=B/I\) is surjective. Therefore, by Lemma 2.2, the map \([-]_{I}=\iota_{Y}^{Y_{I}}\) maps \(\mathcal{K}(Y)\) onto \(\mathcal{K}(Y_{I})\). See the proof of [13, Lemma 1.6] for the second claim.
When \(Y\) is an \(A\)-\(B\)-correspondence, then \(Y_{I}\) is an \(A\)-\(B/I\)-correspondence with left multiplication map \(\varphi_{Y_{I}}=[-]_{I}\circ\varphi_{Y}\). We define an inclusion-preserving map \(Y^{-1}\colon\mathbb{I}(B)\to\mathbb{I}(A)\) from the set of ideals of \(B\) to the set of ideals of \(A\) by
\[Y^{-1}(I)=\ker\varphi_{Y_{I}}=\{a\in A\colon a\cdot Y\subset YI\}.\]
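For instance, if \(Y=B\) is the correspondence determined by a \(*\)-homomorphism \(\varphi\colon A\to B\) (with left action \(a\cdot b=\varphi(a)b\)), then \(Y^{-1}(I)=\{a\in A\colon\varphi(a)B\subset I\}=\varphi^{-1}(I)\), where the second equality follows by letting an approximate identity of \(B\) act on the right.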
**Lemma 2.4** ([13, Proposition 1.3]).: _The following are equivalent for an ideal \(I\subset B\) and \(a\in A\)._
1. \(a\in Y^{-1}(I)\)_._
2. \(\langle x,a\cdot y\rangle\in I\) _for all_ \(x,y\in Y\)_._
3. \(\langle x,a\cdot x\rangle\in I\) _for all_ \(x\in Y\)_._
**Lemma 2.5**.: _Let \(X\) be an \(A\)-\(B\)-correspondence and \(Y\) be a \(B\)-\(C\)-correspondence. Then, we have \((X\otimes_{B}Y)^{-1}(I)=X^{-1}(Y^{-1}(I))\subset A\) for all ideals \(I\subset C\)._
Proof.: Let \(a\in A\), \(x_{1},x_{2}\in X\), and \(y_{1},y_{2}\in Y\) be arbitrary. Then, we have
\[\langle x_{1}\otimes y_{1},a\cdot(x_{2}\otimes y_{2})\rangle=\langle y_{1}, \langle x_{1},a\cdot x_{2}\rangle y_{2}\rangle.\]
By Lemma 2.4, the right-hand side is in \(I\) for all \(y_{1},y_{2}\in Y\) if and only if \(\langle x_{1},a\cdot x_{2}\rangle\in Y^{-1}(I)\). Applying Lemma 2.4 again, we see that this holds for all \(x_{1},x_{2}\in X\) if and only if \(a\in X^{-1}(Y^{-1}(I))\).
Now, suppose that \(\mathbf{C}=\bigoplus_{S\in\mathfrak{S}}C_{S}\) is a finite sum of \(C^{*}\)-algebras and \(\mathbf{Y}=\bigoplus_{S\in\mathfrak{S}}Y_{S}\) is a Hilbert \(\mathbf{C}\)-module. By Lemma 2.1, a homomorphism \(\varphi_{\mathbf{Y}}\colon B\to\mathcal{L}(\mathbf{Y})\) is equivalent to a family of homomorphisms \(\varphi_{Y_{S}}\colon B\to\mathcal{L}(Y_{S})\) for all \(S\in\mathfrak{S}\). Therefore, any \(B\)-\(\mathbf{C}\)-correspondence is isomorphic to a direct sum of a family of \(B\)-\(C_{S}\)-correspondences.
**Lemma 2.6**.: _Let \(\mathbf{Y}\) be as above and let \(X\) be an \(A\)-\(B\)-correspondence. Then, the tensor product of \(X\) and \(\mathbf{Y}\) is given by_
\[X\otimes_{B}\mathbf{Y}=\bigoplus_{S\in\mathfrak{S}}X\otimes_{B}Y_{S}.\]
Proof.: Observe that the decomposition \(\mathbf{Y}=\bigoplus_{S\in\mathfrak{S}}Y_{S}\) is a direct sum of left \(B\)-modules. The statement follows because the tensor product is distributive over direct sums.
### Product systems
**Definition 2.7**.: An \(\mathbb{N}^{n}\)-_product system_\((A,X)\) over a \(C^{*}\)-algebra \(A\) is a collection of \(A\)-\(A\)-correspondences \(X=(X^{\mathbf{m}})_{\mathbf{m}\in\mathbb{N}^{n}}\) such that \(X^{\mathbf{0}}=A\) together with isomorphisms of correspondences
\[\mu_{X}^{\mathbf{m},\mathbf{k}}\colon X^{\mathbf{m}}\otimes_{A}X^{\mathbf{k}} \to X^{\mathbf{m}+\mathbf{k}}\]
for all \(\mathbf{m},\mathbf{k}\in\mathbb{N}^{n}\setminus\{\mathbf{0}\}\), satisfying certain associativity conditions. A product system is called proper, injective, or non-degenerate if all its correspondences are.
We denote the left action homomorphism \(A\to\mathcal{L}(X^{\mathbf{m}})\) by \(\varphi_{X}^{\mathbf{m}}\) for \(\mathbf{m}\in\mathbb{N}^{n}\setminus\{\mathbf{0}\}\). We also write \(x\cdot y\) instead of \(\mu_{X}^{\mathbf{m},\mathbf{k}}(x\otimes y)\) for \(x\in X^{\mathbf{m}}\) and \(y\in X^{\mathbf{k}}\) for \(\mathbf{m},\mathbf{k}\in\mathbb{N}^{n}\setminus\{\mathbf{0}\}\).
For \(\mathbf{m}\geq\mathbf{k}\), the conjugation of the map \(\iota_{X^{\mathbf{k}}}^{X^{\mathbf{k}}\otimes X^{\mathbf{m}-\mathbf{k}}}\) with the multiplication map \(\mu_{X}^{\mathbf{k},\mathbf{m}-\mathbf{k}}\) gives a map \(\iota_{\mathbf{k}}^{\mathbf{m}}\colon\mathcal{L}(X^{\mathbf{k}})\to\mathcal{L}(X^{\mathbf{m}})\). It is given by \(\iota_{\mathbf{k}}^{\mathbf{m}}(T)(x\cdot y)=(Tx)\cdot y\) for all \(T\in\mathcal{L}(X^{\mathbf{k}})\), \(x\in X^{\mathbf{k}}\) and \(y\in X^{\mathbf{m}-\mathbf{k}}\). A product system is called _compactly aligned_ if \(S\lor T\coloneqq\iota_{\mathbf{k}}^{\mathbf{k}\vee\mathbf{m}}(S)\iota_{\mathbf{m}}^{\mathbf{k}\vee\mathbf{m}}(T)\in\mathcal{K}(X^{\mathbf{k}\vee\mathbf{m}})\) for all \(S\in\mathcal{K}(X^{\mathbf{k}})\) and \(T\in\mathcal{K}(X^{\mathbf{m}})\). If, additionally, \(\iota_{\mathbf{m}}^{\mathbf{m}+\mathbf{1}_{i}}(\mathcal{K}(X^{\mathbf{m}}))\subset\mathcal{K}(X^{\mathbf{m}+\mathbf{1}_{i}})\) for all \(i\notin\operatorname{supp}\mathbf{m}\) and \(\mathbf{m}\in\mathbb{N}^{n}\setminus\{\mathbf{0}\}\), we say that the product system is _strongly compactly aligned_ (see [6, Definition 2.2]).
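For instance, when \(n=1\) a product system is determined by the single correspondence \(X\coloneqq X^{1}\), since \(X^{m}\cong X^{\otimes m}\) via the multiplication maps. In this case every product system is automatically strongly compactly aligned: for \(m,k\geq 1\) one of \(\iota_{k}^{k\vee m}(S)\), \(\iota_{m}^{k\vee m}(T)\) is already compact, and the condition involving \(i\notin\operatorname{supp}\mathbf{m}\) is vacuous.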
**Definition 2.8**.: A _representation_ of a product system \((A,X)\) in a \(C^{*}\)-algebra \(D\) is a pair \((\sigma,s)\) consisting of a \(*\)-homomorphism \(\sigma\colon A\to D\) and a family of maps \(s=(s^{\mathbf{m}}\colon X^{\mathbf{m}}\to D)_{\mathbf{m}\in\mathbb{N}^{n} \setminus\{\mathbf{0}\}}\) such that the following conditions are satisfied for all \(\mathbf{m},\mathbf{k}\in\mathbb{N}^{n}\setminus\{\mathbf{0}\}\):
1. \(s^{\mathbf{m}}(x)^{*}s^{\mathbf{m}}(y)=\sigma(\langle x,y\rangle)\) for all \(x,y\in X^{\mathbf{m}}\),
2. \(s^{\mathbf{m}}(x)s^{\mathbf{k}}(y)=s^{\mathbf{m}+\mathbf{k}}(x\cdot y)\) for all \(x\in X^{\mathbf{m}}\), \(y\in X^{\mathbf{k}}\),
3. \(\sigma(a)s^{\mathbf{m}}(x)=s^{\mathbf{m}}(\varphi^{\mathbf{m}}(a)x)\) for all \(a\in A\), \(x\in X^{\mathbf{m}}\).
A representation \((\sigma,s)\)_admits a gauge action_ if there is a pointwise norm-continuous action \(\gamma\colon\mathbb{T}^{n}\curvearrowright D\) such that it fixes \(\sigma(A)\) and acts by the character \(\mathbf{z}\mapsto\mathbf{z}^{\mathbf{m}}\) on \(s^{\mathbf{m}}(X^{\mathbf{m}})\) for all \(\mathbf{m}\in\mathbb{N}^{n}\setminus\{\mathbf{0}\}\). That is, for all \(x\in X^{\mathbf{m}}\) and \(\mathbf{z}=(z_{1},\dots,z_{n})\in\mathbb{T}^{n}\), we have \(\gamma_{\mathbf{z}}(s^{\mathbf{m}}(x))=z_{1}^{\mathbf{m}_{1}}\cdot\dots\cdot z _{n}^{\mathbf{m}_{n}}\cdot s^{\mathbf{m}}(x)\).
When the degree \(\mathbf{m}\) of \(x\in X^{\mathbf{m}}\) is clear, we sometimes write \(s(x)\) instead of \(s^{\mathbf{m}}(x)\).
Given a representation \((\sigma,s)\), we define homomorphisms \(\psi_{s}^{\mathbf{m}}\colon\mathcal{K}(X^{\mathbf{m}})\to D\) for all \(\mathbf{m}\in\mathbb{N}^{n}\setminus\{\mathbf{0}\}\) first on rank-one operators by
\[\psi_{s}^{\mathbf{m}}(\theta_{x,y})=s^{\mathbf{m}}(x)s^{\mathbf{m}}(y)^{*}\]
for \(x,y\in X^{\mathbf{m}}\). Then, we may extend \(\psi_{s}^{\mathbf{m}}\) to all of \(\mathcal{K}(X^{\mathbf{m}})\) by linearity and continuity (see [11, Lemma 2.2]).
Let \((k_{\lambda}^{\mathbf{m}})_{\lambda\in\Lambda}\) be a canonical approximate identity for \(\mathcal{K}(X^{\mathbf{m}})\). We define projections
\[p_{s}^{\mathbf{m}}=\text{w*-}\!\!\!\!\lim_{\lambda\in\Lambda}\psi_{s}^{\mathbf{ m}}(k_{\lambda}^{\mathbf{m}})\in D^{\prime\prime},\]
where \(D^{\prime\prime}\) is the enveloping von Neumann algebra of \(D\), i.e., the strong closure of \(D\) in its universal representation (see [21, p. 3.7.6]).
**Definition 2.9** ([9, Definition 5.1]).: A representation \((\sigma,s)\) of a product system \((A,X)\) on \(D\) is called Nica-covariant if \(p_{s}^{\mathbf{m}}p_{s}^{\mathbf{k}}=p_{s}^{\mathbf{m}\vee\mathbf{k}}\) for all \(\mathbf{m},\mathbf{k}\in\mathbb{N}^{n}\setminus\{\mathbf{0}\}\).
_Remark_.: Fowler defines Nica-covariance for representations of non-degenerate product systems on Hilbert spaces. Let \(\pi_{D}\colon D\to\mathcal{B}(H)\) be the universal representation of \(D\). Since we defined the projections \(p_{s}^{\mathbf{m}}\) as elements of the enveloping von Neumann algebra \(D^{\prime\prime}\subset\mathcal{B}(H)\), our definition is equivalent to the Nica-covariance (in the sense of Fowler) of the representation \((\pi_{D}\circ\sigma,\pi_{D}\circ s)\) of \((A,X)\) on the Hilbert space \(H\). Moreover, Fowler shows in [9, Proposition 5.6] that for non-degenerate compactly-aligned product systems, the Nica-covariance is equivalent to \(\psi_{s}^{\mathbf{m}}(S)\psi_{s}^{\mathbf{k}}(T)=\psi_{s}^{\mathbf{m}\vee \mathbf{k}}(S\lor T)\) for all nonzero \(\mathbf{m},\mathbf{k}\in\mathbb{N}^{n}\) and for all \(S\in\mathcal{K}(X^{\mathbf{m}})\), \(T\in\mathcal{K}(X^{\mathbf{k}})\). However, he never uses the non-degeneracy assumption in the proof, so this fact holds for all compactly aligned product systems and applies to our definition. For the sake of completeness, we include a proof of this fact here.
**Proposition 2.10** ([9, Proposition 5.6]).: _A representation \((\sigma,s)\) is Nica-covariant in the sense of Definition 2.9 if and only if \(\psi_{s}^{\mathbf{m}}(S)\psi_{s}^{\mathbf{k}}(T)=\psi_{s}^{\mathbf{m}\vee \mathbf{k}}(S\lor T)\) for all nonzero \(\mathbf{m},\mathbf{k}\in\mathbb{N}^{n}\) and for all \(S\in\mathcal{K}(X^{\mathbf{m}})\), \(T\in\mathcal{K}(X^{\mathbf{k}})\)._
Proof.: Suppose that \((\sigma,s)\) is Nica-covariant. For \(S\) and \(T\) as in the statement, we have
\[\psi_{s}^{\mathbf{m}}(S)\psi_{s}^{\mathbf{k}}(T)=\psi_{s}^{\mathbf{m}}(S)p_{s}^{\mathbf{m}}p_{s}^{\mathbf{k}}\psi_{s}^{\mathbf{k}}(T)=\psi_{s}^{\mathbf{m}}(S)p_{s}^{\mathbf{m}\vee\mathbf{k}}\psi_{s}^{\mathbf{k}}(T)=\\ \text{w*-}\lim_{\lambda\in\Lambda}\psi_{s}^{\mathbf{m}}(S)\psi_{s}^{\mathbf{m}\vee\mathbf{k}}(k_{\lambda}^{\mathbf{m}\vee\mathbf{k}})\psi_{s}^{\mathbf{k}}(T)=\text{w*-}\lim_{\lambda\in\Lambda}\psi_{s}^{\mathbf{m}\vee\mathbf{k}}(\iota_{\mathbf{m}}^{\mathbf{m}\vee\mathbf{k}}(S)\,k_{\lambda}^{\mathbf{m}\vee\mathbf{k}}\,\iota_{\mathbf{k}}^{\mathbf{m}\vee\mathbf{k}}(T))=\psi_{s}^{\mathbf{m}\vee\mathbf{k}}(S\lor T).\]
This proves the "only if" direction.
Conversely, suppose that \(\psi_{s}^{\mathbf{m}}(S)\psi_{s}^{\mathbf{k}}(T)=\psi_{s}^{\mathbf{m}\vee \mathbf{k}}(S\lor T)\) for all nonzero \(\mathbf{m},\mathbf{k}\in\mathbb{N}^{n}\) and for all \(S\in\mathcal{K}(X^{\mathbf{m}})\), \(T\in\mathcal{K}(X^{\mathbf{k}})\). We have
\[p_{s}^{\mathbf{m}}p_{s}^{\mathbf{k}}=\text{w*-}\!\!\!\!\lim_{\lambda\in\Lambda} \psi_{s}^{\mathbf{m}}(k_{\lambda}^{\mathbf{m}})\psi_{s}^{\mathbf{k}}(k_{ \lambda}^{\mathbf{k}})=\text{w*-}\!\!\!\!\lim_{\lambda\in\Lambda}\psi_{s}^{ \mathbf{m}\vee\mathbf{k}}(k_{\lambda}^{\mathbf{m}}\lor k_{\lambda}^{\mathbf{k}}),\]
where in the first equality we have used that the multiplication is jointly strongly continuous on bounded sets by [2, p. I.3.2.1]. It is easy to see that \(k_{\lambda}^{\mathbf{m}}\lor k_{\lambda}^{\mathbf{k}}\) is an approximate identity for \(\mathcal{K}(X^{\mathbf{m}\vee\mathbf{k}})\), so we conclude that \(p_{s}^{\mathbf{m}}p_{s}^{\mathbf{k}}=p_{s}^{\mathbf{m}\vee\mathbf{k}}\) and the representation is Nica-covariant.
The _Nica-Toeplitz algebra_\(\mathcal{NT}(X)\) is the universal algebra generated by \(A\) and \(X\) with respect to Nica-covariant representations. Fowler proved its existence in [9, Theorem 6.3]. The representation \((\tau_{X},t_{X})\) of \((A,X)\) on \(\mathcal{NT}(X)\) is then the universal Nica-covariant representation: if \((\sigma,s)\) is another Nica-covariant representation, then there is a unique homomorphism \(\sigma\times_{0}s\colon\mathcal{NT}(X)\to D\) such that \((\sigma,s)=((\sigma\times_{0}s)\circ\tau_{X},(\sigma\times_{0}s)\circ t_{X})\).
We introduce two more families of projections. For \(F\in\mathfrak{F}\), we define
\[Q_{s}^{F} =\prod_{i\in F}(1-p_{s}^{\mathbf{1}_{i}}),\] \[P_{s}^{F} =\prod_{i\in F}(1-p_{s}^{\mathbf{1}_{i}})\prod_{i\notin F}p_{s}^{ \mathbf{1}_{i}}=Q_{s}^{F}\prod_{i\notin F}p_{s}^{\mathbf{1}_{i}}.\]
Here the empty product is interpreted as \(1\), so that \(Q_{s}^{\emptyset}=1\).
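Since the projections \(p_{s}^{\mathbf{1}_{i}}\) commute with each other for a Nica-covariant representation, each \(Q_{s}^{F}\) and \(P_{s}^{F}\) is again a projection, the \(P_{s}^{F}\) are pairwise orthogonal, and expanding the product \(\prod_{i=1}^{n}\bigl(p_{s}^{\mathbf{1}_{i}}+(1-p_{s}^{\mathbf{1}_{i}})\bigr)=1\) shows that \(\sum_{F\subset[n]}P_{s}^{F}=1\).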
**Lemma 2.11**.: _Let \((\sigma,s)\) be a Nica-covariant representation of a product system \((A,X)\) on \(D\). Then, the projections introduced above have the following properties. Let \(\mathbf{m}\in\mathbb{N}^{n}\setminus\{\mathbf{0}\}\) and \(F\in\mathfrak{F}\) be arbitrary._
1. _Elements of_ \(\sigma(A)\) _commute with_ \(p_{s}^{\mathbf{m}}\)_,_ \(Q_{s}^{F}\) _and_ \(P_{s}^{F}\)_. Additionally, for all_ \(a\in(\varphi^{\mathbf{m}})^{-1}(\mathcal{K}(X^{\mathbf{m}}))\)_, we have_ \(\sigma(a)p_{s}^{\mathbf{m}}=p_{s}^{\mathbf{m}}\sigma(a)=\psi_{s}^{\mathbf{m}} (\varphi^{\mathbf{m}}(a))\in D\)_._
2. _The equality_ \(p_{s}^{\mathbf{m}}s^{\mathbf{m}}(x)=s^{\mathbf{m}}(x)\) _holds for all_ \(x\in X^{\mathbf{m}}\)_. If_ \(p\in D^{\prime\prime}\) _is some other projection with this property, then_ \(p\geq p_{s}^{\mathbf{m}}\)_._
3. _For all_ \(x\in X^{\mathbf{m}}\)_, we have_ \(p_{s}^{\mathbf{1}_{i}}s^{\mathbf{m}}(x)=s^{\mathbf{m}}(x)p_{s}^{\mathbf{1}_{i}}\) _if_ \(i\not\in\operatorname{supp}\mathbf{m}\) _and_ \(p_{s}^{\mathbf{1}_{i}}s^{\mathbf{m}}(x)=s^{\mathbf{m}}(x)\) _otherwise._
4. _For all_ \(F,G\in\mathfrak{F}\) _and_ \(x\in X^{\mathbf{m}}\)_, we have_ \[P_{s}^{F}s^{\mathbf{m}}(x)P_{s}^{G}=\begin{cases}s^{\mathbf{m}}(x)P_{s}^{G}& \text{if $F=G\setminus\operatorname{supp}\mathbf{m}$},\\ 0&\text{otherwise}.\end{cases}\]
5. _For a homomorphism_ \(f\colon D\to D_{2}\)_, we have_ \(f(p_{s}^{\mathbf{m}})=p_{f\circ s}^{\mathbf{m}}\)_,_ \(f(Q_{s}^{F})=Q_{f\circ s}^{F}\)_, and_ \(f(P_{s}^{F})=P_{f\circ s}^{F}\)_. In particular,_ \(f\circ(\sigma,s)\) _is a Nica-covariant representation._
Proof.: To prove (1), we may assume that \(A\) is unital so that it is a linear span of unitary elements. Consider an arbitrary unitary element \(u\in A\). Then, we have
\[\sigma(u)p_{s}^{\mathbf{m}}=\sigma(u)p_{s}^{\mathbf{m}}\sigma(u)^ {*}\sigma(u) =\text{w*-lim}_{\lambda\in\Lambda}\sigma(u)\psi_{s}^{\mathbf{m}}( k_{\lambda}^{\mathbf{m}})\sigma(u)^{*}\sigma(u)=\] \[=\text{w*-lim}_{\lambda\in\Lambda}\psi_{s}^{\mathbf{m}}(\varphi^{ \mathbf{m}}(u)k_{\lambda}^{\mathbf{m}}\varphi^{\mathbf{m}}(u^{*}))\sigma(u).\]
Since \(u\) is unitary, the net \(\varphi^{\mathbf{m}}(u)k_{\lambda}^{\mathbf{m}}\varphi^{\mathbf{m}}(u^{*})\) is also a c.a.i. for \(\mathcal{K}(X^{\mathbf{m}})\). Therefore, the right-hand side of the above equation equals \(p_{s}^{\mathbf{m}}\sigma(u)\). We conclude that \(\sigma(A)\) commutes with \(p_{s}^{\mathbf{m}}\) and hence with \(Q_{s}^{F}\) and \(P_{s}^{F}\).
Let \(a\) be an element of \((\varphi^{\mathbf{m}})^{-1}(\mathcal{K}(X^{\mathbf{m}}))\). Then, we have
\[\sigma(a)p_{s}^{\mathbf{m}}=\text{w*-lim}_{\lambda\in\Lambda}\sigma(a)\psi_{s} ^{\mathbf{m}}(k_{\lambda}^{\mathbf{m}})=\text{w*-lim}_{\lambda\in\Lambda}\psi_{ s}^{\mathbf{m}}(\varphi^{\mathbf{m}}(a)k_{\lambda}^{\mathbf{m}})=\psi_{s}^{ \mathbf{m}}(\varphi^{\mathbf{m}}(a)),\]
where the last equality follows from the definition of approximate identity. This is the second part of (1).
To prove (2), consider an arbitrary \(x\in X^{\mathbf{m}}\). Then, we have
\[p_{s}^{\mathbf{m}}s^{\mathbf{m}}(x)=\text{w*-lim}_{\lambda\in\Lambda}\psi_{s} ^{\mathbf{m}}(k_{\lambda}^{\mathbf{m}})s^{\mathbf{m}}(x)=\text{w*-lim}_{ \lambda\in\Lambda}s^{\mathbf{m}}(k_{\lambda}^{\mathbf{m}}x)=s^{\mathbf{m}}(x).\]
Suppose that \(p\) is some other projection with this property. Observe that each \(\psi_{s}^{\mathbf{m}}(k_{\lambda}^{\mathbf{m}})\) lies in the closed linear span of elements of the form \(s(x)s(y)^{*}\) for \(x,y\in X^{\mathbf{m}}\). Therefore, we have \(p\psi_{s}^{\mathbf{m}}(k_{\lambda}^{\mathbf{m}})=\psi_{s}^{\mathbf{m}}(k_{ \lambda}^{\mathbf{m}})\) and hence \(pp_{s}^{\mathbf{m}}=p_{s}^{\mathbf{m}}\). The latter means \(p\geq p_{s}^{\mathbf{m}}\) by definition.
Let us now prove (3). We have \(p_{s}^{\mathbf{1}_{i}}s^{\mathbf{m}}(x)=p_{s}^{\mathbf{1}_{i}}p_{s}^{\mathbf{m}}s^{ \mathbf{m}}(x)=p_{s}^{\mathbf{1}_{i}\vee\mathbf{m}}s^{\mathbf{m}}(x)\). If \(i\in\operatorname{supp}\mathbf{m}\), then \(\mathbf{1}_{i}\vee\mathbf{m}=\mathbf{m}\) and the above equation gives \(p_{s}^{\mathbf{1}_{i}}s^{\mathbf{m}}(x)=s^{\mathbf{m}}(x)\). If \(i\notin\operatorname{supp}\mathbf{m}\), then \(\mathbf{1}_{i}\vee\mathbf{m}=\mathbf{m}+\mathbf{1}_{i}\). Let \(y,z\in X^{\mathbf{1}_{i}}\) be arbitrary. Then, we have
\[p_{s}^{\mathbf{m}+\mathbf{1}_{i}}s^{\mathbf{m}}(x)\psi_{s}^{\mathbf{1}_{i}}(\theta_{y,z})=p_{s}^{\mathbf{m}+\mathbf{1}_{i}}s^{\mathbf{m}}(x)s^{\mathbf{1}_{i}}(y)s^{\mathbf{1}_{i}}(z)^{*}=p_{s}^{\mathbf{m}+\mathbf{1}_{i}}s^{\mathbf{m}+\mathbf{1}_{i}}(x\cdot y)s^{\mathbf{1}_{i}}(z)^{*}\\ =s^{\mathbf{m}+\mathbf{1}_{i}}(x\cdot y)s^{\mathbf{1}_{i}}(z)^{*}=s^{\mathbf{m}}(x)\psi_{s}^{\mathbf{1}_{i}}(\theta_{y,z}).\]
By linearity and continuity, it follows that \(p_{s}^{\mathbf{1}_{i}}s^{\mathbf{m}}(x)\psi_{s}^{\mathbf{1}_{i}}(T)=s^{ \mathbf{m}}(x)\psi_{s}^{\mathbf{1}_{i}}(T)\) for all \(T\in\mathcal{K}(X^{\mathbf{1}_{i}})\). Therefore, we also have \(p_{s}^{\mathbf{1}_{i}}s^{\mathbf{m}}(x)p_{s}^{\mathbf{1}_{i}}=s^{\mathbf{m}}( x)p_{s}^{\mathbf{1}_{i}}\) by applying the above to \(T=k_{\lambda}^{\mathbf{1}_{i}}\) and taking the limit.
Now, consider arbitrary elements \(y,z\in X^{\mathbf{1}_{i}}\) and \(y^{\prime},z^{\prime}\in X^{\mathbf{m}}\). We have \(y^{\prime}\cdot y,z^{\prime}\cdot z\in X^{\mathbf{m}+\mathbf{1}_{i}}\) and hence
\[\psi_{s}^{\mathbf{m}+\mathbf{1}_{i}}(\theta_{y^{\prime}\cdot y,z^{\prime}\cdot z})s(x)=s(y^{\prime}\cdot y)s(z^{\prime}\cdot z)^{*}s(x)=s(y^{\prime})s(y)s(z)^{*}s(z^{\prime})^{*}s(x)\\ =s(y^{\prime})\psi_{s}^{\mathbf{1}_{i}}(\theta_{y,z})\sigma(\langle z^{\prime},x\rangle)=s(y^{\prime})\psi_{s}^{\mathbf{1}_{i}}(\theta_{y,z}\varphi^{\mathbf{1}_{i}}(\langle z^{\prime},x\rangle))\in s(y^{\prime})\psi_{s}^{\mathbf{1}_{i}}(\mathcal{K}(X^{\mathbf{1}_{i}})).\]
Since \(\psi_{s}^{\mathbf{1}_{i}}(T^{\prime})p_{s}^{\mathbf{1}_{i}}=\psi_{s}^{\mathbf{1}_{i}}(T^{\prime})\) for every \(T^{\prime}\in\mathcal{K}(X^{\mathbf{1}_{i}})\), the inclusion implies that for any \(T\in\mathcal{K}(X^{\mathbf{m}+\mathbf{1}_{i}})\) we have \(\psi_{s}^{\mathbf{m}+\mathbf{1}_{i}}(T)s(x)p_{s}^{\mathbf{1}_{i}}=\psi_{s}^{\mathbf{m}+\mathbf{1}_{i}}(T)s(x)\). Applying this to \(T=k_{\lambda}^{\mathbf{m}+\mathbf{1}_{i}}\) and taking the limit, we obtain \(p_{s}^{\mathbf{m}+\mathbf{1}_{i}}s(x)p_{s}^{\mathbf{1}_{i}}=p_{s}^{\mathbf{m}+\mathbf{1}_{i}}s(x)\). Since \(p_{s}^{\mathbf{m}+\mathbf{1}_{i}}s(x)=p_{s}^{\mathbf{1}_{i}}p_{s}^{\mathbf{m}}s(x)=p_{s}^{\mathbf{1}_{i}}s(x)\) by Nica-covariance and (2), this together with the previous paragraph gives \(p_{s}^{\mathbf{1}_{i}}s^{\mathbf{m}}(x)=p_{s}^{\mathbf{1}_{i}}s^{\mathbf{m}}(x)p_{s}^{\mathbf{1}_{i}}=s^{\mathbf{m}}(x)p_{s}^{\mathbf{1}_{i}}\), which proves (3).
We can prove (4) by expanding
\[P_{s}^{F}s^{\mathbf{m}}(x)P_{s}^{G}=\prod_{i\in F}(1-p_{s}^{\mathbf{1}_{i}}) \prod_{i\notin F}p_{s}^{\mathbf{1}_{i}}s^{\mathbf{m}}(x)\prod_{i\in G}(1-p_{s} ^{\mathbf{1}_{i}})\prod_{i\notin G}p_{s}^{\mathbf{1}_{i}}.\]
If there is \(i\in F\cap\operatorname{supp}\mathbf{m}\), then the factor \(1-p_{s}^{\mathbf{1}_{i}}\) annihilates \(s^{\mathbf{m}}(x)\) by (3) and the above expression is zero. Otherwise, we use (3) iteratively to get
\[P_{s}^{F}s^{\mathbf{m}}(x)P_{s}^{G}=s^{\mathbf{m}}(x)\prod_{i\in F}(1-p_{s}^{\mathbf{1}_{i}})\prod_{i\in G}(1-p_{s}^{\mathbf{1}_{i}})\prod_{i\notin\operatorname{supp}\mathbf{m}\cup F}p_{s}^{\mathbf{1}_{i}}\prod_{i\notin G}p_{s}^{\mathbf{1}_{i}}.\]
This is nonzero if and only if \(F\subset G\) and \(G\subset\operatorname{supp}\mathbf{m}\cup F\) or, equivalently, \(F=G\setminus\operatorname{supp}\mathbf{m}\). When \(F=G\setminus\operatorname{supp}\mathbf{m}\), this equals \(s^{\mathbf{m}}(x)P_{s}^{G}\).
Finally, (5) follows easily from the fact that \(\psi_{f\circ s}^{\mathbf{m}}=f\circ\psi_{s}^{\mathbf{m}}\). This fact is trivial for rank-one operators and hence holds in general.
We now recall the results of Dor-On and Kakariadis [6]. Suppose that \((A,X)\) is a strongly compactly aligned product system. The _CNP-ideals_\(\mathcal{I}_{X}^{F}\) are defined in two steps. First, we define the family of _pre-CNP-ideals_ for \(F\in\mathfrak{F}\) by
\[\mathcal{J}_{X}^{F}=(\bigcap_{i\in F}\ker\varphi^{\mathbf{1}_{i}})^{\perp}\cap \bigcap_{i=1}^{n}(\varphi^{\mathbf{1}_{i}})^{-1}(\mathcal{K}(X^{\mathbf{1}_{i}}) )\subset A, \tag{1}\]
and then we set
\[\mathcal{I}_{X}^{F}=\mathcal{J}_{X}^{F}\cap\bigcap_{\mathbf{m}\perp\mathbf{1}_ {F}}(X^{\mathbf{m}})^{-1}(\mathcal{J}_{X}^{F}). \tag{2}\]
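For \(n=1\) and \(F=\{1\}\) the only \(\mathbf{m}\perp\mathbf{1}_{F}\) is \(\mathbf{m}=\mathbf{0}\), so the two steps collapse and \(\mathcal{I}_{X}^{\{1\}}=\mathcal{J}_{X}^{\{1\}}=(\ker\varphi^{\mathbf{1}_{1}})^{\perp}\cap(\varphi^{\mathbf{1}_{1}})^{-1}(\mathcal{K}(X^{\mathbf{1}_{1}}))\), which is Katsura's ideal for the correspondence \(X^{\mathbf{1}_{1}}\).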
**Definition 2.12** ([6, Definition 2.8]).: A Nica-covariant representation \((\sigma,s)\) of \((A,X)\) is _Cuntz-Nica-Pimsner_ (CNP-representation) if
\[\sigma(a)\cdot Q_{s}^{F}=\sum_{\mathbf{0}\leq\mathbf{m}\leq\mathbf{1}_{F}}(-1)^{ |\mathbf{m}|}\psi_{s}^{\mathbf{m}}(\varphi^{\mathbf{m}}(a))=0\text{ for all }F\in\mathfrak{F}\text{ and }a\in\mathcal{I}_{X}^{F}.\]
By construction, we have \(\mathcal{I}_{X}^{F}\subset\bigcap_{i=1}^{n}(\varphi^{\mathbf{1}_{i}})^{-1}(\mathcal{K}(X^{\mathbf{1}_{i}}))=\bigcap_{G\in\mathfrak{F}}(\varphi^{\mathbf{1}_{G}})^{-1}(\mathcal{K}(X^{\mathbf{1}_{G}}))\), where the last equality follows from strong compact alignment. Therefore, the inclusion \(\tau_{X}(\mathcal{I}_{X}^{F})\cdot Q_{t_{X}}^{F}\subset\mathcal{NT}(X)\) holds by Lemma 2.11.(1) and these subspaces generate a gauge-invariant ideal \(\mathcal{C}_{\mathcal{I}_{X}}\) in \(\mathcal{NT}(X)\). A Nica-covariant representation \((\sigma,s)\) is CNP if and only if \(\sigma\times_{0}s\) factors through \(\mathcal{NT}(X)/\mathcal{C}_{\mathcal{I}_{X}}\). We use the notation \((\omega_{X},o_{X})\) for the representation of \((A,X)\) on the _Cuntz-Nica-Pimsner_ algebra \(\mathcal{NO}(X)\coloneqq\mathcal{NT}(X)/\mathcal{C}_{\mathcal{I}_{X}}\). By the above, it is universal with respect to CNP-representations. If \((\sigma,s)\) is a CNP-representation on \(D\), then we denote by \(\sigma\times s\) the induced map \(\mathcal{NO}(X)\to D\).
In our proofs, we need the following lemma.
**Lemma 2.13**.: _Let \((\sigma,s)\) be a Nica-covariant representation of \((A,X)\) such that \(\sigma\) is injective. Suppose that \(\sigma(a)Q_{s}^{F}=0\) holds for some \(a\in\bigcap_{i=1}^{n}(\varphi^{\mathbf{1}_{i}})^{-1}(\mathcal{K}(X^{\mathbf{1} _{i}}))\subset A\) and \(F\in\mathfrak{F}\). Then, \(a\) is an element of \(\mathcal{I}^{F}\)._
Proof.: The proof of [6, Proposition 3.4] uses only the assumptions on \(a\) from the lemma. Therefore, it applies to this situation and we conclude that \(a\in\mathcal{I}^{F}\).
**Proposition 2.14** (Gauge-invariant uniqueness theorem, [6, Theorem 4.2]).: _Suppose that \((\sigma,s)\) is a CNP-representation of \((A,X)\). The map \(\sigma\times s\) is a faithful representation of \(\mathcal{NO}(X)\) if and only if \(\sigma\) is faithful and \((\sigma,s)\) admits a gauge action._
**Proposition 2.15** (Co-universal property, [6, Corollary 4.7]).: _Let \((\sigma,s)\) be a Nica-covariant representation of \((A,X)\) on \(D\) such that it admits a gauge action, \(\sigma\) is faithful, and \(\sigma\times_{0}s\) is surjective. Then, there is a unique surjective homomorphism \(\Omega_{\sigma,s}\colon D\to\mathcal{NO}(X)\) such that \((\omega_{X},o_{X})=\Omega_{\sigma,s}\circ(\sigma,s)\)._
The co-universal property was proven in increasing levels of generality for product systems over more general classes of semigroups in [5, 7, 25].
## 3. Invariant ideals
In this section we determine which ideals of the base algebra arise as restrictions of gauge-invariant ideals of the CNP-algebra. All the definitions and results of this section are a direct generalization of Katsura's results in [13, Section 4].
Consider a strongly compactly aligned \(\mathbb{N}^{n}\)-product system \((B,Y)\). Let \((\omega,o)\) be the representation of \((B,Y)\) on the CNP-algebra \(\mathcal{NO}(Y)\). We use \(\mathbb{I}(B)\) to denote the set of all ideals of \(B\), and \(\mathbb{I}^{\gamma}(\mathcal{NO}(Y))\) to denote the set of all gauge-invariant ideals of \(\mathcal{NO}(Y)\). We define the restriction \(-^{r}\colon\mathbb{I}^{\gamma}(\mathcal{NO}(Y))\to\mathbb{I}(B)\) and induction \(-^{i}\colon\mathbb{I}(B)\to\mathbb{I}^{\gamma}(\mathcal{NO}(Y))\) maps by
\[J^{r}=\omega^{-1}(J)\quad\text{and}\quad I^{i}=\overline{\mathcal{NO}(Y)\omega (I)\mathcal{NO}(Y)}\]
for \(J\in\mathbb{I}^{\gamma}(\mathcal{NO}(Y))\) and \(I\in\mathbb{I}(B)\) (see [10, Section 3] for more properties of these maps).
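For instance, one always has \(I\subset(I^{i})^{r}\) and \((J^{r})^{i}\subset J\); more precisely, \(I^{i}\subset J\) if and only if \(I\subset J^{r}\), so the two maps form a monotone Galois connection between \(\mathbb{I}(B)\) and \(\mathbb{I}^{\gamma}(\mathcal{NO}(Y))\).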
**Definition 3.1**.: An ideal \(I\in\mathbb{I}(B)\) is said to be \(Y\)_-invariant_ if it can be expressed as \(I=J^{r}\) for some \(J\in\mathbb{I}^{\gamma}(\mathcal{NO}(Y))\). If such an ideal \(J\) is unique, \(I\) is called \(Y\)_-separating_. When the context is clear, we simply use the terms _invariant_ and _separating_ instead of \(Y\)-invariant and \(Y\)-separating. We denote the sets of separating and invariant ideals by \(\mathbb{I}_{Y}^{s}(B)\subset\mathbb{I}_{Y}^{i}(B)\subset\mathbb{I}(B)\).
Our goal is to characterize the sets \(\mathbb{I}_{Y}^{s}(B)\) and \(\mathbb{I}_{Y}^{i}(B)\).
**Definition 3.2**.: An ideal \(I\subset B\) is called _positively \(Y\)-invariant_ if \(IY^{\mathbf{1}_{i}}\subset Y^{\mathbf{1}_{i}}I\) for all \(i\in[n]\).
From positive invariance, it automatically follows that \(IY^{\mathbf{m}}\subset Y^{\mathbf{m}}I\) for all \(\mathbf{m}\in\mathbb{N}^{n}\). It is easy to see that in the rank one case, Definition 3.2 coincides with Katsura's definition of positive invariance [13, Definition 4.8].
**Proposition 3.3**.: _An invariant ideal \(I\in\mathbb{I}_{Y}^{i}(B)\) is positively invariant._
Proof.: Let \(J\in\mathbb{I}^{\gamma}(\mathcal{NO}(Y))\) be such that \(I=J^{r}\). Consider the quotient representation \((\sigma,s)=([-]_{J}\circ\omega,[-]_{J}\circ o)\) of \((B,Y)\) on \(\mathcal{NO}(Y)/J\). Observe that the kernel of \(\sigma\) is \(I\). Then, for all \(i\in[n]\), \(x,y\in Y^{\mathbf{1}_{i}}\), and \(b\in I\), we have
\[0=s(x)^{*}\sigma(b)s(y)=\sigma(\langle x,b\cdot y\rangle).\]
Hence, we have \(\langle x,b\cdot y\rangle\in I\). We obtain \(IY^{\mathbf{1}_{i}}\subset Y^{\mathbf{1}_{i}}I\) by Lemma 2.4. We conclude that \(I\) is positively invariant.
**Proposition 3.4**.: _Let \(I\subset B\) be a positively invariant ideal. The left action of \(B\) on \(Y^{\mathbf{m}}\) descends to an action of \(B_{I}=B/I\) on \(Y^{\mathbf{m}}_{I}\). This turns \(Y_{I}=(Y^{\mathbf{m}}_{I})_{\mathbf{m}\in\mathbb{N}^{n}}\) into a strongly compactly aligned \(\mathbb{N}^{n}\)-product system over \(B_{I}\)._
Proof.: Consider an arbitrary element \(b\in I\). Then, \(\varphi^{\mathbf{m}}_{Y}(b)Y^{\mathbf{m}}\subset Y^{\mathbf{m}}I\) for all \(\mathbf{m}\in\mathbb{N}^{n}\). Hence, we have \([\varphi^{\mathbf{m}}_{Y}(b)]_{I}=0\) by Lemma 2.3. Therefore, \(I\) is in the kernel of the map \([-]_{I}\circ\varphi^{\mathbf{m}}_{Y}\colon B\to\mathcal{L}(Y^{\mathbf{m}}_{I})\), so that it descends to a map \(\varphi^{\mathbf{m}}_{Y_{I}}\colon B_{I}\to\mathcal{L}(Y^{\mathbf{m}}_{I})\). This proves the first claim.
We now show that \(Y_{I}\) is a product system. For any \(\mathbf{m},\mathbf{k}\in\mathbb{N}^{n}\setminus\{\mathbf{0}\}\), there is a unitary multiplication map \(\mu=\mu^{\mathbf{m},\mathbf{k}}_{Y}\colon Y^{\mathbf{m}}\otimes_{B}Y^{\mathbf{k}}\to Y^{\mathbf{m}+\mathbf{k}}\). The map \(\mu\otimes\operatorname{id}_{B_{I}}\colon Y^{\mathbf{m}}\otimes_{B}Y^{\mathbf{k}}_{I}=Y^{\mathbf{m}}\otimes_{B}Y^{\mathbf{k}}\otimes_{B}B_{I}\to Y^{\mathbf{m}+\mathbf{k}}_{I}\) is also unitary. Moreover, we have \(Y^{\mathbf{m}}I\otimes_{B}Y^{\mathbf{k}}=Y^{\mathbf{m}}\otimes_{B}IY^{\mathbf{k}}\subset Y^{\mathbf{m}}\otimes_{B}Y^{\mathbf{k}}I\), so that \(Y^{\mathbf{m}}I\otimes_{B}Y^{\mathbf{k}}\otimes_{B}B_{I}=0\) and hence \(Y^{\mathbf{m}}\otimes_{B}Y^{\mathbf{k}}_{I}\cong Y^{\mathbf{m}}_{I}\otimes_{B_{I}}Y^{\mathbf{k}}_{I}\) as \(B\)-\(B_{I}\)-correspondences. Therefore, \(\mu\otimes\operatorname{id}_{B_{I}}\) defines a unitary multiplication map \(\mu^{\mathbf{m},\mathbf{k}}_{Y_{I}}\colon Y^{\mathbf{m}}_{I}\otimes_{B_{I}}Y^{\mathbf{k}}_{I}\to Y^{\mathbf{m}+\mathbf{k}}_{I}\). This map is given by \(\mu^{\mathbf{m},\mathbf{k}}_{Y_{I}}([x]_{I}\otimes[y]_{I})=[\mu(x\otimes y)]_{I}\). The associativity of the multiplication is trivial from this formula and the associativity of \(\mu\).
We now use Lemma 2.3 to show that \(Y_{I}\) is strongly compactly aligned. Let \(S\in\mathcal{K}(Y^{\mathbf{m}}_{I})\) and \(T\in\mathcal{K}(Y^{\mathbf{k}}_{I})\). Then, we may find \(S^{\prime}\in\mathcal{K}(Y^{\mathbf{m}})\) and \(T^{\prime}\in\mathcal{K}(Y^{\mathbf{k}})\) such that \(S=[S^{\prime}]_{I}\) and \(T=[T^{\prime}]_{I}\). Then, we have \(S\lor T=[S^{\prime}\lor T^{\prime}]_{I}\in\mathcal{K}(Y^{\mathbf{m}\lor\mathbf{ k}}_{I})\). This proves that \(Y_{I}\) is compactly aligned.
Analogously, for strong compact alignment we consider \(T\in\mathcal{K}(Y^{\mathbf{m}}_{I})\) and choose its preimage \(T^{\prime}\in\mathcal{K}(Y^{\mathbf{m}})\). Then, for all \(i\notin\operatorname{supp}\mathbf{m}\), we have \(\iota^{\mathbf{m}+\mathbf{1}_{i}}_{\mathbf{m}}(T)=[\iota^{\mathbf{m}+\mathbf{1 }_{i}}_{\mathbf{m}}(T^{\prime})]_{I}\in\mathcal{K}(Y^{\mathbf{m}+\mathbf{1}_{ i}}_{I})\). We conclude that \(Y_{I}\) is a strongly compactly aligned \(\mathbb{N}^{n}\)-product system over \(B_{I}\).
Consider a representation \((\sigma,s)\) of \((B_{I},Y_{I})\). The pair \((\sigma\circ[-]_{I},s\circ[-]_{I})\) forms a representation of \((B,Y)\).
**Proposition 3.5**.: _The mapping \((\sigma,s)\mapsto(\sigma\circ[-]_{I},s\circ[-]_{I})\) defines a bijection between the set of representations of \((B_{I},Y_{I})\) and the set of representations \((\sigma^{\prime},s^{\prime})\) of \((B,Y)\) with kernel of \(\sigma^{\prime}\) containing \(I\). Moreover, the following statements hold:_
1. _The representation_ \((\sigma,s)\) _admits a gauge action if and only if_ \((\sigma\circ[-]_{I},s\circ[-]_{I})\) _does._
_._
2. _For all_ \(T\in\mathcal{K}(Y^{\mathbf{m}})\)_, we have_ \(\psi_{s\circ[-]_{I}}(T)=\psi_{s}([T]_{I})\)_. Therefore, the equalities_ \(p^{\mathbf{m}}_{s\circ[-]_{I}}=p^{\mathbf{m}}_{s}\)_,_ \(Q^{F}_{s\circ[-]_{I}}=Q^{F}_{s}\) _and_ \(P^{F}_{s\circ[-]_{I}}=P^{F}_{s}\) _hold._
3. _The representation_ \((\sigma,s)\) _is Nica-covariant if and only if_ \((\sigma\circ[-]_{I},s\circ[-]_{I})\) _is._
Proof.: Obviously, the kernel of \(\sigma\circ[-]_{I}\) contains \(I\). Conversely, let \((\sigma^{\prime},s^{\prime})\) be a representation of \((B,Y)\) with kernel containing \(I\). Then, \(\sigma^{\prime}\) and \(s^{\prime}\) descend to maps \(\sigma^{\prime}_{I}\) and \(s^{\prime}_{I}\) on \(B_{I}\) and \(Y_{I}\), respectively. It is a routine check that the resulting pair \((\sigma^{\prime}_{I},s^{\prime}_{I})\) is a representation of \((B_{I},Y_{I})\).
The first statement is trivial. It is enough to prove the second statement for a rank-one operator \(T=\theta_{x,y}\), where \(x,y\in Y^{\mathbf{m}}\). We have \(\psi_{s\circ[-]_{I}}(T)=s([x]_{I})s([y]_{I})^{*}=\psi_{s}(\theta_{[x]_{I},[y]_{I}})=\psi_{s}([T]_{I})\). If \((k^{\mathbf{m}}_{\lambda})_{\lambda\in\Lambda}\) is a c.a.i. for \(\mathcal{K}(Y^{\mathbf{m}})\), then \(([k^{\mathbf{m}}_{\lambda}]_{I})_{\lambda\in\Lambda}\) is a c.a.i. for \(\mathcal{K}(Y^{\mathbf{m}}_{I})\). Hence, \(p^{\mathbf{m}}_{s\circ[-]_{I}}=p^{\mathbf{m}}_{s}\) for all \(\mathbf{m}\in\mathbb{N}^{n}\) and the equalities for \(Q\) and \(P\) follow from the definitions. The third statement follows immediately from the second one.
We use \((\omega_{Y_{I}},o_{Y_{I}})\) to denote the universal CNP-representation of \((B_{I},Y_{I})\) on \(\mathcal{NO}(Y_{I})\).
**Lemma 3.6**.: _A positively invariant ideal \(I\subset B\) is invariant if and only if \((\omega_{Y_{I}}\circ[-]_{I},o_{Y_{I}}\circ[-]_{I})\) is a CNP-representation of \((B,Y)\)._
Proof.: Suppose that \(I\) is invariant. Then, there exists a gauge-invariant ideal \(J\in\mathbb{I}^{\gamma}(\mathcal{NO}(Y))\) such that \(I=J^{r}\). By Proposition 3.5, the quotient representation \((\sigma,s)\) of \((B,Y)\) on \(\mathcal{NO}(Y)/J\) descends to a faithful representation \((\sigma_{I},s_{I})\) of \((B_{I},Y_{I})\) on \(\mathcal{NO}(Y)/J\). By the co-universal property (Proposition 2.15) of \(\mathcal{NO}(Y_{I})\), this representation defines a canonical epimorphism \(\mathcal{NO}(Y)/J\to\mathcal{NO}(Y_{I})\). Together with the quotient map \(\mathcal{NO}(Y)\to\mathcal{NO}(Y)/J\), this induces a factorization of \((\omega_{Y_{I}}\circ[-]_{I},o_{Y_{I}}\circ[-]_{I})\) through \((\omega,o)\). Since \((\omega,o)\) is a CNP-representation, we conclude that \((\omega_{Y_{I}}\circ[-]_{I},o_{Y_{I}}\circ[-]_{I})\) is also a CNP-representation of \((B,Y)\).
Conversely, suppose that \((\omega_{Y_{I}}\circ[-]_{I},o_{Y_{I}}\circ[-]_{I})\) is a CNP-representation of \((B,Y)\). Since it admits a gauge action, the representation induces a gauge-equivariant epimorphism \(\mathcal{NO}(Y)\to\mathcal{NO}(Y_{I})\). Then, the kernel \(J\) of this epimorphism is a gauge-invariant ideal of \(\mathcal{NO}(Y)\) such that \(I=J^{r}\). Hence, \(I\) is invariant.
We define ideals
\[L^{i}_{I}=(Y^{\mathbf{1}_{i}})^{-1}(I)=\{b\in B\colon bY^{\mathbf{1}_{i}} \subset Y^{\mathbf{1}_{i}}I\},\]
and \(L^{F}_{I}=\bigcap_{i\in F}L^{i}_{I}\).
**Definition 3.7**.: An ideal \(I\subset B\) is called _negatively invariant_ if \(L^{F}_{I}\cap\mathcal{I}^{F}_{Y}\subset I\) for all \(F\in\mathfrak{F}\).
Again, in the rank one case, Definition 3.7 transforms to just \(Y^{-1}(I)\cap\mathcal{I}_{Y}^{\{1\}}\subset I\), which is the definition of a negatively invariant ideal in [13, Definition 4.8].
**Lemma 3.8**.: _Let \(\mathcal{J}_{Y_{I}}^{F}\) and \(\mathcal{I}_{Y_{I}}^{F}\) be the (pre)-CNP-ideals defined in (1) and (2) corresponding to \((B_{I},Y_{I})\). Then,_
\[[-]_{I}^{-1}(\mathcal{J}_{Y_{I}}^{F})=\{b\in B\colon[\varphi_{Y}^{\mathbf{1}_{ i}}(b)]_{I}\in\mathcal{K}(Y_{I}^{\mathbf{1}_{i}})\text{ for all }i\in[n]\,\text{ and }bL_{I}^{F}\subset I\},\]
_and_
\[[-]_{I}^{-1}(\mathcal{I}_{Y_{I}}^{F})=\{b\in[-]_{I}^{-1}(\mathcal{J}_{Y_{I}}^{ F})\colon bY^{\mathbf{m}}\subset Y^{\mathbf{m}}([-]_{I}^{-1}(\mathcal{J}_{Y_{I}}^{F }))\text{ for all }\mathbf{m}\perp\mathbf{1}_{F}\}.\]
_Consequently, \([\mathcal{I}_{Y}^{F}]_{I}\subset\mathcal{I}_{Y_{I}}^{F}\) for all \(F\in\mathfrak{F}\) if and only if \(I\) is negatively invariant._
Proof.: Firstly, observe that \([L_{I}^{i}]_{I}=\ker\varphi_{Y_{I}}^{\mathbf{1}_{i}}\). Indeed, \([b]_{I}\) is in the kernel if and only if \(bY^{\mathbf{1}_{i}}\subset Y^{\mathbf{1}_{i}}I\). Analogously, \([b]_{I}\perp\bigcap_{i\in F}\ker\varphi_{Y_{I}}^{\mathbf{1}_{i}}\) if and only if \(bL_{I}^{F}\subset I\). The expression for \([-]_{I}^{-1}(\mathcal{J}_{Y_{I}}^{F})\) then follows from the definition of \(\mathcal{J}_{Y_{I}}^{F}\).
An element \(b\in[-]_{I}^{-1}(\mathcal{J}_{Y_{I}}^{F})\) is in \([-]_{I}^{-1}(\mathcal{I}_{Y_{I}}^{F})\) if and only if \([b]_{I}Y_{I}^{\mathbf{m}}\subset Y_{I}^{\mathbf{m}}\mathcal{J}_{Y_{I}}^{F}\) for all \(\mathbf{m}\perp\mathbf{1}_{F}\). This is equivalent to \(bY^{\mathbf{m}}\subset Y^{\mathbf{m}}([-]_{I}^{-1}(\mathcal{J}_{Y_{I}}^{F})+ I)=Y^{\mathbf{m}}([-]_{I}^{-1}(\mathcal{J}_{Y_{I}}^{F}))\). The last equality follows from the fact that \(I=[-]_{I}^{-1}(0)\subset[-]_{I}^{-1}(\mathcal{J}_{Y_{I}}^{F})\). This proves the expression for the preimage of \(\mathcal{I}_{Y_{I}}^{F}\).
Now, consider an arbitrary \(b\in\mathcal{I}_{Y}^{F}\). Then, the condition \([\varphi_{Y}^{\mathbf{1}_{i}}(b)]_{I}\in\mathcal{K}(Y_{I}^{\mathbf{1}_{i}})\) is always satisfied. Observe that since \(\mathcal{I}_{Y}^{F}Y^{\mathbf{m}}\subset Y^{\mathbf{m}}\mathcal{I}_{Y}^{F}\) for all \(\mathbf{m}\perp\mathbf{1}_{F}\), we have \([\mathcal{I}_{Y}^{F}]_{I}\subset\mathcal{I}_{Y_{I}}^{F}\) if and only if \([\mathcal{I}_{Y}^{F}]_{I}\subset\mathcal{J}_{Y_{I}}^{F}\). Therefore, \([\mathcal{I}_{Y}^{F}]_{I}\subset\mathcal{I}_{Y_{I}}^{F}\) if and only if \(bL_{I}^{F}\subset I\) for all \(b\in\mathcal{I}_{Y}^{F}\) or, equivalently, \(\mathcal{I}_{Y}^{F}L_{I}^{F}=L_{I}^{F}\cap\mathcal{I}_{Y}^{F}\subset I\), which is exactly the definition of negative invariance. This proves the second statement.
**Theorem 3.9**.: _An ideal \(I\subset B\) is invariant if and only if it is positively invariant and negatively invariant._
Proof.: Suppose that \(I\) is invariant. Then, \(I\) is positively invariant by Proposition 3.3 and \((\omega_{Y_{I}}\circ[-]_{I},o_{Y_{I}}\circ[-]_{I})\) is a CNP-representation of \((B,Y)\) by Lemma 3.6. To prove that \(I\) is negatively invariant, by Lemma 3.8, it suffices to show that \([\mathcal{I}_{Y}^{F}]_{I}\subset\mathcal{I}_{Y_{I}}^{F}\) for all \(F\in\mathfrak{F}\).
Let \(b\in\mathcal{I}_{Y}^{F}\) be arbitrary. Since \((\omega_{Y_{I}}\circ[-]_{I},o_{Y_{I}}\circ[-]_{I})\) is a CNP-representation, we have \(\omega_{Y_{I}}([b]_{I})Q_{o_{Y_{I}}\circ[-]_{I}}^{F}=0\). By Proposition 3.5, we have \(Q_{o_{Y_{I}}\circ[-]_{I}}^{F}=Q_{o_{Y_{I}}}^{F}\). Hence, we have \(\omega_{Y_{I}}([b]_{I})Q_{o_{Y_{I}}}^{F}=0\). Since \(b\in\mathcal{I}_{Y}^{F}\subset\bigcap_{i=1}^{n}(\varphi^{\mathbf{1}_{i}})^{-1}(\mathcal{K}(Y^{\mathbf{1}_{i}}))\) by definition, we can apply Lemma 2.13 and get \([b]_{I}\in\mathcal{I}_{Y_{I}}^{F}\). We conclude that \([\mathcal{I}_{Y}^{F}]_{I}\subset\mathcal{I}_{Y_{I}}^{F}\) and thus \(I\) is negatively invariant.
Conversely, suppose that \(I\) is positively and negatively invariant. Then, by Lemma 3.8, \([\mathcal{I}_{Y}^{F}]_{I}\subset\mathcal{I}_{Y_{I}}^{F}\) for all \(F\in\mathfrak{F}\). This implies that any \(b\in\mathcal{I}_{Y}^{F}\) satisfies the equation
\[\omega_{Y_{I}}([b]_{I})Q_{o_{Y_{I}}}^{F}=\omega_{Y_{I}}([b]_{I})Q_{o_{Y_{I}} \circ[-]_{I}}^{F}=0.\]
Hence, \((\omega_{Y_{I}}\circ[-]_{I},o_{Y_{I}}\circ[-]_{I})\) is a CNP-representation of \((B,Y)\). By Lemma 3.6, \(I\) is invariant.
**Proposition 3.10**.: _Let \(I\subset B\) be a positively invariant ideal. Suppose that \([\mathcal{I}_{Y}^{F}]_{I}=\mathcal{I}_{Y_{I}}^{F}\) for all \(F\in\mathfrak{F}\). Then, \(I\) is separating._
Proof.: Let \(J_{\min}\) be the intersection of all ideals \(J\in\mathbb{I}^{\gamma}(\mathcal{N}\mathcal{O}(Y))\) such that \(I=J^{r}\). Then, \(J_{\min}\) is gauge-invariant, \(J_{\min}^{r}=I\) and \(J_{\min}\) is the minimal ideal with these properties. Let \(F\in\mathfrak{F}\) and \(a\in\mathcal{I}_{Y_{I}}^{F}\) be an arbitrary element. By assumption, there exists \(\hat{a}\in\mathcal{I}_{Y}^{F}\) such that \([\hat{a}]_{I}=a\). Therefore, we have
\[0=[\omega_{Y}(\hat{a})Q_{o_{Y}}^{F}]_{J_{\min}}=\omega^{\prime}(a)Q_{o^{\prime} }^{F}\in\mathcal{N}\mathcal{O}(Y)/J_{\min},\]
where \((\omega^{\prime},o^{\prime})\) is the representation of \((B_{I},Y_{I})\) on \(\mathcal{NO}(Y)/J_{\min}\) induced by \((\omega_{Y},o_{Y})\). Since \(F\) and \(a\) were arbitrary, we conclude that \((\omega^{\prime},o^{\prime})\) is a CNP-representation of \((B_{I},Y_{I})\). Since \(\omega^{\prime}\) is injective and \((\omega^{\prime},o^{\prime})\) admits a gauge action, the gauge-invariant uniqueness theorem (Proposition 2.14) gives an isomorphism \(\mathcal{NO}(Y)/J_{\min}\cong\mathcal{NO}(Y_{I})\).
Suppose that \(J\in\mathbb{I}^{\gamma}(\mathcal{NO}(Y))\) is some other ideal such that \(I=J^{r}\). Then, the representation of \((B_{I},Y_{I})\) on \(\mathcal{NO}(Y)/J\cong(\mathcal{NO}(Y)/J_{\min})/(J/J_{\min})\) is also injective and admits a gauge action. By the GIUT, we conclude that \(J/J_{\min}=0\), that is, \(J=J_{\min}\). Hence, \(I\) is separating.
**Corollary 3.11**.: _Let \((B,Y)\) be a regular (proper and injective) \(\mathbb{N}^{n}\)-product system. Then, any invariant ideal \(I\subset B\) is separating. Consequently, \(I\mapsto I^{i}\) is a bijection \(\mathbb{I}_{Y}^{i}(B)\to\mathbb{I}^{\gamma}(\mathcal{N}\mathcal{O}(Y))\)._
Proof.: Since \(Y\) is regular, we have \(\mathcal{I}_{Y}^{F}=B\) for all \(F\in\mathfrak{F}\). Let \(I\subset B\) be an invariant ideal. Then, \(I\) is negatively invariant and by Lemma 3.8, \(B_{I}=[\mathcal{I}_{Y}^{F}]_{I}\subset\mathcal{I}_{Y_{I}}^{F}\) for all \(F\in\mathfrak{F}\). Hence, we have \([\mathcal{I}_{Y}^{F}]_{I}=\mathcal{I}_{Y_{I}}^{F}\) for all \(F\in\mathfrak{F}\) and by Proposition 3.10, \(I\) is separating. The rest of the proof is straightforward.
_Remark_.: In case of regular \(\mathbb{N}^{n}\)-product systems, the set of invariant ideals has a particularly nice description: an ideal \(I\subset B\) is invariant (and separating) if and only if \(I=(Y^{1_{i}})^{-1}(I)\) for all \(i\in[n]\).
## 4. Gauge-invariant ideals
Let \((A,X)\) be a proper product system of rank \(n\). This means that the left action of \(A\) is given by homomorphisms \(\varphi^{\mathbf{m}}\colon A\to\mathcal{K}(X^{\mathbf{m}})\). Such a system is always strongly compactly aligned. We denote by \((\tau,t)\) the representation of \((A,X)\) on \(\mathcal{NT}(X)\).
### Invariant families, T-families, and O-families of ideals
**Definition 4.1**.: A collection \(K=\{K^{F}\}_{F\in\mathfrak{F}}\) of ideals of \(A\) is an _invariant family_ if it satisfies the condition
\[K^{G}=(X^{1_{i}})^{-1}(K^{G}\cap K^{G\cup\{i\}})\]
for all \(G\in\mathfrak{F}\) and \(i\in[n]\setminus G\). We write \(K_{1}\preceq K_{2}\) if \(K_{1}^{F}\subset K_{2}^{F}\) for all \(F\in\mathfrak{F}\).
**Definition 4.2**.: A collection \(I=\{I^{F}\}_{F\in\mathfrak{F}}\) of ideals of \(A\) is a _T-family_ if it satisfies the condition
\[I^{F}=(X^{1_{i}})^{-1}(I^{F})\cap I^{F\cup\{i\}}\]
for all \(F\in\mathfrak{F}\) and \(i\in[n]\setminus F\). We write \(I_{1}\preceq I_{2}\) if \(I_{1}^{F}\subset I_{2}^{F}\) for all \(F\in\mathfrak{F}\). A T-family \(I\) is called an _O-family_ if \(\mathcal{I}\preceq I\).
It is easy to see that in the case \(n=1\), the notions of T-families and O-families coincide with Katsura's T-pairs and O-pairs [13].
For an invariant family \(K\) we define a family \(I_{K}\) by \(I_{K}^{F}\coloneqq\bigcap_{G\supset F}K^{G}\). Analogously, for a T-family \(I\) we define a family \(K_{I}\) by
\[K_{I}^{F}\coloneqq(X^{\mathbf{1}-\mathbf{1}_{F}})^{-1}(I^{F}).\]
Here and in the following, by \((X^{\mathbf{0}})^{-1}\) we mean the identity map on the ideals of \(A\).
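For instance, when \(n=1\) these assignments read \(I_{K}^{\emptyset}=K^{\emptyset}\cap K^{\{1\}}\) and \(I_{K}^{\{1\}}=K^{\{1\}}\), respectively \(K_{I}^{\emptyset}=(X^{\mathbf{1}_{1}})^{-1}(I^{\emptyset})\) and \(K_{I}^{\{1\}}=I^{\{1\}}\).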
**Lemma 4.3**.: _Let \(I\) be a T-family, let \(K\) be an invariant family, and let \(F\subset H\) be finite subsets of \([n]\). Then, we have_
1. \[\bigcap_{G,H\supset G\supset F}(X^{\mathbf{1}_{H}-\mathbf{1}_{G}})^{-1}(I^{G} )=I^{F};\]
2. \[(X^{\mathbf{1}_{H}-\mathbf{1}_{F}})^{-1}\left(\bigcap_{G,H\supset G\supset F }K^{G}\right)=K^{F}.\]
Proof.: We prove both statements simultaneously by induction on \(k=|H|-|F|\). If \(k=0\), then \(H=F\) and the statements are trivial.
Suppose that \(k>0\) and the statements hold for all \(H^{\prime}\) with \(|H^{\prime}|-|F|<k\). Let \(i\) be any element in \(H\setminus F\) and let \(H^{\prime}=H\setminus\{i\}\). Then, we have
\[\bigcap_{G,H\supset G\supset F}(X^{\mathbf{1}_{H}-\mathbf{1}_{G}})^{-1}(I^{G })=\bigcap_{G,H^{\prime}\supset G\supset F}(X^{\mathbf{1}_{H^{\prime}}- \mathbf{1}_{G}})^{-1}((X^{\mathbf{1}_{i}})^{-1}(I^{G})\cap I^{G\cup\{i\}})=\]
\[=\bigcap_{G,H^{\prime}\supset G\supset F}(X^{\mathbf{1}_{H^{\prime}}- \mathbf{1}_{G}})^{-1}(I^{G})=I^{F}.\]
Here we used Lemma 2.5 in the first equality and the induction hypothesis in the last one. The second statement is proved analogously.
**Proposition 4.4**.: _The maps \(K\mapsto I_{K}\) and \(I\mapsto K_{I}\) are mutually inverse lattice isomorphisms between the set of invariant families and the set of T-families._
Proof.: We first show that \(I_{K}\) is a T-family. For \(F\in\mathfrak{F}\) and \(i\notin F\), we compute
\[(X^{\mathbf{1}_{i}})^{-1}\left(I_{K}^{F}\right)=(X^{\mathbf{1}_{i}})^{-1}( \bigcap_{G\supset F}K^{G})=\bigcap_{G\supset F,i\notin G}(X^{\mathbf{1}_{i}}) ^{-1}(K^{G}\cap K^{G\cup\{i\}})=\bigcap_{G\supset F,i\notin G}K^{G}\]
and
\[(X^{\mathbf{1}_{i}})^{-1}(I_{K}^{F})\cap I_{K}^{F\cup\{i\}}=\bigcap_{G \supset F,i\notin G}K^{G}\cap\bigcap_{G\supset F\cup\{i\}}K^{G}=\bigcap_{G \supset F}K^{G}=I_{K}^{F}.\]
Now, we show that \(K_{I}\) is an invariant family. For all \(i\notin G\), we have
\[(X^{\mathbf{1}_{i}})^{-1}(K_{I}^{G}\cap K_{I}^{G\cup\{i\}})=(X^{\mathbf{1}_{i}})^{-1}((X^{\mathbf{1}-\mathbf{1}_{G}})^{-1}(I^{G})\cap(X^{\mathbf{1}-\mathbf{1}_{G\cup\{i\}}})^{-1}(I^{G\cup\{i\}}))=\]
\[=(X^{\mathbf{1}-\mathbf{1}_{G}})^{-1}\left((X^{\mathbf{1}_{i}})^{-1}(I^{G}) \cap I^{G\cup\{i\}}\right)=(X^{\mathbf{1}-\mathbf{1}_{G}})^{-1}(I^{G})=K_{I}^{ G}.\]
Here, the second equality follows from Lemma 2.5.
Our next goal is to show that \(I_{K_{I}}=I\). Indeed, we have
\[I_{K_{I}}^{F}=\bigcap_{G\supset F}(X^{\mathbf{1}-\mathbf{1}_{G}})^{-1}(I^{G}),\]
which is equal to \(I^{F}\) by the special case \(H=[n]\) of Lemma 4.3.
Finally, we show that \(K_{I_{K}}=K\). Analogously, we have
\[K_{I_{K}}^{F}=(X^{\mathbf{1}-\mathbf{1}_{F}})^{-1}\left(\bigcap_{G\supset F}K^{G} \right)=K^{F}\]
by Lemma 4.3. This completes the proof.
### Extended product system
Let \(K\) be an invariant family. Define a \(C^{*}\)-algebra \(\boldsymbol{A}_{K}\coloneqq\bigoplus_{F\in\mathfrak{F}}A/K^{F}\) together with a diagonal morphism \(\Delta_{K}\colon A\to\boldsymbol{A}_{K}\) given by \(\Delta_{K}(a)=([a]_{K^{F}})_{F\in\mathfrak{F}}\). Consider the induced Hilbert \(\boldsymbol{A}_{K}\)-modules
\[\boldsymbol{X}_{K}^{\mathbf{m}}\coloneqq X^{\mathbf{m}}\otimes_{A}\boldsymbol {A}_{K}\cong\bigoplus_{F\in\mathfrak{F}}X_{K^{F}}^{\mathbf{m}}.\]
The last isomorphism follows from Lemma 2.1.
_Notation_.: In the following, we will frequently face the situation when we have some vector space \(V\) and a family of subspaces \(W^{F}\subset V\) indexed by \(F\in\mathfrak{F}\). We use bold letters like \(\boldsymbol{v}\) for elements of \(\boldsymbol{V}_{W}=\bigoplus_{F\in\mathfrak{F}}V/W^{F}\). For \(\boldsymbol{v}\in\boldsymbol{V}_{W}\), we denote by \(\boldsymbol{v}_{F}\in V/W^{F}\) the component of \(\boldsymbol{v}\) corresponding to \(F\). For an element \(v\in V/W^{F}\), we denote by \(\hat{v}\) an arbitrary lift of \(v\) to \(V\).
Suppose that there are subspaces \(\boldsymbol{U}^{F}\subset V/W^{F}\) for all \(F\in\mathfrak{F}\). We denote by \(\boldsymbol{U}=\bigoplus_{F\in\mathfrak{F}}\boldsymbol{U}^{F}\subset\boldsymbol{V}_{W}\) their direct sum. Conversely, if \(\boldsymbol{U}\subset\boldsymbol{V}_{W}\) is a subspace of the above form, then we denote by \(\boldsymbol{U}^{F}\) its \(F\)-component.
As an example of the above notation, consider the case \(V=X^{\mathbf{m}}\) and \(W^{F}=X^{\mathbf{m}}K^{F}\). Then, we use \(\boldsymbol{x},\boldsymbol{y}\) for elements of \(\boldsymbol{X}_{K}^{\mathbf{m}}=\bigoplus_{F\in\mathfrak{F}}X_{K^{F}}^{\mathbf{m}}\) and \(\boldsymbol{\hat{x}}_{F}\) stands for an element of \(X^{\mathbf{m}}\) such that \([\boldsymbol{\hat{x}}_{F}]_{K^{F}}\) is the \(F\)-component of \(\boldsymbol{x}\).
We would like to define a left action of \(\boldsymbol{A}_{K}\) on \(\boldsymbol{X}_{K}\) to obtain a product system \((\boldsymbol{A}_{K},\boldsymbol{X}_{K})\). By Lemma 2.1, we have \(\mathcal{K}(\boldsymbol{X}_{K}^{\mathbf{m}})=\bigoplus_{F\in\mathfrak{F}} \mathcal{K}(X_{K^{F}}^{\mathbf{m}})=\bigoplus_{F\in\mathfrak{F}}\mathcal{K}(X^ {\mathbf{m}})/\mathcal{K}(X^{\mathbf{m}}K^{F})\).
**Lemma 4.5**.: _Let \(F\in\mathfrak{F}\) and \(\mathbf{m}\in\mathbb{N}^{n}\). We have \(\varphi^{\mathbf{m}}(K^{F\setminus\operatorname{supp}\mathbf{m}})\subset \mathcal{K}(X^{\mathbf{m}}K^{F})\) and, hence, the expression_
\[b\mapsto[\varphi^{\mathbf{m}}(\hat{b})]_{K^{F}}\in\mathcal{K}(X_{K^{F}}^{ \mathbf{m}})\]
_is well-defined for any \(b=[\hat{b}]_{K^{F\setminus\operatorname{supp}\mathbf{m}}}\in A/K^{F\setminus \operatorname{supp}\mathbf{m}}\). It defines a homomorphism \(A/K^{F\setminus\operatorname{supp}\mathbf{m}}\to\mathcal{K}(X_{K^{F}}^{ \mathbf{m}})\)._
Proof.: The inclusion \(\varphi^{\mathbf{m}}(K^{F\setminus\operatorname{supp}\mathbf{m}})\subset\mathcal{K}(X^{\mathbf{m}}K^{F})\) is equivalent to \(K^{F\setminus\operatorname{supp}\mathbf{m}}X^{\mathbf{m}}\subset X^{\mathbf{m}}K^{F}\) and to \((X^{\mathbf{m}})^{-1}(K^{F})\supset K^{F\setminus\operatorname{supp}\mathbf{m}}\). We will show this by induction on \(|\mathbf{m}|\). Suppose that \(\mathbf{m}=\mathbf{1}_{i}\) for some \(i\in[n]\). Let us prove that \((X^{\mathbf{1}_{i}})^{-1}(K^{F})\supset K^{F\setminus\{i\}}\). Indeed, if \(i\in F\), then \((X^{\mathbf{1}_{i}})^{-1}(K^{F})\supset(X^{\mathbf{1}_{i}})^{-1}(K^{F}\cap K^{F\setminus\{i\}})=K^{F\setminus\{i\}}\) by the definition of invariant family. Analogously, if \(i\notin F\), then \((X^{\mathbf{1}_{i}})^{-1}(K^{F})\supset(X^{\mathbf{1}_{i}})^{-1}(K^{F}\cap K^{F\cup\{i\}})=K^{F}=K^{F\setminus\{i\}}\). This establishes the base case of the induction.
For induction step, write \(X^{\mathbf{m}}=X^{\mathbf{m}-\mathbf{1}_{i}}\otimes_{A}X^{\mathbf{1}_{i}}\) for some \(i\in\operatorname{supp}\mathbf{m}\). Then, we have
\[(X^{\mathbf{m}})^{-1}(K^{F})=(X^{\mathbf{m}-\mathbf{1}_{i}})^{-1}((X^{\mathbf{1 }_{i}})^{-1}(K^{F}))\supset(X^{\mathbf{m}-\mathbf{1}_{i}})^{-1}(K^{F\setminus \{i\}})\supset\]
\[\supset K^{(F\setminus\{i\})\setminus\operatorname{supp}(\mathbf{m}-\mathbf{1} _{i})}=K^{F\setminus\operatorname{supp}\mathbf{m}}.\]
Here we used Lemma 2.5 and the induction hypothesis twice. We have proved the first claim.
Now, the proved inclusion implies that the kernel of the homomorphism \([-]_{K^{F}}\circ\varphi^{\mathbf{m}}\colon A\to\mathcal{K}(X^{\mathbf{m}})\) contains \(K^{F\setminus\operatorname{supp}\mathbf{m}}\). Therefore, it descends to a homomorphism \(A/K^{F\setminus\operatorname{supp}\mathbf{m}}\to\mathcal{K}(X^{\mathbf{m}}_{K^{ F}})\) given by the formula in the statement.
**Proposition 4.6**.: _For any invariant family \(K\) there is a proper product system \((\boldsymbol{A}_{K},\boldsymbol{X}_{K})\) with a left action of \(\boldsymbol{A}_{K}\) given componentwise by_
\[\varphi^{\mathbf{m}}_{K}(\boldsymbol{a})_{F}=[\varphi^{\mathbf{m}}(\hat{ \boldsymbol{a}}_{F\setminus\operatorname{supp}\mathbf{m}})]_{K^{F}}\text{ for all }\boldsymbol{a}\in\boldsymbol{A}_{K},\text{ and }F\in \mathfrak{F}.\]
_That is, the left action on the \(F\)-component is given by the action of the \(F\setminus\operatorname{supp}\mathbf{m}\)-component._
Proof.: We know by Lemma 4.5 that the formula in the statement gives a well-defined homomorphism \(\varphi^{\mathbf{m}}_{K}\colon\boldsymbol{A}_{K}\to\mathcal{K}(X^{\mathbf{m}}_ {K})\). Hence, it defines a left action of \(\boldsymbol{A}_{K}\) on \(\boldsymbol{X}^{\mathbf{m}}_{K}\).
Let us analyze the tensor product \(\boldsymbol{X}^{\mathbf{k}}_{K}\otimes_{\boldsymbol{A}_{K}}\boldsymbol{X}^{\mathbf{m}}_{K}\) for \(\mathbf{k},\mathbf{m}\in\mathbb{N}^{n}\). By Lemma 2.6, we have
\[(\boldsymbol{X}^{\mathbf{k}}_{K}\otimes_{\boldsymbol{A}_{K}}\boldsymbol{X}^{\mathbf{m}}_{K})^{F}=\bigoplus_{G\in\mathfrak{F}}X^{\mathbf{k}}_{K^{G}}\otimes_{A_{K^{G}}}X^{\mathbf{m}}_{K^{F}}=X^{\mathbf{k}}_{K^{F\setminus\operatorname{supp}\mathbf{m}}}\otimes_{A_{K^{F\setminus\operatorname{supp}\mathbf{m}}}}X^{\mathbf{m}}_{K^{F}}.\]
The last equality follows from the fact that \(A_{K^{G}}\) acts by zero on \(X^{\mathbf{m}}_{K^{F}}\) unless \(G=F\setminus\operatorname{supp}\mathbf{m}\). Now, define a multiplication map \((\boldsymbol{X}^{\mathbf{k}}_{K}\otimes_{\boldsymbol{A}_{K}}\boldsymbol{X}^{\mathbf{m}}_{K})^{F}=X^{\mathbf{k}}_{K^{F\setminus\operatorname{supp}\mathbf{m}}}\otimes_{A_{K^{F\setminus\operatorname{supp}\mathbf{m}}}}X^{\mathbf{m}}_{K^{F}}\to(\boldsymbol{X}^{\mathbf{k}+\mathbf{m}}_{K})^{F}=X^{\mathbf{k}+\mathbf{m}}_{K^{F}}\) by the formula
\[x\cdot y=[\hat{x}\cdot\hat{y}]_{K^{F}}\]
for \(x\in X^{\mathbf{k}}_{K^{F\setminus\operatorname{supp}\mathbf{m}}}\) and \(y\in X^{\mathbf{m}}_{K^{F}}\). It is well-defined by Lemma 4.5. Moreover, it is unitary since it is induced by the unitary multiplication \(X^{\mathbf{k}}\otimes_{A}X^{\mathbf{m}}\to X^{\mathbf{k}+\mathbf{m}}\). This defines a multiplication isomorphism of Hilbert \(\boldsymbol{A}_{K}\)-modules \(\boldsymbol{X}^{\mathbf{k}}_{K}\otimes_{\boldsymbol{A}_{K}}\boldsymbol{X}^{\mathbf{m}}_{K}\cong\boldsymbol{X}^{\mathbf{k}+\mathbf{m}}_{K}\). It is also easy to see that the isomorphism is compatible with the left actions of \(\boldsymbol{A}_{K}\). This shows that \((\boldsymbol{A}_{K},\boldsymbol{X}_{K})\) is a product system, which is proper by construction.
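For example, when \(n=1\) and \(m\geq 1\), the formula of Proposition 4.6 says that both components of \(\boldsymbol{X}_{K}^{m}=X_{K^{\emptyset}}^{m}\oplus X_{K^{\{1\}}}^{m}\) are acted on through the \(\emptyset\)-component of \(\boldsymbol{A}_{K}=A/K^{\emptyset}\oplus A/K^{\{1\}}\), that is, \(\varphi_{K}^{m}(\boldsymbol{a})=[\varphi^{m}(\hat{\boldsymbol{a}}_{\emptyset})]_{K^{\emptyset}}\oplus[\varphi^{m}(\hat{\boldsymbol{a}}_{\emptyset})]_{K^{\{1\}}}\).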
_Remark_.: The construction of the product system \((\boldsymbol{A}_{K},\boldsymbol{X}_{K})\) above is inspired by Katsura's [13, Definition 6.1]. However, it is different from Katsura's extended product system even for rank 1 product systems and his construction does not have a straightforward generalization to the higher rank case. Katsura works solely with T-pairs of ideals, while the equivalent picture of invariant families turns out to be more useful for the higher rank case. The extended product system is our main tool for the definition of relative CNP-algebras and classification of gauge-invariant ideals.
Our next goal is to compute the CNP-ideals \(\mathcal{I}_{K}\) of \((\boldsymbol{A}_{K},\boldsymbol{X}_{K})\). We have
\[\ker\varphi^{\mathbf{1}_{i}}_{K}=\{\boldsymbol{a}\in\boldsymbol{A}_{K}\colon[ \varphi^{\mathbf{1}_{i}}(\hat{\boldsymbol{a}}_{F\setminus\{i\}})]_{K^{F}}=0 \text{ for all }F\in\mathfrak{F}\}.\]
Therefore, for any \(\boldsymbol{a}\in\ker\varphi^{\mathbf{1}_{i}}_{K}\), there is no condition on \(\boldsymbol{a}_{G}\) with \(i\in G\) and there are two conditions for \(\boldsymbol{a}_{G}\) with \(i\notin G\). These conditions are \([\varphi^{\mathbf{1}_{i}}(\hat{\boldsymbol{a}}_{G})]_{K^{G}}=0\) and \([\varphi^{\mathbf{1}_{i}}(\hat{\boldsymbol{a}}_{G})]_{K^{G\cup\{i\}}}=0\). They can be rewritten as \(\hat{\boldsymbol{a}}_{G}X^{\mathbf{1}_{i}}\subset X^{\mathbf{1}_{i}}(K^{G}\cap K^{G\cup\{i\}})\). In turn, this is equivalent to \(\hat{\boldsymbol{a}}_{G}\in(X^{\mathbf{1}_{i}})^{-1}(K^{G}\cap K^{G\cup\{i\}})=K^{G}\). Therefore, we have \(\hat{\boldsymbol{a}}_{G}\in K^{G}\) and hence \(\boldsymbol{a}_{G}=0\). Finally, we obtain
\[\ker\varphi^{\mathbf{1}_{i}}_{K}=\{\boldsymbol{a}\in\boldsymbol{A}_{K}\colon \boldsymbol{a}_{G}=0\text{ for all }G\in\mathfrak{F}\text{ with }i\notin G\}\]
and
\[\bigcap_{i\in F}\ker\varphi^{\mathbf{1}_{i}}_{K}=\{\boldsymbol{a}\in \boldsymbol{A}_{K}\colon\boldsymbol{a}_{G}=0\text{ for all }G\in\mathfrak{F}\text{ with }F\not\subset G\}\]
for any \(F\in\mathfrak{F}\).
**Lemma 4.7**.: _For any \(F\in\mathfrak{F}\), we have_
\[\mathcal{I}_{K}^{F}=\mathcal{J}_{K}^{F}=\{\boldsymbol{a}\in\boldsymbol{A}_{K} \colon\boldsymbol{a}_{G}=0\text{ for all }G\in\mathfrak{F}\text{ with }F\subset G\}.\]
Proof.: Since the product system is proper, we have
\[\mathcal{J}_{K}^{F}=\left(\bigcap_{i\in F}\ker\varphi_{K}^{\boldsymbol{1}_{i }}\right)^{\perp}=\{\boldsymbol{a}\in\boldsymbol{A}_{K}\colon\boldsymbol{a}_ {G}=0\text{ for all }G\in\mathfrak{F}\text{ with }F\subset G\}.\]
To prove the equality \(\mathcal{I}_{K}^{F}=\mathcal{J}_{K}^{F}\), it is enough to show that \(\mathcal{J}_{K}^{F}\boldsymbol{X}_{K}^{\boldsymbol{1}_{i}}\subset\boldsymbol{ X}_{K}^{\boldsymbol{1}_{i}}\mathcal{J}_{K}^{F}\) for any \(i\notin F\). Indeed, in this case we have \((X^{\boldsymbol{1}_{i}})^{-1}(\mathcal{J}_{K}^{F})\supset\mathcal{J}_{K}^{F}\) for all \(i\notin F\) and hence \((X^{\mathbf{m}})^{-1}(\mathcal{J}_{K}^{F})\supset\mathcal{J}_{K}^{F}\) for all \(\mathbf{m}\perp\boldsymbol{1}_{F}\) by Lemma 2.5. Then, the formula (2) implies \(\mathcal{I}_{K}^{F}=\mathcal{J}_{K}^{F}\).
Let \(\boldsymbol{a}\in\mathcal{J}_{K}^{F}\) and \(\boldsymbol{x}\in X_{K}^{\boldsymbol{1}_{i}}\) be arbitrary. Then, we have
\[(\boldsymbol{a}\cdot\boldsymbol{x})_{G}=\boldsymbol{a}_{G\setminus\{i\}} \cdot\boldsymbol{x}_{G}\text{ for all }G\in\mathfrak{F}.\]
This equals zero if \(F\subset G\setminus\{i\}\), which is equivalent to \(F\subset G\) since \(i\notin F\). Therefore, we have
\[\mathcal{J}_{K}^{F}\boldsymbol{X}_{K}^{\boldsymbol{1}_{i}}\subset\{ \boldsymbol{x}\in\boldsymbol{X}_{K}^{\boldsymbol{1}_{i}}\colon\boldsymbol{x} _{G}=0\text{ for all }G\in\mathfrak{F}\text{ with }F\subset G\}= \boldsymbol{X}_{K}^{\boldsymbol{1}_{i}}\mathcal{J}_{K}^{F}.\]
We conclude that \(\mathcal{I}_{K}^{F}=\mathcal{J}_{K}^{F}\).
We now classify \(\boldsymbol{X}_{K}\)-invariant ideals in \(\boldsymbol{A}_{K}\). Since \(\boldsymbol{A}_{K}\) is a direct sum of \(C^{*}\)-algebras, every ideal in \(\boldsymbol{A}_{K}\) is a direct sum of ideals in \((\boldsymbol{A}_{K})^{F}=A/K^{F}\). Moreover, ideals in \(A/K^{F}\) are in bijective correspondence with ideals in \(A\) containing \(K^{F}\). For a family of ideals \(\{N^{F}\}_{F\in\mathfrak{F}}\) of \(A\) with \(N^{F}\supset K^{F}\), we denote by \(N/K\) the ideal \(\bigoplus_{F\in\mathfrak{F}}N^{F}/K^{F}\) of \(\boldsymbol{A}_{K}\).
**Proposition 4.8**.: _An ideal \(N/K\) of \(\boldsymbol{A}_{K}\) is invariant if and only if \(N\) is an invariant family. There is a canonical isomorphism of product systems \(((\boldsymbol{A}_{K})_{N/K},(\boldsymbol{X}_{K})_{N/K})\cong(\boldsymbol{A}_ {N},\boldsymbol{X}_{N})\) and we identify those. Therefore, there is a lattice isomorphism between invariant ideals in \(\boldsymbol{A}_{K}\) and invariant families \(N\succeq K\). Moreover, every invariant ideal of \(\boldsymbol{A}_{K}\) is separating._
Proof.: Suppose \(N/K\) is an ideal of \(\boldsymbol{A}_{K}\). We will find necessary and sufficient conditions for \(N/K\) to be positively invariant and negatively invariant. Then, we will apply Theorem 3.9 to deduce the conditions for \(N/K\) to be invariant and compare them to the definition of invariant families.
**Positive invariance.** Recall that \(N/K\) is positively invariant if and only if \((N/K)\boldsymbol{X}_{K}^{\boldsymbol{1}_{i}}\subset\boldsymbol{X}_{K}^{ \boldsymbol{1}_{i}}(N/K)\) for all \(i\in[n]\). Componentwise, we have
\[((N/K)\boldsymbol{X}_{K}^{\boldsymbol{1}_{i}})^{G}=(N^{G\setminus\{i\}}/K^{G \setminus\{i\}})X^{\boldsymbol{1}_{i}}/X^{\boldsymbol{1}_{i}}K^{G}=(N^{G \setminus\{i\}}X^{\boldsymbol{1}_{i}})/(X^{\boldsymbol{1}_{i}}K^{G}\cap N^{G \setminus\{i\}}X^{\boldsymbol{1}_{i}})\]
and
\[(\boldsymbol{X}_{K}^{\boldsymbol{1}_{i}}(N/K))^{G}=(X^{\boldsymbol{1}_{i}}/X^ {\boldsymbol{1}_{i}}K^{G})(N^{G}/K^{G})=(X^{\boldsymbol{1}_{i}}N^{G}/X^{ \boldsymbol{1}_{i}}K^{G})\]
for all \(G\in\mathfrak{F}\). The former is contained in the latter if and only if \(N^{G\setminus\{i\}}X^{\boldsymbol{1}_{i}}\subset X^{\boldsymbol{1}_{i}}N^{G}\). Combining these conditions for \(G=F\) and \(G=F\cup\{i\}\), we conclude that \(N/K\) is positively invariant if and only if
\[N^{F}\subset(X^{\boldsymbol{1}_{i}})^{-1}(N^{F}\cap N^{F\cup\{i\}})\text{ for all }F\in\mathfrak{F}\text{ and }i\in[n]\setminus F. \tag{3}\]
In particular, we see that \(N/K\) is positively invariant if \(N\) is an invariant family.
**Negative invariance.** We compute the ideals \(L_{N/K}^{F}\). For \(i\in[n]\), we have
\[L_{N/K}^{i}=\{\boldsymbol{a}\in\boldsymbol{A}_{K}\colon\boldsymbol{a}_{F \setminus\{i\}}X^{\boldsymbol{1}_{i}}/X^{\boldsymbol{1}_{i}}K^{F}\subset X^{ \boldsymbol{1}_{i}}N^{F}/X^{\boldsymbol{1}_{i}}K^{F}\text{ for all }F\in\mathfrak{F}\}.\]
The inclusion \(\boldsymbol{a}_{F\setminus\{i\}}X^{\mathbf{1}_{i}}/X^{\mathbf{1}_{i}}K^{F}\subset X ^{\mathbf{1}_{i}}N^{F}/X^{\mathbf{1}_{i}}K^{F}\) is equivalent to the condition \(\hat{\boldsymbol{a}}_{F\setminus\{i\}}\in(X^{\mathbf{1}_{i}})^{-1}(N^{F})\), where \(\hat{\boldsymbol{a}}_{F\setminus\{i\}}\) is an arbitrary lift of \(\boldsymbol{a}_{F\setminus\{i\}}\) to \(A\). Therefore, we have
\[L^{i}_{N/K} =\{\boldsymbol{a}\in\boldsymbol{A}_{K}\colon\hat{\boldsymbol{a}} _{G}\in(X^{\mathbf{1}_{i}})^{-1}(N^{G}\cap N^{G\cup\{i\}})\text{ for all }G\in\mathfrak{F},i\notin G\},\] \[L^{F}_{N/K} =\{\boldsymbol{a}\in\boldsymbol{A}_{K}\colon\hat{\boldsymbol{a}} _{G}\in(X^{\mathbf{1}_{i}})^{-1}(N^{G}\cap N^{G\cup\{i\}})\text{ for all }G\in\mathfrak{F},i\in F \setminus G\},\]
and
\[L^{F}_{N/K}\cap\mathcal{I}^{F}_{K}=\{\boldsymbol{a}\in L^{F}_{N/K}\colon\boldsymbol{a}_{G}=0\text{ for all }G\in\mathfrak{F}\text{ with }F\subset G\}.\]
The condition for negative invariance \(L^{F}_{N/K}\cap\mathcal{I}^{F}_{K}\subset N/K\) is therefore equivalent to
\[(X^{\mathbf{1}_{i}})^{-1}(N^{G}\cap N^{G\cup\{i\}})\subset N^{G}\text{ for all }G\in\mathfrak{F},i\in F\setminus G.\]
Since this condition should hold for all \(F\in\mathfrak{F}\), we conclude that \(N/K\) is negatively invariant if and only if
\[(X^{\mathbf{1}_{i}})^{-1}(N^{G}\cap N^{G\cup\{i\}})\subset N^{G}\text{ for all }G\in\mathfrak{F},i\in[n]\setminus G. \tag{4}\]
**Invariance.** Combining (3) and (4), we see that \(N/K\) is invariant if and only if
\[N^{F}=(X^{\mathbf{1}_{i}})^{-1}(N^{F}\cap N^{F\cup\{i\}})\text{ for all }F\in\mathfrak{F}\text{ and }i\in[n]\setminus F.\]
This is exactly the condition for \(N\) to be an invariant family. Moreover, this shows that there is a lattice isomorphism between the lattice of invariant families containing \(K\) and the lattice of invariant ideals in \((\boldsymbol{A}_{K},\boldsymbol{X}_{K})\).
**Separation.** Let us use Proposition 3.10 to check that \(N/K\) is separating for any invariant family \(N\succeq K\). First, observe that \((\boldsymbol{A}_{K}/(N/K),\boldsymbol{X}_{K}/(N/K))\) is isomorphic to \((\boldsymbol{A}_{N},\boldsymbol{X}_{N})\). By Lemma 4.7, we know exactly what the CNP-ideals of \(\boldsymbol{A}_{K}\) and \(\boldsymbol{A}_{N}\) look like. The equality \([\mathcal{I}^{F}_{K}]_{N/K}=\mathcal{I}^{F}_{N/K}\) is trivial from the description of these ideals.
Proposition 4.8 classifies gauge-invariant ideals in \(\mathcal{N}\mathcal{O}(\boldsymbol{X}_{K})\). We will soon see that \(\mathcal{N}\mathcal{O}(\boldsymbol{X}_{K})\) is isomorphic to a certain gauge-invariant quotient of \(\mathcal{NT}(X)\). This is how we obtain a classification of gauge-invariant ideals in \(\mathcal{NT}(X)\) and \(\mathcal{N}\mathcal{O}(X)\).
### Relative Cuntz-Nica-Pimsner algebra
We assume henceforth that our product systems are proper. By Lemma 2.11.(1), for any representation \((\sigma,s)\) of a proper product system \((B,Y)\) in a \(C^{*}\)-algebra \(D\), the elements \(\sigma(a)\cdot p_{s}^{\mathbf{m}},\sigma(a)\cdot Q_{s}^{F}\), and \(\sigma(a)\cdot P_{s}^{F}\) are in \(D\) for any \(a\in B\), nonzero \(\mathbf{m}\in\mathbb{N}^{n}\), and \(F\in\mathfrak{F}\). We will use this fact without further elaboration.
**Definition 4.9**.: Let \(I\) be a T-family of ideals of \(A\). We define a gauge-invariant ideal \(\mathcal{C}_{I}\subset\mathcal{NT}(X)\) to be the one generated by the elements
\[\tau(a)\cdot Q_{t}^{F}=\sum_{\mathbf{0}\leq\mathbf{m}\leq\mathbf{1}_{F}}(-1)^{ |\mathbf{m}|}\psi_{t}^{\mathbf{m}}(\varphi^{\mathbf{m}}(a))\]
for all \(F\in\mathfrak{F}\) and \(a\in I^{F}\). We call the algebra \(\mathcal{N}\mathcal{O}(X,I)\coloneqq\mathcal{NT}(X)/\mathcal{C}_{I}\) the _\(I\)-relative Cuntz-Nica-Pimsner algebra of \((A,X)\)_.
A Nica-covariant representation \((\sigma,s)\) is called _\(I\)-Cuntz-Nica-Pimsner-covariant_ if \(\tau(a)\cdot Q_{s}^{F}=0\) for all \(F\in\mathfrak{F}\) and \(a\in I^{F}\). Equivalently, it is \(I\)-relative if and only if the induced representation \(\sigma\times_{0}s\) of \(\mathcal{NT}(X)\) factors through \(\mathcal{N}\mathcal{O}(X,I)\). We denote the corresponding representation of \(\mathcal{N}\mathcal{O}(X,I)\) by \(\sigma\times_{I}s\).
From the definition, it is obvious that \(\mathcal{NT}(X)\cong\mathcal{NO}(X,0)\) and \(\mathcal{NO}(X)\cong\mathcal{NO}(X,\mathcal{I})\), where \(\mathcal{I}\) is the CNP-family of ideals from Definition 2.12. Also, observe that \(\mathcal{C}_{I}\subset\mathcal{C}_{I^{\prime}}\) if \(I\preceq I^{\prime}\).
**Lemma 4.10**.: _The ideal \(\mathcal{C}_{I}\) is also the ideal generated by the elements \(\tau(a)\cdot P_{t}^{F}\) for all \(F\in\mathfrak{F}\) and \(a\in K_{I}^{F}\)._
Proof.: Denote by \(\mathcal{C}_{I}^{\prime}\) the ideal generated by the subspaces \(\tau(K_{I}^{F})\cdot P_{t}^{F}\). We first show that \(\mathcal{C}_{I}\subset\mathcal{C}_{I}^{\prime}\). Proposition 4.4 implies
\[I^{F}=I_{K_{I}}^{F}=\bigcap_{G\supset F}K^{G}.\]
Therefore, for any \(a\in I^{F}\) and \(G\supset F\), we have \(a\in K^{G}\) and the element \(\tau(a)\cdot P_{t}^{G}\) is in \(\mathcal{C}_{I}^{\prime}\). Thus,
\[\tau(a)\cdot Q_{t}^{F}=\tau(a)\cdot\sum_{G\supset F}P_{t}^{G}=\sum_{G\supset F}\tau(a)\cdot P_{t}^{G}\]
is also an element of \(\mathcal{C}_{I}^{\prime}\). We conclude that \(\tau(I^{F})\cdot Q_{t}^{F}\subset\mathcal{C}_{I}^{\prime}\) for all \(F\in\mathfrak{F}\) and, hence, the ideal \(\mathcal{C}_{I}\) lies inside \(\mathcal{C}_{I}^{\prime}\).
To prove the other inclusion, consider arbitrary elements \(a\in I^{F}\), \(x,y\in X^{\mathbf{1-1}_{F}}\). We have
\[t(x)\tau(a)Q_{t}^{F}t(y)^{*}=t(x)\tau(a)t(y)^{*}Q_{t}^{F}=\psi_{t}^{\mathbf{1-1}_{F}}(\rho_{xa,y})Q_{t}^{F}\in\mathcal{C}_{I}.\]
We have used that \(t(y)\) commutes with \(Q_{t}^{F}\) by Lemma 2.11. Therefore, we have an inclusion \(\psi_{t}^{\mathbf{1-1}_{F}}(\mathcal{K}(X^{\mathbf{1-1}_{F}}I^{F}))\cdot Q_{t}^{F}\subset\mathcal{C}_{I}\). By definition, we have \(K^{F}X^{\mathbf{1-1}_{F}}\subset X^{\mathbf{1-1}_{F}}I^{F}\), so \(\varphi^{\mathbf{1-1}_{F}}(K^{F})\subset\mathcal{K}(X^{\mathbf{1-1}_{F}}I^{F})\). It follows that
\[\tau(K^{F})P_{t}^{F}=\tau(K^{F})\prod_{i\notin F}p_{t}^{\mathbf{1}_{i}}\cdot Q_{t}^{F}=\psi_{t}^{\mathbf{1-1}_{F}}(\varphi^{\mathbf{1-1}_{F}}(K^{F}))\cdot Q_{t}^{F}\subset\mathcal{C}_{I}\]
and \(\mathcal{C}_{I}^{\prime}\subset\mathcal{C}_{I}\). This shows that \(\mathcal{C}_{I}=\mathcal{C}_{I}^{\prime}\) and the lemma is proved.
Lemma 4.10 is useful, since covariance conditions coming from invariant families are easier to work with. We denote the representation of \((A,X)\) on \(\mathcal{NO}(X,I_{K})\) by \((\rho_{K},r_{K})\). We extend this representation to a representation \((\bar{\rho}_{K},\bar{r}_{K})\) of \((\boldsymbol{A}_{K},\boldsymbol{X}_{K})\) on \(\mathcal{NO}(X,I_{K})\) by
\[\begin{split}\bar{\rho}_{K}(\boldsymbol{a})&=\sum_{F \in\mathfrak{F}}\rho_{K}(\boldsymbol{\hat{a}}_{F})\cdot P_{r_{K}}^{F},\\ \bar{r}_{K}^{\mathbf{m}}(\boldsymbol{x})&=\sum_{F \in\mathfrak{F}}r_{K}^{\mathbf{m}}(\boldsymbol{\hat{x}}_{F})\cdot P_{r_{K}}^{ F}.\end{split} \tag{5}\]
Recall that we have defined in Section 4.2 the diagonal homomorphism \(\Delta_{K}\colon A\to\boldsymbol{A}_{K}\) as \(\Delta_{K}(a)=([a]_{K^{F}})_{F\in\mathfrak{F}}\). We use the same notation for the diagonal map \(\Delta_{K}\colon X\to\boldsymbol{X}_{K}\) given by \(\Delta_{K}(x)=([x]_{K^{F}})_{F\in\mathfrak{F}}\).
**Lemma 4.11**.: _Formula (5) defines a gauge-equivariant CNP-representation of \((\boldsymbol{A}_{K},\boldsymbol{X}_{K})\) on \(\mathcal{NO}(X,I_{K})\). Moreover, we have \((\rho_{K},r_{K})=(\bar{\rho}_{K}\circ\Delta_{K},\bar{r}_{K}\circ\Delta_{K})\)._
Proof.: First, we show that the formula (5) does not depend on the choice of the lift \(\boldsymbol{\hat{a}}_{F}\). For this, it is enough to show that \(\rho_{K}(b)\cdot P_{r_{K}}^{F}=0\) for any \(b\in K^{F}\) and \(r_{K}^{\mathbf{m}}(x)\cdot P_{r_{K}}^{F}=0\) for any \(x\in X^{\mathbf{m}}K^{F}\). The first one is trivial, since \(\rho_{K}(b)\cdot P_{r_{K}}^{F}=[\tau(b)\cdot P_{\tau}^{F}]_{\mathcal{C}_{I_{K}}}\) and the latter
is zero by Lemma 4.10. For the second one, write \(x=y\cdot b\) for some \(y\in X^{\mathbf{m}}\) and \(b\in K^{F}\). Then, we have \(r_{K}^{\mathbf{m}}(x)\cdot P_{r_{K}}^{F}=r_{K}^{\mathbf{m}}(y)\cdot\rho_{K}(b) \cdot P_{r_{K}}^{F}=0\).
The map \(\bar{\rho}_{K}\) is a \(*\)-homomorphism, since the projections \(P_{r_{K}}^{F}\) are pairwise orthogonal and \(\rho_{K}\) is a \(*\)-homomorphism. Let us check whether \((\bar{\rho}_{K},\bar{r}_{K})\) agrees with the scalar product and the right action of \(\boldsymbol{A}_{K}\). Let \(\boldsymbol{a}\in\boldsymbol{A}_{K}\), \(\boldsymbol{x},\boldsymbol{y}\in\boldsymbol{X}_{K}^{\mathbf{m}}\) be arbitrary. We have
\[\bar{r}_{K}(\boldsymbol{x})\bar{\rho}_{K}(\boldsymbol{a})=\sum_{ F,G\in\mathfrak{F}}r_{K}(\boldsymbol{\hat{x}}_{F})\cdot P_{r_{K}}^{F}\cdot \rho_{K}(\boldsymbol{\hat{a}}_{G})\cdot P_{r_{K}}^{G}=\sum_{F,G\in\mathfrak{F }}r_{K}(\boldsymbol{\hat{x}}_{F})\rho_{K}(\boldsymbol{\hat{a}}_{G})\cdot P_{r _{K}}^{F}P_{r_{K}}^{G}\\ =\sum_{F\in\mathfrak{F}}r_{K}(\bar{x}_{F}\cdot\bar{a}_{F})\cdot P _{r_{K}}^{F}=\bar{r}_{K}(\boldsymbol{x}\cdot\boldsymbol{a}),\]
and
\[\bar{r}_{K}(\boldsymbol{x})^{*}\bar{r}_{K}(\boldsymbol{y})= \sum_{F,G\in\mathfrak{F}}P_{r_{K}}^{F}\cdot r_{K}(\boldsymbol{\hat {x}}_{F})^{*}\cdot r_{K}(\boldsymbol{\hat{y}}_{G})\cdot P_{r_{K}}^{G}=\sum_{F,G\in\mathfrak{F}}P_{r_{K}}^{F}\cdot\rho_{K}(\langle\boldsymbol{\hat{x}}_{F}, \boldsymbol{\hat{y}}_{G}\rangle)\cdot P_{r_{K}}^{G}\] \[=\sum_{F,G\in\mathfrak{F}}\rho_{K}(\langle\bar{x}_{F},\bar{y}_{G }\rangle)\cdot P_{r_{K}}^{F}\cdot P_{r_{K}}^{G}=\sum_{F,G\in\mathfrak{F}}\rho _{K}(\langle\bar{x}_{F},\bar{y}_{F}\rangle)\cdot P_{r_{K}}^{F}=\bar{\rho}_{K}( \langle\boldsymbol{x},\boldsymbol{y}\rangle).\]
In both cases, we have used the pairwise orthogonality of the \(P\)-projections and Lemma 2.11.(1).
The left action of \(\boldsymbol{A}\) is more involved. For arbitrary \(\boldsymbol{a}\in\boldsymbol{A}_{K}\), \(\boldsymbol{x}\in\boldsymbol{X}_{K}^{\mathbf{m}}\), we have
\[\bar{\rho}_{K}(\boldsymbol{a})\bar{r}_{K}(\boldsymbol{x})=\sum_{F,G\in\mathfrak{F}}\rho_{K}(\boldsymbol{\hat{a}}_{F})\cdot P_{r_{K}}^{F}\cdot r_{K}(\boldsymbol{\hat{x}}_{G})\cdot P_{r_{K}}^{G}.\]
By Lemma 2.11.(4), we have \(P_{r_{K}}^{F}\cdot r_{K}(\boldsymbol{\hat{x}}_{G})P_{r_{K}}^{G}=r_{K}( \boldsymbol{\hat{x}}_{G})P_{r_{K}}^{G}\) if \(F=G\setminus\operatorname{supp}(\mathbf{m})\) and \(0\) otherwise. Therefore, \(\bar{r}_{K}\) is a left \(\boldsymbol{A}_{K}\)-module homomorphism. A similar calculation shows that \(\bar{r}_{K}\) preserves multiplication. We conclude that \((\bar{\rho}_{K},\bar{r}_{K})\) is a representation.
To prove that it is Nica-covariant and CNP we need to determine the projections \(p_{\bar{r}_{K}}^{\mathbf{m}}\). We claim that \(p_{\bar{r}_{K}}^{\mathbf{m}}=p_{r_{K}}^{\mathbf{m}}\). Indeed, observe that
\[p_{r_{K}}^{\mathbf{m}}\cdot\bar{r}_{K}^{\mathbf{m}}(\boldsymbol{x})=\sum_{F\in \mathfrak{F}}p_{r_{K}}^{\mathbf{m}}r_{K}^{\mathbf{m}}(\boldsymbol{\hat{x}}_{F} )\cdot P_{r_{K}}^{F}=\bar{r}_{K}^{\mathbf{m}}(\boldsymbol{x})\text{ for any }\boldsymbol{x}\in\boldsymbol{X}^{\mathbf{m}}.\]
By Lemma 2.11.(2), \(p_{\bar{r}_{K}}^{\mathbf{m}}\) is the minimal projection fixing \(\bar{r}_{K}(\boldsymbol{X}_{K}^{\mathbf{m}})\) under left multiplication, so \(p_{\bar{r}_{K}}^{\mathbf{m}}\leq p_{r_{K}}^{\mathbf{m}}\) holds. On the other hand, we have \(r_{K}(X^{\mathbf{m}})\subset\bar{r}_{K}(\boldsymbol{X}_{K}^{\mathbf{m}})\), so \(p_{\bar{r}_{K}}^{\mathbf{m}}\) fixes \(r_{K}(X^{\mathbf{m}})\) under left multiplication and, hence, \(p_{r_{K}}^{\mathbf{m}}\leq p_{\bar{r}_{K}}^{\mathbf{m}}\). We conclude that \(p_{\bar{r}_{K}}^{\mathbf{m}}=p_{r_{K}}^{\mathbf{m}}\) and \((\bar{\rho}_{K},\bar{r}_{K})\) is Nica-covariant since \((\rho_{K},r_{K})\) is Nica-covariant.
To prove that \((\bar{\rho}_{K},\bar{r}_{K})\) is CNP, we need to show that \(\bar{\rho}_{K}(a)\cdot Q_{\bar{r}_{K}}^{F}=\bar{\rho}_{K}(a)\cdot Q_{r_{K}}^{F}=0\) for all \(a\in\mathcal{I}_{K}^{F}\) and \(F\in\mathfrak{F}\). By Lemma 4.7, an element \(a\in\boldsymbol{A}_{K}\) lies in \(\mathcal{I}_{K}^{F}\) if and only if \(a_{G}=0\) for all \(G\supset F\). Therefore, we have
\[\bar{\rho}_{K}(a)\cdot Q_{r_{K}}^{F}=\sum_{G\not\supset F}\rho_{K}(\hat{a}_{G})\cdot P_{r_{K}}^{G}\cdot Q_{r_{K}}^{F}=0.\]
The second equality follows from the fact that \(P_{r_{K}}^{G}Q_{r_{K}}^{F}=0\) for \(G\not\supset F\). Indeed, we have the factor \(p_{r_{K}}^{\mathbf{1}_{i}}\) in the definition of \(P_{r_{K}}^{G}\) and an orthogonal factor \((1-p_{r_{K}}^{\mathbf{1}_{i}})\) in \(Q_{r_{K}}^{F}\) for \(i\in F\setminus G\). Hence, the representation is CNP. The last statement is trivial.
We define maps \(\gamma_{K}\coloneqq\omega_{\boldsymbol{X}_{K}}\circ\Delta_{K}\colon A\to\mathcal{N}\mathcal{O}(\boldsymbol{X}_{K})\) and \(g_{K}^{\mathbf{m}}\coloneqq o_{\boldsymbol{X}_{K}}^{\mathbf{m}}\circ\Delta_{K}\colon X^{\mathbf{m}}\to\mathcal{N}\mathcal{O}(\boldsymbol{X}_{K})\).
**Lemma 4.12**.: _The pair \((\gamma_{K},g_{K})\) is an \(I_{K}\)-relative Nica-covariant representation of \((A,X)\) on \(\mathcal{NO}(\mathbf{X}_{K})\)._
Proof.: It is obvious that the pair \((\gamma_{K},g_{K})\) forms a representation. Since \(g_{K}^{\mathbf{m}}(X^{\mathbf{m}})\cdot\omega_{\mathbf{X}_{K}}(\mathbf{A}_{K})=o_{\mathbf{X}_{K}}^{\mathbf{m}}(\mathbf{X}_{K}^{\mathbf{m}})\), a projection \(p\) satisfies \(p\cdot g_{K}^{\mathbf{m}}(x)=g_{K}^{\mathbf{m}}(x)\) for all \(x\in X^{\mathbf{m}}\) if and only if \(p\cdot o_{\mathbf{X}_{K}}^{\mathbf{m}}(\mathbf{x})=o_{\mathbf{X}_{K}}^{\mathbf{m}}(\mathbf{x})\) for all \(\mathbf{x}\in\mathbf{X}_{K}^{\mathbf{m}}\). We deduce from Lemma 2.11.(2) that \(p_{g_{K}}^{\mathbf{m}}=p_{o_{\mathbf{X}_{K}}}^{\mathbf{m}}\) for all \(\mathbf{m}\in\mathbb{N}^{n}\). In particular, the representation is Nica-covariant.
To show that it is \(I_{K}\)-relative, it is enough to show that \(\Delta_{K}(I_{K}^{F})\subset\mathcal{I}_{K}^{F}\) for all \(F\in\mathfrak{F}\). Indeed, in this case we have \(\gamma_{K}(a)\cdot Q_{g_{K}}^{F}=\omega_{\mathbf{X}_{K}}(\Delta_{K}(a))\cdot Q_{o _{\mathbf{X}_{K}}}^{F}=0\) for all \(a\in I_{K}^{F}\), since \((\omega_{\mathbf{X}_{K}},o_{\mathbf{X}_{K}})\) is CNP.
To prove the inclusion, recall that \(I_{K}^{F}=\bigcap_{G\supset F}K^{G}\). Therefore, for any element \(a\in I_{K}^{F}\), we have \(a\in K^{G}\) and \((\Delta_{K}(a))_{G}=[a]_{K^{G}}=0\) for all \(G\supset F\). But this is exactly the condition that \(\Delta_{K}(a)\in\mathcal{I}_{K}^{F}\) by Lemma 4.7. This proves that \((\gamma_{K},g_{K})\) is \(I_{K}\)-relative.
**Proposition 4.13**.: _The induced map \(\bar{\rho}_{K}\times\bar{r}_{K}\colon\mathcal{NO}(\mathbf{X}_{K})\to\mathcal{NO}(X,I_{K})\) is an isomorphism with inverse map \(\gamma_{K}\times_{I_{K}}g_{K}\colon\mathcal{NO}(X,I_{K})\to\mathcal{NO}(\mathbf{X }_{K})\). It fits into the commutative diagram_
\[\begin{CD}\mathcal{NO}(\mathbf{X}_{K})@>{\bar{\rho}_{K}\times\bar{r}_{K}}>{}> \mathcal{NO}(X,I_{K})\\ @V{}V{}V@V{}V{}V\\ \mathcal{NO}(\mathbf{X}_{K^{\prime}})@>{\bar{\rho}_{K^{\prime}}\times\bar{r}_{K^{ \prime}}}>{}>\mathcal{NO}(X,I_{K^{\prime}})\end{CD}\]
_for any invariant family \(K^{\prime}\succeq K\)._
Proof.: The map \(\bar{\rho}_{K}\times\bar{r}_{K}\) is surjective and gauge-equivariant. By the GIUT (Proposition 2.14), it is an isomorphism if and only if \(\bar{\rho}_{K}\) is injective.
By Proposition 4.8, the kernel of \(\bar{\rho}_{K}\) is described by an invariant family \(N\succeq K\), i.e., \(\ker\bar{\rho}_{K}=N/K=\bigoplus_{F\in\mathfrak{F}}N^{F}/K^{F}\subset\mathbf{A}_ {K}\). Suppose that the kernel is nontrivial, so that \(K\) is strictly contained in \(N\). By the lattice isomorphism between invariant families and T-families, we also have that \(I_{K}\) is strictly contained in \(I_{N}\). Hence, there is \(G\in\mathfrak{F}\) such that \(I_{K}^{G}\subsetneq I_{N}^{G}\).
Consider an arbitrary element \(a\in I_{N}^{G}\setminus I_{K}^{G}\). Recall that \(I_{N}^{G}=\bigcap_{H\supset G}N^{H}\) and \(I_{K}^{G}=\bigcap_{H\supset G}K^{H}\). Therefore, \(a\in N^{H}\) for all \(H\supset G\) and there is at least one \(H\supset G\) such that \(a\notin K^{H}\). Define an element \(\mathbf{b}\in\mathbf{A}_{K}\) by
\[\mathbf{b}_{F}=\begin{cases}[a]_{K^{F}}&\text{if }F\supset G,\\ 0&\text{otherwise.}\end{cases}\]
By the discussion above, \(\mathbf{b}\in N/K\) and \(\mathbf{b}_{H}\neq 0\) for some \(H\supset G\).
We calculate
\[0=\bar{\rho}_{K}(\mathbf{b})=\sum_{H\supset G}\rho_{K}(a)\cdot P_{r_{K}}^{H}=\rho _{K}(a)\cdot Q_{r_{K}}^{G}.\]
Furthermore, we have
\[0=(\gamma_{K}\times_{I_{K}}g_{K})(\rho_{K}(a)\cdot Q_{r_{K}}^{G})=\gamma_{K}(a )\cdot Q_{g_{K}}^{G}=\omega_{\mathbf{X}_{K}}(\Delta_{K}(a))\cdot Q_{o_{\mathbf{X}_{K}} }^{G},\]
which implies that \(\Delta_{K}(a)\in\mathcal{I}_{K}^{G}\) by Lemma 2.13. This means that \(\Delta_{K}(a)_{H}=[a]_{K^{H}}=0\) for all \(H\supset G\). This is a contradiction, since we have shown above that \(a\notin K^{H}\) for some \(H\supset G\). Therefore, \(\bar{\rho}_{K}\) is injective and \(\bar{\rho}_{K}\times\bar{r}_{K}\) is an isomorphism.
To show that \(\gamma_{K}\times_{I_{K}}g_{K}\) is the inverse map, it is enough to show that \(\bar{\rho}_{K}\times\bar{r}_{K}\circ\gamma_{K}=\rho_{K}\) and \(\bar{\rho}_{K}\times\bar{r}_{K}\circ g_{K}=r_{K}\). This is because \(\rho_{K}\times_{I_{K}}r_{K}=\operatorname{id}_{\mathcal{NO}(X,I_{K})}\). We calculate
\[\bar{\rho}_{K}\times\bar{r}_{K}(\gamma_{K}(a))=\bar{\rho}_{K}\times\bar{r}_{K}( \omega_{\boldsymbol{X}_{K}}(\Delta_{K}(a)))=\bar{\rho}_{K}(\Delta_{K}(a))=\rho_ {K}(a)\]
for all \(a\in A\). The last equality follows from Lemma 4.11. The equality for \(g_{K}\) is proved similarly. The commutativity of the diagram is straightforward from the definitions.
Compare the statements of the following corollary with Proposition 2.14.
**Corollary 4.14** (\(I\)-relative GIUT).: _Let \((A,X)\) be a proper product system and let \(I\subset A\) be a T-family. Suppose that \((\sigma,s)\) is an \(I\)-Cuntz-Nica-Pimsner covariant representation on the \(C^{*}\)-algebra \(D\). The map \(\sigma\times_{I}s\colon\mathcal{NO}(X,I)\to D\) is faithful if and only if \(\sigma(a)Q_{s}^{F}=0\) implies \(a\in I^{F}\) and \((\sigma,s)\) admits a gauge action._
Proof.: By Proposition 4.13, the map \(\sigma\times_{I}s\colon\mathcal{NO}(X,I)\to D\) is equivalent to the CNP-representation \((\bar{\sigma},\bar{s})=((\sigma\times_{I}s)\circ\bar{\rho}_{K},(\sigma\times_{I}s)\circ\bar{r}_{K})\) of \((\boldsymbol{A}_{K},\boldsymbol{X}_{K})\), where \(K=K_{I}\). This reduces the statement to the ordinary GIUT, stated in Proposition 2.14.
We are now ready for the main result of the paper.
**Theorem 4.15**.: _Let \((A,X)\) be a proper product system. The map \(I\mapsto\mathcal{C}_{I}\) defines a lattice isomorphism between:_
_(1) the set of T-families and the set of gauge-invariant ideals of \(\mathcal{NT}(X)\);_
_(2) the set of O-families and the set of gauge-invariant ideals of \(\mathcal{NO}(X)\)._
Proof.: We know that \(\mathcal{NT}(X)=\mathcal{NO}(X,0)\). Proposition 4.13 shows that it is then isomorphic to \(\mathcal{NO}(\boldsymbol{X}_{K_{0}})\), where \(K_{0}\) is the smallest invariant family. By Proposition 4.8 we know that gauge-invariant ideals of \(\mathcal{NO}(\boldsymbol{X}_{K_{0}})\) are in bijection with T-families. If \(K\) is an invariant family, then the corresponding gauge-invariant ideal is the kernel of \(\mathcal{NO}(\boldsymbol{X}_{K_{0}})\twoheadrightarrow\mathcal{NO}( \boldsymbol{X}_{K})\). By Proposition 4.13, this surjection fits into the commutative diagram
This shows that \(K\mapsto\mathcal{C}_{I_{K}}\) is a lattice isomorphism between invariant families and gauge-invariant ideals of \(\mathcal{NT}(X)\). Since \(K\mapsto I_{K}\) is a lattice isomorphism between invariant families and T-families by Proposition 4.4, we get the first part of the theorem.
The second part follows from the fact that \(\mathcal{C}_{I}\subset\mathcal{C}_{\mathcal{I}}\) if and only if \(\mathcal{I}\preceq I\). This is exactly the condition that \(I\) is an O-family.
## 5. Higher-rank graphs
Higher-rank graphs and their \(C^{*}\)-algebra were introduced by Kumjian and Pask in [14]. A graph of rank \(n\) is a countable category \(\Gamma\) of paths with a degree functor \(\mathbf{d}\colon\Gamma\to\mathbb{N}^{n}\), which satisfies the factorization property: for any path \(\gamma\in\Gamma\) and any element \(\mathbf{m}\in\mathbb{N}^{n}\) with \(\mathbf{m}\leq\mathbf{d}(\gamma)\), there are unique paths \(\gamma(0,\mathbf{m}),\gamma(\mathbf{m},\mathbf{d}(\gamma))\in\Gamma\) such that \(\mathbf{d}(\gamma(0,\mathbf{m}))=\mathbf{m}\), \(\mathbf{d}(\gamma(\mathbf{m},\mathbf{d}(\gamma)))=\mathbf{d}(\gamma)-\mathbf{m}\), and \(\gamma=\gamma(0,\mathbf{m})\gamma(\mathbf{m},\mathbf{d}(\gamma))\). We denote by \(s,r\colon\Gamma\to\Gamma^{0}\) the source and range maps, respectively.
We can associate a product system \((c_{0}(\Gamma^{0}),X(\Gamma))\) to an \(n\)-graph \(\Gamma\) as follows. The base algebra \(c_{0}(\Gamma^{0})\) is just the algebra of functions that vanish at infinity on the discrete set of vertices \(\Gamma^{0}\). Analogously, \(X(\Gamma^{\mathbf{m}})\) is the vector space of functions \(x\colon\Gamma^{\mathbf{m}}\to\mathbb{C}\) on the set of paths of degree \(\mathbf{m}\) such that \(\sum_{\alpha\in\Gamma^{\mathbf{m}},s(\alpha)=v}\left|x(\alpha)\right|^{2}<\infty\) for all \(v\in\Gamma^{0}\). If \(\alpha\) is a path of degree \(\mathbf{m}\) and \(v\in\Gamma^{0}\) is a vertex, then we denote by \(\delta_{\alpha}\in X(\Gamma^{\mathbf{m}})\) and \(\delta_{v}\in c_{0}(\Gamma^{0})\) the characteristic functions of \(\alpha\) and \(v\), respectively.
The multiplication \(\mu^{\mathbf{m},\mathbf{k}}\) is given by concatenation of paths:
\[\delta_{\alpha}\cdot\delta_{\beta}=\begin{cases}\delta_{\alpha\beta}&\text{ if }s(\alpha)=r(\beta),\\ 0&\text{otherwise}\end{cases}\]
for all paths \(\alpha\in\Gamma^{\mathbf{m}}\) and \(\beta\in\Gamma^{\mathbf{k}}\). The action of \(c_{0}(\Gamma^{0})\) is defined analogously and the scalar product on \(X(\Gamma^{\mathbf{m}})\) is given by
\[\langle\delta_{\alpha},\delta_{\beta}\rangle=\begin{cases}\delta_{s(\alpha)}& \text{if }\alpha=\beta,\\ 0&\text{otherwise}\end{cases}\]
for all paths \(\alpha,\beta\in\Gamma^{\mathbf{m}}\).
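To illustrate the construction, suppose \(\Gamma\) is the \(1\)-graph with a single vertex \(v\) and a single edge (a loop at \(v\)), so that \(\Gamma^{m}\) consists of a single path for each \(m\in\mathbb{N}\). Then \(c_{0}(\Gamma^{0})=\mathbb{C}\) and each \(X(\Gamma^{m})\cong\mathbb{C}\), so \((c_{0}(\Gamma^{0}),X(\Gamma))\) is the trivial rank-\(1\) product system over \(\mathbb{C}\); one readily checks that its Nica-Toeplitz algebra is the classical Toeplitz algebra generated by a single isometry, while its Cuntz-Nica-Pimsner algebra is \(C(\mathbb{T})\), in accordance with the usual description of the graph \(C^{*}\)-algebra of this graph.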
Raeburn and Sims [23] introduced a class of higher-rank graphs called _finitely aligned_ graphs. It was shown in [24, Theorem 5.4] that the product system \((c_{0}(\Gamma^{0}),X(\Gamma))\) is compactly aligned if and only if \(\Gamma\) is finitely aligned. In this case, the algebra \(C^{*}(\Gamma)\) of a higher-rank graph \(\Gamma\) can be defined as the Cuntz-Nica-Pimsner algebra \(\mathcal{N}\mathcal{O}(X(\Gamma))\). Sims and Yeend showed in [27, Proposition 5.4] that this definition is equivalent to the earlier definition in terms of Cuntz-Krieger families. Dor-On and Kakariadis defined a subclass of _strongly finitely aligned_ graphs in [6, Definition 7.2]. They proved that the product system is strongly compactly aligned if and only if \(\Gamma\) is strongly finitely aligned. This fact can be used to describe the higher-rank graph \(C^{*}\)-algebra using simpler covariance conditions (see [6, Theorem 7.6]).
Finally, a graph \(\Gamma\) is called _row-finite_ if for every vertex \(v\in\Gamma^{0}\) there are only finitely many paths with range \(v\) of any given degree. It is straightforward to show that the graph is row-finite if and only if the associated product system is proper.
Sims classified gauge-invariant ideals of \(C^{*}(\Gamma)\) for finitely aligned graphs in [26]. We will show how our results can be used to recover this classification in the case of row-finite graphs. Moreover, we will give an alternative description of the ideal lattices.
From now on, assume that \(\Gamma\) is a row-finite graph. If \(V\subset\Gamma^{0}\) is a subset, then we denote by \(c_{0}(V)\) the closed ideal generated by \(\{\delta_{v}\colon v\in V\}\). This defines a bijection between subsets of \(\Gamma^{0}\) and ideals of \(c_{0}(\Gamma^{0})\).
For a subset \(V\subset\Gamma^{0}\) and \(\mathbf{m}\in\mathbb{N}^{n}\), we define
\[(\Gamma^{\mathbf{m}})^{-1}(V)=\{v\in\Gamma^{0}\colon\forall\alpha\in\Gamma^{ \mathbf{m}}\text{ with }r(\alpha)=v\text{ we have }s(\alpha)\in V\}.\]
In particular, \((\Gamma^{\mathbf{m}})^{-1}(\emptyset)\) is the set of \(\mathbf{m}\)-sources, i.e., the set of vertices not receiving any path of degree \(\mathbf{m}\). The following equalities are trivial:
\[(X(\Gamma^{\mathbf{m}}))^{-1}(c_{0}(V)) =c_{0}((\Gamma^{\mathbf{m}})^{-1}(V)), \tag{6}\] \[\ker\varphi_{X(\Gamma)}^{\mathbf{m}} =c_{0}((\Gamma^{\mathbf{m}})^{-1}(\emptyset)),\] \[(c_{0}(V))^{\perp} =c_{0}(\Gamma^{0}\setminus V).\]
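For example, let \(\Gamma\) be the \(1\)-graph with two vertices \(v,w\) and a single edge \(e\) with \(r(e)=v\) and \(s(e)=w\). Then \((\Gamma^{1})^{-1}(\emptyset)=\{w\}\), since \(w\) is the only vertex receiving no edge, while \((\Gamma^{1})^{-1}(\{w\})=\{v,w\}\): the only edge with range \(v\) has source \(w\), and the condition at \(w\) is vacuous. By (6), this means \(\ker\varphi_{X(\Gamma)}^{1}=c_{0}(\{w\})\) and \((X(\Gamma^{1}))^{-1}(c_{0}(\{w\}))=c_{0}(\Gamma^{0})\). We will return to this graph below.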
We can use these formulas to describe the (pre-)CNP ideals \(\mathcal{J}^{F}\) and \(\mathcal{I}^{F}\) of \((c_{0}(\Gamma^{0}),X(\Gamma))\). For \(F\in\mathfrak{F}(\Gamma)\), define the subset \(\mathcal{W}^{F}\) of \(\Gamma^{0}\) by
\[\mathcal{W}^{F} =\Gamma^{0}\setminus\bigcap_{i\in F}(\Gamma^{\mathbf{1}_{i}})^{- 1}(\emptyset)=\] \[=\{v\in\Gamma^{0}\colon v\text{ receives at least one edge of degree }\mathbf{1}_{i}\text{ for some }i\in F\}.\]
It is easy to see that \(\mathcal{J}^{F}=c_{0}(\mathcal{W}^{F})\). We can further define subsets \(\mathcal{U}^{F}\subset\Gamma^{0}\) by
\[\mathcal{U}^{F} =\mathcal{W}^{F}\cap\bigcap_{i\notin F}(\Gamma^{\mathbf{1}_{i}})^{-1}(\mathcal{W}^{F})=\] \[=\{v\in\mathcal{W}^{F}\colon\forall\gamma\in v\Gamma^{\mathbf{m}}\text{ with }\mathbf{m}\perp\mathbf{1}_{F}\text{ we have }s(\gamma)\in\mathcal{W}^{F}\}=\] \[=\{v\in\Gamma^{0}\colon\forall\gamma\in v\Gamma^{\mathbf{m}}\text{ with }\mathbf{m}\perp\mathbf{1}_{F},\ |s(\gamma)\Gamma^{\mathbf{1}_{i}}|\neq 0\text{ for some }i\in F\}.\]
Here, the notation \(v\Gamma^{\mathbf{m}}\) means the set of paths \(\gamma\) with \(r(\gamma)=v\) and \(\mathbf{d}(\gamma)=\mathbf{m}\in\mathbb{N}^{n}\). Dor-On and Kakariadis defined these sets in [6, Definition 7.5] and called them sets of _\(F\)-tracing vertices_. With formulas (6) in mind, it is straightforward that \(\mathcal{I}^{F}=c_{0}(\mathcal{U}^{F})\).
We now want to describe \(X(\Gamma)\)-invariant ideals of \(c_{0}(\Gamma^{0})\). For this, we define two properties of subsets of \(\Gamma^{0}\).
**Definition 5.1**.: Let \(\Gamma\) be a row-finite \(n\)-graph and \(V\subset\Gamma^{0}\) be a subset.
1. We say that \(V\) is _hereditary_ if every path with range in \(V\) has source in \(V\). More precisely, \(V\) is hereditary if and only if \(V\subset(\Gamma^{\mathbf{m}})^{-1}(V)\) for all \(\mathbf{m}\in\mathbb{N}^{n}\).
2. We say that \(V\) is _\(\mathfrak{F}\)-saturated_ if for every \(F\in\mathfrak{F}\) and every \(F\)-tracing vertex \(v\) such that \(s(\gamma)\in V\) for all \(i\in F\) and all \(\gamma\in v\Gamma^{\mathbf{1}_{i}}\), we have \(v\in V\).
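For instance, for the two-vertex \(1\)-graph with a single edge \(e\) from \(w\) to \(v\) considered after (6), the subset \(\{w\}\) is hereditary (the only path with range \(w\) is the vertex \(w\) itself) but not \(\mathfrak{F}\)-saturated: \(v\) is \(\{1\}\)-tracing and its unique incoming edge has source \(w\in\{w\}\), yet \(v\notin\{w\}\). On the other hand, \(\{v\}\) is \(\mathfrak{F}\)-saturated, vacuously, since the incoming edge at \(v\) has source \(w\notin\{v\}\), but it is not hereditary, as \(e\) has range in \(\{v\}\) and source outside it.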
Our definition of a hereditary set coincides with the definition of Sims in [26, Definition 3.1]. However, the notion of \(\mathfrak{F}\)-saturated sets is different from saturated sets of Sims. We do not know if these two properties are equivalent. The following result can shed some light on it.
**Theorem 5.2**.: _Let \(\Gamma\) be a row-finite \(n\)-graph and let \(V\subset\Gamma^{0}\) be a subset. Then \(c_{0}(V)\) is positively \(X(\Gamma)\)-invariant if and only if \(V\) is hereditary and negatively invariant if and only if it is \(\mathfrak{F}\)-saturated. Therefore, \(c_{0}(V)\) is \(X(\Gamma)\)-invariant if and only if \(V\) is both hereditary and \(\mathfrak{F}\)-saturated._
Proof.: By Definition 3.2, the ideal \(c_{0}(V)\) is positively \(X(\Gamma)\)-invariant if and only if \(c_{0}(V)\subset(X(\Gamma^{\mathbf{m}}))^{-1}(c_{0}(V))\) for all \(\mathbf{m}\in\mathbb{N}^{n}\). By (6), this is equivalent to \(V\subset(\Gamma^{\mathbf{m}})^{-1}(V)\) for all \(\mathbf{m}\in\mathbb{N}^{n}\), which is the definition of a hereditary subset.
The ideal \(c_{0}(V)\) is negatively \(X(\Gamma)\)-invariant if and only if \(\bigcap_{i\in F}(X(\Gamma)^{\mathbf{1}_{i}})^{-1}(c_{0}(V))\cap\mathcal{I}^{F} \subset c_{0}(V)\) for all \(F\in\mathfrak{F}\). By (6), this is equivalent to \(\bigcap_{i\in F}(\Gamma^{\mathbf{1}_{i}})^{-1}(V)\cap\mathcal{U}^{F}\subset V\) for all \(F\in\mathfrak{F}\). A vertex \(v\) is in \(\bigcap_{i\in F}(\Gamma^{\mathbf{1}_{i}})^{-1}(V)\cap\mathcal{U}^{F}\) if and only if it is \(F\)-tracing and for all \(i\in F\) and all \(\gamma\in v\Gamma^{\mathbf{1}_{i}}\) we have \(s(\gamma)\in V\). Therefore, \(c_{0}(V)\) is negatively invariant if and only if every such vertex is in \(V\), which is the definition of an \(\mathfrak{F}\)-saturated subset.
The last statement follows from the fact that \(c_{0}(V)\) is \(X(\Gamma)\)-invariant if and only if it is both positively and negatively invariant by Theorem 3.9.
**Corollary 5.3**.: _A hereditary subset \(V\subset\Gamma^{0}\) of row-finite \(n\)-graph \(\Gamma\) is \(\mathfrak{F}\)-saturated if and only if it is saturated in the sense of Sims (see [26, Definition 3.1])._
Proof.: Section 3 of [26] shows that \(c_{0}(V)\) is invariant if and only if \(V\) is hereditary and saturated. On the other hand, we have just shown that \(c_{0}(V)\) is invariant if and only if \(V\) is hereditary and \(\mathfrak{F}\)-saturated. We conclude that these two properties are equivalent, whenever \(V\) is hereditary.
_Remark_.: Theorem 5.2 and Corollary 5.3 only require results from Section 3. We have only assumed there that the product systems are strongly compactly aligned but not necessarily proper. Therefore, it is possible to extend these results to the case of strongly finitely aligned graphs.
We now want to use the results of Section 4 to describe ideals in \(C^{*}(\Gamma)=\mathcal{NO}(X(\Gamma))\) and \(\mathcal{T}(\Gamma)\coloneqq\mathcal{NT}(X(\Gamma))\).
**Definition 5.4**.: A collection \(V=\{V^{F}\}_{F\in\mathfrak{F}}\) of subsets of \(\Gamma^{0}\) is called a _T-family of vertices_ if the following condition holds: for every \(F\in\mathfrak{F}\) and \(i\in[n]\setminus F\), a vertex \(v\in V^{F\cup\{i\}}\) is in \(V^{F}\) if and only if for all \(\gamma\in v\Gamma^{\mathbf{1}_{i}}\) we have \(s(\gamma)\in V^{F}\). It is further called an _O-family of vertices_ if \(\mathcal{U}^{F}\subset V^{F}\) for all \(F\in\mathfrak{F}\).
A collection \(W=\{W^{F}\}_{F\in\mathfrak{F}}\) of subsets of \(\Gamma^{0}\) is called an _invariant family_ if \(W^{G}=(\Gamma^{\mathbf{1}_{i}})^{-1}(W^{G}\cap W^{G\cup\{i\}})\) for all \(G\in\mathfrak{F}\) and \(i\in[n]\setminus G\).
We can rewrite Definition 5.4 as follows: \(V=\{V^{F}\}_{F\in\mathfrak{F}}\) is a T-family of vertices if and only if \((\Gamma^{\mathbf{1}_{i}})^{-1}(V^{F})\cap V^{F\cup\{i\}}=V^{F}\) for all \(F\in\mathfrak{F}\) and \(i\in[n]\setminus F\). This is equivalent to \((X(\Gamma)^{\mathbf{1}_{i}})^{-1}(c_{0}(V^{F}))\cap c_{0}(V^{F\cup\{i\}})=c_{ 0}(V^{F})\) for all \(F\in\mathfrak{F}\) and \(i\in[n]\setminus F\) by (6), which is the definition of a T-family of ideals. A T-family of ideals is an O-family of ideals if \(\mathcal{I}^{F}\subset c_{0}(V^{F})\) for all \(F\in\mathfrak{F}\). This is the same as \(\mathcal{U}^{F}\subset V^{F}\) for all \(F\in\mathfrak{F}\), which is the definition of an O-family of vertices.
Therefore, a collection of vertices is a T-family (resp. O-family) if and only if the corresponding collection of ideals is a T-family (resp. O-family). It is also obvious that \(W\) is an invariant family of vertices if and only if \(c_{0}(W)\) is an invariant family of ideals. Moreover, by Proposition 4.4, there is an inclusion-preserving bijection between T-families and invariant families of vertices given by
\[W^{F}_{V}\coloneqq(\Gamma^{\mathbf{1}-\mathbf{1}_{F}})^{-1}(V^{F}).\]
Let \(W\) be an invariant family of vertices. We construct a higher-rank graph \(\mathbf{\Gamma}_{W}\) as follows. Let \(\mathbf{\Gamma}_{W}^{0}\coloneqq\bigsqcup_{F\in\mathfrak{F}}\Gamma^{0} \setminus W^{F}\). For a vertex \(v\) not in \(W^{F}\), we denote by \(v^{F}\in\mathbf{\Gamma}_{W}^{0}\) the corresponding vertex in the \(F\)-component. Furthermore, we define
\[\mathbf{\Gamma}_{W}^{\mathbf{m}}\coloneqq\bigsqcup_{F\in\mathfrak{F}}\Gamma ^{\mathbf{m}}\setminus(\Gamma^{\mathbf{m}}W^{F})=\bigsqcup_{F\in\mathfrak{F} }\{\gamma\in\Gamma^{\mathbf{m}}\mid s(\gamma)\notin W^{F}\}.\]
Analogously, we write \(\gamma^{F}\) to denote the path in the \(F\)-component of \(\mathbf{\Gamma}_{W}^{\mathbf{m}}\) corresponding to \(\gamma\in\Gamma^{\mathbf{m}}\). Finally, we set \(s(\gamma^{F})=s(\gamma)^{F}\) and \(r(\gamma^{F})=r(\gamma)^{F\setminus\operatorname{supp}\mathbf{m}}\). With the obvious path composition map, this defines a higher-rank graph \(\mathbf{\Gamma}_{W}\).
**Theorem 5.5**.: _Let \(\Gamma\) be a row-finite higher-rank graph. There is a lattice isomorphism between T-families (resp. O-families) of vertices and gauge-invariant ideals in \(\mathcal{T}(\Gamma)\) (resp. \(C^{*}(\Gamma)\))._
_Moreover, the quotient of \(\mathcal{T}(\Gamma)\) by the ideal corresponding to \(V\) is isomorphic to \(\mathcal{T}(\mathbf{\Gamma}_{W_{V}})\). Consequently, the algebra \(\mathcal{T}(\Gamma)\) as well as all its gauge-invariant quotients are higher-rank graph algebras._
Proof.: We have established above that T-families and O-families of vertices correspond bijectively to T-families and O-families of ideals. Therefore, the first claim follows immediately from Theorem 4.15.
For the second claim, it is easy to see that the product system \((c_{0}(\boldsymbol{\Gamma}_{W_{V}}^{0}),X(\boldsymbol{\Gamma}_{W_{V}}))\) is isomorphic to the product system \((\boldsymbol{c_{0}}(\Gamma)_{c_{0}(W_{V})},\boldsymbol{X}(\Gamma)_{c_{0}(W_{V})})\) constructed in Proposition 4.6. Then, the claim follows from Proposition 4.13.
The second claim of Theorem 5.5 is a generalization of [1, Corollary 3.5], where the authors described quotients of rank-1 graph algebras as graph algebras of extended graphs. To our knowledge, in the case of higher-rank graphs, such an extended graph was constructed only for the Toeplitz algebra by Pangalela in [20] but not for its quotients.
|
2305.08064 | Ehresmann-Schein-Nambooripad theorems for classes of biunary semigroups | We obtain an ESN theorem for a very general class of biunary semigroups with
idempotent-valued domain and range operations, representing them in terms of
small categories equipped with a suitable biaction of the identities on the
category. Our results generalise the recent work of Fitzgerald and Kinyon
connecting localisable semigroups to transcription categories, as well as that
of Lawson linking Ehresmann semigroups to categories with Ehresmann biaction.
In contrast to most approaches to ESN theorems, we do not require the
categories to be ordered or for their sets of identities to possess any
particular structure. Throughout, the biunary semigroups are represented using
categories rather than generalised categories of any kind, and we obtain
category isomorphisms between the classes of semigroups and their associated
enriched categories, rather than category equivalences. Our results cover the
class of DRC-semigroups considered by Jones and Shoufeng Wang, but they also
cover cases where not both congruence conditions hold, including examples such
as the semigroup of binary relations on a set under demonic composition
equipped with domain and range operations. | Tim Stokes | 2023-05-14T04:20:54Z | http://arxiv.org/abs/2305.08064v2 | # A most general ESN type theorem for biunary semigroups?
###### Abstract
We obtain an ESN type theorem for a very general class of biunary semigroups with idempotent-valued domain and range operations, representing them in terms of small categories equipped with a suitable biaction of the identities on the category. Our results generalize the recent work of Fitzgerald and Kinyon connecting localisable semigroups to transcription categories, as well as that of Lawson linking Ehresmann semigroups to categories with Ehresmann biaction. In contrast to most approaches to ESN type theorems, we do not require the categories to be ordered or for their sets of identities to possess any particular structure. Moreover, in all cases we use categories to represent the semigroups rather than generalized categories, and we obtain category isomorphisms between the classes of semigroups and categories rather than equivalences. Our results cover the class of DRC-semigroups considered by Jones and Shoufeng Wang. But they also cover cases where not both congruence conditions hold, admitting examples such as the semigroup of binary relations on a set under demonic composition equipped with domain and range.
**Keywords:** ESN type theorem, category with biaction, biunary semigroup.
**2010 Mathematics Subject Classification:** 20M50, 20M30.
## 1 Introduction
Throughout, if \(X\) is a non-empty set then \(PT(X)\) denotes the semigroup of partial functions \(X\to X\) and \(I(X)\) the inverse semigroup of one-to-one partial functions on \(X\). Function application is always written on the right (with the exception of unary operation application), and composition is to be read left-to-right.
For current purposes, a (small) category \(C\) is a set equipped with a partial binary operation \(\circ\) and two unary operations we here denote by \(D,R\), satisfying, for all \(x,y\in C\),
1. \(D(x)\circ x=x\), \(x\circ R(x)=x\)
2. \(R(D(x))=D(x)\), \(D(R(x))=R(x)\)
3. \(x\circ y\) exists if and only if \(R(x)=D(y)\)
4. if \(R(x)=D(y)\) then \(D(x\circ y)=D(x)\) and \(R(x\circ y)=R(y)\)
5. \(x\circ(y\circ z)=(x\circ y)\circ z\) whenever the two products are defined.
This "object-free" formulation (or some equivalent of it) is frequently used in algebra; it is the definition used in [3], and is equivalent to that used in [10].
If \(C\) is a category in the above sense, denote by \(D(C)\) the identities of \(C\) - elements of the form \(D(s)\) where \(s\in C\). (By the second law, we could just as well have called this set "\(R(C)\)" instead.)
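For example, any monoid \(M\) with identity element \(1\) is a category in this sense, with \(\circ\) the (everywhere-defined) monoid operation and \(D(x)=R(x)=1\) for all \(x\in M\), so that \(D(M)=\{1\}\). At the other extreme, any non-empty set \(E\) becomes a category in which every element is an identity: \(D(x)=R(x)=x\), and \(x\circ y\) is defined only when \(x=y\), in which case \(x\circ x=x\).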
We next define the types of semigroups we are interested in.
**Definition 1.1**: _A biunary semigroup \(S\), with unary operations \(D\) and \(R\), is a precat-semigroup if for all \(x\in S\),_
1. \(D(x)^{2}=D(x)\)__
2. \(D(R(x))=R(x)\)__
3. \(R(D(x))=D(x)\)__
4. \(D(x)x=x\)__
5. \(xR(x)=x\)_._
_Elements of \(D(S)=\{D(s)\mid s\in S\}\) are called projections._
If \(S\) is a precat-semigroup, note that \(D(S)\subseteq E(S)\) (the set of idempotents of \(S\)), and for all \(e\in D(S)\), \(D(e)=R(e)=e\); hence \(R(S)\), defined dually to \(D(S)\), is equal to it. Put simply then, a precat-semigroup is a biunary semigroup with a distinguished set of idempotents \(D(S)\) consisting of the elements fixed by the unary operations \(D\) and \(R\), and which are such that for each \(s\in S\), \(D(s)\) is a left identity for \(s\) and \(R(s)\) is a right identity for \(s\).
An important possible property of a precat-semigroup is the following.
**Definition 1.2**: _A precat-semigroup satisfying the law \(D(xy)=D(xD(y))\) (respectively \(R(xy)=R(R(x)y)\)) is said to satisfy the left (resp. right) congruence condition; if it satisfies both the left and right congruence conditions, it is said to satisfy the congruence conditions._
Numerous classes of biunary semigroups consist of precat-semigroups satisfying the congruence conditions. These include the class of Ehresmann semigroups as in [10], hence in particular all inverse semigroups if one defines \(D(s)=ss^{\prime}\) and \(R(s)=s^{\prime}s\) for all \(s\) (where \(s^{\prime}\) is the inverse of \(s\)), as well as several generalizations of Ehresmann semigroups such as the DRC-semigroups of [9] on the one hand and the localisable semigroups of [3] on the other. These biunary semigroups are called "dr-semigroups" in [19], in the final section of which an ESN type theorem for them is given that involves constellations (which are asymmetric generalizations of categories).
In some of these classes, the projections form a subsemigroup and are therefore a band. It follows immediately from what is noted in Section 2.1 of [14] that localisable semigroups are nothing but precat-semigroups satisfying the congruence conditions in which the projections form a band, and Ehresmann semigroups are localisable semigroups in which the projections
form a semilattice. Indeed, any band \(S\) can be turned into a precat-semigroup satisfying the congruence conditions if we define \(D(s)=R(s)=s\) for all \(s\in S\).
On the other hand, there are naturally occurring examples of precat-semigroups for which at least one of the congruence conditions fails; we return to these in Subsection 2.3. The scope of the current work is sufficient to include such examples.
The general approach to one major stream of "ESN type theorems" is that one has some class of precat-semigroups and wishes to capture the semigroup operation by limited knowledge of the product together with some further (often order-theoretic) information. Indeed in any precat-semigroup, a partial operation may be defined as follows.
**Definition 1.3**: _Let \(S\) be a precat-semigroup. For all \(s,t\in S\), define the partial binary operation \(\circ\) by setting_
\[s\circ t=st\mbox{ providing }R(s)=D(t)\mbox{,}\]
_and undefined otherwise; this is the restricted product. Define \({\cal C}(S)=(S,\circ,D,R)\)._
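For example, in \(I(X)\) with \(D(s)=ss^{\prime}\) and \(R(s)=s^{\prime}s\) as above, the element \(D(s)\) is the identity map on the domain of \(s\) and \(R(s)\) is the identity map on its image, so the restricted product \(s\circ t\) is defined precisely when the image of \(s\) coincides with the domain of \(t\), in which case it is simply the composite \(st\).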
If \(S\) is a precat-semigroup, the partial algebra \({\cal C}(S)\) may be a category, and one seeks to capture \(S\) entirely in terms of \({\cal C}(S)\) plus some additional, often order-theoretic, structure. Moreover one might seek to characterise the categories with additional structure that arise in this way from the precat-semigroups in a given class of interest. This is precisely the nature of the original ESN-theorem linking inverse semigroups to inductive\({}_{1}\) groupoids, and indeed the variant of it given in [10] linking Ehresmann semigroups to so-called Ehresmann categories.
Sometimes a variation on the partial operation given in Definition 1.3 is used, which is defined "more often" and gives rise to a generalized category structure. In [8], Jones developed the notion of a P-Ehresmann semigroup as a common generalization of Ehresmann semigroups and regular \(*\)-semigroups; these are biunary semigroups in which the projections do not form a band but do form a so-called projection algebra, and the author was able to characterise such algebras of projections. In [9], he generalized this notion, along with many of the results in [8], to so-called DRC-semigroups, which are generalizations of \(*\)-regular semigroups. Very recently, in [18], Wang obtained an ESN type theorem for DRC-semigroups; his theorem was in a similar spirit to the earlier work in [10] as well as his own work in [16] and [17] in which ESN type theorems for certain classes of P-Ehresmann semigroups were given. His approach makes use of generalized categories over a projection algebra.
In all these cases, the two classes being related themselves form categories, the first of which consists of a certain class of precat-semigroups (equipped with homomorphisms preserving \(D,R\)), and the second of which consists of a class of enriched (possibly generalized) categories (equipped with suitable functors between them). In each case, the two categories are shown to be isomorphic.
In related work on (non-biunary) semigroups, in one of the most important contributions to the theory of regular semigroups, Nambooripad connected regular semigroups to inductive\({}_{2}\) groupoids (see [13]). Following this, Armstrong in [1] connected concordant semigroups to inductive\({}_{2}\) cancellative categories. In these two cases, the corresponding categories of semigroups and of categories are equivalent but not isomorphic: in particular, the underlying set of the category is generally different to that of the semigroup to which
it is equivalent. Similarly, Gould and Wang [6] obtained a category isomorphism between the class of weakly B-orthodox semigroups (which are more general than localisable semigroups as semigroups) and suitable generalized categories, and then Wang in [15] obtained an equivalence between the same class of semigroups and a class of actual categories.
In all of the above approaches, a key notion is that of orderings in the (possibly generalized) categories, and the notions of restriction and corestriction of elements of the ordered (generalized) categories by suitable identities. These restriction and corestriction notions have order-theoretic definitions (although they are required to satisfy further algebraic properties). One then defines a "pseudoproduct" from within such an ordered (possibly generalized) category in order to obtain a semigroup. Another crucial feature of these approaches is that the (generalized) category identities are assumed to have some algebraic structure consistent with the order(s) on the (generalized) category (for example, that of a semilattice in the original ESN-theorem and in [10], or of projection algebra in [18], or indeed of regular biordered set in Nambooripad's work).
In contrast to these approaches is the work of Fitzgerald and Kinyon in [3], in which the authors obtain an "order-free" ESN type theorem in which localisable semigroups are shown to correspond to a certain type of unordered category equipped with a biaction of the identities on general category elements. A very similar approach is taken to Ehresmann semigroups by Lawson in [11], although there the identities are assumed to have commutative semigroup structure. (Lawson is responsible for the term "biaction" which we use in the current work.) Rather than restriction and corestriction being defined order-theoretically and only existing for some choices of identity and category element, in the approach of [3] and [11] they are assumed to be defined universally and to satisfy some purely algebraic laws. Thus, proper categories are used rather than generalizations of them, no order information is needed on these categories, and no algebraic structure on the category identities is assumed. It is this approach that we generalize here.
Let us now again be very general. It is easy to see that if \(S\) is a precat-semigroup, then \({\cal C}(S)\) is a category if and only if \(S\) satisfies the following law:
* for all \(x,y\), \(R(x)=D(y)\Rightarrow(D(xy)=D(x)\ \&\ R(xy)=R(y))\).
**Definition 1.4**: _A precat-semigroup \(S\) satisfying (CS6) is a cat-semigroup. If \(S\) is cat-semigroup then we call \({\cal C}(S)\) the derived category of \(S\)._
It is immediate that if a precat-semigroup satisfies the congruence conditions then it is a cat-semigroup and so \({\cal C}(S)\) is a category. This helps explain why the congruence conditions often appear in earlier work in this area; however, they are not strictly necessary - the cat-semigroup laws are strictly more general, as follows from our next result.
**Proposition 1.5**: _The class of cat-semigroups is a proper quasivariety._
**Proof.** Let \(S=\{a,g,e,1\}\subseteq I(X)\) where \(X=\{w,x,y,z\}\) and
\[a=\{(w,x),(x,w)\},\ g=\{(w,w),(x,x)\},\ e=\{(w,w),(x,x),(y,y)\},\]
and \(1\) is the identity function on \(X\). Then \(S\) is a subsemigroup of \(I(X)\), with multiplication table as follows:
\[\begin{array}{c|cccc}\cdot&a&g&e&1\\ \hline a&g&a&a&a\\ g&a&g&g&g\\ e&a&g&e&e\\ 1&a&g&e&1\end{array}.\]
Clearly, \(S\) is commutative, and indeed is an inverse subsemigroup of \(I(X)\) (since \(a^{\prime}=a\) with all other elements idempotent), and therefore comes equipped with in-built notions of domain and range. However, it can also be viewed as a cat-semigroup in which \(D(a)=e\), \(R(a)=1\), and \(D(s)=R(s)=s\) for all other \(s\in S\). When checking this, only (CS6) is not obvious. But for this, if \(R(s)=D(t)\) and \(s,t\in D(S)\), then \(s=t\) and so \(D(st)=D(s^{2})=D(s)\), and similarly \(R(st)=R(t)\); if \(R(s)=D(a)\) for some \(s\in S\) then \(R(s)=e\) so \(s=e\), and so \(D(sa)=D(ea)=D(a)=e=D(e)\) and \(R(sa)=R(ea)=R(a)\); and if \(R(a)=D(s)\) for some \(s\in S\) then \(D(s)=1\) so \(s=1\), and so \(D(as)=D(a)\) and \(R(as)=R(a)=1=R(s)\).
Now \(S\) has a semigroup congruence collapsing \(e,1\) and respecting \(D\) and \(R\), as is easily seen. The resulting quotient \(S^{\prime}=\{\{a\},\{g\},\{e,1\}\}\) is a precat-semigroup (since these form a variety), but has \(D(\{a\})=R(\{a\})=\{e,1\}\), yet \(R(\{a\}^{2})=R(\{g\})=\{g\}\neq R(\{a\})\), so (CS6) fails and so \(S^{\prime}\) is not a cat-semigroup. \(\Box\)
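Since \(S\) has only four elements, the verification above can also be carried out mechanically. The following sketch, which assumes the partial bijections are encoded as Python dictionaries with composition read left-to-right as in the proof, recomputes the multiplication table and checks condition (CS6) for \(S\).

```python
# Elements of S as partial bijections on X = {w, x, y, z}, encoded as dicts.
def comp(f, g):
    """Left-to-right composition of partial maps: first f, then g."""
    return {p: g[f[p]] for p in f if f[p] in g}

w, x, y, z = "w", "x", "y", "z"
a   = {w: x, x: w}
g_  = {w: w, x: x}              # named g in the proof
e   = {w: w, x: x, y: y}
one = {w: w, x: x, y: y, z: z}  # the identity function 1 on X

S = {"a": a, "g": g_, "e": e, "1": one}
name = lambda f: next(k for k, v in S.items() if v == f)

# Recompute the multiplication table (it should match the table in the proof).
for s in S:
    print(s, {t: name(comp(S[s], S[t])) for t in S})

# The unary operations D and R from the proof: D(a) = e, R(a) = 1, and all
# other elements are fixed by both operations.
D = {"a": "e", "g": "g", "e": "e", "1": "1"}
R = {"a": "1", "g": "g", "e": "e", "1": "1"}

# Check (CS6): whenever R(s) = D(t), we need D(st) = D(s) and R(st) = R(t).
for s in S:
    for t in S:
        if R[s] == D[t]:
            st = name(comp(S[s], S[t]))
            assert D[st] == D[s] and R[st] == R[t], (s, t)
print("(CS6) holds in S")
```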
By contrast, the class of cat-semigroups satisfying the congruence conditions is indeed a variety since the congruence conditions imply (CS6).
Following [3] and [11], one may enrich the derived category \({\cal C}(S)\) of the cat-semigroup \(S\) by retaining arbitrary products of projections with semigroup elements, so we define \(e|s=es\) and \(s|e=se\) for all \(e\in D(S)\) and \(s\in S\). Note that we use the same notation for both actions, since there is only ambiguity when both arguments are from \(D(S)\), and then the interpretation does not matter! This general approach led the authors of [3] to define a _transcription category_ to be a category \(C\) equipped with left and right actions of the identities of the category \(D(C)\) on the entire category, here denoted \(e|s,s|e\) for all \(e\in D(C)\) and \(s\in C\), and satisfying the following.
* For \(e,f\in D(C)\), \(e|f\) does not depend on which way the action is interpreted.
* For all \(a\in C\), \(D(a)|a=a\) and \(a|R(a)=a\).
* For all \(a\in C\) and \(e,f\in D(C)\), \(e|(f|a)=(e|f)|a\) and \(a|(e|f)=(a|e)|f\).
* For all \(a,b\in C\), if \(a\circ b\) exists then for all \(e\in D(C)\),
* so does \((e|a)\circ R(e|a)|b\), and \(e|(a\circ b)=(e|a)\circ R(e|a)|b\);
* so does \(a|D(b|e)\circ b|e\), and \((a\circ b)|e=a|D(b|e)\circ b|e\).
* For all \(e\in D(C)\) and \(a\in C\),
* For all \(e,f\in D(C)\) and \(a\in C\), \((e|a)|f=e|(a|f)\).
The properties above were labelled (3.1a)-(3.1f) at the beginning of the third section of [3]. It follows from (TC5a) or (TC5b) that \(e|f\in D(C)\) for all \(e,f\in D(C)\); hence the two laws in (TC3) make sense. Likewise, it follows from the laws other than (TC4) that if \(a\circ b\) exists, then so do \((e|a)\circ R(e|a)|b\) and \(a|D(b|e)\circ b|e\), and so (TC4a) does not strictly require the assumption that \((e|a)\circ R(e|a)|b\) exist, and dually for (TC4b). However, in the more general settings considered in what follows, the form stated above is required.
It was shown in [3] that if \(S\) is a localisable semigroup, then \({\cal C}(S)\) is a transcription category when equipped with the biaction as described above, and conversely that, given a transcription category \(C\), one can turn it into a localisable semigroup by retaining \(D,R\) but defining a pseudoproduct via \(s\otimes t=s|D(t)\circ R(s)|t\) for all \(s,t\in C\) (noting that it follows from the laws of localisable semigroups that this pseudoproduct always exists). These constructions are shown to be mutually inverse in [3] and indeed one can obtain an isomorphism of categories with morphisms defined in the natural ways, as follows easily from Theorem 4.8 in [3].
Lawson's definition of Ehresmann biactions on categories given in [11] uses slightly different but equivalent defining laws in place of (TC1)-(TC6), with the law \(e|f=f|e\) for all \(e,f\in D(S)\) added. Lawson in [11] showed that Ehresmann semigroups correspond to categories with Ehresmann biaction, a result which can be viewed as a special case of the main result of [3] that links localisable semigroups to transcription categories.
When a cat-semigroup is equipped with the left and right actions given by \(e|s=es\) and \(s|e=se\) for all \(e\in D(S)\) and \(s\in S\), as defined above, not all of the transcription category laws will be satisfied, but some always are.
**Definition 1.6**: _A category equipped with a left and right action of \(D(C)\) on \(C\), denoted by \(e|s\) and \(s|e\) for all \(s\in C\) and \(e\in D(C)\), is said to be a category with biaction if it satisfies (TC1), (TC2) and (TC6) in the definition of a transcription category._
Transcription categories arise as the derived categories of localisable semigroups. More generally, we have the following easily checked observation.
**Proposition 1.7**: _If \(S\) is a cat-semigroup, then its derived category \({\cal C}(S)\) is a category with biaction if we define \(e|s=es\) and \(s|e=se\) for all \(s\in C\) and \(e\in D(C)\)._
As we shall see, there are cat-semigroups of interest in which (TC4) fails in the derived category with biaction, but in which one of (TC4a) and (TC4b) does hold. Note also that (TC6) allows us to write "\(e|s|f\)" without ambiguity, where \(e,f\in D(C)\) and \(s\in C\), and we sometimes do this in what follows.
Recall that for a localisable semigroup \(S\), the semigroup operation can be recovered from \({\cal C}(S)\) via \(s\otimes t=(s|D(t))\circ(R(s)|t)\) for all \(s,t\in S\). The same process of recovery of the original cat-semigroup from its derived category with biaction can take place as long as \(s\otimes t\) as just defined exists in \({\cal C}(S)\) and correctly calculates \(st\) in \(S\). It is easy to write down laws in \(S\) that are necessary and sufficient for this to happen - they are evidently as follows.
**Definition 1.8**: _A cat-semigroup \(S\) satisfies the strong match-up conditions if for all \(s,t\in S\), \(R(sD(t))=D(R(s)t)\) and \(st=sD(t)R(s)t\)._
As we shall see in Section 2, there are non-localisable cat-semigroups that satisfy the strong match-up conditions, although many interesting cat-semigroups do not satisfy the strong match-up conditions. However, note that in any precat-semigroup, we have
\[st=(sD(t))(R(sD(t))t)=(sD(R(s)t))(R(s)t).\]
Moreover although not holding in all cat-semigroups, the laws \(R(sD(t))=D(R(sD(t))t)\) and \(R(sD(R(s)t))=D(R(s)t)\) do indeed hold in many examples of interest, and if (and only if) they do, we may express \(st\) within \(\mathcal{C}(S)\) in two ways as
\[st=s|D(t)\circ R(s|D(t))|t=s|D(R(s)|t)\circ R(s)|t.\]
**Definition 1.9**: _The precat-semigroup \(S\) satisfies the match-up conditions if it satisfies both_
* _the law_ \(R(sD(t))=D(R(sD(t))t)\)_, the_ left match-up condition_, and_
* _the law_ \(D(R(s)t)=R(sD(R(s)t))\)_, the_ right match-up condition_._
We shall show in the next section that the strong match-up conditions imply the match-up conditions, so our nomenclature is consistent.
In Section 2, we show that the cat-semigroup \(S\) satisfies the match-up conditions if and only if its derived category with biaction satisfies (TC4). Some cat-semigroups satisfy one of the match-up conditions but not the other, and this corresponds (though less precisely) to the derived category with biaction satisfying only one of (TC4a) and (TC4b).
For any class of precat-semigroups, the natural morphism notion is of course semigroup homomorphism that respects \(D\) and \(R\). For the categories with biaction, the natural notion is as follows (generalizing [3]).
**Definition 1.10**: _A biaction functor is a functor \(\psi:C_{1}\to C_{2}\) between categories with biaction that satisfies, for all \(e\in D(C_{1})\) and \(s\in C_{1}\), \(\psi(e|s)=\psi(e)|\psi(s)\) and \(\psi(s|e)=\psi(s)|\psi(e)\)._
In Section 3, our most general ESN type theorems are obtained. We identify the categories with biaction that arise from cat-semigroups equipped with (in order of decreasing generality)
* the left match-up condition only,
* both match-up conditions, and
* the strong match-up conditions.
Each of these classes of cat-semigroups is a variety of precat-semigroups, and for each we obtain an isomorphism between the class of precat-semigroups (viewed as a category) and a particular class of categories with biaction (equipped with biaction functors and hence also viewed as a category).
The description of the categories with biaction arising from Case (i) given in Section 3 includes the condition that the relevant pseudoproduct be associative; although a first-order condition expressible solely in the language of categories with biaction, this is a somewhat
unsatisfactory description. Consequently, a special case is considered in Subsection 3.2, generalizing localisable semigroups so that the projections form a band, but in which not both of the congruence conditions are required. This class includes examples such as the biunary semigroup of binary relations on any set under demonic composition equipped with domain and range operations. The corresponding categories with biaction are simply transcription categories with some laws missing or strictly weakened.
For cat-semigroups satisfying both match-up conditions as in Case (ii), we shall show that the corresponding class of categories with biaction has a description in terms of (TC4) and a variant of it - associativity of the pseudoproduct follows even in the general case. Case (iii) then builds on this case.
## 2 Condition (TC4) and the match-up conditions
### The left and right match-up conditions
As mentioned earlier, there is a very close connection between the match-up conditions on a cat-semigroup and extensibility of its associated derived category with biaction.
**Proposition 2.1**: _For a cat-semigroup \(S\), its derived category with biaction \(\mathcal{C}(S)\) satisfies (TC4) if and only if \(S\) satisfies the match-up conditions._
**Proof.** Suppose \(S\) satisfies the match-up conditions. In particular, it satisfies the left match-up condition, and so if \(R(s)=D(t)\) and \(e\in D(S)\), then
\[D(R(es)t)=D(R(esR(s))t)=D(R(esD(t))t)=R(esD(t))=R(esR(s))=R(es).\]
So \((e|s)\circ(R(e|s)|t)\) exists in \(\mathcal{C}(S)\), and obviously equals \(e|(s\circ t)\), so \(\mathcal{C}(S)\) satisfies (TC4a). By dualising, we infer that the right match-up condition implies (TC4b), and hence that the match-up conditions together imply (TC4).
Conversely, suppose \(\mathcal{C}(S)\) satisfies (TC4); hence, for all \(s,t\in S\) for which \(R(s)=D(t)\) and for all \(e\in D(S)\), \(R(es)=D(R(es)t)\) and dually, \(D(te)=R(sD(te))\). But \(R(D(t))=D(t)\) for any \(t\in S\), so for all \(y\in S\) and \(e\in D(S)\),
1. \(D(R(eD(y))y)=R(eD(y))\), and dually,
2. \(R(yD(R(y)e))=D(R(y)e)\).
By the first cat-semigroup law applied to (1) above,
3. \(D(eD(y))=D(eD(y)R(eD(y))y)=D(eD(y)y)=D(ey)\).
In (2) above, let \(e=D(x)\) to give \(R(yD(R(y)D(x)))=D(R(y)D(x))\), and so from (3), \(R(yD(R(y)x))=D(R(y)x)\), which is the right match-up condition. We dualise to obtain the result. \(\Box\)
It follows from the above proof that the left match-up condition holding on the cat-semigroup \(S\) implies (TC4a) holds on \(\mathcal{C}(S)\), but the converse fails, as we shall soon see.
In Section 3, we shall see that not all categories with biaction satisfying (TC4) arise from cat-semigroups satisfying the match-up conditions, and we characterise those that
do so arise. But first, we turn our attention to cat-semigroups that satisfy only one of the match-up conditions and possibly not the other. For these, the derived category with biaction will of course not satisfy (TC4), but it is again possible to characterise them.
The match-up conditions have alternative descriptions in terms of the congruence conditions and a certain generalization of them.
**Definition 2.2**: _A precat-semigroup satisfies the left (resp. right) weak congruence condition if it obeys the law \(D(xy)=D(xD(R(x)y))\) (resp. \(R(xy)=R(R(xD(y))y)\)); if both, it satisfies the weak congruence conditions._
Clearly, the congruence conditions imply the weak congruence conditions within precat-semigroups. It turns out that the weak congruence conditions are sufficient for (CS6).
**Proposition 2.3**: _If a precat-semigroup satisfies the weak congruence conditions then it is a cat-semigroup. Hence, the class of cat-semigroups satisfying the (weak) congruence conditions is the finitely based variety of precat-semigroups satisfying them._
**Proof.** Suppose \(R(x)=D(y)\) in the precat-semigroup \(S\) satisfying the weak congruence conditions. Then
\[D(xy)=D(xD(R(x)y))=D(xD(D(y)y))=D(xD(y))=D(xR(x))=D(x).\]
Dually, \(R(xy)=R(y)\). \(\Box\)
The main significance of the weak congruence conditions is that they can be used to equationally characterise those cat-semigroups satisfying only one of the match-up conditions.
**Proposition 2.4**: _The class of cat-semigroups satisfying the left match-up condition is the variety of precat-semigroups satisfying the left congruence and right weak congruence conditions plus the law \(R(st)=D(R(st)R(t))\)._
**Proof.** Suppose \(S\) is a cat-semigroup satisfying the left match-up condition. Then for all \(s,t\in S\), \(R(sD(t))=D(R(sD(t))t)\), so letting \(x=sD(t)\) and \(y=R(sD(t))t\), we see that \(xy=sD(t)R(sD(t))t=sD(t)t=st\). But \(R(x)=D(y)\), and so using the first cat-semigroup quasiequation, \(D(xy)=D(x)\), so \(D(st)=D(sD(t))\); using the second cat-semigroup law, \(R(xy)=R(y)\), so \(R(st)=R(R(sD(t))t)\). So \(S\) satisfies the left congruence and right weak congruence conditions. Hence, for all \(s,t\in S\),
\[R(st) = R(stR(t))\] \[= R(stD(R(t)))\] \[= D(R(stD(R(t)))R(t))\mbox{ by the left match-up condition}\] \[= D(R(stR(t))R(t))\] \[= D(R(st)R(t)).\]
Conversely, suppose \(S\) is a precat-semigroup satisfying the left congruence and right weak congruence conditions plus the law \(R(st)=D(R(st)R(t))\) for all \(s,t\in S\). Then it is a cat-semigroup by Proposition 2.3. Moreover,
\[R(sD(t)) = D(R(sD(t))R(D(t)))\mbox{ by the first additional law}\] \[= D(R(sD(t))D(t))\] \[= D(R(sD(t))t)\mbox{ by the left congruence condition.}\]
Hence the left match-up condition is satisfied. \(\Box\)
From Proposition 2.4 and its dual, and the fact that the congruence conditions imply the weak congruence conditions, we obtain the following.
**Corollary 2.5**: _The class of cat-semigroups satisfying the match-up conditions is the variety of precat-semigroups satisfying the congruence conditions and the two laws_
* \(R(st)=D(R(st)R(t))\) _and_
* \(D(st)=R(D(s)D(st))\)_._
It follows from this result that DRC-semigroups as defined in [8] and considered from an ESN type theorem viewpoint in [18] are cat-semigroups satisfying the match-up conditions, since these are precat-semigroups satisfying the congruence conditions and some further laws that obviously imply the laws \(R(st)=D(R(st)R(t))\) and \(D(st)=R(D(s)D(st))\).
We have already seen that in a cat-semigroup, the match-up conditions hold if and only if \({\cal C}(S)\) satisfies (TC4). We can now obtain something similar when only one match-up condition holds.
**Proposition 2.6**: _The cat-semigroup \(S\) satisfies the left match-up condition if and only if it satisfies \(R(st)=D(R(st)R(t))\) and \({\cal C}(S)\) satisfies (TC4a)._
**Proof.** If \(S\) satisfies the left match-up condition, then the argument given in the proof of Proposition 2.1 establishes that \({\cal C}(S)\) satisfies (TC4a).
Conversely, suppose \(S\) satisfies the law \(R(st)=D(R(st)R(t))\) and \({\cal C}(S)\) satisfies (TC4a). Now for \(x,y\in S\), because \(D(y)=R(D(y))\), we get \(R(eD(y))=D(R(eD(y))y)\) for any \(e\in D(S)\) upon using (TC4a), so by the first cat-semigroup law, \(D(eD(y))=D(eD(y)R(eD(y))y)=D(ey)\). So letting \(e=R(xD(y))\), we get
\[D(R(xD(y))y)=D(R(xD(y))D(y))=D(R(xD(y))R(D(y)))=D(R(xD(y)))=R(xD(y)).\]
Hence the left match-up condition is satisfied. \(\Box\)
Let \(S=\{0,e,1\}\) be the three-element semilattice with zero \(0\) and identity \(1\), and define \(D(0)=e\) with \(R(0)=1\), and \(D(x)=x\) for \(x\neq 0\). It is tedious but routine to check that \(S\) is a cat-semigroup with (TC4a) holding on \({\cal C}(S)\), and \(R(0e)=R(0)=1\), yet \(D(R(0e)R(e))=D(1e)=D(e)=e\), so the additional condition cannot be dispensed with in Proposition 2.6.
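The computations in this example are small enough to check mechanically. The following short Python script is an illustrative aid only (the three elements are encoded as strings, an ad hoc choice): it verifies the quasiequations in (CS6), verifies the condition corresponding to (TC4a) on \({\cal C}(S)\), and exhibits the failure of the law \(R(st)=D(R(st)R(t))\) at \(s=0\), \(t=e\).

```python
# Illustrative brute-force check of the three-element example above
# (not part of the paper): S = {0, e, 1} under the semilattice product,
# with D(0) = e, R(0) = 1, and D(x) = R(x) = x otherwise.

S = ["0", "e", "1"]

def mul(x, y):
    if x == "0" or y == "0":
        return "0"
    if x == "1":
        return y
    if y == "1":
        return x
    return "e"          # e.e = e

D = {"0": "e", "e": "e", "1": "1"}
R = {"0": "1", "e": "e", "1": "1"}
projections = ["e", "1"]

# the cat-semigroup quasiequations (CS6)
for x in S:
    for y in S:
        if R[x] == D[y]:
            assert D[mul(x, y)] == D[x] and R[mul(x, y)] == R[y]

# (TC4a) on the derived category: if R(x) = D(y) and g is a projection,
# then (g|x) o (R(g|x)|y) exists, i.e. R(gx) = D(R(gx).y)
for x in S:
    for y in S:
        if R[x] == D[y]:
            for g in projections:
                gx = mul(g, x)
                assert R[gx] == D[mul(R[gx], y)]

# ...but the extra law R(st) = D(R(st)R(t)) of Proposition 2.6 fails at s = 0, t = e
s, t = "0", "e"
print(R[mul(s, t)], D[mul(R[mul(s, t)], R[t])])   # prints: 1 e
```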
### The strong match-up conditions
We first establish that the strong match-up conditions do indeed strengthen the match-up conditions.
**Proposition 2.7**: _Suppose a cat-semigroup \(S\) satisfies the first law in the strong match-up conditions - \(R(sD(t))=D(R(s)t)\). Then \(S\) satisfies the match-up conditions. Hence the strong match-up conditions imply the match-up conditions._
**Proof.** For all \(x,y\in S\), \(R(xD(y))=R((xD(y))D(y))=D(R(xD(y))y)\), upon using the assumed law with \(s=xD(y)\) and \(t=y\). The right match-up condition follows dually. \(\Box\)
**Lemma 2.8**: _Let \(S\) be a cat-semigroup. Then \(S\) satisfies the law \(R(sD(t))=D(R(s)t)\) if and only if it satisfies the congruence conditions and the law \(D(ef)=R(ef)\) for all \(e,f\in D(S)\)._
**Proof.** Suppose \(S\) is a cat-semigroup satisfying the law \(R(sD(t))=D(R(s)t)\). By Proposition 2.7, it satisfies the match-up conditions, and by Corollary 2.5, it satisfies the congruence conditions. Hence, for \(e,f\in D(S)\), \(R(ef)=R(eD(f))=D(R(e)f)=D(ef)\).
Conversely, if \(S\) satisfies the congruence conditions and \(D(ef)=R(ef)\) for all \(e,f\in D(S)\), then for all \(s,t\in S\), \(R(sD(t))=R(R(s)D(t))=D(R(s)D(t))=D(R(s)t)\). \(\Box\)
**Corollary 2.9**: _The class of cat-semigroups \(S\) satisfying the strong match-up conditions is the variety of precat-semigroups satisfying the congruence conditions together with the laws \(D(ef)=R(ef)\) for all \(e,f\in D(S)\) and the law \(sD(t)R(s)t=st\)._
We have already noted that every localisable semigroup satisfies the strong match-up conditions. The converse fails.
**Example 2.10**: _A non-localisable cat-semigroup satisfying the strong match-up conditions._
Let \(S=\{e,f,a\}\subseteq T(X)\), where \(X=\{x,y,z\}\) and \(e=\{(x,x),(y,z),(z,z)\}\), \(f=\{(x,y),(y,y),(z,y)\}\) and \(a=\{(x,z),(y,z),(z,z)\}\). Then \(S\) is a band with multiplication as follows:
\[\begin{array}{c|cccc}\cdot&e&f&a\\ \hline e&e&f&a\\ f&a&f&a\\ a&a&f&a\end{array}.\]
Define unary operations \(D\) and \(R\) on \(S\) by setting \(D(a)=e=R(a)\), with \(D(s)=R(s)=s\) for \(s\neq a\). Then it is routine to check the laws for precat-semigroups are satisfied, and \(D(S)=\{e,f\}\). For the left congruence condition, to check that \(D(st)=D(sD(t))\), only cases where \(t\not\in D(S)\) are non-trivial, that is, where \(t=a\). But then \(D(sa)=D(a)=e\), while \(D(sD(a))=D(se)=e\) also. Similarly, \(R(at)=R(et)\) for each \(t\in S\), so the right congruence condition holds. Also, \(D(ef)=D(f)=f=R(f)=R(ef)\), and \(D(fe)=D(a)=e=R(a)=R(fe)\). Finally, if \(s,t\in D(S)\), \(sD(t)R(s)t=stst=st\) since
\(S\) is a band, while if \(s=a\), \(sD(t)R(s)t=aD(t)et=aet=at=st\), and if \(t=a\), then \(sD(t)R(s)t=seR(s)a=sea=sa=st\). So by Corollary 2.9, \(S\) satisfies the strong match-up conditions. But \(fe=a\) so \(D(S)\) is not a band, and so \(S\) is not localisable.
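For readers who prefer to check such tables mechanically, the following Python sketch (not part of the formal development; the maps are encoded as dictionaries and composed left to right, matching the table above) verifies the strong match-up conditions for all nine pairs and confirms that the projections are not closed under multiplication.

```python
# Illustrative brute-force verification of Example 2.10 (not part of the
# paper): e, f, a are the maps on {x, y, z} listed above, composed left
# to right, so that (s.t)(w) = t(s(w)) as in the multiplication table.

X = ["x", "y", "z"]
maps = {
    "e": {"x": "x", "y": "z", "z": "z"},
    "f": {"x": "y", "y": "y", "z": "y"},
    "a": {"x": "z", "y": "z", "z": "z"},
}

def mul(s, t):
    st = {w: maps[t][maps[s][w]] for w in X}
    return next(name for name, g in maps.items() if g == st)

D = {"e": "e", "f": "f", "a": "e"}
R = {"e": "e", "f": "f", "a": "e"}

# the strong match-up conditions: R(sD(t)) = D(R(s)t) and st = sD(t)R(s)t
for s in maps:
    for t in maps:
        assert R[mul(s, D[t])] == D[mul(R[s], t)]
        assert mul(s, t) == mul(mul(mul(s, D[t]), R[s]), t)

# ...yet the projections {e, f} are not closed under multiplication
print(mul("f", "e"))    # prints: a
```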
If \(S\) is a cat-semigroup satisfying the strong match-up conditions, then (TC3) does not in general make sense in \({\cal C}(S)\) since \(D(S)\) may not be a band, and (TC5) fails in general. However, (TC4) does hold, by Propositions 2.6 and 2.7.
**Proposition 2.11**: _Suppose a cat-semigroup \(S\) satisfies the first law in the strong match-up conditions - \(R(sD(t))=D(R(s)t)\). Then_
\[st=s|D(t)\circ R(s|D(t))|t=s|D(R(s)|t)\circ R(s)|t\mbox{ in }{\cal C}(S).\]
_Moreover, the corresponding arguments in each of these category products coincide if and only if \(S\) is localisable._
**Proof.** Suppose \(S\) satisfies the law \(R(sD(t))=D(R(s)t)\). We saw in Proposition 2.7 that the match-up conditions are satisfied, and so \(st\) may be expressed in \({\cal C}(S)\) in the two ways indicated. If \(S\) is localisable, then it satisfies the congruence conditions and \(D(S)\) is a band, so \(sD(R(s)t)=sD(R(s)D(t))=sR(s)D(t)=sD(t)\), and similarly, \(R(sD(t))t=R(s)t\), so the arguments in the two category products coincide. Conversely, if \(sD(R(s)t)=sD(t)\) and \(R(sD(t))t=R(s)t\) for all \(s,t\), then because \(S\) satisfies the congruence conditions and \(D(ef)=R(ef)\) for all \(e,f\in D(S)\) by Lemma 2.8, we have that for all \(e,f\in D(S)\),
\[D(ef)=D(eef)D(ef)=D(eD(ef))D(ef)=R(eD(ef))D(ef)\]
\[=R(eD(D(ef)))D(ef)=R(e)D(ef)=eD(ef)=eD(R(e)f)=eD(f)=ef,\]
so \(D(S)\) is a band, and so \(S\) is localisable. \(\Box\)
### When \(D(S)\) is a band - left semi-localisable semigroups
As previously noted, localisable semigroups are nothing but precat-semigroups in which \(D(S)\) is a band and the congruence conditions hold. We now generalize this concept.
**Definition 2.12**: _We say a precat-semigroup \(S\) is left semi-localisable if it satisfies the left congruence and right weak congruence conditions and \(D(S)\) is a band; we define right semi-localisability dually._
It follows that a precat-semigroup \(S\) is localisable if and only if it is both left semi-localisable and right semi-localisable. (Note that the term "left localisable" has already been used, in [3], for a unary semigroup satisfying only the defining laws for localisable semigroups that involve \(D\).)
One-sided semi-localisability is natural because of the following.
**Proposition 2.13**: _The class of cat-semigroups satisfying the left match-up condition and in which the projections form a band is the variety of left semi-localisable precat-semigroups._
**Proof.** Let \(S\) be a cat-semigroup in which \(D(S)\) is a band. If \(S\) is left semi-localisable, then for all \(s\in S\) and \(e\in D(S)\), \(R(se)=R(R(se)e)=R(se)e\) since \(D(S)\) is a band. Hence, for \(s,t\in S\),
\[D(R(sD(t))t)=D(R(sD(t))D(t))=D(R(sD(t)))=R(sD(t)),\]
so the left match-up condition holds. The converse follows from Proposition 2.4. \(\Box\)
From the above and Proposition 2.7, we obtain the following.
**Corollary 2.14**: _For a cat-semigroup \(S\), the following are equivalent:_
* \(S\) _is localisable;_
* \(S\) _satisfies the strong match-up conditions and_ \(D(S)\) _is a band;_
* \(S\) _satisfies the match-up conditions and_ \(D(S)\) _is a band._
There are natural examples of non-localisable, left semi-localisable semigroups. One such is the semigroup \(Rel^{d}(X)\) of binary relations on the set \(X\), equipped with domain, range and demonic relational composition, to be contrasted with the Ehresmann (hence localisable) semigroup \(Rel(X)\) having the same underlying set but equipped with domain, range and the usual or "angelic" composition. Roughly speaking, angelic composition is important when modelling partial correctness of "non-deterministic" programs, and demonic for total correctness (which considers program termination); see [2] and [7] for more details.
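To make the contrast concrete, here is an illustrative Python sketch (not taken from [2] or [7]; the definitions of demonic composition and of the domain and range projections used below are the standard ones, encoded ad hoc as sets of pairs) which exhibits, on a two-element set, a failure of the right congruence condition \(R(st)=R(R(s)t)\) in \(Rel^{d}(X)\), so that \(Rel^{d}(X)\) is indeed not localisable.

```python
# Illustrative sketch of Rel^d(X) on X = {0, 1} (not part of the paper):
# relations are sets of pairs, composed demonically and read left to right,
# with D and R the usual domain and range projections.

def dom(r):
    return {x for (x, _) in r}

def D(r):
    return {(x, x) for x in dom(r)}

def R(r):
    return {(y, y) for (_, y) in r}

def demonic(r, s):
    # (x, z) survives only if some r-image of x is mapped to z by s AND
    # every r-image of x lies in the domain of s (total correctness)
    return {(x, z) for (x, y) in r for (y2, z) in s
            if y == y2 and all(w in dom(s) for (x2, w) in r if x2 == x)}

s = {(0, 0), (0, 1)}    # 0 has an image (namely 1) outside the domain of t
t = {(0, 0)}

print(R(demonic(s, t)))          # prints: set()
print(R(demonic(R(s), t)))       # prints: {(0, 0)}
# so R(st) != R(R(s)t): the right congruence condition fails, and hence
# Rel^d(X) is not localisable, although it is left semi-localisable.
```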
The example \(Rel^{d}(X)\) leads to consideration of the following class of precat-semigroups.
**Definition 2.15**: _We say a precat-semigroup \(S\) is D-ample if it satisfies \(sD(t)=D(st)s\) for all \(s,t\in S\)._
The left (and right) ample laws have been considered by many authors in a range of settings, but originated in the work of Fountain [4], where they were defined slightly differently but in a way equivalent to that given here if the left congruence condition is assumed.
**Proposition 2.16**: _If \(S\) is a D-ample precat-semigroup, then \(D(S)\) is a band._
**Proof.** For all \(e,f\in D(S)\), \(ef=eD(f)=D(ef)e=D(ef)D(e)=D(D(ef)e)D(ef)=D(eD(f))D(ef)=D(ef)D(ef)=D(ef)\). \(\Box\)
Note that the above proof only requires the precat-semigroup properties of \(D\), not \(R\). In [5], a _left restriction semigroup_ is defined to be a unary semigroup with unary operation \(D\) satisfying the following laws:
* \(D(x)x=x\);
* \(D(x)D(y)=D(y)D(x)\);
* \(D(D(x)y)=D(x)D(y)\);
* \(xD(y)=D(xy)x\).
This is the axiomatisation appearing in [5], although other (equivalent) formulations have been used. Other laws follow easily, such as the law \(D(xy)D(x)=D(xy)\), and all of the precat-semigroup laws involving only \(D\), the left congruence condition \(D(st)=D(sD(t))\), along with the fact that \(D(S)\) is a band (from the proof of Proposition 2.16) and hence a semilattice.
Here we are particularly concerned with left restriction semigroups equipped with a sufficiently well-behaved range operation.
**Definition 2.17**: _A left restriction semigroup with range is a precat-semigroup which is a left restriction semigroup and where \(R\) satisfies \(R(xy)R(y)=R(xy)\)._
We note that the semigroup \(PT(X)\) of partial functions on \(X\) is a left restriction semigroup with range, but it satisfies the right congruence condition and is therefore localisable. A further example is \(Rel^{d}(X)\), which does not satisfy the right congruence condition (as simple examples show) and hence is not localisable.
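By way of contrast with \(Rel^{d}(X)\), the following small Python check (again purely illustrative, with partial functions encoded as dictionaries and composed left to right) confirms over all partial functions on a two-element set that \(PT(X)\) satisfies the right congruence condition and the strong match-up law \(st=sD(t)R(s)t\).

```python
# Illustrative check for PT(X) with X = {0, 1} (not part of the paper):
# partial functions are dictionaries, composed left to right, with D and R
# the identity maps on the domain and on the image respectively.

from itertools import product

X = [0, 1]

def partial_functions():
    for values in product(X + [None], repeat=len(X)):
        yield {x: v for x, v in zip(X, values) if v is not None}

def comp(s, t):
    return {x: t[s[x]] for x in s if s[x] in t}

def D(s):
    return {x: x for x in s}

def R(s):
    return {v: v for v in s.values()}

for s in partial_functions():
    for t in partial_functions():
        assert R(comp(s, t)) == R(comp(R(s), t))                 # right congruence
        assert comp(s, t) == comp(comp(comp(s, D(t)), R(s)), t)  # strong match-up

print("all checks passed")
```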
Under the usual semilattice order on \(D(S)\) given by \(e\leq f\) when \(e=ef\), it can be shown that \(R(s)\) is the smallest \(e\in D(S)\) such that \(se=s\) (since if \(se=s\) then \(R(s)e=R(se)e=R(se)R(e)=R(se)\)); we use this in the proof below.
**Theorem 2.18**: _A left restriction semigroup with range is nothing but a left semi-localisable semigroup which is D-ample and in which the band \(D(S)\) is a semilattice._
**Proof.** Suppose \(S\) is a left restriction semigroup with range. We must show \(S\) is left semi-localisable. But \(D(S)\) is a band and the left congruence condition is satisfied, so it remains to show that the right weak congruence condition holds in \(S\).
First note that for all \(s,t\in S\), \(stR(R(s)t)=s(R(s)t)R(R(s)t)=sR(s)t=st\), so \(R(st)\leq R(R(s)t)\) (since for all \(x\in S\), \(R(x)\) is the smallest \(e\in D(S)\) for which \(xe=x\)).
Now for \(s,t\in S\), we have that \(sD(t)=D(st)s=D(stR(st))s=sD(tR(st))\), and so \(sD(t)=sD(t)D(tR(st))\) (since \(D(tR(st))\leq D(t)\)), from which it follows that \(R(sD(t))\leq D(tR(st))\). Hence,
\[R(sD(t))tR(st)=R(sD(t))D(tR(st))t=R(sD(t))t,\]
so again we must have that \(R(R(sD(t))t)\leq R(st)=R((sD(t))t)\leq R(R(sD(t))t)\) from what was shown initially, so all are equal and so \(R(st)=R(R(sD(t))t)\).
Conversely, suppose \(S\) is left semi-localisable, D-ample, and the band \(D(S)\) is a semilattice. Then it is a left restriction semigroup with respect to multiplication and \(D\), the third law following from the left congruence condition and the fact that \(D(S)\) is a band. Finally, for all \(x,y\in S\), we have that
\[R(xy) = R((xy)R(y))\] \[= R(R(xyD(R(y)))R(y))\mbox{ by the right weak congruence condition}\] \[= R(R(xyR(y))R(y))\] \[= R(R(xy)R(y))\] \[= R(xy)R(y)\mbox{ since }D(S)\mbox{ is a band,}\]
as required. \(\Box\)
## 3 The ESN type theorems
We now determine which categories with biaction arise from cat-semigroups satisfying (in order of decreasing generality): (i) the left match-up condition, (ii) both match-up conditions, and (iii) the strong match-up conditions. We obtain category isomorphisms between the relevant category of cat-semigroups on the one hand and the associated category of categories with biaction on the other in these general settings.
If \(S\) is any of the above three types of cat-semigroups, at least the left match-up condition holds, and so it is clear that the derived category with biaction \(\mathcal{C}(S)\) satisfies the following property:
* (LMU) \(R(s|D(t))=D(R(s|D(t))|t)\) for all \(s,t\).
There is an obvious dual condition:
* (RMU) \(D(R(s)|t)=R(s|D(R(s)|t))\) for all \(s,t\).
**Definition 3.1**: _If a category with biaction \(C\) satisfies (LMU) then we define the left pseudoproduct \(\otimes_{l}\) on \(C\) by setting, for all \(s,t\in C\),_
\[s\otimes_{l}t=s|D(t)\circ R(s|D(t))|t,\]
_and we let \(\mathcal{S}(C)\) be the algebra \((C,\otimes_{l},D,R)\), the left extension of \(C\). The right pseudoproduct is defined dually, if \(C\) satisfies (RMU)._
Clearly, if \(S\) is a cat-semigroup satisfying the left match-up condition, then \(\otimes_{l}\) in \(\mathcal{C}(S)\) agrees with the semigroup product on \(S\).
Regarding Case (i), we obtain relatively simple axioms for the categories with biaction \(C\) satisfying (LMU), necessary and sufficient to ensure that \(\mathcal{S}(C)\) satisfies all the defining laws for cat-semigroups satisfying the left match-up condition aside perhaps from associativity, which must be imposed as a further condition. We then obtain a category isomorphism. Following this we consider the special case in which the projections form a band, since these admit a more satisfying description of the corresponding categories with biaction.
In light of Proposition 2.1, one might hope that the categories with biaction corresponding to cat-semigroups satisfying both match-up conditions as in Case (ii) above might be those satisfying (TC4), but this turns out to be too big a class, associativity of the pseudoproduct again being the snag. However, we find a relatively simple additional condition that yields the hoped-for correspondence. Case (iii) follows this same path.
### Categories and the left match-up condition
We begin by generalizing the notion of extensibility for categories with biaction.
**Definition 3.2**: _Let \(C\) be a category with biaction. We say it satisfies condition (TC4L) if it satisfies (TC4a), and_
* (TC4b\({}^{\prime}\)) _for all_ \(a,b\in C\)_, if_ \(a\circ b\) _exists then for all_ \(e\in D(C)\)_, so does_ \((a|D(b|e))\circ R(a|D(b|e))|(b|e)\)_, and it equals_ \((a\circ b)|e\)_._
_We define condition (TC4R) dually in terms of (TC4a\({}^{\prime}\)) (defined dually to (TC4b\({}^{\prime}\)) in the obvious way) and (TC4b)._
Condition (LMU) may be viewed as a replacement for (TC5) in the axioms of transcription categories, with (TC4b\({}^{\prime}\)) replacing (TC4b). A nine-element counterexample was found using _Mace4_[12], showing that (LMU) is not redundant in the presence of the other laws.
**Proposition 3.3**: _If \(S\) is a cat-semigroup satisfying the left match-up condition, then \({\cal C}(S)\) satisfies (LMU) and (TC4L), and \(\otimes_{l}\) coincides with the product on \(S\)._
**Proof.** From Proposition 2.6, \(C={\cal C}(S)\) is a category with biaction satisfying (TC4a), and (LMU) follows from the left match-up condition for \(S\) upon applying the biaction definition. For (TC4b\({}^{\prime}\)), first note that if \(a\circ b\) exists then \(R(a)=D(b)\) and so for all \(e\in D(C)\), \(R(a|D(b|e))=D(R(a|D(b|e))|(b|e))\) by (LMU), and so the category product \((a|D(b|e))\circ R(a|D(b|e))|(b|e)\) exists and must equal \(aD(be)R(aD(be))be=aD(be)be=abe=(a\circ b)|e\). That \(s\otimes_{l}t=st\) is immediate. \(\Box\)
Next is a useful result for what follows.
**Lemma 3.4**: _If \(C\) is a category with biaction and (TC4a) holds, then for all \(s\in C\) and \(e\in D(C)\), \(D(e|s)=D(e|D(s))\)._
**Proof.** From (TC4a), \(e|s=e|(D(s)\circ s)=e|D(s)\circ R(e|D(s))|s\), so \(D(e|s)=D(e|D(s))\). \(\Box\)
**Corollary 3.5**: _If \(C\) is a category with biaction satisfying (TC4), then for all \(s\in C\) and \(e\in D(C)\), \(D(e|s)=D(e|D(s))\) and \(R(s|e)=R(R(s)|e)\)._
Obviously, if a category with biaction satisfies (TC4L) and (TC4R), then it satisfies (TC4), but the converse holds also.
**Proposition 3.6**: _A category with biaction satisfies (TC4) if and only if it satisfies (TC4L) and (TC4R), and in this case it satisfies both (LMU) and (RMU) as well._
**Proof.** Let \(C\) be a category with biaction satisfying (TC4). From (TC4a), for \(s,t\in C\), we have that \(R(s)|(D(t)\circ t)=R(s)|D(t)\circ R(R(s)|D(t))|t\) exists and so \(R(R(s)|D(t))=D(R(R(s)|D(t))|t)\), so by Corollary 3.5, \(R(s|D(t))=D(R(s|D(t))|t)\), and so (LMU) holds. Dually, (RMU) holds also.
It remains to show that (TC4b\({}^{\prime}\)) holds. But if \(R(a)=D(b)\) and \(e\in D(C)\), using (RMU) we have
\[R(a|D(R(a)|(b|e)))=D(R(a)|(b|e))=D(D(b)|(b|e))=D((D(b)|b)|e)=D(b|e).\]
Hence, \(R(a|D(R(a)|(b|e)))|(b|e)=D(b|e)|(b|e)=b|e\). So from (TC4b),
\[(a\circ b)|e=a|D(b|e)\circ b|e=a|D(b|e)\circ R(a|D(R(a)|(b|e)))|(b|e).\]
Dually, (TC4a\({}^{\prime}\)) also holds.
The converse has already been noted. \(\Box\)
It follows that each of (TC4L) and (TC4R) generalizes (TC4). Shortly, we prove a kind of converse to Proposition 3.3, but first we need some preliminary results pertaining to categories satisfying the conditions of that result.
**Lemma 3.7**: _If \(C\) is a category with biaction satisfying (LMU) and (TC4L), then for all \(x\in C\) and \(e\in D(C)\), \(x|e=(x|e)|e\)._
**Proof.** For \(x\in C\) and \(e\in D(C)\), and letting \(A=R(x|D(R(x)|e))|R(x)\), we shall prove the following in turn:
1. \(x|e=x|D(R(x)|e)\circ A|e\);
2. \(e|e=e\);
3. \(A|e=(A|e)|e\);
4. \(R(x|e)=R(A|e)\);
5. \((x|e)|e=(x|e)\circ R(x|e)|e\).
First, using (TC4b\({}^{\prime}\)) we have that
\[x|e=(x\circ R(x))|e=x|D(R(x)|e)\circ R(x|D(R(x)|e))|(R(x)|e),\]
and then applying (TC6) to the second term above gives (1). For (2), \(e|e=e|R(e)=e\) by (TC2). For (3), letting \(s=x|D(R(x)|e)\), we have that \(A=R(s)|R(x)\), and so, using (TC6) and (2) gives
\[(A|e)|e = ((R(s)|R(x))|e)|e\] \[= (R(s)|(R(x)|e))|e\] \[= R(s)|((R(x)|e)|e)\] \[= R(s)|(R(x)|(e|e))\] \[= R(s)|(R(x)|e)\] \[= (R(s)|R(x))|e\] \[= A|e.\]
Applying (Cat4) to (1) gives (4). Finally, for \(x\in C\) and \(e\in D(C)\),
\[(x|e)|e = (x|e)|D(R(x|e)|e)\circ R((x|e)|D(R(x|e)|e))|(R(x|e)|e)\mbox{ by (TC4b\({}^{\prime}\))}\] \[= (x|e)|R(x|e)\circ R((x|e)|R(x|e))|(R(x|e)|e)\mbox{ by (LMU)}\] \[= (x|e)\circ R(x|e)|(R(x|e)|e)\mbox{ by (TC2)}\] \[= (x|e)\circ(R(x|e)|R(x|e))|e\mbox{ by (TC6)}\] \[= (x|e)\circ R(x|e)|e\mbox{ by (2)},\]
establishing (5).
Hence, for \(x\in C\) and \(e\in D(C)\),
\[(x|e)|e = (x|e)\circ R(x|e)|e\mbox{ by (5)}\] \[= x|D(R(x)|e)\circ A|e\circ R(A|e)|e\mbox{ by (1) and (4)}\] \[= x|D(R(x)|e)\circ A|e\mbox{ by (3), and (5) with $x$ replaced by $A$}\] \[= x|e\mbox{ by (1)},\]
as required. \(\Box\)
**Proposition 3.8**: _Let \(C\) be a category with biaction satisfying (LMU) and (TC4L). Then \({\cal S}(C)\) satisfies the following laws: for all \(x,y\in C\) and \(e\in D(C)\), \(e\otimes_{l}x=e|x\), \(x\otimes_{l}e=x|e\), \(x\otimes_{l}y=x\circ y\) whenever \(R(x)=D(y)\), laws (CS1) to (CS6) for cat-semigroups, and the left match-up law \(R(s\otimes_{l}D(t))=D(R(s\otimes_{l}D(t))\otimes_{l}t)\)._
**Proof.** For \(x\in C\) and \(e\in D(C)\), using (TC4a) we obtain
\[e|x=e|(D(x)\circ x)=e|D(x)\circ R(e|D(x))|x=e\otimes_{l}x.\]
Again, for \(x\in C\) and \(e\in D(C)\),
\[x|e = (x|e)|e\mbox{ by Lemma 3.7}\] \[= (x|e\circ R(x|e))|e\] \[= A\circ R(A)|(R(x|e)|e)\mbox{ by (TC4b${}^{\prime}$)},\]
where \(A=(x|e)|D(R(x|e)|e)\), which equals \((x|e)|R(x|e)=x|e\) by (LMU), and so
\[x|e = x|e\circ R(x|e)|(R(x|e)|e)\] \[= x|e\circ(R(x|e)|R(x|e))|e\mbox{ by (TC6)}\] \[= x|e\circ R(x|e)|e\mbox{, by (2) in the proof of Lemma 3.7}\] \[= (x|D(e))\circ R(x|D(e))|e\] \[= x\otimes_{l}e.\]
If \(R(x)=D(y)\), then
\[x\otimes_{l}y=x|D(y)\circ R(x|D(y))|y=x|R(x)\circ R(x|R(x))|y=x\circ R(x)|y=x \circ D(y)|y=x\circ y.\]
We turn to the precat-semigroup laws (CS1)-(CS5). By (2) in the proof of Lemma 3.7, for \(e\in D(C)\), we have
\[e\otimes_{l}e=e|D(e)\circ R(e|D(e))|e=e|R(e)\circ R(e|e)|e=e\circ R(e)|e=R(e)| e=e|e=e,\]
establishing (CS1). Now (CS2) and (CS3) are immediate, and for (CS4) and (CS5), observe that for all \(x\in C\), \(D(x)\otimes_{l}x=D(x)|x=x\), and similarly \(x\otimes_{l}R(x)=x\).
For the cat-semigroup laws in (CS6), if \(x,y\in C\) are such that \(R(x)=D(y)\) then
\[D(x\otimes_{l}y)=D(x|D(y))=D(x|R(x))=D(x),\]
and
\[R(x\otimes_{l}y)=R(R(x|D(y))|y)=R(R(x|R(x))|y)=R(R(x)|y)=R(D(y)|y)=R(y).\]
From (LMU) and what we have already shown, we have that
\[R(xD(y))=R(x|D(y))=D(R(x|D(y))|y)=D(R(xD(y))y),\]
proving the left match-up condition. \(\Box\)
**Corollary 3.9**: _If \(C\) is a category with biaction satisfying (LMU) and (TC4L), and \(\otimes_{l}\) is associative, then \(\mathcal{S}(C)\) is a cat-semigroup satisfying the left match-up condition._
Categories with biaction satisfying (LMU) and (TC4L) do not always yield cat-semigroups.
**Example 3.10**: _A category with biaction satisfying (LMU) and (TC4L) in which \(\otimes_{l}\) is not associative._
Let \(C=\{s,e,f\}\) be the category in which \(D(C)=\{e,f\}\) and \(D(s)=R(s)=f\), with \(s\circ s=f\). (This fully specifies \(C\) as a category.) Define a biaction on \(C\) as follows: \(s|e=s|f=s\), \(e|s=e|f=e\), \(f|s=s\), \(f|e=f\), \(e|e=e\) and \(f|f=f\). It is a tedious but routine exercise to verify that \(C\) is a category with biaction and satisfies (LMU) and (TC4L). But \((s\otimes_{l}e)\otimes_{l}s=(s|e)\otimes_{l}s=s\otimes_{l}s=s\circ s=f\), whereas \(s\otimes_{l}(e\otimes_{l}s)=s\otimes_{l}e=s|e=s\), so \(\otimes_{l}\) is not associative.
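The verification just described can also be delegated to a short brute-force computation; the following Python sketch (illustrative only, with the category and biaction tables entered by hand as dictionaries) computes \(\otimes_{l}\) directly from its definition and lists the triples on which associativity fails.

```python
# Illustrative brute-force check of Example 3.10 (not part of the paper):
# the category product and the biaction are entered as tables, and the
# left pseudoproduct is computed directly from its definition.

Dmap = {"s": "f", "e": "e", "f": "f"}
Rmap = {"s": "f", "e": "e", "f": "f"}

# category product, defined only when R(x) = D(y)
comp = {("s", "s"): "f", ("s", "f"): "s", ("f", "s"): "s",
        ("e", "e"): "e", ("f", "f"): "f"}

# the biaction: the key (u, v) stands for the written expression u|v
act = {("e", "s"): "e", ("e", "f"): "e", ("e", "e"): "e",
       ("f", "s"): "s", ("f", "e"): "f", ("f", "f"): "f",
       ("s", "e"): "s", ("s", "f"): "s"}

def otimes_l(x, y):
    # x (x)_l y = x|D(y) o R(x|D(y))|y ; total here because (LMU) holds
    a = act[(x, Dmap[y])]
    return comp[(a, act[(Rmap[a], y)])]

print(otimes_l(otimes_l("s", "e"), "s"),
      otimes_l("s", otimes_l("e", "s")))         # prints: f s

bad = [(x, y, z) for x in "sef" for y in "sef" for z in "sef"
       if otimes_l(otimes_l(x, y), z) != otimes_l(x, otimes_l(y, z))]
print(bad)    # the only failing triples: [('s', 'e', 's'), ('f', 'e', 's')]
```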
Recall the definition of a biaction functor as in Definition 1.10.
**Theorem 3.11**: _The category of cat-semigroups satisfying the left match-up condition is isomorphic to the category of categories with biaction satisfying (LMU), (TC4L) and associativity of \(\otimes_{l}\), with morphisms being biaction functors._
**Proof.** Let \(S\) be a cat-semigroup satisfying the left match-up condition. Then by Proposition 3.3, \(\mathcal{C}(S)\) satisfies the stated conditions, and \(\mathcal{SC}(S)=S\).
Conversely, if \(C\) is a category with biaction satisfying the stated conditions, then by Proposition 3.8, \(\mathcal{S}(C)\) is a cat-semigroup satisfying the left match-up condition, in which \(s\circ t=st\) whenever \(R(s)=D(t)\), \(e|s=e\otimes_{l}s\) and \(s|e=s\otimes_{l}e\) for all \(s\in C\) and \(e\in D(C)\). These agree with the definitions of the category product and biaction operations in \(\mathcal{CS}(C)\), which therefore equals \(C\).
We move to the functorial properties. Suppose \(f:S_{1}\to S_{2}\) is a semigroup homomorphism respecting \(D,R\). It is immediate that \(f:\mathcal{C}(S_{1})\rightarrow\mathcal{C}(S_{2})\) (defined as for \(f\) on the underlying sets, so we use the same name for it) is a biaction functor, since the category product and the biactions are simply special cases of the semigroup product. Conversely, if \(F:C_{1}\to C_{2}\) is a biaction functor and \(C_{1},C_{2}\) satisfy (LMU), (TC4L) and
associativity of \(\otimes_{l}\), then for \(s,t\in{\cal S}(C_{1})\), we have \(F(D(t))=D(F(t))\) and similarly for \(R\), and then
\[F(s\otimes_{l}t) = F(s|D(t)\circ R(s|D(t))|t)\] \[= F(s|D(t))\circ F(R(s|D(t))|t)\] \[= F(s)|F(D(t))\circ F(R(s|D(t)))|F(t)\] \[= F(s)|D(F(t))\circ R(F(s)|D(F(t)))|F(t)\] \[= F(s)\otimes_{l}F(t),\]
so \(F\) determines a \(D\)- and \(R\)-respecting homomorphism \({\cal S}(C_{1})\to{\cal S}(C_{2})\). \(\Box\)
The above isomorphism restricts to the one between the category of localisable semigroups and transcription categories implicit in [3]. Hence it also restricts to one between the category of Ehresmann semigroups and categories with Ehresmann biaction as in [11].
The description of the categories with biaction corresponding to cat-semigroups satisfying the left match-up condition given in Theorem 3.11 is slightly unsatisfactory, since it simply imposes the requirement of associativity of the left pseudoproduct; on the other hand, this does at least yield a finite first-order axiomatisation expressible in the language of categories with biaction. Of course, we prefer simpler and more natural laws not specifically referencing the left pseudoproduct yet which force its associativity. This is possible in an important special case to which we return in the final section.
As far as we know, in all previous ESN type theorems involving biunary semigroups, both the left and right congruence conditions have been assumed. Even in settings where biunary semigroups are replaced by semigroups with distinguished idempotents (with multiple candidates for \(D(s)\) or \(R(s)\) for each \(s\)), such as that considered in [6] and [15], versions of the left and right congruence conditions are assumed. But the above result applies to cat-semigroups that need not satisfy both congruence conditions, such as \(Rel^{d}(X)\). In the final section, we give a version of Theorem 3.11 applying to left semi-localisable semigroups such as \(Rel^{d}(X)\), in which the corresponding categories with biaction may be defined without explicitly assuming associativity of the pseudoproduct.
### Special case: left semi-localisable semigroups
Before turning to the categories corresponding to cat-semigroups satisfying the match-up conditions, we consider in some detail a special case of the correspondence presented in Subsection 3.1, applying to left semi-localisable semigroups. It turns out that the corresponding categories with biaction have an easy description in this case: they satisfy the transcription category laws but with (TC4b) replaced by the more general (TC4b\({}^{\prime}\)), and (TC5b) dropped entirely.
**Theorem 3.12**: _For left semi-localisable semigroups, the corresponding categories with biaction are those satisfying (TC3), (TC4L) and (TC5a)._
**Proof.** Suppose \(S\) is left semi-localisable. By Proposition 3.3, (TC4L) holds. That (TC3) and (TC5a) hold in \({\cal C}(S)\) is clear, because \(D(S)\) is a band in \(S\), the left congruence condition is satisfied, and the biaction in \({\cal C}(S)\) is semigroup multiplication.
Conversely, suppose \(C\) is a category with biaction satisfying (TC3), (TC4L) and (TC5a); in particular then, \(D(e|f)=e|f\) for all \(e,f\in D(C)\). We first show that (LMU) follows.
For \(x\in C\) and \(e\in D(C)\), by (TC4b\({}^{\prime}\)), we have that
\[x|e=(x\circ R(x))|e=x|(R(x)|e)\circ R(x|(R(x)|e))|(R(x)|e),\]
and so because \(R(x|(R(x)|e))|(R(x)|e)\in D(C)\), we obtain \(x|e=x|(R(x)|e)\) and since \(e|e=D(e)|e=e\),
\[R(x|e)=R(x|(R(x)|e))|(R(x)|e)=R(x|e)|R(R(x)|e)=R(x|e)|(R(x)|e)=(R(x|e)|R(x))|e\]
\[=(R(x|e)|R(x))|(e|e)=((R(x|e)|R(x))|e)|e=(R(x|e)|(R(x)|e))|e=R(x|e)|e,\]
so by Lemma 3.4, for all \(x,y\in C\), we have
\[D(R(x|D(y))|y)=D(R(x|D(y))|D(y))=D(R(x|D(y)))=R(x|D(y)),\]
establishing (LMU).
Now suppose \(s,t,u\in C\). Then
\[(s\otimes_{l}t)\otimes_{l}u=((s\otimes_{l}t)|D(u))\circ R((s\otimes_{l}t)|D(u ))|u.\]
But
\[(s\otimes_{l}t)|D(u) = (s|D(t)\circ R(s|D(t))|t)|D(u)\] \[= (s|D(t))|(D(R(s|D(t))|t|D(u)))\circ R((s|D(t))|(D(R(s|D(t))|t|D(u))))|(R(s|D(t))|t|D(u))\] \[= A\circ R(A)|(R(s|D(t))|t|D(u))\]
where
\[A = (s|D(t))|(D(R(s|D(t))|t|D(u)))\] \[= (s|D(t))|(D(R(s|D(t))|D(t|D(u))))\] \[= (s|D(t))|(R(s|D(t))|D(t|D(u)))\] \[= (s|D(t))|D(t|D(u))\qquad\qquad(*)\] \[= s|(D(t)|D(t|D(u)))\] \[= s|D(D(t)|D(t|D(u)))\] \[= s|D(D(t)|(t|D(u)))\] \[= s|D((D(t)|t)|D(u))\] \[= s|D(t|D(u)),\]
and so
\[R(A)|(R(s|D(t))|t|D(u))\] \[= R(s|D(t|D(u)))|(R(s|D(t))|t|D(u))\] \[= R(s|D(t)|D(t|D(u)))|(R(s|D(t))|t|D(u))\mbox{ by }(*)\mbox{ above}\] \[= R((s|D(t)|R(s|D(t)))|D(t|D(u)))|(R(s|D(t))|(D(t|D(u))|(t|D(u))))\] \[= R((s|D(t)|R(s|D(t)))|D(t|D(u)))|t|D(u)\] \[\mbox{ since }e|f\in D(C)\mbox{ and }R(x|e)|e=R(x|e)\mbox{ for all }x\in C\mbox{ and }e,f\in D(C)\mbox{ as above}\] \[= R((s|D(t))|D(t|D(u)))|t|D(u)\] \[= R(A)|t|D(u)\mbox{ by }(*)\mbox{ above}\] \[= R(s|D(t|D(u)))|t|D(u).\]
Hence \((s\otimes_{l}t)|D(u)=A\circ R(A)|(R(s|D(t))|t|D(u))=s|D(t|D(u))\circ R(s|D(t|D (u)))|t|D(u)\), and so
\[R((s\otimes_{l}t)|D(u))|u = R(R(s|D(t|D(u)))|t|D(u))|u\] \[= R(R(s|D(t|D(u)))|t|D(u)|R(t|D(u)))|u\] \[= R(R(s|D(t|D(u)))|t|D(u)|R(t|D(u)))|R(t|D(u))|u\] \[= R(R(s|D(t|D(u)))|t|D(u))|R(t|D(u))|u.\]
Pulling all this together,
\[(s\otimes_{l}t)\otimes_{l}u = ((s\otimes_{l}t)|D(u))\circ R((s\otimes_{l}t)|D(u))|u\] \[= s|D(t|D(u))\circ R(s|D(t|D(u)))|t|D(u)\circ R(R(s|D(t|D(u)))|t|D(u))|R(t|D(u))|u.\]
On the other hand,
\[s\otimes_{l}(t\otimes_{l}u) = (s|D(t\otimes_{l}u))\circ(R(s|D(t\otimes_{l}u))|(t\otimes_{l}u))\] \[= s|D(t|D(u))\circ R(s|D(t\otimes_{l}u))|((t|D(u))\circ(R(t|D(u))|u))\] \[= s|D(t|D(u))\circ R(s|D(t|D(u)))|t|D(u)\circ R(R(s|D(t|D(u)))|t|D(u))|R(t|D(u))|u\] \[= (s\otimes_{l}t)\otimes_{l}u,\]
and \(\otimes_{l}\) is associative. Hence by Corollary 3.9, \({\cal S}(C)\) is a cat-semigroup satisfying the left match-up condition. Then because \(e\otimes_{l}f=e|f\) for all \(e,f\in D(C)\), it is immediate from the law \(D(e|f)=e|f\) for all \(e,f\in D(C)\) that \(D({\cal S}(C))\) is a band. \(\Box\)
Again, the category isomorphism of Theorem 3.11 restricts to one between the classes mentioned in Theorem 3.12.
**Corollary 3.13**: _For left restriction semigroups with range, the corresponding categories with biaction are those as in Theorem 3.12 additionally satisfying \(e|f=f|e\) for all \(e,f\in D(C)\), and \(s|e=D(s|e)|s\) for all \(s\in C\) and \(e\in D(C)\)._
**Proof.** If \(S\) is a left restriction semigroup with range, the fact that \({\cal C}(S)\) satisfies the stated conditions is immediate, since the biaction is semigroup multiplication in \(S\). Conversely, if
\(C\) is a category with biaction in which the two given laws plus those in Theorem 3.12 hold, then \(e\otimes_{l}f=e|f=f|e=f\otimes_{l}e\), so \(D({\cal S}(C))\) is a commutative band, and for all \(s,t\in C\), \(s\otimes_{l}D(t)=s|D(t)=D(s|D(t))|s=D(s\otimes_{l}D(t))\otimes_{l}s=D(s\otimes_ {l}t)\otimes_{l}s\) by the left congruence condition in \({\cal S}(C)\), so it is D-ample. \(\Box\)
We note that the derived categories with biaction \({\cal C}(Rel^{d}(X))\) and \({\cal C}(Rel(X))\) are identical as categories, and the left actions of identities also coincide, but the right actions are different, giving rise to the different cat-semigroup structures.
### Categories and the match-up conditions
By Proposition 3.6, a category with biaction satisfying (TC4) also satisfies (TC4L), (TC4R), (LMU) and (RMU), and so both the left and right pseudoproducts may be defined.
**Proposition 3.14**: _In a category with biaction satisfying (TC4), for all \(s,t\in C\),_
\[s\otimes_{l}t=s\otimes_{r}t=s|D(R(s)|D(t))\circ R(s)|D(t)\circ R(R(s)|D(t))|t.\]
**Proof.** Now for \(s,t\in C\), upon using Corollary 3.5 and the defining laws, we have that
\[s\otimes_{l}t = s|D(t)\circ R(s|D(t))|t\] \[= (s\circ R(s))|D(t)\circ R(R(s)|D(t))|t\] \[= s|D(R(s)|D(t))\circ R(s)|D(t)\circ R(R(s)|D(t))|t,\]
which by symmetry must also equal \(s\otimes_{r}t\). \(\Box\)
It therefore makes sense to refer only to "the pseudoproduct" when (TC4) holds, and henceforth we use the notation "\(\otimes\)" for this one operation. Indeed the above result makes evident that we could from the outset instead have defined the pseudoproduct in a category with biaction satisfying (TC4) to equal the entirely symmetric category _triple_ product
\[s\otimes t=s|D(R(s)|D(t))\circ R(s)|D(t)\circ R(R(s)|D(t))|t.\]
Note that Example 3.10 is left/right symmetric and hence satisfies (TC4), showing that (TC4) is not sufficient to imply the associativity of the pseudoproduct.
By dualising Proposition 3.3 and then using Proposition 3.6, we obtain the following.
**Corollary 3.15**: _If \(S\) is a cat-semigroup satisfying the match-up conditions, then \({\cal C}(S)\) satisfies (TC4) and \(\otimes\) coincides with the product on \(S\)._
From Proposition 3.6 and Corollary 3.9, we obtain the following.
**Corollary 3.16**: _If \(C\) is a category with biaction satisfying (TC4) and \(\otimes\) is associative, then \({\cal S}(C)=(C,\otimes,D,R)\) is a cat-semigroup satisfying the match-up conditions._
By Theorem 3.11 as well as Corollaries 3.15 and 3.16, we obtain the following.
**Theorem 3.17**: _The category of cat-semigroups satisfying the match-up conditions is isomorphic to the category of categories with biaction satisfying (TC4) and in which \(\otimes\) is associative._
There is a more elegant way to describe the categories with biaction in the last result, that does not make reference to associativity of \(\otimes\), although it is equivalent to two special cases of it in which one of the arguments is in \(D(C)\).
**Proposition 3.18**: _Suppose \(C\) is a category with biaction satisfying (TC4). Then \(\otimes\) is associative if and only if \(C\) satisfies (TC7), which consists of the following two conditions for all \(a,b\in C\) and \(e\in D(C)\):_
* (TC7a) \(e|(a|D(b)\circ R(a|D(b))|b)=e|a|D(b)\circ R(e|a|D(b))|b\) _and_
* (TC7b) \((a|D(R(a)|b)\circ R(a)|b)|e=a|D(R(a)|b|e)\circ R(a)|b|e\)_._
**Proof.** First note that the category products in (TC7) both exist, because (LMU) and (RMU) hold by Proposition 3.6.
Suppose \(\otimes\) is associative in \(C\). Then in particular, for all \(a,b\in C\) and \(e\in D(C)\), we have that \(e\otimes(a\otimes b)=(e\otimes a)\otimes b\), so by Propositions 3.8 and 3.14, \(e|(a\otimes_{l}b)=(e|a)\otimes_{l}b\), which yields (TC7a); we argue dually to give (TC7b).
Conversely, let \(C\) be a category with biaction satisfying (TC4) and (TC7). Then for \(a,b,c\in C\) we have
\[(a\otimes b)\otimes c = (a\otimes_{r}b)\otimes_{l}c\mbox{ by Proposition 3.14}\] \[= (a\otimes_{r}b)|D(c)\circ R((a\otimes_{r}b)|D(c))|c\] \[= (a|D(R(a)|b)\circ R(a)|b)|D(c)\circ R((a\otimes b)|D(c))|c\mbox{ by (TC7b)}\] \[= a|D(R(a)|b|D(c))\circ R(a)|b|D(c)\circ R(R(a)|b|D(c))|c,\]
and by symmetry (using (TC7a)), this must also equal \(a\otimes(b\otimes c)\). So \(\otimes\) is associative. \(\Box\)
**Corollary 3.19**: _The category of cat-semigroups satisfying the match-up conditions is isomorphic to the category of categories with biaction satisfying (TC4) and (TC7)._
In fact, a slightly strengthened form of (TC7) implies (TC4).
**Lemma 3.20**: _Let \(C\) be a category with biaction. Then \(C\) satisfies both (TC4) and associativity of \(\otimes\) if and only if it satisfies (TC7\({}^{\prime}\)), given by the following conditions: for all \(a,b\in C\) and \(e\in D(C)\),_
* (TC7a\({}^{\prime}\)) \(a|D(b)\circ R(a|D(b))|b\mbox{ exists and }e|(a|D(b)\circ R(a|D(b))|b)=e|a|D(b)\circ R(e|a|D(b))|b\)_;_
* (TC7b\({}^{\prime}\)) \(a|D(R(a)|b)\circ R(a)|b\mbox{ exists and }(a|D(R(a)|b)\circ R(a)|b)|e=a|D(R(a)|b|e)\circ R(a)|b|e\)_._
**Proof.** Evidently (TC7a\({}^{\prime}\)) and (TC7b\({}^{\prime}\)) hold in categories with biaction satisfying (TC4) and associativity of \(\otimes\), by Proposition 3.6 (ensuring that (LMU) and (RMU) hold) and Proposition 3.18.
Conversely, assume (TC7a\({}^{\prime}\)) and (TC7b\({}^{\prime}\)) hold in \(C\). Then assuming that \(R(a)=D(b)\) in (TC7a\({}^{\prime}\)) yields (TC4a) as a consequence, and dually for (TC7b\({}^{\prime}\)) and (TC4b). Clearly, (TC7a) and (TC7b) follow easily. \(\Box\)
**Corollary 3.21**: _The category of cat-semigroups satisfying the match-up conditions is isomorphic to the category of categories with biaction satisfying (TC7\({}^{\prime}\))._
Theorem 3.17 and its two corollaries can be specialised further as desired, for example to give a description of the categories with biaction arising from DRC-semigroups in the sense of [9] and [18]: one need only build in analogs of the reduced property of the DRC-semigroup to the category with biaction axioms.
Indeed, it is of interest to contrast our approach to obtaining an ESN type result to the one used by Wang in [18] for DRC-semigroups. There, the author took the more traditional order-theoretic approach, in which notions of restriction and corestriction are defined order-theoretically in a generalized category equipped with a suitably well-behaved partial order consistent with an assumed projection algebra structure on the identities. In this approach, one only defines the "restriction" \(e|s\) (\(e\in D(C),s\in C\)) when \(e\leq D(s)\) (under the given order), and then \(e|s\) is defined to be the unique \(t\leq s\) such that \(D(t)=e\); dually for the "corestriction" \(s|e\). (However, some additional purely algebraic laws involving restriction and corestriction are needed.) Hence, fewer products of projections with arbitrary semigroup elements are retained in the derived partial algebra compared to the current approach in which the "biaction" is defined for arbitrary \(e,s\).
However, in the approach of [18], when obtaining the generalized category corresponding to a given DRC-semigroup, the partial product \(s\cdot t\) is defined to exist (and be \(st\)) if and only if \(R(s)=D(R(s)D(t))\) and \(D(t)=R(R(s)D(t))\), for which it is sufficient but certainly not necessary that \(R(s)=D(t)\). Hence, a greater number of general products is retained in this generalized category than in the derived category. Moreover, the approach in [18] requires the initial specification of both a partial order on the entire generalized category, and a projection algebra structure on its identities; our approach requires neither. Indeed, in our setting such order structure does not even seem definable in general.
We note that Axiom (Y4) for generalized categories over projection algebras as in [18] effectively builds in general associativity to the pseudoproduct, whereas in our approach, the simpler laws (TC7a) and (TC7b) are required. We therefore suspect that an alternative equivalent axiomatisation of Wang's DRC-generalized categories over projection algebras exists, in which (Y4) may be replaced by one or two simpler laws.
### Categories and the strong match-up conditions
We conclude this section by considering the category with biaction analog of the strong match-up condition.
**Definition 3.22**: _We define (SMU) to consist of the following two conditions on the category with biaction \(C\):_
* (SMU1) \(D(R(s)|t)=R(s|D(t))\) _for all_ \(s,t\in C\)_, and_
* (SMU2) \((e|f)\circ(e|f)=e|f\) _for all_ \(e,f\in D(C)\)_._
Note that the product in (SMU2) exists if we assume (SMU1): \(R(e|f)=R(e|D(f))=D(R(e)|f)=D(e|f)\) for all \(e,f\in D(C)\).
In this case, the pseudoproduct may be expressed in the form used in [3].
**Proposition 3.23**: _Suppose a category with biaction \(C\) satisfies (TC4) and (SMU). Then_
\[s\otimes t=s|D(t)\circ R(s)|t\mbox{ for all }s,t\in C.\]
**Proof.** For all \(x,y\in C\), \(x|D(y)\circ R(x)|y\) exists by (SMU1), and
\[x|D(y)\circ R(x)|y = (x\circ R(x))|D(y)\circ R(x)|(D(y)\circ y)\] \[= x|D(R(x)|D(y))\circ R(x)|D(y)\circ R(x)|D(y)\circ R(R(x)|D(y))|y\mbox{ by (TC4)}\] \[= x|D(R(x)|D(y))\circ R(x)|D(y)\circ R(R(x)|D(y))|y\mbox{ by (SMU2)}\] \[= x\otimes y\mbox{ by Proposition 3.14,}\]
as claimed. \(\Box\)
Observe that Example 3.10 satisfies (SMU), showing that (TC4) and (SMU) are together not sufficient for the associativity of \(\otimes\); for this, (TC7) is still required.
**Proposition 3.24**: _If \(S\) is a cat-semigroup satisfying the strong match-up conditions, then \({\cal C}(S)\) satisfies (TC4) and (SMU), and \(\otimes\) coincides with the product on \(S\)._
**Proof.** By Proposition 2.7, \(S\) satisfies the match-up conditions; hence \({\cal C}(S)\) satisfies (TC4) by Corollary 3.15, and then (SMU) follows easily from the strong match-up conditions for \(S\). \(\Box\)
If \(C\) is a category with biaction satisfying (TC4) and (SMU), it follows from Propositions 2.11 and 3.24 that the terms \(R(x)|y\) and \(D(R(x)|y)|y\) occurring in the pseudoproduct and left pseudoproduct are not necessarily equal, even though the two pseudoproducts are equal; these terms are equal when \(C\) is a transcription category.
**Proposition 3.25**: _If \(C\) is a category with biaction satisfying (TC4), (TC7) and (SMU), then \({\cal S}(C)=(C,\otimes,D,R)\) is a cat-semigroup satisfying the strong match-up conditions._
**Proof.** From Corollary 3.16 as well as Propositions 3.18 and 3.23, \((C,\otimes,D,R)\) satisfies everything that is claimed except perhaps the strong match-up conditions, although it does satisfy the match-up conditions. For \(s,t\in C\), \(D(R(s)\otimes t)=D(R(s)|t)=R(s|D(t))=R(s\otimes D(t))\), and
\[(s\otimes D(t))\otimes(R(s)\otimes t) = (s|D(t))\otimes(R(s)|t)\] \[= (s|D(t))|D(R(s)|t)\circ R(s|D(t))|(R(s)|t)\] \[= (s|D(t))|R(s|D(t))\circ D(R(s)|t)|(R(s)|t)\mbox{ by (SMU1)}\] \[= s|D(t)\circ R(s)|t\] \[= s\otimes t\mbox{ by Proposition 3.23,}\]
so the strong match-up conditions hold in \(\mathcal{S}(C)\). \(\Box\)
By Corollary 3.19 and Propositions 3.23, 3.24 and 3.25, we obtain the following.
**Theorem 3.26**: _The category of cat-semigroups satisfying the strong match-up conditions is isomorphic to the category of categories with biaction satisfying (TC4), (TC7) and (SMU)._
From the proof of Proposition 3.18 we note that (TC7) can be written in slightly simpler form here since \(s\otimes t=s|D(t)\circ R(s)|t\) in this case.